This is Part II of The Limitations Of AI. See Part I: Predictive Ability.
Forgive me this banality, but reminders of the obvious can bring clarity: AI runs on computers. Computers are discrete calculation engines, machines that take or hold states (like sets of 0s and 1s) in their components in some multidimensional space. These states can be mapped, known, and directed. They obviously can be known and manipulated, because if they could not be, the machines would produce gibberish. With that comes the argument against so-called general artificial intelligence or “strong AI”.
Discrete States
Computers run on clocks, each state of the machine advancing by set rules one step at a time. Take any tick of the clock inside the machine and write down the machine’s complete state on paper. Since computers are large, and growing larger, it will take a lot of paper to write the list of states, and their relations, but it can be done. If you say the computer is conscious, because the AI has taken on life or exhibits general intelligence, is the paper conscious, too? If not, why not?
At a second tick of the clock, the machine will be in a new state. Write that one down, too. Perform the same service for the third, fourth, and fifth ticks, and so on until you run out of lead or paper. Stack the papers in time order. Then “flip” the pages, so that the states progress. You did this as a kid, making flip-book cartoons with drawings.
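To make the flip-book concrete, here is a minimal sketch (the three-bit machine and its transition rule are invented for illustration) of a computer as nothing but a list of discrete states advanced by a fixed rule:

```python
# A toy "computer": its complete state is three bits, and a fixed rule
# advances it one clock tick at a time. The machine and rule are invented
# for illustration; any real computer differs only in size.

def step(state):
    """Advance one tick by a fixed rule (here: read the bits as a
    number, add 1, wrap around)."""
    n = (int("".join(map(str, state)), 2) + 1) % 8
    return tuple(int(b) for b in format(n, "03b"))

state = (0, 0, 0)    # the machine at some tick of the clock
pages = [state]      # the "stack of papers"
for _ in range(5):   # write down the next five ticks
    state = step(state)
    pages.append(state)

# "Flipping" the pages is just reading the list in order; nothing is
# added to the states by doing so.
for page in pages:
    print(page)
```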
If the individual pieces of paper are not conscious, is the set that is flipped? If not, why not? Be specific.
It cannot be the electricity that aids in changing and holding states that brings the computer to life, because we could just as easily, in conception, build the same silicon-based computer in wood, like an abacus, and use running water and gears to do the state changing. True, the wood-based computer will be a lot larger than the semiconductor-based one. But unless you are arguing it is the silicon itself, or its doping, that is intelligent, which is absurd, then if you say the semiconductor-based computer is intelligent, so must be its wooden replica. And so must be the paper.
What is the essential difference between the wood-based computer, the silicon-based computer, and the map of states on paper? None. It can’t be speed: if the fast electronic computer is alive, so is the clunky wooden machine, but at a slower rate of life. And so is the fast electronic version with the clock speed set to one tick an hour.
Zenophobia
Now we run into a problem, akin to Zeno’s paradox of the arrow (which we’re not using directly). Zeno made that thought experiment in hopes it showed movement was impossible. Briefly, the idea is an arrow shot from A to B must exist at some moment in time somewhere in between. In that moment or instant of time, he said, the arrow is not moving, because it is at a place, and if it’s at a place it cannot be moving. Where did the movement go? How does the arrow get its movement back, so that it progresses to a new place in the next moment? The modern solution is to say the arrow possesses instantaneous movement at every point along the continuum, a proof which involves the idea of infinitesimals. If you’ve taken calculus and remember seeing “dx”, then you have seen an infinitesimal.
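For reference, the standard calculus resolution can be stated in one line: the instantaneous velocity at time t is the limit of average velocities over ever-shrinking intervals, the “dx” being the infinitesimal displacement.

```latex
v(t) = \frac{dx}{dt} = \lim_{h \to 0} \frac{x(t+h) - x(t)}{h}
```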
But there is another meaning to Zeno’s arrow for us. If physical space, the space in which we live, is discrete, it is made up of points with nothing in between them. That “nothing” is as strict an interpretation of the word as you can make it. For if you allow something in between points in space, including fields, then real space exists at finer levels than points. So put the points in our imagined discrete space as close together as you like, but with nothing in between.
If you get a magnifying glass and a sharp pencil, you can draw on paper dots in the shape of an arrow, which looks continuous when you remove the glass, just as objects in life would only appear continuous if physical space is discrete. This paper arrow can be made to move in the same way as our paper computer above, by erasing the tail-most dot and drawing a new dot one point in space ahead for the tip of the arrow. Real arrows in discrete space must move like that. One can imagine that each dot of a real (not paper) arrow, I mean the part of the arrow that fills points in real space, can possess instantaneous velocity, but it’s hard to imagine how each part of the discrete arrow communicates with each other part to remain a coherent whole. Each part cannot communicate through space, because space is (we are supposing) discrete.
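Here is a minimal sketch of that dot-by-dot movement, assuming for simplicity a one-dimensional line of integer points with nothing in between (the four-dot arrow is invented for illustration):

```python
# A discrete "arrow": just the set of occupied points on a line of
# integers. Movement is the tail dot vanishing and a new head dot
# materializing; at no tick is any dot between two points.

def tick(arrow):
    """Advance the arrow one point: every dot jumps a whole point in
    concert (for a contiguous arrow, this equals erasing the tail-most
    dot and drawing one ahead of the tip)."""
    return {p + 1 for p in arrow}

arrow = {0, 1, 2, 3}  # the four dots of the arrow
for _ in range(3):
    arrow = tick(arrow)
    print(sorted(arrow))

# What coordinates these concerted jumps, with no space to cross, is
# exactly the question raised in the text.
```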
We also do not know how one dot of the arrow dematerializes from its discrete position and materializes at the next position, and the whole in concert. There is never a doubling up of arrow bits at a point. There is no space to cross, because we are supposing space is discrete. It has to be some kind of teleportation that moves each point in concert (and recall the arrow will be “jagged” when moving at angles to whatever axes we imagine space to have). That being so, there must be some ordering force “above” (or, if you like, “below”) space that provides the teleportation, the energy and information.
And it has to work for everything, not just arrows. Even you, since you cannot exist outside yourself, not if space is discrete and you yourself exist only in space. All things, including all life, in discrete space are thus pieces in a board game, played by some Ordering Force outside space.
The alternative to this mysterious Ordering Force, at least for arrows, is to suppose space is absolutely continuous, which was the solution for the arrow’s movement. The arrow remains a concerted whole substance through continuous space. And so do you. The proof of this, as said, involved infinitesimals and indeed all manner of complex ideas of Infinity. Which, you’ll be grateful to learn, we’ll mostly skip here.
Movement happens in thought, too; which is to say, change occurs. Now if you believe the mind is entirely material (which I do not: see below), then for change to take place absent an Ordering Force, continuity is needed.
Whether or not all this is interesting to you, it remains true that AI does exist in discrete space. For any movement, by which I mean change of state, to take place, some kind of higher Ordering Force must exist to push the states along. It cannot be the states themselves doing the pushing. Of course, continuous space applies to the movement (the changes) in the semiconductors, just as it does in the wood of our simulacrum, but we have already agreed that it is not the semiconductors or wood themselves that are intelligent.
AI is the set of discrete states, and that set advances to the next state by following fixed rules, even if the rules themselves change in time. There is no way to remove the Ordering Force, which is the set of meta-rules which allow the pushing rules to exist. There is never any point where the states themselves “take over” and become the Ordering Force. It will turn out, as is likely obvious anyway, that you are that Ordering Force.
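A toy sketch of that last point, with an invented “learner”: even when the rules change in time, the meta-rule doing the changing is itself fixed in advance, and nothing in the states ever takes over the pushing.

```python
# A rule (a single parameter) that "changes with experience", changed by
# a meta-rule that never changes. Both rule and data are invented for
# illustration.

def meta_rule(rule_param, error):
    """The fixed meta-rule: nudge the rule's parameter against the
    error. This is laid down in advance and never stops being a rule."""
    return rule_param - 0.1 * error

rule_param = 0.0
for observation in [1.0, 1.0, 1.0]:    # some stream of inputs
    prediction = rule_param             # the current (changeable) rule
    error = prediction - observation
    rule_param = meta_rule(rule_param, error)  # the rule changes...
    print(rule_param)                   # ...only by a rule that does not
```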
Mind Your Matter
A curiosity in AI research is how definitions of intelligence often go unmentioned. They become an “I know it when I see it” sort of thing, or are simply defined as doing certain tasks man can do. Cars can travel, and much faster than man, but few suggest cars have become a kind of animal because of this superiority. Computers seem to have memories much larger than man’s. And they seem to be able to calculate faster. But computers don’t memorize or calculate anything. It is we who read into the computer’s states meanings such as memory and calculation. Computers cannot do this task, i.e. give meaning, for themselves, as we can for ourselves. Therefore, computers will never be intelligent.
Our minds are not wholly material; that is, the intellect is not entirely made of matter; it is incorporeal. There is a long line of philosophical argument here, wholly different from Cartesian dualism, with which most readers won’t be familiar. There are also, in recent days, some new physical ideas on how the incorporeal parts of our minds might work.
Here is an argument by Ross showing the intellect is incorporeal. A detailed explanation is provided by Ed Feser, from whom I’m borrowing: All intellection is about the meaning of objective facts of the world; no physical process, physical processes being mere arrangements of matter, gives meaning to objective facts; therefore, the intellect is not a physical process. Now the explanation and proof of that is long, rich, and detailed. To grasp it (using your intellect!), you must at least read Ross’s original paper, Feser’s expansion, and his answers to common objections.
It would also pay to read a book such as Thomistic Psychology by Robert E. Brennan, which lays out the entire view of human thought, beginning with the senses, showing our similarities to animals, and where we depart (at the intellect). If there is interest, we’ll cover all this in separate posts. If you feel (the right word) moved to comment on these matters, please read the source material first.
What we need to take away today is the idea that meaning is not in things, but in our minds. Meaning cannot therefore be in computers, but must reside outside of them, in the minds that give the states of computers their semantic meaning.
Meanwhile—a good pun—it also pays to understand newer arguments on how incorporeal intellects might work, and not just the philosophical arguments that they are different in kind from machines. For a start, I recommend the paper “Hard Problem and Free Will: an information-theoretical approach” by Giacomo Mauro D’Ariano and Federico Faggin, which argues consciousness is quantum in nature (they take great pains to separate the ontic from the epistemic, and the errors which result from ignoring the distinction, which those taking the Class know is a favorite subject of mine). You might also read Faggin’s popular account Irreducible. I don’t wholly endorse all his ideas, but they need to be taken seriously. Such as: “The most notable difference between a computer and a cell is that a computer is made of permanent classical matter, while a cell is made of dynamic quantum ‘matter’.” I’ll review this book later.
Lastly, there is also the book Why Machines Will Never Rule the World: Artificial Intelligence without Fear by Jobst Landgrebe and Barry Smith. Their argument, which involves complex systems, is not as strong, for it implies computers merely lack better simulation ability. The position of Feser, Ross, Faggin, and my own is that no machine is capable of intellection, even in theory.
In short, whether you grasp or even reject the arguments of this section, it remains true that physical states do not have meaning, and that meaning must come from outside those systems. That argument is fully developed by Searle.
Sterling Searle
Here, as briefly but as coherently as I can make it, is an explanation of John Searle’s famous 1990 paper “Is the brain a digital computer?” His answer, and ours, is No. The paper is on-line, is not terribly difficult to read, and must be read. Alas, many working in AI have not read it. One wonders if it is even known. I am only giving the smallest summary I can, so if you think you have spotted a flaw, refer to the original paper itself before commenting.
This paper differs from his 1980 “Chinese room” argument, which comes in “Minds, brains, and programs”. In that paper, he develops the difference between syntax and semantics. Syntax is the instruction set which advances the computer from state to state. Semantics is the understanding of what those states mean. Semantics does not happen in computers, but outside of them, in the minds that create the syntax. Hence a computer processing Chinese syntax following set rules possesses no understanding of the Chinese language.
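A minimal sketch of the room, with an invented two-entry rulebook: the program matches shapes coming in and passes shapes out, and the understanding, such as it is, lives only in whoever wrote (or reads) the glosses.

```python
# Pure syntax: uninterpreted tokens in, uninterpreted tokens out.
# The rulebook entries are invented for illustration; the English
# glosses exist only in these comments, not in the program's operation.

RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",    # "How are you?" -> "Fine, thanks."
    "今天天气好": "是的, 很好.",   # "Nice weather today" -> "Yes, very."
}

def chinese_room(symbols):
    """Follow the rulebook: match the incoming shapes, hand out the
    prescribed shapes. No understanding of Chinese occurs anywhere."""
    return RULEBOOK.get(symbols, "请再说一遍.")  # "Please repeat that."

print(chinese_room("你好吗?"))
```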
This is thus also proof the mind is not a computer program. A program has no awareness of itself, nor of Chinese, nor of any language, nor of any thing. I take it as axiomatic that we, using our minds, have understanding of things: even by disagreeing you agree, because you at least understood something, which means understanding is possible.
Neither Searle, nor anyone I know, denies that the operations of real brains can be modeled, to various degrees of veracity, on computers. But, computers being discrete, these simulations will always remain models. And so claims of “general” AI invoke the Deadly Sin of Reification, which is forgetting the map is not the territory.
I’ll skip his background on universal Turing machines, which are general purpose computers, machines carrying out syntactical tasks. None doubt their proofs. But this brings up the question, not what the brain is, but what a computer is.
If a computer is a set of states, progressing from one state to the next, then, say some, everything is a computer. The abacus above is a computer, but so is the ever-changing arrangement of grains of sand on a beach. Indeed, some take this to be an argument that the world itself, by which I mean all that exists, is a computer. This is another form of pantheism.
You’ll see this kind of argument applied to things like ant colonies. The behavior of the colony as a whole, acting like a computer, possesses “emergent” properties in the moving states of the colony that convey or produce some kind of intelligence. But the seeds of this argument are its own destruction. For if the ant colony is a computer, so is the ant colony plus the neighboring tree, with its blowing leaves. Or any tree anywhere: since these discrete states (leaf and ant positions, etc.) do not communicate with each other physically, the trees can be anywhere.
And indeed, the colony plus the tree plus the clouds, plus everything is also a computer. It is only our minds that make the “cut off” distinction that says “only the colony”. Searle also makes the opposite point, which is that nobody argues the inverse, by insisting, say, that we can “make carburetors out of pigeons.” Computers maintain their mysteriousness, for most, because most lack understanding of how computers work.
In other words, an intelligence must judge any particular collection of objects a computer. One such collection, the assortment of switches, diodes, resistors, capacitors, and whatnot that makes up certain designed machines, those which carry out our instructions according to rules we fix, we call computers. It is we who create the syntax, but more importantly it is our outside intelligence that reads the semantics into the computer, or its output.
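A sketch of that observer relativity, with invented mappings: the same physical state sequence “computes” entirely different things depending on the interpretation assigned from outside.

```python
# Some physical states, say two switch settings over three moments.
# The states are invented for illustration; so are both "computations".
states = [(0, 1), (1, 0), (1, 1)]

# One observer assigns: first switch = twos bit, second = ones bit.
as_counting = [two * 2 + one for (two, one) in states]

# Another observer assigns: the pair names a compass direction.
compass = {(0, 1): "east", (1, 0): "west", (1, 1): "northwest"}
as_directions = [compass[s] for s in states]

print(as_counting)    # [1, 2, 3]        -- "it's counting!"
print(as_directions)  # east, west, ...  -- "it's navigating!"
# The physics is identical; the computation lives in the assignment.
```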
All right, but what about the brain?
…we wanted to know how the brain works, specifically how it produces mental phenomena. And it would not answer that question to be told that the brain is a digital computer in the sense in which stomach, liver, heart, solar system, and the state of Kansas are all digital computers. The model we had was that we might discover some fact about the operation of the brain which would show that it is a computer. We wanted to know if there was not some sense in which brains were intrinsically digital computers in a way that green leaves intrinsically perform photosynthesis or hearts intrinsically pump blood. It is not a matter of us arbitrarily or “conventionally” assigning the word “pump” to hearts or “photosynthesis” to leaves.
And:
A physical state of a system is a computational state only relative to the assignment to that state of some computational role, function, or interpretation. The same problem arises without 0’s and 1’s because notions such as computation, algorithm and program do not name intrinsic physical features of systems. Computational states are not discovered within the physics, they are assigned to the physics.
By us; which is to say, beings with intellects.
This point has to be understood precisely. I am not saying there are a priori limits on the patterns we could discover in nature. We could no doubt discover a pattern of events in my brain that was isomorphic to the implementation of the vi program [a text editor] on this computer. But to say that something is functioning as a computational process is to say something more than that a pattern of physical events is occurring. It requires the assignment of a computational interpretation by some agent. Analogously, we might discover in nature objects which had the same sort of shape as chairs and which could therefore be used as chairs; but we could not discover objects in nature which were functioning as chairs, except relative to some agents who regarded them or used them as chairs.
Some AI workers recognize all this, but then commit what Searle calls the “homunculus fallacy: The idea always is to treat the brain as if there were some agent inside it using it to compute with.” This makes claims that computers have intellect circular: they assume that which they attempt to prove. There still has to be something outside the syntactical engine giving meaning to the syntax: “Without a homunculus that stands outside the recursive decomposition, we do not even have a syntax to operate with.” There is no getting around the fact that the output (and of course input) of computer programs is observer relative.
There is no causal explanation:
But the difficulty is that the 0’s and 1’s as such have no causal powers at all because they do not even exist except in the eyes of the beholder. The implemented program has no causal powers other than those of the implementing medium because the program has no real existence, no ontology, beyond that of the implementing medium. Physically speaking there is no such thing as a separate “program level”.
And, of course, “the mechanical computer is not literally following any rules at all. It is designed to behave exactly as if it were following rules, and so for practical, commercial purposes it does not matter…But without a homunculus, both commercial computer and brain have only patterns and the patterns have no causal powers in addition to those of the implementing media.”
As far as models, i.e. simulations, of minds go, and calling that simulation “a” mind:
And we do not in general take “X is a computational simulation of Y” to name a symmetrical relation. That is, we do not suppose that because the computer simulates a typewriter that therefore the typewriter simulates a computer. We do not suppose that because a weather program simulates a hurricane, that the causal explanation of the behavior of the hurricane is provided by the program. So why should we make an exception to these principles where unknown brain processes are concerned? Are there any good grounds for making the exception?
And: “In sum, the fact that the attribution of syntax identifies no further causal powers is fatal to the claim that programs provide causal explanations of cognition.”
It is said computers “process” information, so maybe brains do, too. The computer, made of silicon or wood, does what it is told, by us, and its output is given semantic meaning, by us.
But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything, they can be described from an observer relative point of view) and the specificity of the neurophysiology matters desperately.
Searle uses the example of a car coming at you. Your senses offer up (what is called in philosophy) a “phantasm”, a thought, which is taken by the intellect and given meaning. (The book by Brennan mentioned above details this.)
The biological reality is not that of a bunch of words or symbols being produced by the visual system, rather it is a matter of a concrete specific conscious visual event; this very visual experience. Now, that concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, do an information processing model of that event or of its production, as we can do an information processing model of the weather, digestion or any other phenomenon, but the phenomena themselves are not thereby information processing systems.
This is another form of saying everything is not a computer.
Sad News?
Computers will never come alive: there will be no “strong AI”. There’s also no hope for so-called quantum computers bridging whatever gap you might think exists between strictly mechanical machines, to which we assign meaning, and actual intellects. Recall, it is claimed a qubit takes an infinite number of values (though, I say, only potentially infinite, but finite in practice, because qubits are not made of prime matter but of imperfect matter), but when a qubit is made to interact with a device to measure its state, it still only takes a single definite state. So even if somebody can get a quantum computer to work, it’s still in the end yet another machine, doing what it’s told. And it will still be us supplying semantic meaning to its output.
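For reference, the standard formalism behind the qubit claim: the state ranges over a continuum of superpositions, but a measurement yields only one of two definite outcomes.

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad
|\alpha|^2 + |\beta|^2 = 1, \qquad
P(0) = |\alpha|^2, \quad P(1) = |\beta|^2
```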
A real possibility is organic computers, which is to say, life. Not artificial life, because anything artificial will always remain a model, and you’ll never achieve strong AI with a model. I mean engineered life.
We can do this with animals even now. Say, by training your dog to fetch your slippers (do people still wear slippers?). But you won’t teach your dog intellection, because animals are as different from us in kind as animals, or we, are from machines. So if we’re to have living computers, we’re going to have to go the mentat route, or something like it. Yet mere gene editing won’t get you there, since, as above, intellects are incorporeal. Which doesn’t mean efforts along some as yet undiscovered line are impossible. Just unlikely.
There’s meaning in nature, intelligibility.
Discussing the agent intellect as Averroes conceived it, Saint Thomas, in order to refute its supposed separateness along with its uniqueness, observed that the imagination conveys knowable objects to the intellect, but, if the individual man doesn’t have his own intellect, he doesn’t know intellectually. That is because Averroes pretended that we all know intellectually by the unique intellect grasping the intelligibility carried by the imaginations of every single man. He said: it’s not the same to carry intelligibility as to understand. The sensible reality is intelligible, it does have meaning, but you need to have an intellect to grasp that meaning.

Based on that, I discuss the materiality of our brains and artificial intelligence. One important Chilean thinker, Maturana, said that there’s no real knowledge since the brain is just electrochemical synapses; but he doesn’t consider the problem adequately: of course electrochemical activity can carry information, but that’s not the same as understanding that information. TV and radio signals are electromagnetic and bring a lot of information to our homes; but a turned-on TV, with my wife in the kitchen, is a waste of pesos (she screams: “I am watching that, don’t turn it off”: she is that mystical). But if you deny reality has meaning, you are falling into the nominalist trap and you are denying the possibility of order, intellect, essences, morality, etc. There is intelligibility, but you need the intellect in order to grasp it.
Consciousness can’t be quantum.
My second comment regards the assertion that consciousness is quantum… The first time I heard that, Penrose was the one saying it (precisely, speaking about artificial intelligence: he rejects it, on the basis of its impossibility of grasping principles, first propositions). But he is a materialist, so he thinks it must be a quantum state. Nonsense: quantum mechanics is about matter, about elemental matter. So, if the intellect is immaterial, it’s not quantum.
Notice: I haven’t finished the article; I’m reading the Searle part, but I need to go to work. It is very interesting and I will finish it later. If I find something worth commenting on, I’ll let you know.
Carlos,
I’m with you that consciousness can’t be quantum in the “discrete” meaning of that word, but I think in the non-local meanings it can be. Like EPR-type experiments, i.e., entanglement and its implications of non-locality, which means non-material cause. That, I think, has some legs to it.
Your argument amounts, I think, to the claim that only humans can exhibit human intelligence. Well duh – but this does not mean that a machine cannot demonstrate machine intelligence – nor does it mean that the machine intelligence cannot out-perform the human one on many tasks.
To believe otherwise is stultumentious.
(the word was invented by perplexity.ai – which defined it as:
—
This neologism is derived from Latin roots:
“stultus” meaning foolish, stupid, or silly
“-mentia” from “mens” meaning mind, understanding, or intellect (as in “dementia”)
So “stultumentia” would literally translate to “foolish mind” or “stupid thinking,” aptly describing the adoption of conspiracy theories that contradict well-established scientific facts by people who lack critical thinking skills.
This term combines the connotation of willful ignorance with a clinical-sounding suffix, giving it a pseudo-scientific air that ironically mirrors the way conspiracy theorists often present their unfounded beliefs.
—
)
No, the point is not that the machines can’t have human intelligence. The point, which William demonstrates quite completely, is that computers don’t have intelligence in any sense except, maybe, as an expression of the intelligence of the program designers. Semiconductors are not aware of the concepts that are written into them.
William, I don’t know if this is the place to tell you this, but this morning it was very hard for me to access the blog and write and post my comment (it might be my server or something like that, though, because now I’m at work and the problems all but disappeared).