Machines Can’t Learn (Universals): The Abacus As Brain Part II

Read Our Intellects Are Not Computers: The Abacus As Brain Part I first.

Machines can learn, all right. But they can’t learn like us. Machines cannot apprehend universals, ideas of truth and falsity, and logical connections. We can.

Here’s a recent headline: “Google’s New AI Is Better at Creating AI Than the Company’s Engineers.” The article under it is breathless and full of wonder, as these things tend to be. Its last sentence (from which I’ll let you infer the theme of the rest, something a computer cannot do) is “The potential of AI then draws to mind the title of another sci-fi film: limitless.” Alas, the promises implied here will not be kept.

Let’s return to our abacus, which has been fitted with extra levers and beads so that it can translate, à la Searle, from Chinese to Nepali. The beads of our giant abacus are put into a position so that one state—the full and complete picture of the beads and slides and whatever other steam-driven gears we might have in there at any moment—represents (we say) a Chinese word and its equivalent Nepali word. Whenever the abacus input is slid into the position such that the beads represent this Chinese word, other beads are shifted so that its output (some portion of the abacus) is said to represent the equivalent Nepali word.
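The bead-positions-as-translation picture is, in effect, a bare lookup table. A minimal sketch, with illustrative placeholder word pairs rather than a real dictionary:

```python
# Each "state" of the abacus maps one bead configuration (a
# Chinese word) to another (a Nepali word). "Translation" is
# mechanical lookup alone; nothing here apprehends meaning.
# The word pairs are placeholders for illustration.
ABACUS_STATES = {
    "你好": "नमस्ते",    # "hello"
    "谢谢": "धन्यवाद",   # "thank you"
}

def translate(chinese_word):
    """Slide the input beads into position; read off the output beads."""
    return ABACUS_STATES.get(chinese_word, "<no state for this input>")

print(translate("你好"))  # the output position of the beads
```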

This state, or rather these two states, are a form of learning, if you like. The abacus, the purely mechanical contraption, has “learned” in this very weak sense how to translate. But don’t forget that each “learned” word is just a position of beads.

Obviously, we can extend this “learning” to include more words, and even sentences. We can have the abacus grow (add more beads and slides) such that texts which it has never seen before are also translated. To do this, we can “train” the abacus to form states that tell of translation success and failure, say by sliding a bead up for every success, and down for every failure. And we can “try out” various possible states via some purely mechanical process to try to maximize this reward of beads. (There is a great deal of silliness talked about random checking of states, and Monte Carlo, and so on, all of which is false. No matter what kind of process you use to run through states, it is purely mechanical in the end. There is no “randomness” to it. But it will do little harm here to believe in “randomness” if you want. An article on this is coming soon.)
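The “training” loop just described can be sketched as a purely mechanical sweep over candidate states, keeping whichever holds the most reward beads. The toy task and scoring rule below are invented for illustration:

```python
def reward(state, examples):
    """Count reward beads: slide one up per success, one down per failure."""
    return sum(1 if state * x == y else -1 for x, y in examples)

# "Training" pairs for a toy task: the hidden rule is y = 3x.
examples = [(1, 3), (2, 6), (4, 12)]

# "Try out" candidate states by a purely mechanical sweep (no
# randomness required) and keep the state with the most beads.
best_state = max(range(-10, 11), key=lambda s: reward(s, examples))
print(best_state)  # -> 3: the machine has "learned" y = 3x
```

Nothing in the sweep understands multiplication; it only compares bead counts.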

Well, this is “machine learning”. Our machine has “learned.” That is, it will form certain states in response to certain stimuli, i.e. inputs. Two things should be perfectly clear. One, the machine does not know, in the sense of understand or apprehend, what it is doing. It is just a bunch of beads, levers, and slides: a tremendous pile of wood. Two, this wooden computer is no different from the machine-learning electronic computers on which AI systems are being built. The electronic ones are just faster and smaller. But we learned last time that speed of computation does not bring us to knowing, i.e. to intellection.

Our abacus is a neural net, if you like fancy marketing language, but it isn’t rational. It does not have an intellect or will. Understanding or apprehension is more than just a position of beads on slides. Prove this to yourself in the following way.

Your intellect will agree that, for natural numbers A, B, and C, if A > B and B > C, then A > C. Now, within an awful limitation, we can build an abacus that either tests this for any three given natural numbers, or represents, as with the Chinese characters, the “theorem” in beads. Either way, we have a pile of wood, and in neither case does that pile of wood know the truth of the theorem. And in the first case we’re limited by space and cost, since we cannot build an abacus big enough to test very large numbers. Our intellects, though, can test any numbers short of infinity instantaneously.
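The limitation is easy to exhibit: a machine can only ever check the theorem case by case up to some bound, never grasp it for all naturals at once. A brute-force sketch (the bound is arbitrary):

```python
# Mechanically verify "if A > B and B > C then A > C" for every
# triple of naturals below a small, arbitrary bound. The machine
# checks cases; it never grasps the universal truth that holds
# for ALL naturals.
BOUND = 30  # a bigger abacus just means a bigger bound

holds = all(
    a > c
    for a in range(BOUND)
    for b in range(BOUND)
    for c in range(BOUND)
    if a > b and b > c
)
print(holds)  # True for this finite range, and only this range
```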

Suppose you think that knowing, i.e. apprehension of universals like our theorem, is a purely material process. Then it should be possible to build this knowing-unit on top of the rest of the abacus. Let’s do this. I don’t know how it would work, and neither do you, but I know what it will look like. It will be another collection of beads and slides, indistinguishable from the other beads and slides. And the whole will just be—what else?—an enormous pile of wood.

It is no limitation that our abacus only takes one state at a time. So do our brains, although the phase space of our brains is huge and the abacus’s small. Still, as we said last time, this is a thought experiment (which computers can’t do), and so size is no limitation. The abacus, since it will be powered by steam and wooden gears, won’t run fast, but speed is not of the essence, again as we already proved.

We can go on adding beads and slides, but no matter what, in the end all we have is a pile of wood. There will be no understanding in it. It can’t learn to apprehend, though it can “learn” any non-intellective task. If we add wheels to our abacus, it can, in the end, simulate an animal, say, and respond to stimuli of all sorts. But it will never know truth. It can register stimuli and report on its internal states, but it will never know, and can never be said to have free will. It will never be self-aware in the sense that we are. It is just a pile of wood.

The translation algorithm, and the algorithms we design to do whatever task our abacus might do, turn out to be just positions of beads and slides. And the same is true of the “deep learning”, “machine learning”, and AI built into electric abacuses. A neural net (i.e. a parameterized non-linear regression) is just an abacus with the beads/parameters set a certain way. The meta-algorithm that puts the neural net beads in this certain way is itself just a set of beads/parameters set a certain way. And so on for any algorithm that can be represented.
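That a neural net is “just beads set a certain way” can be shown directly: a one-neuron net is nothing but a few stored numbers plus fixed arithmetic. A minimal sketch, with the weights set by hand rather than trained:

```python
import math

# A one-neuron "neural net": two weights and a bias -- three
# "beads" -- plus fixed arithmetic. The hand-picked values below
# make it reproduce logical AND; there is nothing else inside.
w1, w2, bias = 10.0, 10.0, -15.0

def neuron(x1, x2):
    """Weighted sum pushed through a sigmoid, rounded to 0 or 1."""
    return round(1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + bias))))

for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, neuron(x1, x2))  # reproduces the AND truth table
```

“Training” would only mean a meta-procedure nudging the three stored numbers; the net itself is still just those numbers.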

This is not to say the positions of those beads, whether wooden or electrical states, are not useful to us in some way. Of course they often are. But so is a lawnmower useful, yet no one would make the mistake of thinking it possessed of an intellect. There are strong, unbreachable limits on the abacus, no matter what material it is built from. The abacus can have no experience of the infinite. We can—and do.

Every time we apprehend a universal, we experience the infinite, for a universal is true everywhere and everywhen; universals are eternal. This is a commonplace in mathematics, where the infinite appears on a routine basis (as it did when we knew, given the stated premises, A > C). Now, whether or not a facility for experiencing the infinite is necessary to possess intellect and will (I think it is), it is true that we can speak of the infinite, in its various kinds and flavors and properties, and no machine can. Of course, a machine can be built that has beads which we say mean this or that infinity, but the machine cannot be said to understand or grasp this, as shown.

An abacus that is self-aware and possessed of an intellect and will is absurd. That it is made of wood and could therefore only chug along at a sedate pace makes this absurdity stark. There is something mystical about an electric machine, though, even when the person building the machine knows where every transistor goes.

To many, computers “feel” alive. We tell ourselves stories where they are, just as we used to tell stories of intellective mice and birds.

38 Comments

  1. There is no obvious mechanism in an abacus that will move the beads. You can of course use one (or many) humans to do the moving, but then you can just as well say that the human uses the abacus to extend his memory, while providing the smarts.

    An electronic computer changes state because part of its state is a program, i.e. a bit of state to compute the next state from the current one.

    Note that this means that a program can potentially change itself. This is not a popular thing to do, as debugging such a program is so hard that almost nobody can do it.

    Anyway, for the abacus analogy to work it must look more like a computer.

  2. In order to define intelligence properly, a lot of other things have to exist already. One case is “idea”. If the [temporally] very first existence of an idea cannot be determined, then “to be intelligent” rests on a foolish assertion and inevitably leads to conjectural nonsense.

    Another case is the superlative of (here, exemplarily) beautiful. At any rate, without getting past these and other hurdles, intelligence is ill-defined and does not allow rational examination by man and/or by machine.

  3. The comment about a lawnmower reminded me of “Lawnmower Man”.

    The fact that Google thinks its AI is better than its engineers says something about the quality of people employed by Google.

    Magic is alive and well in the 21st century. You see it every time you read “science”. Psychics and magic. We have devolved several centuries. (Sander’s comment is especially telling—if you can see that the item doing the “thinking” is wooden beads, you won’t fall for the magic. It’s only magic when it’s zeros and ones in an invisible world no one can see. Out of sight, out of mind. Devolved.)

  4. A modern-day Field Programmable Gate Array (FPGA) is much like the abacus as described here. You can create a CPU or microcontroller from the logical blocks. You can massively parallelize computational units in hardware. You can use bit sizes well beyond the 64 bits that are the usual native sizes in use; for example, if you want to better retain precision after many math computations, you could use 1024-bit or 2048-bit sizes.
    You can put more than one FPGA on a circuit board and have several boards in a cabinet.
    As a computer scientist and software engineer, all I see are the logic gates that make up the whole system. I recently spent a few hundred hours reviewing my several books on the basics of computers, books that I have had for up to 40 years now. The basic technology has not changed a bit: still just “and gates” and “or gates”, etc., arranged in various configurations.
    I remember a coworker in 1980 studying A.I. I haven’t personally spent much time with that technology, but I have spent thousands of hours developing software.
    You can see the internal workings of the logic gates in this video of a computer built out of transistors with an led for every input and every output:
    https://m.youtube.com/watch?v=0-ksw-BDAiw

    To learn more about the modern FPGA, here is a free book:
    http://design.altera.com/New2FPGAeBook

    And a more extensive book that goes into deep details, and is free in the PDF version:
    http://www.zynqbook.com/

  5. “It is just a pile of wood.”

    Wisdom 13: 11-19

    A carpenter may cut down a suitable tree and skillfully scrape off all its bark,
    And deftly plying his art produce something fit for daily use,
    And use the scraps from his handiwork in preparing his food, and have his fill;

    Then the good-for-nothing refuse from these remnants,
    crooked wood grown full of knots,
    he takes and carves to occupy his spare time.
    This wood he models with mindless skill,
    and patterns it on the image of a human being
    or makes it resemble some worthless beast.

    When he has daubed it with red and crimsoned its surface with red stain,
    and daubed over every blemish in it, He makes a fitting shrine for it
    and puts it on the wall, fastening it with a nail.

    Thus he provides for it lest it fall down, knowing that it cannot help itself;
    for, truly, it is an image and needs help.

    But when he prays about his goods or marriage or children,
    he is not ashamed to address the thing without a soul.
    For vigor he invokes the powerless; for life he entreats the dead;
    For aid he beseeches the wholly incompetent;
    for travel, something that cannot even walk;

    For profit in business and success with his hands
    he asks power of a thing with hands utterly powerless.

  6. I approach this from the opposite direction; start with a classical computer, but scale up. A million cores at a million GHz, with a million Gb of working memory.

    Very likely to be able to give a decent impression of self-awareness. Keep improving the specification until you think it’s possible, and keep refining its programming with the best human minds.

    Now keep the colossal memory and the football fields covered in interconnected cores, but start to reduce the clock speed. Down through the hundreds of thousands of GHz, the tens of thousands of GHz. Keep going, slower, and slower, and slower.

    At what point would you stop considering the possibility that it’s self-aware? Keep cranking it down. 10 ticks a second, 1 tick a second, 1 tick every 10 seconds, 1 tick every week.

    How slow before there’s no question it’s just inert copper and silicon?

  7. Sander, an abacus can’t look “more like” a computer. It *is* a computer, albeit an analog computer whose binary is expressed using wooden beads.

    A program can’t change itself. It can however, be programmed by an outside intellect to perform changes when the conditions are met.

  8. “Our abacus is a neural net, if you like fancy marketing language, but it isn’t rational. It does not have an intellect or will.”

    According to the materialists, neither do you.

  9. The whole notion of using an abacus as an example to refute AI, or the possibility of AI leading to something that might exhibit some degree of intelligence, on the grounds that no abacus ever could no matter how many beads were added, is an example of the Absurd Absolute; inquiring, thinking minds with free will can look up “Absurd Absolute” on Scott Adams’s blog.

    Will a machine ever be invented that exhibits intelligence, including some degree of free will and values…who knows. The number of neural-type connections needed appears to be staggering. But we do know what happens when animal (e.g. human) neural nets are diminished, and loss of intellect, morality, etc. are among the effects.

    Briggs mentions “neural net” today, but only a mention and in a context that is so absurdly ludicrous as to be meaningless (an Absurd Absolute embedded within the larger Absurd Absolute). For those that desire a particular answer, however, this suffices to create the illusion that the real-world issues associated with neural nets have somehow been addressed. Of course, that simply ain’t so.

  10. A brain is nothing more than an exceedingly complex chemical soup from which emerges intelligence (sometimes), consciousness (sometimes), self awareness (sometimes) and even awareness of being self aware. Unless one assumes another influence like magic.

    In principle there is no reason why the same emerging properties cannot be implemented in some other medium.

    After all, Apple computers and Microsoft can often produce the same output for a given input while they are implemented in very different architectures and with very different operating systems and software. No magic required.

  11. Again, it’s possible for a machine to be self-referential (which is basically what I take your “self-awareness” and “will” and “experience” and other magic words to mean).

    But you have proven nothing; you just repeated the thesis over and over.

  12. From when you said “let’s do this” I knew you would just take me through the process of building it up and then say that it doesn’t become an intellect. But you still set no threshold for an intellect. I think you’re implying there isn’t one, but I mean, just say it already.
    Also, you just said that it doesn’t matter what material the abacus is made of. So what if the material is flesh and blood? Surely now there’s a difference?
    Also, you just said that we experience the infinite, while computers can’t, they just have states which we say represent the infinite. But you also note that our brain also goes through different states, albeit through a larger “phase space”. So what makes our experience of infinity the “real thing”? If a computer can have perhaps even more states than our brain, and not only represent some infinities, but also more infinities than we can, and do math with those infinities, where is this “realness” attribute an advantage exactly?
    It seems you’re trying to get to the idea of immaterial souls, but unconvincingly (to me). I actually believe in those already, but for pragmatic reasons. I have no idea how to argue for them, and I’m disappointed that you don’t seem to either.

    —————–

    Also, since there’s no better thread for this, how about that new web design? Here’s what I think changed. Maybe not all of it is new.
    – Obviously, it’s way brighter, which makes it harder on the eyes in my opinion.
    – The favicon on the other hand is way darker, which also makes it less readable.
    – Things are more spaced out now.
    – You have killed the classic posts in favor of some links to search terms.
    – You’re now selling a coffee mug and playing cards.
    – There’s a “BOOKS & E-CLASS” link, in which I see no e-class, which I reckon means there’s going to be an e-class.

    I mostly dislike the changes. I understand killing the classic posts, as you might not want to update that page anymore. And I look forward to the e-class. But I hope that the site is going back to a more readable level of brightness sometime soon. And if you want people to even notice the guy with the hat in the favicon, go back to the old one, or just make this one more zoomed in and with more contrast.

  13. Experience, which is individual, prompts me to post the comment on hell, written but not posted.

    However, the intellective power of a mouse or a bird is an unknown; not completely unknown. Everybody knows there’s never been one, not one that studied at Cambridge.
    Watching and observing animals, not in a trick or a set-up, but observing animals with higher intelligence, demonstrates clearly to me that they do experience universals. Truths such as how objects behave. This does not happen reliably or repeatedly but appears to me to happen in moments of need: food, or some fear of something. When encouraged by the owner they can repeat the behaviour, or do so if they find it conducive. The intellective part is the part where they worked out a situation. Others will say this is luck; perhaps it’s transference. I’m allowed to use that word because one of my favourite GPs used it.

    If everybody agreed what intellect was and what intelligence was then there could be no disagreement.

    Intelligence where it exists is tied up in what it is to be alive.
    Computers cannot possess life. Their power is outside of that system. They have energy and some features bestowed upon them by humans.

    If a child cries because his robot is broken everybody understands that the grief is a figment of his own head.
    If a grown man or a little girl cries because an animal has died, nobody (sane) would argue that this loss is not something more than the loss of a biological machine.

    There is some point where some of the powers of higher animals share something of those of humans.

    Maybe being around humans more somehow imparts information in ways other than which the owners are aware. It is those things, the untrained things which are surprising.

  14. As far as the overlords are concerned, it’s not necessary that a machine apprehend the universals, only that it appears to do so and that enough people believe/accept that it can. Illusion often will suffice for achieving a nefarious goal.

  15. Sean — Well put. Possibly analogous: we watch a film go at so many frames per second and believe we are watching continuous motion, but if we look at each frame individually, we know that we are not. Even if this is not analogous, your point is sound.

  16. …Machines can learn, all right. But they can’t learn like us. Machines cannot apprehend universals, ideas of truth and falsity, and logical connections….

    That’s where you lost me. Because it’s obvious that they can. If you program them to do this, they will. Indeed, much AI works by spotting universals in data.

    The fact that no one has yet built a system to do this of sufficient complexity to be self-aware at the same time is simply a limit of our current technology – not a fundamental barrier…

  17. It doesn’t matter how many billions of specially arranged black marks are scrawled onto white paper, none of those letters which form words and sentences, which we read as grand theories – none of those marks are aware in any way of the meaning.

  18. Senghendrake,

    A slide rule is analog; an abacus is not.

    A program can’t change itself. It can however, be programmed by an outside intellect to perform changes [to itself] when the conditions are met

    Er, it can’t change itself unless it can?

    What you really mean is that you don’t know of a way it could do this on its own, and you then argue that this lack is proof it is impossible. Fallacious reasoning. Unlikely, perhaps, but not impossible.

  19. Many of the arguments against AI could also have had similar counterparts in say 800 AD to support arguments against building a flying machine when knowledge of aerodynamics was largely unknown.

    You can make things out of wood that look like birds. They won’t fly no matter how much wood is added.

    Gluing feathers to your wooden bird has the same problem.

    Only living things can fly. A wooden bird is not alive. A dead bird doesn’t fly.

    Etc.

    But wait! We now have machines that fly, although there will undoubtedly be those among us who would mince words and differentiate modes, saying things like soaring is not flying and the machines are only soaring. Similar mincing is done by some to differentiate animal behavior from human when it comes to mental activity.

  20. @Plantagenet

    ..@ Dodgy Geezer
    Yes…if YOU program them…

    Well, for a starter, so what? Programming them is part of making them, and the postulate was that we couldn’t make thinking machines.

    Furthermore, how do you think brains work? They are self-reproducing, and come with a hard-wired structure ready for self-programming – a BIOS.

    And that’s how you would make a thinking machine. Set it up for learning, and let it learn for itself. Which, incidentally, we are already starting to do…

  21. DAV —

    I don’t think your suggested analogy holds up. Even the most ancient of peoples know that arrows fly. The operation of a boomerang is definitely counterintuitive, but people know of it as well. Can you cite one truly credible reference that attempts to argue that sustained mechanical flight is impossible? If your analogy is relevant, please demonstrate that it is.

  22. Arrows don’t take off nor stay aloft like birds. Same for boomerangs, rocks, and javelins. Same word but clearly different actions, unlikely to be confused with what birds do.

  23. I have to admit that I personally think there *is* a miracle involved in the phenomenon of human consciousness, but I would not claim to be able to prove this, only to say that it is the only way I can square two apparently irreconcilable facts, those being:
    1) I am unmistakeably conscious, and take it as read that all other humans are as well;
    2) I can see no logical way in which a purely mechanical process can become conscious merely through adding complexity.

    After all, suppose we substitute the words “neurons” for “beads”, “brain” for “abacus”, and “chemical processes” for “mechanical movement”; at what point of additional complexity does a biologically alive clump of neural tissue become conscious and self-aware? What makes the deterministic chemical processes of neurons in the brain qualitatively different from the deterministic (as far as the abacus itself is concerned) mechanical positioning of beads on a frame, in that the former can achieve consciousness but the latter cannot?

  24. What makes the deterministic chemical processes of neurons in the brain qualitatively different from the deterministic (as far as the abacus itself is concerned) mechanical positioning of beads on a frame, in that the former can achieve consciousness but the latter cannot?

    Neurons are interconnected and can influence each other. The beads of an abacus don’t interact.

  25. Perhaps if we delved much deeper into the Hebrew language of the Old Testament (the Genesis 1 creation narrative, and elsewhere) re: the designations, or descriptive terms, which the Ruach HaKodesh utilized for rational/moral Homo sapiens; as well as what we see utilized for those “soulish” creatures comprising the higher-order mammals, if you will – i.e., see “neshama” & “nephesh” – perhaps then we might be able to come to grips with what WB is conveying here…at least in my “ignorant and unlearned opinion”??

    After all, there is an unbridgeable chasm, in kind, between those rational/moral specially-created “spirit creatures” – aka Homo sapiens, in whom the “imago Dei” inextricably, or permanently, remains – and the highest-order primate that’s ever existed here on earth! This is why I’ve always struggled with trying to biblically, philosophically, and thus scientifically reconcile the seemingly oxymoronic concept of theistic evolution with any purely naturalistic process of strict, “unguided” neo-Darwinian macroevolutionary philosophy.

    The fossil record, or nature itself – throughout planet earth’s continents – has always provided unassailable, consistent testimony to stasis, and that familiar Genesis refrain: “everything produces AFTER its own kind”?? Can’t wait for some helpful insight, or robust debate on this.

  26. Yes, neurons influence each other, but they are strict stimulus-response automata. Intelligence cannot emerge from it. Intelligence can be embodied in it. Not quite the same thing.

  27. “Intelligence cannot emerge from it. Intelligence can be embodied in it. Not quite the same thing.”

    I’m certainly willing to grant they’re not the same thing, but how would you explain the difference?

  28. @Senghendrake

    If you want to compare an abacus to an electronic computer, you only have the memory and register part. What is missing is the control part, the CPU.

    Computers add, compare values, store values in memory, read values from memory, and execute the instructions which tell them to add, load values, store values, compare values.

    Secondly, it is quite possible to let programs change themselves. This can be done at the CPU level, changing assembly instructions. One example is last century’s game of Core War. Higher-level programs can do the same. Check your Abelson and Sussman, Structure and Interpretation of Computer Programs.
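A minimal higher-level illustration of the point (a toy example, nothing like Core War):

```python
# A toy self-modifying program: the function rebinds its own
# name to new code generated at run time, so each call rewrites
# the behavior of the next call.
def step():
    global step
    src = "def step():\n    return 'rewritten'\n"
    namespace = {}
    exec(src, namespace)       # build the replacement code
    step = namespace["step"]   # the program changes itself
    return "original"

print(step())  # original
print(step())  # rewritten
```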

  29. I suggest E. F. Schumacher’s A Guide for the Perplexed. It clearly explains the four realms: mineral, plant, animal, and rational. Physics is restricted to the mineral, i.e. the inanimate. Contrary to slanders against vitalism, even the most basic living organism cannot be explained within a physics/chemistry framework.

  30. “But so is a lawnmower useful, yet no one would make the mistake of thinking it possessed of an intellect.”

    Maybe not an intellect, but I’ve used lawnmowers that had a mind of their own 🙂

    Your argument is really just the Argument From Personal Incredulity: You find it absurd to imagine a learning, thinking abacus, so one can’t exist.

    How do you know? Couldn’t you say the same about cells? One cell can’t think and no matter how many cells are added, the result is just a pile of cells interacting with one another according to the laws of physics. Consider the inverse situation: remove cells from a living, thinking brain – it would degrade and eventually fail altogether, suggesting that it depends on its physical construction to be able to think.

  31. “Your argument is really just the Argument From Personal Incredulity: You find it absurd to imagine a learning, thinking abacus, so one can’t exist.”

    One can reasonably have objections to the argument on the OP (in part, because it is not made completely clear that the difference is one in kind not in degree), but characterizing it as an “Argument From Personal Incredulity” is more a comment on the utterer than on the OP.

    There is a good deal of bad faith in this kind of comment: not only do they persist in misunderstanding the structure and thrust of the arguments, not only can they not show how human intellect can be reduced to computation without smuggling in by the back door that which it is intended to explain away, and without falling into some brand or other of incoherent eliminativism, they more or less explicitly acknowledge they have no idea how specifically human intellection arose, yet they posit that it can be done and that, given enough time, it will be figured out, with no justification whatsoever. It seems that the “Argument From Personal Credulity” is much better than the “Argument From Personal Incredulity”. Rarely in the land of the Internet does one encounter such a touching, and no doubt admirable, faith.

  32. So interesting that all you ultra-intellectual individuals (i.e., three-digit IQs) – as mine is rapidly approaching two digits, while “I” continue to take orders from the neurophysical “neurotwaddle” calling the shots in MY life – haven’t even touched upon the reality of the more-than-compelling research adduced in mind-brain quantum physics – specifically that of the “Orthodox/von Neumann” kind.

    Quantum mechanics, the “new physics,” long ago supplanted the “old physics” of Newtonian/classical physics (perhaps some 90 years ago now); yet here we find this very important, though futile, discussion still bogged down in the now “empirically invalidated” paradigm of reductionist/deterministic Newtonian physics (including Descartes & Laplace), through whose hopelessly myopic scientific lens “the hard problem of consciousness” remains a perpetually intractable conundrum.

    Yet this perennially recalcitrant conundrum of Homo sapiens’ consciousness (when viewed through a strictly reductionist/deterministic lens) completely disappears, or is beautifully resolved, under the rigorously established paradigm of mind-brain quantum physics, aka the “new physics.”

    Oh yeah: that isn’t my ignorant opinion either, but that of the now-retired, eminent “Orthodox/von Neumann” mind-brain quantum physicist, Dr. Henry P. Stapp, whose academic pedigree traces back through four Nobel laureates, including working with such luminaries as Wolfgang Pauli and Werner Heisenberg. Here’s a small sampling of his conclusory remarks made at a seminar in Paris (2013), in a lecture entitled “Quantum Theory of Consciousness.”

    (See also Dr. Stapp’s powerful [’08/09] essay, “Philosophy of Mind and the Problem of Free Will, in the Light of Quantum Mechanics”; a rather technical, and still-valid critique of the materialistic “physicalism” of John Searle & Jaegwon Kim.)

    * * *

    “Orthodox quantum mechanics provides a conceptual framework very well suited to studying this problem of the causal effects of the actions of an observer’s probing mind upon the brain that this mind is probing. Materialistic classical mechanics does not.

    “The classical-physics-based claim that science has shown us to be essentially mechanical automata has had a large impact upon our lives: our teachers teach it: our courts uphold it; our governmental and official agencies accept it; and our pundits proclaim it. Consequently, we are incessantly being told that we are physically equivalent to mindless robots, and are treated as such. Even we ourselves are confused, and disempowered, by this supposed verdict of science, which renders our lives meaningless.

    “We are now in the twenty-first century. It is time to abandon the mechanical conception of ourselves fostered by empirically invalidated nineteenth-century physics.

    “Contemporary physics is built on conscious experience, not material substance. Its mathematically described physical aspect enters as potentialities for future experiences. The unfolding of the future is governed by von Neumann’s mathematical laws, into which our conscious free choices enter as essential inputs.”

  33. @ Stephen J.,

    Well, consider the difference between a dead body and a living one. Both are sacks of chemicals, yet we cannot add energy to a dead body and make it living.

  34. Just offering the comment, not responding to anyone. A computer, like an abacus, always has a largest number it can hold. Whatever logic is used to produce negative numbers and fractions, a computer will also have a largest negative number it can hold, and a smallest fractional number it can hold or operate on. So in fact every possible computer can work with only a finite set of numbers, while we know there are many infinities of numbers. If you ask a computer to calculate the smallest number (fraction) greater than zero, it will come back with an answer. It will in fact be right about its own limitations, but completely wrong about numbers. An intellect knows there is no largest number, and no “first” real number greater than zero.
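The limitation is directly observable in IEEE-754 double precision, for example (the values printed are standard for 64-bit floats):

```python
import sys
import math

# Every machine float format has a largest value and a smallest
# positive value; past them, arithmetic gives inf or 0. The
# machine is right about its own limitations, but wrong about
# numbers. (math.ulp requires Python 3.9+.)
print(sys.float_info.max)      # largest finite double, about 1.8e308
print(math.ulp(0.0))           # smallest positive double, 5e-324
print(sys.float_info.max * 2)  # overflows to inf
print(math.ulp(0.0) / 2)       # underflows to 0.0
```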

  35. @Garbanzo Bean:

    “If you ask a computer to calculate the smallest number (fraction) greater than zero, it will come back with an answer. It will in fact be right about its own limitations, but completely wrong about numbers. An intellect knows there is no largest number, and no “first” real number greater than zero.”

    While I agree with the OP, this is no proof that the computer does not “know” numbers. For the simple fact that since it is provably true that there is no smallest positive real, a computer can very well spit out a proof of this fact — just check programs like Coq or Mizar. You have to add more (in essence, some variation of the OP) to establish a difference in kind.

  36. @grodrigues

    My point was not that computers do not know numbers (agree also with the OP that they are incapable of the act of knowing), but more that they exhibit clear and definite limitations that the human intellect does not.

    Coq and Mizar and HOL and the like are proof checkers; they do not and cannot spit out proofs. If you give them a proof in their own language, written by you, they will verify that you did not make a mistake. This is no better than a compiler.
