Think You Can Simulate A Brain? Think Again

A technician communicates with the future futurist Ray Kurzweil.

Since it is Silly Saturday, here are a few fun back-of-the-envelope calculations on simulating a brain. I’m drawing from the marvelous, must-read (go do it now) essay “The empty brain: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer” by Robert Epstein.

(Update: About what I mean by “simulating”, see the exchange with DAV below.)

Lots to mine from this article, many fascinating implications, which we’ll come back to in the future. For now, what about the idea that we can “simulate” a brain in the Ray Kurzweil sense of being able to “download” a man onto a chip? Can we quantify the scope of the problem?

Think how difficult this problem is. To understand even the basics of how the brain maintains the human intellect, we might need to know not just the current state of all 86 billion neurons and their 100 trillion interconnections, not just the varying strengths with which they are connected, and not just the states of more than 1,000 proteins that exist at each connection point, but how the moment-to-moment activity of the brain contributes to the integrity of the system. Add to this the uniqueness of each brain, brought about in part because of the uniqueness of each person’s life history, and Kandel’s prediction starts to sound overly optimistic.

Okay, that’s 100 trillion interconnections, or 10^13, times the number of active proteins (10^3) at each connection, and we’re at 10^16 degrees of freedom at a minimum. For each “moment” of action. (The basic “step time” unit is microseconds or smaller.)

And this is only in the brain itself, and doesn’t include the rest of the nervous system (which, in our metaphor, makes it sound like a separate entity) and its connections. Then add That Which We Do Not Yet Know about workings we should be modeling but aren’t, and can’t because, by definition, we don’t know what they are, and we’re probably at 10^20 (see inter alia “Blood exerts a powerful influence on the brain“). At the least. I’m only guessing. You can make your own guess. I’m doing all this on one cup of coffee. Mistakes will be made.

All this is happening in three dimensions. Proteins move. Chemicals swap electrons at the connections between synapses and between nerve cells and other cells in the body. And so on. This adds several more orders of magnitude. A wild guess here, which I’m happy to abandon upon cogent criticism, but I’d say, for fun, about 1,000 degrees per protein, though maybe up to a million. We’re up to 10^23 to 10^26. And this is on the low end. Think of it as A Very Best Case Scenario.

Now computers are (at this point still) made of transistors. How many transistors does it take to model the actions of one protein? Well, an Intel Quad-core + GPU Core i7, an everyday processor, has 1.4 x 10^9 transistors, and this is enough to do one protein. Not very speedily, but it can do it with some power left over. Is one i7 enough to do two proteins interacting? I’m not an expert.

What we’re after is the number of processors it takes to simulate those 10^23+ degrees of freedom. Say a billion transistors for each degree of freedom. That puts us in need of 10^31+ transistors, at a raw minimum, to fully simulate the organism which is a brain (and its connections). This simulation ignores vast areas of a human being, of course. But let’s pretend those areas don’t matter.
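As a sanity check, the running tally can be put through a few lines of Python. Every exponent here is one of the guesses above, not a measurement, and multiplying them out honestly the low end lands at 10^32, comfortably inside “10^31+, at a raw minimum”:

```python
# Order-of-magnitude tally of the guesses in the post (not measurements).
interconnections = 10**13          # "100 trillion" interconnections, as written above
proteins_per_connection = 10**3    # ~1,000 active proteins per connection
unknown_factor = 10**4             # That Which We Do Not Yet Know (takes us to ~10^20)
states_per_protein = 10**3         # 3-D motion and chemistry, low-end wild guess

degrees_of_freedom = (interconnections * proteins_per_connection
                      * unknown_factor * states_per_protein)

transistors_per_dof = 10**9        # "say a billion for each degree of freedom"
transistors_needed = degrees_of_freedom * transistors_per_dof

# These are exact powers of ten, so the exponent is the digit count minus one.
print(f"degrees of freedom ~ 10^{len(str(degrees_of_freedom)) - 1}")   # 10^23
print(f"transistors needed ~ 10^{len(str(transistors_needed)) - 1}")   # 10^32
```

Swap in your own exponents; with guesses this rough, everything interesting happens in the exponent anyway.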

I’ve mixed up the time component in there surreptitiously by speaking of modeling a protein. Probably this is another underestimation. Probably a laughable underestimation. It must be, because those proteins are made of finer stuff, all of which has to be taken into account. I wouldn’t be shocked if 10^100 (or more) is the right answer.

If we believe Moore’s “Law”, the number of transistors on a chip doubles every two years. We have a billion or so now and want to arrive at 10^31+. I make it about 73 doublings, which at two years apiece is roughly a century and a half. Maybe less if “quantum” computers fulfill any of their promises, maybe more depending on how badly I’ve botched the above calculations. All assuming Moore doesn’t break down and become logarithmic, which every single innovation in human history has done.
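The Moore’s-Law horizon is itself a one-line computation (assuming, generously, that the doubling holds all the way; note the 73 counts doublings, so at two years apiece the wait is about a century and a half):

```python
import math

current = 1.4e9     # transistors in the Core i7 mentioned above
target = 1e31       # the raw-minimum transistor count tallied earlier

doublings = math.log2(target / current)   # ~73 doublings needed
years = 2 * doublings                     # Moore's "Law": one doubling per two years

print(f"{doublings:.0f} doublings, about {years:.0f} years")
```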

(Incidentally, I’m betting Moore lasts only twenty years more, or even fewer, before the vigor is gone.)

Of course, that’s on one “chip”. We can string processors together and reach the goal faster. Right now we’d need 10^22 i7 computers linked up. That’s a big number.

So, plus or minus, it’s a century from now (or even two or three) and we might have the computer muscle, and the intelligence enough to figure out how to program such a monstrosity. No small thing, either, because many of the interactions we’ll have to model are quantum mechanical, and nobody in the world has any idea—as in NONE, even if the computers are quantum computers—of what actualizes potential states of QM objects. Are these potentia actualized one-by-one? Or is there coordination, which is to say a sort of entanglement, between some, a few, all of the elements? Not only do we not know this, I think we cannot know this.

Anyway, forget the insurmountable difficulty. We’ve got the thing. We switch it on and…

It still won’t work. “Brains”, which is to say the organisms (in their entirety) which are us, are not solely material. Our intellects are not just the physical stuff which makes us up. We are more than animated dust.

This sad finding destroys some science fictional concepts, but it invites new ones.

111 Comments

  1. DAV

    what about the idea that we can “simulate” a computer

    Why would you want to do that? Wouldn’t it be much easier to just build one?

  2. DAV

    I can just imagine the perhaps not very forward thinking technician in the above discussing the difficulties in making this room-sized computer small enough to be carried on the wrist. Think how difficult this problem is.

    Or the reverse argument which is being made here by Epstein: look at the complexity of the modern computer! It’s far too complicated with its billions of transistors with all of their interconnections to make one out of mechanical parts or vacuum tubes. Think how large it would have to be!

  3. Luis Dias

    Never stop an optimist from trying to do impossible things. You never know, they might actually be able to do it.

  4. Briggs

    DAV,

    That typo placed by enemies is instructive (so maybe they’re helpful to me after all). It would take a much larger computer to simulate another computer.

    Incidentally, arguments that others have underestimated progress in the past carry little weight, just as do the larger set of arguments of over-estimating progress. I use the word “progress” advisedly. Let’s all don’t forget that last premise about the non-material nature of our intellects.

  5. DAV

    Years ago, I was associated (peripherally) with a project funded by DARPA (then known simply as ARPA) to simulate the entire brain of a lobster. Worked fairly well IIRC though rather slow. This apparently is an outgrowth of that: http://www.cmu.edu/research/brain/

    At least they are actively researching the problem instead of sitting around pontificating on its impossibility. Opinion as Truth. I can just imagine a philoso-babbler in Ancient times declaring the impossibility of a machine which could soar through the air like an eagle. After all, how would you keep all those feathers attached? Look what happened to Icarus.

    There seems to be this strange all-or-nothing attitude permeating the atmosphere amongst some here who apparently are vehemently against the possibility of developing an understanding of the brain and the use thereof.

    Reminds me of a car ad I saw years ago with two boys in a room. The one not looking out the window identified the specs of a car coming down the road in great detail from the sound of its engine. The boy looking out the window said: “Yeah, but what color is it?” as if it were a complete put-down. That all-or-nothing attitude on display.

  6. DAV

    It would take a much larger computer to simulate another computer.

    Perhaps true, but the hidden assumption is that only a simulation would be possible. There’s always the possibility of building a functional equivalent, where the line has been crossed from simulation to the real deal. However, for the all-or-nothing crowd it is indeed impossible.

    Let’s all don’t forget that last premise about the non-material nature of our intellects.

    It’s conjecture and nothing more. Show that it is an actual requirement.

  7. Briggs

    DAV,

    You bring up an excellent point I should have better highlighted. I’d guess that that lobster project failed. I’d bet a lobster’s brain was not simulated. Not in the sense I’m using the word.

    Simulations can be used in the sense of analogies, which is where I’m sure the DARPA project succeeded, or they can be (in a sense) replacements, mechanical artifices, which I’m all but certain was not the case with the lobster.

    We can simulate the atmosphere in the sense of an analogy, for instance. Hence weather and climate forecasts. But we cannot reproduce (in a simulated-replacement sense) the entire atmosphere in a computer. Necessarily in analogy-simulations, much is left out. Same with a brain, either a lobster’s or a man’s.

    In other words, we won’t be “downloading” our brains onto a computer. Especially since, as I keep pointing out, parts of us are not downloadable no matter how powerful a computer we develop.

    UPDATE: I just saw your other comment. You can see what I mean about one computer reproduce-simulating another computer now.

  8. DAV: Okay, I won’t say it’s impossible. I’ll just ask “why”? Do we really want to live in a quantum computer and have only thought for existence? Or could almost all of us live in the quantum computers while a few caretakers watch over the computers? Maybe we can, but again, why should we?

    (As for simulating the brain of a lobster, that might actually be useful. Humans seem very good at sitting in a pot of water and happily singing away as they are boiled by other humans.)

  9. Sander van der Wal

    A computer can simulate a different computer on many levels. Probably the smallest simulation is on the machine-language level, where machine instructions from the simulated computer are translated to machine instructions of the simulating computer.

    But you can also simulate a computer at the gate level, or at the even lower transistor level. That is much harder to do, and the simulators will be a lot bigger and run much slower than the real computer that is simulated.

    The analogy is clear. Simulating a brain at the protein level or at the neuron level will be very much harder than simulating the brain at levels built on top of the neuron level.

    The problem is that we do not know how the brain operates on those higher levels.

  10. JohnK

    Matt makes many excellent points here. In the piece, I think that Matt makes far too little of the connection of ‘the brain’ to the rest of the body. That is undoubtedly due more to limitations of word space than of Matt’s thought.

    Anyway, for instance, there is a direct physical link between ‘the brain’ and the immune system. (I put ‘the brain’ in quotes here because this physical linkage begins to call into question where exactly ‘the brain’ ends and begins):

    http://neurosciencenews.com/lymphatic-system-brain-neurobiology-2080/

    But inside this exquisite connectedness may be a foreboding of something very worrisome. How much of that connectedness is between not just any brain and any body, but our actual particular brain and our actual particular body?

    I’m not the only one to worry a lot about ‘brain transplants’. Some big-time neuroscientists also think that isolating our actual, particular brain from our actual, particular body would lead to insanity that was painful beyond imagining. “There are things much worse than death,” is a typical quote.

    Things much worse than death.

    So, if it’s even partially true that particular ‘brains’ are connected to particular bodies, multiply Matt’s numbers by a lot, again.

    However, I point out to anyone interested that it is NOT a matter of Catholic faith that the human mind has an immaterial aspect. That is, entirely, a theological and/or philosophical supposition, not a dogmatic one. Thomists in particular often appear to think that classical (or transcendental, or esse-essence, etc.) Thomistic theology, or even Thomistic philosophy, is equivalent to the professions and worship of the Church. However often they say it, it is still true zero times.

    I also point out that the mere philosophical supposition that the human mind has an immaterial aspect does not in any way require the human mind to have a spiritual aspect in any sense professed by the Catholic Church. That is, to journey from ‘the human mind’ to ‘the soul’ requires many additional suppositions and arguments.

    Indeed, if one was not aware of the predilection of the Greeks, and through them, the Fathers, to dehistoricize/idealize Mind, one might naively suppose that the true argumentation goes entirely the other way: since the Church professes that man has a soul, thus an identity that survives the flesh, then ….

    (And if truth be told, ‘the Greeks’ is itself a reification. Homeric-era Greeks would find the immateriality of Mind just as baffling, even repugnant, as would the modern-day cognitive scientist at the local university).

    In sum, the Catholic Church professes “how to go to heaven, not how the heavens go.” There is no Catholic profession per se about ‘mind’, but only about Him, who is like us in all ways but sin.

    My mind can never know my body, although
    it has become quite friendly with my legs.
    — Woody Allen

  11. Bob Lince

    DAV says: “I can just imagine a philoso-babbler in Ancient times declaring the impossibility of a machine which could soar through the air like an eagle. After all, how would you keep all those feathers attached?”

    Not your best argument. For all the advancements since ancient times, an airplane is still not a bird. And not just for the feathers; it can’t take care of itself, as can a bird, let alone reproduce itself.

  12. DAV

    Sheri:
    Do we really want to live in a quantum computer and have only thought for existence?

    I don’t know. Never did it so have no reference point for an opinion. Not sure how it might differ when compared to the so-called “life after death”. No brain there. Would thought be possible at all? Many find it attractive though.

    There were more than a few who were accused of being initial failures of the project. They were called Lobster Brains.

    Matt:

    In other words, we won’t be “downloading” our brains onto a computer.

    Functional equivalent is good enough for me. If it looks, quacks and acts like a duck and tastes just as good, then it is a duck for all practical purposes. Anything else seems a picayune stance.

  13. DAV

    Not your best argument. For all the advancements since ancient times, an airplane is still not a bird. And not just for the feathers; it can’t take care of itself, as can a bird, let alone reproduce itself.

    I said “build a machine that soars through the air [much as] an eagle”. At no time did I say “build an eagle”. The point was about pontificating what is possible.

  14. DAV: Okay, there’s no way to know. However, once the genie is out of the bottle, there’s no going back. Maybe humans just have to try and free the genie—I don’t know. Look what happened when we learned to split the atom. Perhaps a bit of conjecture on what might happen and what can be done would be good?
    Differs from life after death because life after death is not a creation of human beings, though it seems interesting that even scientists think life after death may be possible with a computer.
    “Lobster brain” seems appropriate.

  15. Yawrate

    You nailed it…we don’t even know what we don’t know. We hear it said again and again online that we’re maybe 20 years from downloading a human mind into a machine. My view is that we’re nowhere near that stage. And it may be impossible…we don’t even know what we don’t know. But we may be just a few years away from a flying car…

    http://www.terrafugia.com/

    http://www.aeromobil.com/

  16. Briggs, I think your best point was that at the end:
    “It still won’t work. “Brains”, which is to say the organisms (in their entirety) which are us, are not solely material. Our intellects are not just the physical stuff which makes us up. We are more than animated dust.”
    An atheist, David Chalmers, has called consciousness (which is what we associate with “mind”) “the hard problem”, although he does posit a computer-like mechanism for thought. Roger Penrose (who is not, I believe, a theist) says (and offers proofs) that mind and consciousness cannot be simulated by algorithmic processes, i.e. computers.
    John K…I’m not sure where your Catholic theology about mind, consciousness and the soul is coming from. I believe it disagrees with what the Catholic Catechism says, what St. Augustine, St. Bonaventure and St. Thomas have taught us, but I’ll have to look at what you said more carefully and go back and reread the authorities.

  17. Joy

    Imagination, innovation or man’s knowledge in the year 2016 aren’t the point of objection to the notion of making a brain. Man will continue to tinker and innovate. When they invent, people will still love them for it.

    This has to do with recreating original creation. It’s quite something else. I think it’s logically possible only to produce something which is a closed system. Something like an annex of a brain, lacking a dimension of a brain. Like an external hard drive. Whatever material they use, the logical problem is the same.

    Dav, in the plane example, it wasn’t intended to be an eagle.
    Planes don’t lay eggs or nest on cliffs. It’s not even a good functional simulation.
    (Dav, There’s nothing like the real thing.)

    Man drew from nature for his own invention; it inspires art, science and innovation. However, man has never succeeded in reproducing a new organism from scratch. The normal process of cell reproduction is manipulated, cells are altered, but the creation of life itself is out of our power. It requires power that is not there to be discovered; not that we shouldn’t but that we won’t because we can’t. The brain problem, which is accepted for the purpose of the argument as being separate, as Briggs says, is the same problem as reproducing a single cell.

    I have heard of the experimentation going on in the UK or US where a cell is being produced artificially. One cell. It is referred to as an information machine. Others will know more about this. However an information machine can only produce an output that is included as part of its input. It does as it is designed to do in a linear way. It cannot learn or think or innovate because whatever it does has to be ‘programmed’ beforehand, obviously.

    So, it’s a computer made of different material. It is a closed system. It has no life, not even that of a plant.

    They haven’t managed to copy a knee joint yet that will function as beautifully as the real thing. They have a very impressive alternative though. Man can only borrow from nature’s example. Man cannot make nature, only manipulate it.

  18. DAV

    Sheri:

    I’ve nothing against conjecture. It’s the first step in solving a problem. The difficulty as I see it is that too many think their conjectures are the real answers and have no compunction when declaring them as such.

  19. 1) Your process reminds me too much of the genesis of the “Drake Equation” for it to have any credibility. Sorry.

    2) Why simulate a brain? surely what is really at issue here is what it takes to host a personality and/or an intelligence.

    3) isn’t your tech sergeant background in signals processing? and, if so, wouldn’t you want to agree that most of the human brain is given over to signal processing and body management – functions that we very nearly have replacement/replicants for now? (“want”, because this is obvious but not actually established)

    4) since we lack any understanding of what processes lead to either a predictable personality or an unpredictable intelligence we can’t reasonably guess at what it takes to host either or both. Perhaps a smart phone would do?

    Well, everything was all fine and scientific until the end of the post, Briggs, and then the silly, ridiculous religiosity made its way in there.

    JMJ

  21. DAV

    Joy,
    in the plane example, it wasn’t intended to be an eagle. … It’s not even a good functional simulation.

    Wasn’t intended to be a functional simulation of an eagle. We do though have devices which soar like an eagle. It was a parable intended to illustrate the foolishness of declaring the impossibility of things and using words like “never”.

    So,[some device is] a computer made of different material. It is a closed system. It has no life, not even that of a plant.

    Is that important? We don’t really know what life is — at least nothing more than a functional description. We aren’t in a position to proclaim what is or is not possible in the absence of something we find difficult to even define.

    Is a virus alive? It’s mostly a protein polymer. One that can produce more things like it simply by pasting material together. We have no understanding how that works. Plenty of conjecture though. At least one of those is literally out of this world requiring supernatural intervention.

    It cannot learn or think or innovate because whatever it does has to be ‘programmed’ beforehand, obviously.

    We are preprogrammed, too. Where did you learn to see and hear? Obviously preprogrammed doesn’t mean “always needs programming”. OTOH, the brain seems capable of rewiring itself so maybe it does mean that.

    Pointing to something produced by current technology and saying it will always be this, er, primitive is somewhat shortsighted.

  22. Yawrate: Yes!! It’s about time!

    DAV: Agreed.

    JMJ: Don’t worry. Your brain can live among those who find religiosity distasteful. Think of it as the ideal afterlife. 🙂

  23. Anon

    JMJ–then how would you describe your intellect?

  24. Joy

    Dav,
    Yes humans use nature to inspire innovation. It would not be short sighted to tell someone not to bother trying to make an entire living brain. The prospect is to make the very thing that made the thought to make the thing.
    “We aren’t in a position to proclaim what is or is not possible in the absence of something we find difficult to even define.”
    That is my point, except something can’t be both impossible and possible. It must be one or the other. Since something isn’t even defined it can’t be possible because that would be nonsense.

    We are pre-programmed? by what?
    If a person doesn’t see until a certain age they find that they cannot make sense of the visual impulses when they regain eyesight. So we are born with the equipment and learning is a part of being human.
    Babies are not bombarded with clear vision when they are born, nature gives them just enough information to match their sensitivity.

    Could the envisioned invented brain invent another new brain and repeat ad infinitum?
    Of course a virus is alive. It can be dead and used in a vaccine. It can’t be dead if it wasn’t alive in the first place. Cell metabolism, respiration and reproduction require energy, and movement of molecules and of the organelles of the cell requires a force; where did that force come from? Just random motion and time? (I’ve just realised that Brownian motion can’t be random.) Why do even simple organisms avoid death?

    With respect to the possible and impossible we are talking about a different dimension. Briggs’ prediction is fair regarding the decelerating amount of processing power we can expect humans will achieve. The reason it is not unimaginative or, as you say, shortsighted is that this is not to do with pure mechanics. If the mind isn’t understood, it can’t be reproduced, only mimicked. Even thinking about the moments before a muscle contracts: the conscious thought is in the brain, and no one knows how that thought is held until the time at which you decide that the muscle will actually move. Even if you do it in really slow motion. There’s no handle on where or how that happens. It is known that the motor cortex is the source of the motor neurone and that’s that. The answer given is that there’s another association centre and that’s where it all happens. It’s not enough to think it. It’s not the understanding of the fuse mechanism, which is more akin to the nerve than the electric impulse, but how movement and thought are generated. Nobody can find that. Experience, whether positive or negative: that is what generates innovation and creative power in humans, and nobody knows what it is, but it isn’t material.

    There is neuroplasticity but this isn’t understood either. It’s all worse than Briggs points out as well, because the brain is not isolated functionally, only anatomically, from the system.

  25. Ye Olde Statistician

    [E]ven if the brain can be modeled by a mechanistic system, then to recognize its own Gödel sentences as true would require the mind to be something more than the brain. — This Gödel is Killing Me
    http://tofspot.blogspot.com/2010/06/this-godel-is-killing-me.html

    Hence, the distinction between simulating a brain and simulating a mind. While brilliant engineers may one day simulate the former with a mechanism, they cannot simulate the latter because it is not a mechanism. All the engineering in the world cannot prove the continuum hypothesis within ZF set theory. So ‘impossibility’ has a different ring in math and computation than it does in engineering.
    +++
    Briefly thus: the intellect knows by grasping the form or essence of a thing. If things did not have forms, science would have nothing to consider. You cannot have a science of this cup on my desk, which is a concrete particular; but one may have a science of cups, an abstract generality. Now for matter to take on a form is just to be that thing. Matter that takes on the form of a cat just is a cat. But when I know that cat, the form of that cat is in the cat, and also in my mind. If my mind were matter, then some part of the mind would just be a cat. But this is absurd (I hear you say). Well, certainly, and that is why the mind cannot be material. And in particular, it cannot be brain.

  26. Leo

    Your enemies are everywhere.
    100 Trillion is 100 x 10^12 = 10^14

    This doesn’t affect the argument.
    The numbers are so vast that a factor of 10 is not that much.

  27. DAV

    While brilliant engineers may one day simulate the former with a mechanism, they cannot simulate the latter because it is not a mechanism.

    We struggle with the definition of mind, thus are in no position to proclaim that it is a mechanism distinct from the brain itself. Destroying parts of the brain can lead to changes in personality. There is no evidence, and only conjecture, for the mind being anything more than a manifestation of how the brain is put together.

  28. DAV

    We are pre-programmed? by what? … we are born with the equipment and learning is a part of being human.

    The ability to see is largely set in the brain (for humans anyway). A large part of that is normally present at birth. Learning to make sense of what is provided doesn’t mean it wasn’t built-in. You see colors. How did you come about doing that? I’m not talking about giving them names or describing them to someone else.

    So in terms of the mind what are we actually doing when we learn? Rewiring the brain?

    Learning is not just a human thing. Animals learn, too.

    Briggs’s point, echoing Epstein and others, is that the mind is different from and independent of the brain or its wiring. There is no evidence for this but plenty of evidence that altering the brain changes the mind. Given this, the question becomes: what would be transferred if we could download a mind? If it’s not the result of wiring then the transfer would appear impossible. Declaring it impossible is a reach. We don’t know enough. No, it’s not really fair.

  29. Ye Olde Statistician

    Destroying parts of the brain can lead to changes in personality.

    No surprise to an Aristo-Thomist, the human substance is a synolon, a compound of form and matter. So ailments of the one will affect the other. A disease of spirit can also affect the body, as in psychosomatic illness. Destroying parts of the hand will lead to changes in piano playing. That doesn’t mean piano-playing resides in the hands as such. (I have hands, but cannot play the piano.)

    [we] are in no position to proclaim that [mind] is a mechanism distinct from the brain itself.

    The point made by Lucas is precisely that mind is not a “mechanism” and cannot logically be a mechanism, due to Gödel’s incompleteness theorems.

  30. La Longue Carabine

    The thing about simulations is the time between “renderings”, during which all the computation is done. If the entire universe were a simulation, the time needed to compute the next state would be, as best as I can estimate, a “long time.”

    But the denizens would be unaware of that time because their “experience” would be the rendition itself, and they would be unaware of the computation time. So, computational performance is unimportant — as long as you don’t mind being long dead before the next frame rolls out of the printer.

    For JMJ’s benefit, just let me say I don’t believe God is playing some kind of video game with our lives. This “singularity” stuff is just mildly entertaining.

  31. Rich

    “Numbers are not material. Therefore computers can not do arithmetic.”
    Perhaps the arithmetic isn’t done until a person interprets the pattern of pixels on the screen. Can computers talk? Can parrots?

  32. DAV

    cannot logically be a mechanism, due to Gödel’s incompleteness theorems.

    So a limitation of mathematics and maybe philosophy in general or perhaps just our ability to apply either is an accurate description of the universe outside of mathematics? How is this not an example of Reification?

  33. Sander van der Wal

    Most humans will not be able to see that a Gödel statement is true. Some of them might figure it out by following the logical argument, but even then most of them won’t see it.

    Which makes AI both much easier to achieve, and a waste of time.

  34. G. Rodrigues

    “Therefore computers can not do arithmetic.”

    Exactly. Only rational beings can do arithmetic, in the relevant sense of “doing arithmetic”. We can do it with the aid of computers or an abacus, but it is we that are doing the arithmetic, not computers or the abacus.

  35. DAV

    makes AI both much easier to achieve, and a waste of time.

    “Waste of time” is a judgement call and greatly depends on the purpose.

    The original purpose of AI was to demonstrate the viability of proposed mechanisms. It still has that purpose but the term “AI” has been usurped to hype things which are more accurately described as AI-like.

    Who knows? An intelligent machine might be useful.

  36. DAV

    but it is we that are doing the arithmetic, not computers or the abacus.

    A fuel control in a car is doing the arithmetic (the sequence of calculations); not the driver. It’s irrelevant how the controller got the ability. The microwave cooks the food; not the person initiating the cooking.

  37. Ye Olde Statistician

    a limitation of mathematics

    and of any mechanism that operates by means of logic and mathematics.

  38. DAV

    and of any mechanism that operates by means of logic and mathematics.

    Only if its purpose is to prove all mathematical truths. Outside of that there is no limitation implied by Gödel.

  39. Ye Olde Statistician

    The microwave cooks the food; not the person initiating the cooking.

    Without the person, the microwave oven is simply a pile of plastic, glass, metal, wiring etc. What makes it an “oven” is the intent of its user; its telos, as it were. Likewise a ‘computer’: the mug of hot tea on my desk is a computer executing the program ‘sit there and cool off.’
    http://users.ecs.soton.ac.uk/harnad/Papers/Py104/searle.comp.html

  40. DAV

    The microwave cooks the food
    Without the person, the microwave oven is simply a pile of plastic, glass, metal, wiring etc.

    Which is entirely irrelevant. The microwave is still doing the cooking. The person could sit with the food outside of the microwave and it won’t get cooked. Do you claim to have cooked the meal in a restaurant by merely placing an order? Not much different than ordering a microwave to do the cooking.

  41. Joy

    Dav,
    We are preprogrammed? Preprogrammed implies a mind behind the programming.

    This is a distraction but if a person grows up without seeing light and then regains eyesight they have no understanding of what the information or sensation is and how to process it. It is traumatic. This is not about describing things or any other cliché about blindness, not about restoring what was lost. How do I see colour? I don’t know, you’re the scientist! In my case the science says I’m colour blind. I fail the tests for colour. Yet I see colour. This is why I observe dogs picking certain colours and believe they’re discerning it even though ‘science’ says they can’t.
    Monet observed a colour is made to look a certain hue because it is next to another. It’s hard to find a perfect pink, neither purple nor peach but in between. True blue is even rarer. Meconopsis betonicifolia is the best! That is one of the wondrous things about flowers.

    The visual cortex in the rear of the brain is striated in a way that exactly represents the impulses mapped out as they reach the brain from the optic nerve, which decussates and the image is reassembled in the brain. This is what I mean by equipment. That is just hardware. Even the automatic reassembly is not hardware.
    Having said all that, cameras are easy. Sensors of all types, though the weakest links in electronic items, exist, and yet they are not truly sensing in the way that a conscious being is. They are gadgets that reach certain thresholds and switch on or off. The bimetal strip in the kettle isn’t feeling the heat! To make a brain means making consciousness. If it’s all about making a giant computer then there’s no reason to assume that won’t happen. It won’t be a brain and it won’t have a mind.

    “Declaring it impossible is a reach. We don’t know enough. No it’s not really fair.” I just see a contradiction of approach there. Faith in mankind to do what God didn’t do for sure.

  42. Rich: Perhaps a simulation of a brain is not a simulation of a brain until a person interprets the pattern of pixels on the screen.

    Sander: Well said!

    DAV: “Who knows? An intelligent machine might be useful.” Unlikely at this point—it might be “smarter” than its creator and that would be totally unfair. It would have to be mediocre or it would have to go.
    It is the mop cleaning the floor, not the person pushing it, right? YOS obviously disagrees. Of course, if we are looking at the current process only, it doesn’t matter. However, when my niece said she didn’t need to learn math because calculators did it, we pointed out that if the calculators fail, it takes a human being doing the math to make a new one. Humans are only important if you want more or a rebuild of the item. Then, they are vital. (As for ordering in a restaurant, I don’t claim to have cooked the meal, but I did cause it to be cooked. So I cause the microwave to cook. I am the cause, but not the means to the end.)

  43. DAV

    Joy,

    Preprogrammed implies a mind behind the programming.

    Not really. It could mean simply built-in; a default setting perhaps. I didn’t intend to imply intelligent design.

    This is a distraction but if a person grows up without seeing light and then regains eyesight they have no understanding what the information or sensation is and how to process it. It is traumatic

    I understand. The same is true of language to some extent. People that begin to use a second language sometimes can’t hear certain vowel sounds — sometimes even in their native language. The city where I grew up has a unique pronunciation of the word “downtown”. Some don’t hear the distinction and others can’t duplicate it.

    There seems to be a time frame for learning some things and attempts to learn later are less than perfect. Which is interesting. The brain seems to be more malleable during certain times.

    When you are born, you can see (assuming no abnormality) but you aren’t adept at controlling things like focus and it takes a while to know what you are seeing. Learning to see well and parse a scene is what is meant by learning to see. The seeing is built-in, though.

  44. DAV

    YOS obviously disagrees

    YOS is confusing responsibility for an action being done and the doing of the action.

    Some of it has to do with the ambiguity of natural languages. Surprising, though, coming from a person who is a stickler for word usage.

  45. JH

    A long, long time ago I had an operation under general anesthetic. I somehow rebooted, feeling the tug of wooziness initially, without any memory, nothing absolutely nothing, of what had happened to me after being wheeled into the operation room. Some chemicals mysteriously seized my consciousness or awareness or perception, whatever it is exactly. Perhaps, the mystery would be solved in the future, unfortunately, not by me. (Einstein’s everyone-is-genius is a white lie.) And I hope the supply of
    curious minds wouldn’t never be cut off by a dead end of “God-did-it.”

  46. JH

    And I hope the supply of curious minds would never be cut off by a dead end of “God-did-it.”

  47. JH: Sounds a lot like sleeping—wake up woozy with no memory of the last six hours.

  48. Joy

    JH,
    In what way does belief impinge on science?
    Science and faith deal with different questions. There is no conflict of interest.
    I’m picturing people working in R and D who say “Oh whatever, God did it.” and never get a thing done.
    The curiosity is at the why. Atheists don’t believe in why. There are no reasons.

    Medical science is a classic example of where the curiosity is utterly lacking on the atheist side. In particular with regards to brain function, anaesthetic and consciousness.

  49. Ye Olde Statistician

    And I hope the supply of curious minds would never be cut off by a dead end of “God-did-it.”

    Or by cries of “It just is!”

    [They say] “We do not know how this is, but we know that God can do it.” You poor fools! God can make a cow out of a tree, but has He ever done so? Therefore show some reason why a thing is so, or cease to hold that it is so.
    — William of Conches, 12th cent.

    The microwave is still doing the cooking.

    It is what is known as an instrumental cause. Instruments are powerless to act in and of themselves. One may as well say ‘the piano played the concerto’ or ‘the gun shot the victim.’ Hence, ‘Bob cooked the food using a microwave (or some other means).’ What the microwave oven does is emit microwaves generated by electrons in a magnetic field.

    A fuel control in a car is doing the arithmetic

    Do you mean a fuel injector? A carburetor? A tank gauge? But like a computer combining voltages or flipping switches, these mechanical acts are ‘arithmetic’ only because we say so. We can model the materialistic behavior of the instrument with mathematics, but the semantics is not in the syntax; otherwise you would know what “H” meant just by seeing it.

  50. DAV

    JH,

    Nice link. Actually feedback has been used in the past. The Kohonen network is an example. It can learn to recognize patterns on its own. It’s called unsupervised learning, as opposed to feeding it the answers along with the training data (supervised learning). The brain appears to be nothing more than a whole lot of interconnected sub-networks that are really recognizers.

    Currently, artificial networks don’t do much more than memorize the essence of patterns. As building blocks, though, that may be the only thing necessary, combined with ever-increasing levels of recognition. Douglas Hofstadter covered this in The Mind’s I and Gödel, Escher, Bach. There are also networks that follow progressive state changes of the inputs. I’ve been playing with one called FANN.

    Artificial nets are unlikely to come close to being brain equivalents until they are massively built into the hardware. Doing that might even require a fundamental change in the way the hardware is constructed to make it small enough. Speed though is probably not important if enough parallel operations can be done.
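    DAV’s point about unsupervised learning can be sketched in a few lines. The following is a hypothetical, minimal Kohonen-style competitive learner in Python (not FANN, and far simpler than any real network): units compete for each input, and the winner moves its weights toward that input, so clusters emerge with no answers supplied.

```python
import random

def train_som(data, n_units=2, lr=0.5, epochs=50, dim=2):
    # Start each unit with a random weight vector.
    random.seed(0)
    units = [[random.random() for _ in range(dim)] for _ in range(n_units)]
    for _ in range(epochs):
        for x in data:
            # Competition: the unit closest to the input wins.
            winner = min(range(n_units),
                         key=lambda i: sum((units[i][d] - x[d]) ** 2
                                           for d in range(dim)))
            # The winner nudges its weights toward the input; no labels used.
            for d in range(dim):
                units[winner][d] += lr * (x[d] - units[winner][d])
    return units

# Two obvious clusters; the units settle on them unsupervised.
data = [(0.1, 0.1), (0.15, 0.05), (0.9, 0.9), (0.95, 0.85)]
units = train_som(data)
print(sorted(units))
```

    After training, one unit sits near each cluster, even though the learner was never told which points belong together.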

    I had dental surgery once and passed out from the Valium dose I was given. I remember closing my eyes and when I opened them again the clock had jumped 40 minutes. The time between is a black hole in my memory. I think this is as close to experiencing death as we can come without actually dying.

    I don’t think the God-did-it crowd will lack imagination and curiosity. The Church kept and protected a lot from the past during the era following the disintegration of the Roman empire.

  51. DAV

    Sorry, I said FANN but really meant RNN (Recurrent Neural Networks). I’ve been using both. Maybe I was on a Valium trip or got lost in my coffee buzz.

  52. DAV

    instrumental cause. Instruments are powerless to act in and of themselves.

    Sorry but you are being silly. The microwave is doing the cooking. English is ambiguous and words have shades of meaning.

    The amount of involvement is usually the guide. Playing the piano and mopping the floor require intense involvement so it’s reasonable to assign the action to the mopper and player.

    If OTOH, you merely initiated the action by handing the food to your microwave slave and pushing the “cook this” button then you did not do the cooking; the slave did while you went off and did something else. But then, you might be one of those dummies who stand around and stare at the microwave while it is cooking. In that case, you may feel justified in claiming the credit much like the football fan claims credit for the team win (“WE did IT” and all that).

    Do you mean a fuel injector? A carburetor? A tank gauge?

    There have been a lot of changes in the way car engines operate since the first wind-up ones. Try to keep up. Today most are computer controlled. I was referring to these. And yes, these are doing the computations. There is no human involvement beyond the initial instruction on how to do them.

  53. DAV’s comment reminded me of the problems with self-driving cars—they follow the rules of the road while humans may not. It may be possible to program computers to interact in a way that looks human, but throw a human in the middle and all bets are off. So far, computers can’t just throw out rules because they don’t like them, decide one reward is better than the other, or, as in the case of the cars, anticipate a driver making a completely unexpected move such as not slowing down when the self-driving car signals a lane change and makes it, thus hitting the other vehicle that did not slow down and allow the lane change. That seems to be the biggest challenge: anticipating that which is completely outside of the norm.

  54. Ye Olde Statistician

    Instruments are powerless to act in and of themselves.

    The amount of involvement is usually the guide. Playing the piano and mopping the floor require intense involvement so it’s reasonable to assign the action to the mopper and player.

    How much “involvement” is necessary before an instrument ceases to be an instrument? How is “involvement” measured? If I lift a bale using a lever does the lever do the lifting or do I do the lifting using a lever? What if I switch to a block and tackle: am I less involved and does the block and tackle lift in a greater sense than the lever? In what sense are you privileging an organism’s muscle movement over his use of an artifact? The number of muscle groups involved? Fewer are needed to push a button than to lift a bale with your own arms, but this seems an odd reason to shift the autonomy. A microwave oven is not even a microwave oven unless humans have so designated and used it. Otherwise all it does is emit microwaves into a Faraday cage. Whether it cooks anything depends on the intentions of its user.

    There have been a lot of changes in the way car engines operate since the first wind-up ones. Try to keep up. Today most are computer controlled. I was referring to these.

    You had written “A fuel control in a car is doing the arithmetic,” and I was not sure what you might have meant by “a fuel control.” Are you talking about something controlling the fuel/air ratio? But I see you were only referring to computers doing arithmetic again. Why fuel control? What is it “adding”? If you mean voltages or flips, these are arithmetical only because we have designated them as arithmetical. And we have done that only because the math is a good model for the physical actions of the material object. You may as well say that a camera “sees” or a microphone “hears” or that a graduated cylinder “measures rainfall.”

    And yes, these are doing the computations. There is no human involvement beyond the initial instruction on how to do them.

    Usually one sees an effort to reduce human involvement to mere mechanical actions, so it is nice to see you distinguishing and elevating the participation of humans as distinct from the mechanical actions of the instruments they use. But all the machines are doing is accumulating counts, turning wheels, combining voltages, etc. as the case may be. What these acts mean, the semantics, must come from outside the syntax of physical manipulations of material substances.

  55. DAV

    Sheri,

    Reminds me of the time in the early 80’s I took a trip down to Langley Research Center in Hampton, VA to drum up business. They were working on the then experimental CAT III approaches with computer controlled landings. They said the biggest problem they had was with the safety pilots wresting control from the controller, all the more so because the controller was far more accurate than the human pilots.

    Didn’t get any new business but did get to climb all over the 737 test plane. So, not a total loss. Fun fact: the 737 and DC-3 were the best engineered and still the most widely used aircraft in the world.


    People in WY slow down when someone signals a lane change instead of speeding up to block them? Wow.

  56. DAV

    YOS,

    I’m curious. If there was some failure where the food wasn’t cooked properly would you be just as quick to lay the failure on the operator as you are in assigning credit?

    If not, how are you different from all the other credit grabbing/blame spreaders out there?

  57. Ye Olde Statistician

    would you be just as quick to lay the failure on the operator as you are in assigning credit?

    Blame? Credit? What are you talking about?

  58. JohnK

    Wanted to enter into the record something about large-scale simulations that I had never considered. They may not be easily replicable. When these researchers tried to replicate a simulation of fluid dynamics, they found that changing the CPU, or even mere library versions, changed the simulation.

    “In an iterative linear solver, any of these things could be related to lack of floating-point reproducibility. And in unsteady fluid dynamics, small floating-point differences can add up over thousands of time steps to eventually trigger a flow instability (like vortex merging).”

    As they note, “computational science and engineering lacks an accepted standard of evidence”.

    “When large simulations run on specific hardware with one-off compute allocations, they are unlikely to be reproduced. In this case, it is even more important that researchers advance towards these HPC applications on a solid progression of fully reproducible research at the smaller scales.”

    The original paper on Arxiv is here.
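    The floating-point point JohnK quotes is easy to demonstrate. This is a generic illustration, not code from the paper: floating-point addition is not associative, so any change that reorders operations (a different CPU, compiler, or library version) can change the rounded result, and over thousands of time steps those differences compound.

```python
# The same three numbers summed in two orders give different answers,
# because floating-point addition rounds after every operation.
a = (1.0 + 1e16) - 1e16   # the 1.0 is rounded away before the subtraction
b = 1.0 + (1e16 - 1e16)   # the large terms cancel first, so the 1.0 survives
print(a, b)  # 0.0 1.0
```

    An iterative solver performs millions of such additions; reorder any of them and the trajectories of two “identical” runs can slowly diverge, which is the vortex-merging instability the quote describes.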

    I found it not only amusing, but also revelatory, that both Jersey McJones and Bob Kurland – who could not be further apart on some other things – think that making the human mind (partly?) immaterial is a ‘religious’ move.

    Jersey can relax: even if the human mind is completely and utterly and provably immaterial, it does not follow that Jesus is the Christ, is true God and true man, and died for our sins. And Bob can relax, too: even if the human mind is completely and utterly and provably material, it does not follow that Jesus is NOT the Christ, is NOT true God and true man, and did NOT die for our sins. “But who do you say that I am?” [Mk 8:29] is still completely on the table. The truth that “Jesus is Lord” does not wait upon an answer to some philosophical point about the nature of the human mind; becoming true or no, like Schroedinger’s cat, depending on the Answer. And our theological speculation has become arid indeed if we even imagine such a dependency.

    And I repeat: whether ‘brain simulations’ are too near or too far, impossible or almost here, we are far too blithe about the possibility of our unleashing unspeakable suffering. Meatspace is indescribably precious; and it may not, in fact, be optional. Worse, however — evil, even — and consummately un- and anti-Catholic, is any idea that meatspace itself is ‘inferior’, is to be disdained, abandoned, or ‘defeated.’

  59. JohnK, what is “meatspace”?
    And I’m not sure whether I disagree with you or not, ’cause I still don’t understand what you’re driving at. My opinion about the soul and mind conforms to that of St. Thomas, St. Bonaventure, St. Augustine and the Catholic Catechism:
    “The human person, created in the image of God, is a being at once corporeal and spiritual….Man, whole and entire is therefore willed by God…soul refers to the innermost aspect of man, that by which he is most specially in God’s image: ‘soul’ signifies the spiritual principle in man…it is because of the spiritual soul that the body made of matter becomes a living, human body; spirit and body in man are not two natures united but rather their union forms a single nature.”
    I don’t know how to do boldface in these comments, but if I could I would for the last two lines.

  60. G. Rodrigues

    @DAV:

    “A fuel control in a car is doing the arithmetic (the sequence of calculations); not the driver.”

    A fuel control is not “doing anything”, much less arithmetic, because fuel controls are not rational subjects, and only rational subjects can be meaningfully said to be doing arithmetic, a pre-eminently rational activity if there ever was one. This is a pretty elementary conceptual point.

    @JohnK:

    “And Bob can relax, too: even if the human mind is completely and utterly and provably material, it does not follow that Jesus is NOT the Christ, is NOT true God and true man, and did NOT die for our sins.”

    There may be a little, tiny wiggle room here, but I would say that actually, it does follow.

  61. Joy

    “human body; spirit and body in man are not two natures united but rather their union forms a single nature.”

    One doesn’t need to be Catholic to come to this conclusion. Our own revelation tells us this. Physiology tells us the same.

  62. Joy

    “Jesus is Lord” does not wait upon an answer to some philosophical point about the nature of the human mind; becoming true or no, like Schroedinger’s cat, depending on the Answer.”

    There is evidence, though.
    That there is the very apparent “immaterial” reality of the mind is evidence of the metaphysical, and therefore spirit is likely. There are no proofs; there is evidence. Mind, that we are able to derive meaning, or ask why, is also evidence that matter alone is not all there is. That mindless unguided processes were not responsible for the human being. This would mean that the mindlessness which drove the process was less than what it was able to produce. That the mind which emerged was greater than that which brought it about. This would be nonsensical or seem to break some law of nature, or logic, perhaps the information machine rule.

    (Evolution deals with the material realm.)

    However, weak as my faith is, I’m not waiting on the answer. If someone needs proof they have no faith, they are doing science! If they need to feel like they are on the side of the intellectual majority for approval to save face, they lack courage.
    Faith is a temperamental condition not a matter of fact or proof. Neither is it a badge of office. Evidence is all there is (and revelation) but we’re not allowed to speak of that.

  63. swordfishtrombone

    @ Mr. Briggs: “It would take a much larger computer to simulate another computer”

    I can’t find that line in the article but I disagree with it. It would actually take only the simplest possible computer to simulate any other computer, no matter how complicated (ignoring memory limitations). The simulation would run slower than the original but the output would be exactly the same.
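    The claim swordfishtrombone makes is the standard universality result: a very simple machine can interpret the program of a richer one, slower but with identical output. A toy sketch in Python, using a made-up two-instruction machine purely for illustration:

```python
def interpret(program, acc=0):
    # A host machine loops over the guest machine's instructions and
    # mimics each one; the answers match the "real" machine exactly.
    for op, n in program:
        if op == "add":
            acc += n
        elif op == "mul":
            acc *= n
        else:
            raise ValueError("unknown instruction: " + op)
    return acc

# The simulated program computes (3 + 4) * 5.
print(interpret([("add", 3), ("add", 4), ("mul", 5)]))  # 35
```

    The interpreter pays overhead on every step, which is the “runs slower” part; what it never changes is the output.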

    The linked essay by Epstein is poor. My objections to it are summed up very well by the first comment under the article. You can’t argue that because someone can’t draw an accurate picture of a dollar bill, then we have no memory.

    Your article is very interesting until the last paragraph – the idea that our intellect is immaterial has lots of evidence against it (brain damage, unconsciousness) but no evidence in favour of it.

  64. DAV

    G. Rodrigues,

    A fuel control is not “doing anything”, much less arithmetic,

    If it does nothing then it can never fail and have to be replaced. Good to know. But then, doing something might be a failure mode if its task is to do nothing.

  65. G. Rodrigues

    @DAV:

    “If it does nothing then it can never fail and have to be replaced. Good to know.”

    The fuel control can only fail to do arithmetic relative to the intentions and purposes of its builders. Of itself, outside those purposes and intentions, it is neither doing arithmetic nor failing to do it, because it is the builders, rational agents, not fuel controls, that can do arithmetic, fail at it, recognize when something that was supposed to “do arithmetic” is failing at it, etc.

  66. Geezer

    I don’t know how to do boldface in these comments, but if I could I would for the last two lines.

    For boldface: <b> in front (turns it on) and </b> at the end (turns it off).

  67. Sheri

    It is interesting that the discussion always heads toward “either/or”. Human intelligence is either immaterial or not. Work being done is either the appliance doing it or the creator of the appliance. There’s no evidence that this has to be the case. Human intellect can be both material and immaterial in part. The creator of the appliance is as essential to the appliance as the appliance is to cooking. It’s not an either/or, it’s both. (Yes, “both” means humans cannot duplicate a human mind, but that’s a separate issue that may be clouding the discussion. It has to be either/or if humans can create a human mind. If one believes a human mind can be created, then human intellect must be material. Thus, the either/or scenario is essential to this view.)

  68. Ye Olde Statistician

    The fuel control can only fail to do arithmetic as regards the intentions and the purposes of their builders

    That’s right. That the combination of voltages or ratcheting of counter wheels is “arithmetic” is purely in the intentions of the humans. If you place two pecans in a bowl, then place another two pecans, you indeed have four pecans; but the bowl is not doing arithmetic, even if you are using the bowl to do arithmetic.

    Of course, instruments intended for one thing or another may fail to do what is intended. The parchment may tear on the player piano roll and the piano fail to produce the intended music. I don’t know why you think an instrument cannot fail, just as a user may fail to use the instrument properly. This is so whether I am using a ‘computer’ to tabulate some numbers by filling and shifting registers or using it as a paperweight.

    +++
    Human intelligence …. Human intellect

    Well, which one?

  69. Joy

    FANN sounds very similar to a precursor to a super worm.

  70. DAV

    A fuel control is not “doing anything”

    FYI: Not doing anything == doing nothing.

    I don’t know why you think an instrument cannot fail

    You can be really dense at times. Things that do nothing cannot fail at the task.

    instruments intended for one thing or another may fail to do what is intended.

    So you admit it is doing something and are just engaging in silly nitpicking of the verb used to describe the something? Do you two ever listen to yourselves? Talk about AI!

  71. DAV

    FANN sounds very similar to a precursor to a super worm.

    As in FANN out?

    Actually it stands for Fast Artificial Neural Network. It’s a library of functions. Claims to be up to 150 times faster than other libraries. But then, 0.1 times as fast is “up to 150 times faster”. Apparently, whole there’s an upper limit on speed, there is no lower limit on slowness. Presumably it is zero.

    http://leenissen.dk/fann/wp/

  72. DAV

    “whole” should have been “while”. Dratted keyboard. It knew what I meant.

  73. Ye Olde Statistician

    So you admit it is doing something

    Of course instruments “do something.” A computer tallies bits in registers, a piano roll trips hammers in an upright piano. What the computer does not do is “do arithmetic.” What the piano roll does not do is “make music.” The music was made by the creator of the piano roll and more immediately by the person who installed the roll and pumped the pedals. The arithmetic was done by those who designed and interpreted the meanings of accumulation in various registers.

    Naturally, we employ shorthand in everyday speech. It is certainly easier to say the computer “did” the arithmetic than it is to describe the actual physical motions and actions that it performs and which we interpret. You can’t get semantics from syntax.

    The fact remains that an instrumental cause is secondary to a primary cause. Instruments do not act on their own, even when like waterwheel grinding mills they are somewhat automatic. Fall not victim to the deadly illusion of reification. The computer is not thinking. The camera is not seeing. The microphone is not hearing.

  74. DAV

    we employ shorthand in everyday speech. It is certainly easier to say the computer “did” the arithmetic than it is to describe the actual physical motions and actions that it performs and which we interpret.

    No that’s silly. The sequence of operations is what is intended. When I tell a computer to add x and y then use the result elsewhere it is doing the adding and using; not me. In fact, the device incorporated within is called an adder (http://isweb.redwoods.edu/instruct/calderwoodd/diglogic/full.htm). The result is indistinguishable from me or anyone else or anything else doing the adding (ignoring representational differences). I call getting this result “doing arithmetic” — and I specifically said “sequence of computations”. The reason why the thing is called a “computer”: it computes. When you and GR run on about this you both are being pedantically silly with pointless distinctions. Knee-jerk even. In what way is it important?
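    For the curious, the adder DAV links to is built from a handful of logic gates. A sketch of the textbook full-adder equations in Python (illustrative only, not the linked page’s circuit):

```python
def full_adder(a, b, cin):
    # Textbook full-adder logic, one bit at a time.
    s = a ^ b ^ cin                    # sum bit
    cout = (a & b) | (cin & (a ^ b))   # carry out
    return s, cout

def ripple_add(x, y, bits=8):
    # Chain full adders so the carry ripples from bit to bit.
    result, carry = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

print(ripple_add(7, 6))  # 13
```

    Nothing here “understands” numbers; the gates just combine bits, yet the result is the sum, which is exactly the disputed point.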

    I get it. You want to think of yourself as Special and it really bothers you that you might not be. So you rail against that which makes you less Special.

  75. Ye Olde Statistician

    When I tell a computer to add x and y then use the result elsewhere it is doing the adding and using; not me.

    And when you move the beads on the abacus or accumulate stones in a bowl, it is the abacus or the bowl doing the adding. Got it. And when you write with your pencil on a sheet of paper the sigils “2”, “+”, “2”, “=”, “4”, it is the pencil and paper doing the adding and not you. Or does it matter if the pencil is electric and writes in phosphors on a screen rather than with graphite on a page?

    I’m not sure what you mean by “Special.” Everyone is special. But when Goedel, Searle, Lucas, et al. determine that computation is not thinking, that semantics does not arise from syntax, that the brain is not the mind, they really ought to receive a respectful ear, even if one’s own obsessions drive one to reject their conclusions.

  76. DAV

    Arithmetic is a mechanical (as in machine-like, for knee-jerk pedantics) process. No mathematics are involved. No theorems are being proven. It may be that the operational process is an outcome of mathematics but the operation itself is just following a recipe. It’s the answer to the question: what do I get when I have these two symbols?

    There are a number of ways to implement adding. One way is to use a table lookup. You don’t even need to know what the symbols mean. They could be in some foreign or concocted language. You don’t have to be a person to follow the recipe. The result may have meaning to a person but getting there requires no understanding of the inputs or outputs.

    I’ll bet table lookup is what most people do when asked to add 7 and 6. They memorized the answer and recall it from memory storage. At least that was how I was taught. Some of the early decimal computers did this. Binary ones do it as well but it doesn’t look like a table. The table is stored in the wiring configuration for quicker access. One of the advantages of using binary. The computer is literally doing the adding and not simulating it.
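    DAV’s table-lookup idea can be made concrete. Here is a hypothetical single-digit adder in Python where “adding” is pure retrieval: the table is filled once, in advance, and after that the machine only looks entries up.

```python
# Build the single-digit addition table once, up front (by wiring,
# in the hardware case); afterwards no arithmetic happens, only retrieval.
ADD_TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def add_digits(a, b):
    # "Adding" is just fetching the stored entry for this symbol pair.
    return ADD_TABLE[(a, b)]

print(add_digits(7, 6))  # 13
```

    The lookup would work just as well if the keys and values were arbitrary symbols in a concocted language; the retriever needs no understanding of what they mean.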

    If you are reserving arithmetic as something only humans can do (and thus are ever-so Special) then you’ve picked a bad spot to make your stance. A mechanical process requiring no thought; yet you don’t think a mechanical device is capable of it.

    And when you move the beads on the abacus or accumulate stones in a bowl, it is the abacus or the bowl doing the adding. Got it.

    Typical. You go out of your way to introduce things not said. Straw men.

    determine that computation is not thinking, that semantics does not arise from syntax

    And computation isn’t thinking. See above.

    I suspect that “syntax” and “semantics” are words in a sentence to you. You don’t seem to understand the semantics of “syntax” and even less of “semantics”. You keep insisting that some things are “syntax” (the manner of expression) and disregard the semantics. If I build a device that does something when I push a button, pushing the button is the way I express what I want (the syntax). The result of that is what pushing the button means (the semantics).

    If some word sequence should be replaced with XYZ according to the instructions then the replacement is the semantics of the operation. Yes, this is rudimentary semantics but semantics nonetheless. The results can be comical if neither sequence is seen to be anything but symbols — particularly if they are words of a natural language — but there are still semantics involved in the translation.

    Here’s a device that not learned syntax from complete scratch but is able to generate text with punctuation and capitalization properly placed within the generated text. The output almost looks like English but producing English wasn’t the goal. That it learned this from examples without guidance is interesting.

    http://karpathy.github.io/2015/05/21/rnn-effectiveness/
    Scroll down to the interesting parts titled “Paul Graham”, “Shakespeare” and “Wikipedia”.

    Before you make a fool of yourself yet again: no, it doesn’t think. Not even close.

  77. DAV

    Erratum:

    Here’s a device that not learned syntax from complete scratch

    Here’s a device that not only learned syntax from complete scratch

    (*sigh*)

  78. Ye Olde Statistician

    The computer is literally doing the adding

    It is retrieving bits from a look-up table. It is only “adding” because we call it “adding.” Do you think Deep Blue is “playing chess”?

    Suppose you see this: H
    What is its meaning content? In what sense does that content reside in the physical arrangement of horizontal and vertical line segments?

  79. MatCzu

    Desperation. Incredible. Some people think that these machines do something other than what they are designed and PROGRAMMED for. It is like the microwave oven and the like: put your intended meal into it and wait until “he or she” figures out whether it should be cooked or just warmed up, well done or rare, etc., etc.
    The same applies to computers: you can wait a fairly long time until they “find out” what they should do with that pile of bits you did not feed into them.
    Even Epstein speaks of “computer intelligence”. WHAT??? The damned things are as dumb as they can be. The intelligence belonged to their designer(s), who planned the circuitry, boards, processors, etc., as well as to the many programmers who wrote, compiled and tested the SW.
    So do NOT underrate that tiny bit of action of pushing a knob, flipping a switch or pulling a trigger. All “development” of technical civilization is about making things simpler and accessible for those who do not have the faintest idea what exactly is behind them.
    Visualize a descendant sitting in a cave in front of a rusted PC, waiting for the blessed MACHINE to speak the truth. Or anything.

  80. DAV

    Do you think Deep Blue is “playing chess”?

    Unlikely in the same way Kasparov does but still playing. But then, it’s unlikely Kasparov plays the same way I do. I never got better than slightly past mediocre. It’s also unlikely I will get better. Chess is a visual game. For me anyway. I just see the moves and don’t know how I do that. For all I know Gary’s approach harnesses the immense power of the visual cortex and he conducts deep searches unconsciously. He also likely has a good book memory where the first 10 moves or so are pretty much canned. I have memorized only a few.

    Suppose you see this: H What is its meaning content?

    None really. Ask a person on the street the same question and you’ll most likely get a puzzled look while the person tries to guess where you are going with the question. Might even supply answers based on those guesses. But the initial puzzled look gives away the real answer: it has no meaning in and of itself.

  81. MatCzu

    DAV:
    Do you think Deep Blue is “playing chess”?

    ‘Unlikely in the same way Kasparov does but still playing.’

    There again. No. It is not playing. Its program executes “steps” according to some possibly fairly complicated and “clever” algorithms.
    This “deification and humanization” of a machine, be it the most sophisticated of human constructions, material or less visibly so, is just a metaphor for the human intervention that creates these and other things.
    It is like the way people like to write sentences saying “molecules recognize” this and that, as if they were humans.
    P.s. Me is possibly an even worse chess player than you … 🙂

  82. G. Rodrigues

    @Sheri:

    “Human intelligence is either immaterial or not. Work being done is either the appliance doing it or the creator of the appliance. There’s no evidence that this has to be the case.”

    I do not know who is this supposed to be describing, but it certainly is not me. All I stated was a rather elementary (*), conceptual point about what “doing arithmetic” *means*. Doing arithmetic is an intellectual activity; following rules or implementing algorithms is an intellectual activity; a table lookup is an intellectual activity (algorithms implementing table lookup can be quite sophisticated and there is a whole industry devoted to the subject); the fact that there is an algorithm that computes the addition of two numbers in decimal representation is itself the product of intellectual activity. Only rational agents perform intellectual activities. A fuel control is not a rational agent. Ergo it cannot, as a matter of logical necessity, do arithmetic. We rational agents do arithmetic, even if with the aid of an abacus or a pocket calculator or whatever. An abacus or a pocket calculator can only be said to be doing arithmetic in a derived sense, as regards the intentions and purposes of their builders and users. If tomorrow human kind were to vanish, so would doing arithmetic, at least on planet Earth (and assuming there are no other rational beings hidden somewhere in its bowels).

    (*) the error is elementary, but it spawns many errors of its own.

  83. Ye Olde Statistician

    Suppose you see this: H What is its meaning content?

    None really. …the initial puzzled look gives away the real answer: it has no meaning in and of itself.

    Exactly. Structure has no meaning, whether it is strokes on a surface or accumulation of counters in registers.

    Deep Blue is not playing chess. It has no awareness of the game as game. It does not know what chess is, it does not know victory or defeat. It calculates outputs presumably from trees or sophisticated look-up tables, which are then interpreted as chess moves. But it is not playing.

    Grandfather Clock has intelligence in the very narrow sense of being able to count the minutes and hours correctly, adding up the sums in its head, and telling me the correct time, by deciding to play the chimes hanging in his case. Oddly, Grandfather Clock always decides just exactly on the hour and half-hour to ring the correct chimes. I am astonished at how accurately Grandfather Clock’s sense of timing is, how tirelessly he attends to his task, and how he never loses count or mistakes the number of minutes in an hour. No doubt Grandfather Clock is helped in the tireless precision of his thinking process by the wheels and gears that make up his brain. Nonetheless, we all must commend Grandfather Clock for his diligence and uncomplaining attention to detail. He is as patient and devout as a Beefeater Guard who stands before Buckingham Palace, and, like them, he never stirs from the spot where he has decided to stand.

    I am kidding, of course. Grandfather Clock is a machine.

    It does not have any intentions, diligence, ability to count, ability to add numbers up to 60, or any other mental operations of any kind. It is not even alive. It is a machine. Deep Blue is also a machine.

    If you took the time to write down every possible combination of chessman positions on a fat deck of cards, and a chessmaster took the time, for each and every possible location of chessman, to write down what he thought was the best move on the back of the card, you could pretend to play a game by setting up the board, finding the card that represented an opening, making a response, and finding the card representing the new position of the chessmen, and looking on the back of that card to see what the chessmaster thought the best response would be in that situation.

    However, the deck of cards would not be playing chess. The deck of cards would not have any intention, any awareness of the chessmen or any idea of the relation between the chessmen and what they symbolically represent, or what moves or victory conditions the game entailed. The deck of cards would not be aware of the game at all. It’s a deck of cards.
    — John C. Wright

  84. Ye Olde Statistician

    I should add that this sort of anthropomorphism is the root of the old paganism, and John’s comments remind me of something Theodoret wrote about when an imperial soldier struck Old Man Serapis in the jaw with an axe. The onlookers gasped, because this would cause the earth to open up and swallow everyone. But no such thing happened, he wrote, because it was a block of wood.

  85. Sheri

    G. Rodrigues: Calm down, I wasn’t addressing anyone. I was making a comment/observation.

    I have no quarrel with your statements about doing arithmetic, assuming one ascribes to math the need for human beings to have created it and therefore to own it. Can math be left to its own devices? Probably not. Humans have to be involved at some point. Which to me means that computers doing math are a hybrid of a mechanical process and human input. I think my point to my niece also addressed this—calculators have to be programmed by a real person who has to be able to do the math without the calculator.

    Can math exist without humans? Technically, yes. Two apples plus two apples still make four, even if we never see the relationship or describe it. It’s a physical thing. Humans put the language to it. Without language, it does humans no good, as far as I can see. Now when we get to calculus, that becomes far more complex. I don’t know how it all fits together. I’m just saying I can’t see it as black or white—it’s all interconnected to me. Humans put the language to the phenomena, may input the technology and the device may do the labor thereafter. How does one agree on where to draw a line saying “This is what humans do, this is what machines do”? It’s probably a matter of degree, but the interconnection always remains.

  86. Joy

    Dav,
    Not FANN, but the precursor to Stuxnet had a similar name. The FANN thing looks like a waste of time to me!

    I’ll bet table lookup is what most people do when asked to add 7 and 6.

    Arithmetic ends up a matter of memory because we are able to miss out the middle steps when we become proficient as in almost any skill.

    The full title of arithmetic in infant and junior school was mental arithmetic.

    There were no tables involved. Nor for any of the basic functions, only for logarithmic tables.
    Milk bottle tops, beans and buttons in infant school gave the prompt to picture what 3, 4 and 7 look like as patterns. Domino and dice patterns. I never had enough fingers; they never stayed still to be counted, or you couldn’t be sure.

    Same in the case of equations in algebra: they are skills that can be honed, and steps missed out, to reach the answer. Computers can’t do that. Computers never have trouble “showing their working”! I’m going to do the A-level I should have done years ago.
    Computers are just doing as they’re told. No lateral thinking, as they used to call it.

    They have zero degrees of freedom. The only way the illusion of freedom is created is by mixing systems together so as to make the outcome too complex to predict.
    That is not the same thing as the way the human mind works, with intention and purpose driving the arithmetic.

    Computers are proficient at offering an opponent’s option. That’s all.
    That is forgetting the psychology, of which there is a lot in any game: concentration, ability to read the board, seeing all the combinations of possibilities and picking your move considering the opponent’s likely moves. Computers can do this because they never miss a possibility, as they have speed on their side. Karpov or Kasparov have the edge on the best computer, but the computer has the speed and infallibility of offering the best opponent’s option based on what the programmer wrote. So the only victory for the computer is to prove again and again that it is faster and more reliable on repetitive tasks. All credit to the programmer and the computer hardware developers.

    However computers or instruments lack the ability to form intent.

  87. Joy

    This machine is self taught:
    Tucker piano Dec 7’2010.wmv

  88. Sheri

    Tucker is proof if you drop the bar low enough and suspend all of reality, you can have anything self-taught. (I had a dog that howled if I said the word “groomer”—it was cute, but mostly to me, I suspect. She was “self-taught” also, as in she hated the groomer and when she howled, I’d laugh.)

  89. DAV

    The FANN thing looks like a waste of time to me!

    There are a lot of problems where these are pretty much the way to go. Character and face recognition are two. They are really worthwhile when analysis is otherwise intractable.

    However computers or instruments lack the ability to form intent.

    Alan Turing listed similar arguments

    Arguments from Various Disabilities

    These arguments take the form, “I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X.” Numerous features X are suggested in this connexion. I offer a selection:

    Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.

    No support is usually offered for these statements.

    Arithmetic ends up a matter of memory because we are able to miss out the middle steps when we become proficient as in almost any skill.

    Not sure what middle steps outside of actual counting there could be. Even that is largely mechanical. Must have been really hard learning multiplication.

    Tucker’s cute but there are machines that can learn to recognize patterns on their own.

    Computers can do this because they never miss a possibility as they have speed on their side.

    At one time machines similar to Deep Blue were built as benchmarks. When Newell and Simon were investigating chess playing they interviewed a number of chess players to try to understand how they approached the problem. One thing that emerged was that all of the interviewed players did what Blue does: follow the move tree. They did have various reasons for why one branch was investigated and others not, but they still searched the tree. Oddly, no one has come up with a better way to do the problem. I suspect that a lot of what occurs in the mind of any good player is a deeper, although unconscious, tree search with the best moves bubbling up into awareness. Also, really good chess playing is more like athletic ability: you can improve it to some extent through practice but you have to start with what it takes.

    Yes there is a lot of psych in a face-to-face tournament along with timers and pressure but not in correspondence chess. Two different things really.

    offering the best opponent’s option based on what the programmer wrote

    You do realize that what the programmer did was provide a way to analyze a board situation as a static entity, and that the moves (except for perhaps a few opening moves) are not wired in but discovered by examining each tree position as a static board, yes? Also, Deep Blue does not examine all paths in the tree. It still prunes the search (using the static board analysis as a guide), though to a lesser degree than its predecessors. So missteps are possible. Still, it was able to analyze 200M moves per second but, considering the size of the problem (288 billion different possible positions after four moves, and a number exceeding the estimated number of electrons in the universe after 40), it’s a drop in the bucket.
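The kind of search described here, treating each tree position as a static board and pruning branches that cannot change the outcome, can be sketched as generic alpha-beta minimax. This is an illustrative sketch of mine, not Deep Blue’s actual (proprietary) code, and all names are invented:

```python
# Generic alpha-beta minimax over an abstract game tree.
# children(node) yields successor positions; evaluate(node) is the
# static board analysis applied at the leaves of the search.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    succ = children(node)
    if depth == 0 or not succ:
        return evaluate(node)  # static analysis: no "playing", just scoring
    if maximizing:
        value = float("-inf")
        for child in succ:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # prune: the opponent already has a better line
                break
        return value
    else:
        value = float("inf")
        for child in succ:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
```

The pruning is why “missteps are possible”: any branch cut off early is simply never examined.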

  90. DAV

    Structure has no meaning, whether it is strokes on a surface or accumulation of counters in registers.

    Rambling on, I see.

    I said “in and of itself”. It does have meaning given context. You somehow think only humans can use it to advantage? How do you think computer compilers work? There can be a number of levels of semantic content. The lowest is the word parser, which detects the basic atoms of the program language, e.g., keywords, variable names and punctuation, from the strings of input characters. Depending on the language, the letter ‘H’ might have specific meaning, but more often than not its only meaning would be as part of a variable name. The next level processes expressions and statements. What these mean depends highly on the context. Given the expressions and meanings (semantics), the compiler can then generate the lower-level code, which may or may not be the machine’s own language. In some languages the higher and lower levels need to cooperate to fully do both the lower- and higher-level tasks. These are languages where the lower level is guided by the context detected in the upper level.
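The lowest level described here, the word parser, might be sketched like this for a hypothetical toy language (the token names and keyword set are my own invention; real compilers use far richer token sets and report errors rather than skipping unknown characters):

```python
import re

# A minimal lexer: split raw characters into atoms (keywords, names,
# numbers, punctuation). On its own, 'H' is just part of a name.
KEYWORDS = {"if", "else", "while"}
TOKEN_RE = re.compile(r"[A-Za-z_]\w*|\d+|[-+*/=(){};<>]")

def tokenize(source):
    tokens = []
    for m in TOKEN_RE.finditer(source):  # unrecognized characters are skipped
        text = m.group()
        if text[0].isalpha() or text[0] == "_":
            kind = "keyword" if text in KEYWORDS else "name"
            tokens.append((kind, text))
        elif text.isdigit():
            tokens.append(("number", text))
        else:
            tokens.append(("punct", text))
    return tokens
```

For example, `tokenize("if x = 7;")` yields the keyword `if`, the name `x`, and the punctuation and number atoms; what the whole statement *means* is decided at the next level up.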

    Deep Blue is not playing chess. It has no awareness of the game as game … it does not know victory or defeat.

    And why are these requirements for playing? They might be a human’s motivation to play, but hardly a requirement for actual play. The playing part is analyzing positions; making moves to bring about a win; protecting against loss. The rest is perhaps gravy and not really part of the game at all. Are you running on about this because the words “play” and “game” have several connotations, some associated with enjoyment? You poor dear! They are the words commonly used to describe these actions. Get over it.

  91. Ye Olde Statistician

    It does have meaning given context.

    Why do you think that agreeing with me is a rebuttal?

    You somehow think only humans can use it to advantage?

    Or any other rational being. We must await evidence for this, however, since extraordinary claims require extraordinary proof. Unless the claim is one treasured by computer technicians.

    How do you think computer compilers work?

    Basically, by putting beans into bowls. But by asking how they actually work you are getting closer to the physical reality independent of the human interpretations.

    And why are these requirements for playing?

    Because of that word “play.” The mechanical grinding of option trees or of decks of cards is not the playing, any more than the physics of hurtling spheroids is playing baseball.

    Because we think in words and must use words here, we cannot help but compound the matter of fact with the abstraction of concept. See the analogy of the Grandfather Clock “telling time.”

  92. Ye Olde Statistician

    Two apples plus two apples still make four

    Where does the “plus” or the “make” come from? The only things on the ground are apples: this apple, that apple, the other apple, and still another apple. There is no reason to suppose that if this apple and that apple had fallen from a tree, and later this other apple and that other apple had also fallen from the tree, that the tree was therefore performing addition. The observer is performing the addition by first abstracting from the physical world the concept of “two” and then considering that the accumulation of apples on the ground constitutes “addition.”

  93. DAV

    We must await evidence for this, however, since extraordinary claims require extraordinary proof.

    Unfortunately for people such as yourself, any attempt to present any evidence is a futile attempt. No evidence could ever be extraordinary enough. You’ve already decided the answer and refuse to be dissuaded. The irony is that you frequently present equally extraordinary claims yourself, and your only evidence for them is merely some philosophical ramblings. Conjectures claiming to be Truth, when you cannot show these conjectures are anything more than the result of GIGO and confirmation bias.

    Doesn’t hurt to tailor definitions either:

    “animals have imagination; not intelligence”
    “Computers don’t play chess; they are following decision trees”

    I’m beginning to see a pattern here. Any evidence will produce yet another distinction.
    All or nothing with emphasis on the nothing.

    Sad really.

  94. Joy

    I heard the figure that there were more chess moves than grains of sand on every beach.
    The computer can only ‘view’ the board as a static option, but couldn’t it view the consequences of the move like a human, so many moves ahead? The computer doesn’t ‘raise its game’; it has one method and one tack, no strategy or will to win.
    I asked my Dad. “I don’t intellectualise too much. If the situation is desperate I would try a move to take the opponent’s mind off what I’m thinking and pull a fast one. My object would be to attack, only defending if you have to. A real master would do something similar to the computer.” (How many moves ahead do you look?) “I would say about three, but don’t forget that’s an immense number of possibilities,
    because the masters ‘live in a chessboard’. It’s in their blood and they know the patterns and the answers: that if you move the bishop in that scenario a certain thing happens.” Sounds like your bubbling-up description is about right. That answer couldn’t be written into a programme. One would need a mechanical explanation to build the mechanism into the machine.

    Do those Turing statements need supporting? I hardly like to break them down but what would be the point of making a computer enjoy strawberries? It would have to be doing an impression of a person enjoying strawberries. If it said it liked the strawberry without training, nobody would believe it. You couldn’t trust a computer. It would be insincere. Those lower functions, it won’t have.
    Just dry facts and solutions in the way of predictions or answers that may or may not be correct.

  95. DAV

    “I don’t intellectualise too much if the situation is desperate I would try a move to take the opponents mind off what I’m thinking and pull a fast one.”

    As a tactic, it only really works on a weaker or equal opponent, and even then only in a face-to-face tournament, with its time limitations adding pressure. When the opponent has days to respond it would likely be a mistake, as wasted moves are detrimental more often than not.

    I doubt I could psych Gary and, given his ability, why would he feel the need to psych me? In terms of ratings he’s as far from run-of-the-mill world masters as I am in the opposite direction.

    The computer can only ‘view’ the board as a static option but couldn’t it view the consequences of the move like a human , so many moves ahead?

    Well, it does. It is following the consequences of making a move. A (the maker of the next move) does X then the best response is Y because it is the best static board situation. Any given static situation is a clue showing where to look next.

    As far as how many levels down I go I really don’t know. I’m only conscious of a few but since I see the board as a flow I must be looking deeper. Apparently not as deep as some.

    Is Deep Blue thinking? I wouldn’t call it that. Is what it does the essence of what people do when playing chess or similar game? Quite likely. As an AI demonstration of the viability of proposing a similar mechanism a person might employ, it succeeds.

    The brain does indeed look like it is constructed of a series of neural networks, giving it immense computational power. The FANN library mentioned above works conceptually the same way. The major differences with the brain are in terms of scale. They are getting more powerful almost as we speak. It turns out that graphics cards function similarly, and this has opened the door to a lot more processing power than was available 20 years ago. They will begin to appear in more applications as time goes by.
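The kind of unit libraries like FANN compose into networks can be sketched conceptually (a bare illustration of mine; FANN itself is a C library with training algorithms this omits entirely):

```python
import math

# One artificial neuron: sum the weighted inputs, add a bias,
# then squash the result into (0, 1) with a sigmoid activation.
def neuron(inputs, weights, bias):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-total))

# One fully connected layer: every neuron sees every input.
# Networks are just layers feeding layers; "learning" adjusts the weights.
def layer(inputs, weight_rows, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```

The conceptual point is only that the computation is simple and uniform; the power comes from scale, which is also why graphics hardware suits it.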

    And they will encroach on more things that many would want to believe are reserved for humans. Which in turn makes them harder to distinguish from humans. The paper I have drawn from proposes that a similar test replace the question “Can computers think?” and, if they are capable of passing the test, then they would be functionally equivalent.

    If you were to examine all the objections to doing this, most (if not all) boil down to either theological arguments or using philosophical conclusions in lieu of evidence. While these conclusions may be logical in nature, it must be shown that the initial premises aren’t themselves conjecture. I don’t see that happening. The only real response is they seem reasonable. Not a very good argument.

    I hardly like to break them down but what would be the point of making a computer enjoy strawberries?

    This is what Turing said in the same paper about that:

    There are, however, special remarks to be made about many of the disabilities that have been mentioned. The inability to enjoy strawberries and cream may have struck the reader as frivolous. Possibly a machine might be made to enjoy this delicious dish, but any attempt to make one do so would be idiotic. What is important about this disability is that it contributes to some of the other disabilities

    and explains via example that it apparently is intended to reference the ability people have of relating to each another through common experience.

  96. Joy

    You understand that this is not meant to scoff at R and D in IT. There’s a robot on the news today taking the place of a fitness instructor! (It’s a novelty; people won’t do as it says. They don’t do as humans say either.)

    Absolutely Gary would indulge in psyching in a face to face. It might take a subtle form. Any high level sport or game involves an element of it.
    A surprise move wouldn’t be a surprise to a computer, for example. There is no expectation; there is just a board with a layout. I am reminded of being psyched by a fellow nine-year-old at school in my only chess tournament. He tutted and shook his head at me like an old man. I’d only moved one or two pieces. I was defeated by this tactic and gave up chess after that. He was a sweet lad, but all’s fair in chess.

    “Possibly a machine might be made to enjoy this delicious dish,”

    ‘Common experience’: not sure if he means sociability or genuine communication on a level.
    Part of being human is enjoying food. This disability is terminal. Not to be able to experience something in a conscious way: sensation, original thought, emotion, free will and all such adjectives are not tangible, reachable, appreciable or ‘meaningful’ to them. There can be no ‘why?’ with a computer. As this is what drives humans, it seems the computer will lack the drive. It will have electricity. It won’t seek another source when someone pulls the plug out. Humans will always have to be there to make them do the necessary, because it will always be a masquerade of a human brain. Just a very useful puppet. This is, as you say, an argument for the soul. Like an actor in a movie who isn’t really experiencing what he pretends, the fake human will be no more real. (Probably not ever even that good.)

    It’s impossible in the way that a perpetual motion machine’s impossible.

  97. Ye Olde Statistician

    Unfortunately for people such as yourself, any attempt to present any evidence is a futile attempt. No evidence could ever be extraordinary enough.

    All we need is some argument [you keep talking about “evidence,” but my background is in mathematics and only secondarily in physics] that demonstrates that meaning can arise from structure. That the shape or arrangement of line segments in the form H somehow “means” the sound of “en.” Or that a deck of cards bearing responses to possible chess positions somehow “knows” how to “play” chess. Just a logical argument that does not involve the waving of hands, the invocation of the Men of the Future, or the magical summoning of “emergent properties.”

    You’ve already decided the answer and refuse to be dissuaded.

    Likewise, I’m sure. I would love to be proved wrong, as the fictional possibilities are vast. I’ve used AIs [and ASs] in some of my stories and novels.

    The irony is that you frequently present equally extraordinary claims yourself and your only evidence for them is merely some philosophical ramblings.

    Ah, yes. Logical arguments and reasoning. Can’t have that, can we. Where is the “evidence” that pi is irrational when every physical measurement of a circumference and its diameter results in an actual ratio?

    Doesn’t hurt to tailor definitions either:
    “animals have imagination; not intelligence”
    “Computers don’t play chess; they are following decision trees”

    Of course animals have imagination. Why would you think they don’t? The various sensory inputs have to be integrated into a single ymago of a singular object, otherwise the redness and the roundness would not be perceived as an apple and the animal would not move toward or away from it. The definitions have been in place for thousands of years, so it might be the moderns who are doing the tailoring.

    I’m beginning to see a pattern here. Any evidence will produce yet another distinction.

    Evidence? I was not aware of any. You can operate a simulator of an airliner on a flight from EWR to LAX, but no matter how realistic the output, you will not be in Los Angeles when you get out.

  98. DAV

    A surprise move wouldn’t be a surprise to a computer

    Depends on what you mean by “surprise”. The computer does have an expected response. The best approach is to consider your opponent a genius who will always make the best response. The computer is following what should be the best responses. That’s called “tree pruning” and allows a more directed search. Any moves not considered would be surprises, no? The computer would then need to backtrack and reconsider the situation.

    But this is peripheral to the actual game play itself. It’s an attempt to distract. If one has confidence one is not easily distracted by these tactics.

    Gary was rattled at least twice that I know of. The first was when the computer made a move that later analysis showed was faulty but Gary saw it as the output of a better player. He later recovered his composure.

    The second was during a World vs. Kasparov match, when a move was made that gave the World a two-tempo advantage with a surprise sacrifice. Unfortunately, someone out to prove that cheating in the voting process was possible, by actually cheating, cast a huge number of votes for a ridiculous move. The game was destroyed and the advantage lost. So we will never know how truly effective that move would have been against Gary. Gary later claimed it wouldn’t have mattered but he failed to elaborate. I understand the move was discovered with a computer search.

    Part of being human is enjoying food. This disability is terminal. Not to be able to experience something in a conscious way, sensation, original thought, emotion, free will, and all adjectives are not tangible, reachable, appreciable or ‘meaningful’ to them. There can be no ‘why?’ with a computer.

    The first objection, regarding food, assumes that an intelligent machine must necessarily be human in outlook. We enjoy food largely because of a built-in drive to obtain it. The enjoyment may be nothing more than an incentive to continue the search. The computer also wouldn’t suffer from toenail fungus. I don’t see the point in the objection.

    I have a similar puzzlement with the second objection, regarding looking for alternate sources of food. Humans eat just about anything organic, yet some have starved in the wilderness after becoming lost and never attempted to eat the food around them, maybe because they didn’t recognize it. Do you know of any humans who have tried using alternate sources of energy such as electricity or nuclear fusion? Even if you have, I don’t see why this is a requirement for intelligence.

    Lack of emotion is an interesting fault. Emotion is a built-in drive in humans and does not arise from logical thought. One would think it not a requirement for intelligence.

    Most of what you’ve listed are things we have in common and these allow us to relate to other humans. Again, they don’t appear to be requirements for intelligence if we don’t expect to relate to the machine as a human.

    Curiosity drives learning. Our basic makeup requires learning. It’s a survival trait. It appears to be built-in. There really is no reason why it couldn’t also be built into an intelligent machine. In fact, it might be desirable.

    Whether humans will always be needed to guide it remains to be seen and is provable only by induction (we haven’t built one that didn’t need guidance, therefore building one isn’t possible). We should be careful with these assertions.

    Consciousness is something we think we know but really don’t. It may be nothing more than self-awareness or self-recognition, and even those are difficult to define. Despite all of the assertions about consciousness, they amount to pushing words around in attempts to categorize it. Categorization only allows recognition; it provides no deeper understanding. For some, this is not only sufficient but allows wild conjecture as to its nature.

    I could go on about these things forever but we must stop at some point. I suggest we have reached it.

  99. DAV

    Doesn’t hurt to tailor definitions either:
    “animals have imagination; not intelligence”

    Of course animals have imagination. Why would you think they don’t?

    I’m growing quite tired of this silly straw man tactic you are fond of. OTOH, maybe it’s your inability to evaluate context that’s the real problem.

    Logical arguments and reasoning. Can’t have that, can we? Where is the “evidence” that pi is irrational when every physical measurement of a circumference and its diameter results in an actual ratio?

    How cute. But maybe it’s because your training has been in mathematics which is based on deductive arguments that are valid in themselves. When you try to extend this into a description of anything outside of mathematics (or philosophy) you find yourself with a problem.

    In computer programming it can be shown that a program is completely correct. (I say this tongue in cheek; it is not practical to do this with anything but a small program.) But it is still possible for it to produce nonsensical output. The term for this is “Garbage In; Garbage Out” (GIGO).

    While the logical correctness of a philosophical argument is important, its conclusions are nothing more than conjecture if the starting premises are conjecture. A form of GIGO. Showing a conclusion isn’t GIGO requires a bit more than mere argument.
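    The GIGO point above can be sketched in a few lines of Python. The averaging routine and the sensor-glitch scenario are hypothetical, invented purely for illustration:

```python
# A trivially "correct" routine: it always returns the arithmetic mean
# of its inputs. The program's correctness says nothing about whether
# its output makes any sense.
def mean(xs):
    return sum(xs) / len(xs)

# Sensible input gives a sensible output (body temperatures in Celsius).
print(mean([36.6, 36.8, 37.0]))

# Garbage in: a sensor glitch mixed into the readings.
# Garbage out: the program runs flawlessly and reports nonsense.
print(mean([36.6, 36.8, -9999.0]))
```

    The program never fails; it faithfully averages whatever it is given — which is the analogy to a logically valid argument built on conjectural premises.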

    Evidence? I was not aware of any.

    Unfortunately I don’t think you ever will because you seem incapable of accepting any. The imagination/intelligence definition thing shows how much you want to duck the question or eschew anything but a desired answer. Please don’t make the idiotic defense that you are merely parroting someone else. It is entirely irrelevant where it originated.

  100. Ye Olde Statistician

    Of course animals have imagination. Why would you think they don’t?

    I’m growing quite tired of this silly straw man tactic you are fond of.

    But not evidently growing tired of silly ad hominems — the implication that the worth of the argument depends on the imputed personal qualities of the arguer. In what manner is this a straw man? Your original response to the statement “animals have imagination; not intelligence” was a dismissive “Doesn’t hurt to tailor definitions either,” without, however, the ugly need to point out where and how the definitions were tailored. It is possible that you objected to animals having intellect, but I suspect not. Therefore, your objection must have been to the imagination — or perhaps to an inability to distinguish the two.

    Where is the “evidence” that pi is irrational when every physical measurement of a circumference and its diameter results in an actual ratio?
    How cute. But maybe it’s because your training has been in mathematics which is based on deductive arguments that are valid in themselves. When you try to extend this into a description of anything outside of mathematics (or philosophy) you find yourself with a problem.

    Actually, Late Modern science is having this problem, with all sorts of entities posited on the basis of mathematical models rather than empirical evidence. There have even been suggestions urging that theories be accepted on their mathematical elegance rather than empirical verification. But this is rank Pythagoreanism.

    The constant demand for “evidence” implies the privileging of empirical proof; but this does not apply to the realms of mathematics or metaphysics.

    In computer programming it can be shown that…

    At least you are not extending yourself beyond mathematics.

    While the logical correctness of a philosophical argument is important, its conclusions are nothing more than conjecture if the starting premises are conjecture.

    Unlike Hume and his successors, the perennial philosophy always started from empirical experience. In a famous example, from the existence of change in the material world.

    Which starting premise do you regard as conjecture? Or do you prefer to keep your charges vague and inchoate?

    The imagination/intelligence definition thing shows how much you want to duck the question

    Or else it shows how important such distinctions may be.
    https://thomism.wordpress.com/2007/02/16/intellect-imagination-and-sense/
    “This loftiness and creativity of imagination accounts for both its dignity and its danger- it is a certain middle between a particular sense power and intellect and so it is easy to reduce it to both, and to reduce- more dangerously- intellect to imagination.” — Chastek
    Or again:
    “No sense power, in fact, can be perfectly proportionate to intellect. If one hooks intellect to imagination (as is presently the case with us) intellect will just use imagination as a tool to understand unimaginable things, It will use analogy, inferences from causality, and positive judgments concerning the existence of the not-imaginable. Physics, chemistry, and psychology will have to make as much use of this as theology, though not in the same ways (hence the need for multiple models in each).” — Chastek
    https://thomism.wordpress.com/2009/01/13/imagination-as-a-tool-for-intellect/

    Please don’t make the idiotic defense that you are merely parroting someone else. It is entirely irrelevant where it originated.

    Oh dear, if you do not allow yourself to stand on the shoulders of giants, how far do you expect to see? Or are you simply being mum about whom you are citing?

    There is this relevance: that the distinction is ages old means that it predates Late Modern speculations about intelligent machines; and this in turn means that it could not be a “tailoring” of definitions. Rather, it would seem the Late Modern who tailors his definition of “intelligence” [inter alia] in order to accommodate his accomplishments, along the lines of “if you can’t be with the one you love, then love the one you’re with.”

  101. Joy

    Can there be man-made, fully orbed intelligence outside of a living body? I say it will be Artificial and therefore no. I believe the endeavours of IT neural networks don’t have the same goal in any event, so arguing from the negative disability is probably misplaced.

    Leaving aside animals, I’m saying that intelligence and intellect are blended with what it is to be human. Not just a button but part of the fabric. So I shouldn’t have embarked on a discussion about the brain being separable.
    That point was where the premises had already diverged, and I knew that, but being human I argued anyway.

    There is nothing to fear and much to be gained from mechanical, computer, cyber, IT tinkering with logic and intelligence. In my view there is a lot to be feared when a neurosurgeon or neuroscientist makes false claims, or pretends that the human brain is less than it is, or that what we know about consciousness we know for sure.

    The brain is more than a system of neural networks. Consciousness is not all there is to being human either. People wake up from long-term unconsciousness when neuroscience is exhausted. So when neuroscience speaks of ‘centres’ in the brain, these are sometimes not as isolated from the rest as is assumed. As with lots of parts of the body, destruction of a part sometimes results in no change in function of the system, and medical science can’t say why.
    Physical examination or imaging (which amount to the same) fails to correlate with function, not rarely but surprisingly often. This is not true of a car or a supersonic jet: what you see is what you get. If the O-ring’s busted, it’s busted.

    That computers can perform a function that the brain can perform is not to say that they can perform all that a brain can do. Brains don’t act in isolation; neither do computers, which need a human wingman.

    AI is artificial intelligence.

    What I have learned is that it’s easy to tell a robot from a human. The cheekiest humans disguise themselves as robots. So I have something in common with Garry Kasparov. It’s a title for a novel:
    He Who Would Be Robot.

  102. DAV

    the perennial philosophy always started from empirical experience.

    Uh, yeah, sure. That explains why you asked about the evidence for things mathematical and nothing at all about attempts to validate premises or conclusions. Presumably implying that no evidence is ever required; the deduction itself is sufficient.

  103. Ye Olde Statistician

    There are three great realms of knowledge: Physics, Mathematics, and Metaphysics.

    1. Physics deals with real bodies. It proceeds by induction from empirical experience to falsifiable theories.

    2. Mathematics deals with ideal bodies. It proceeds by deduction from principles to demonstrated theorems.

    3. Metaphysics deals with being as such. It proceeds by deduction from empirical experience to determined conclusions.

    (“Physics” deals with knowledge of physical things and can be equated with natural philosophy and/or modern science.) Metaphysics, as the name suggests, deals with the necessary prerequisites for Physics: e.g., existence/being as such, motion as such, etc., things the Physics must assume in order to get started.

    Each abstracts to a different degree and in a different extent.
    http://realphysics.blogspot.com/2006/10/degrees-of-abstraction.html

    IOW, they are not all the same. I used Mathematics as a counterexample to the usual demand for “evidence” as it applies to Metaphysics. In particular, while both Physics and Metaphysics start from empirical experience, they move in different directions using different methods. If we start from the experience “Some things in the world are in motion,” Physics might induce a theory explaining the causes of motion while Metaphysics deduces the consequences of motion’s existence.

    Hope this helps.

  104. fderol

    I used Mathematics as a counterexample to the usual demand for “evidence”

    I agree with DAV. It’s great to have a theory and logical consequences, but without a continuing attempt to determine that it isn’t isolated from reality, it is merely an interesting exercise. Jumping from the theory to assertions about reality is unjustified.

    You do seem to be setting your definitions to sway the answer. “You” of course does not mean you personally. You are espousing them, so they are your definitions, too. Tying the definition of intelligence to language use is an example. Yes, this is old. It is maybe the reason why “dumb” is a synonym for “unintelligent”. Using language as a criterion seems a careful setting of the definitions to avoid an unwanted outcome. After all, animals don’t talk, so they must be “dumb” — by definition. It still looks very much like forcing a desired answer and making the question “are animals intelligent?” nonsensical — by definition.

    Metaphysics deals with being as such. It proceeds by deduction from empirical experience to determined conclusions.

    Physics has the advantage that a lot of it is testable through prediction. It does seem to have wandered away somewhat, though.

    As for starting from empirical evidence, there is a trap in the assumption that the experiences are complete. E.g., we have never found a change that wasn’t preceded by a cause. While it may appear reasonable to assume this is universal, it is still a belief, and all that follows is just an expression of belief. We need to keep in mind that continual verification that the assumptions and conclusions have any meaning outside of themselves is a requirement, or should be one. Doing this requires evidence.

    Imagine starting with the belief that aliens are visiting our planet (or substitute your favorite preposterous idea) and coming to logical conclusions from this. Would any assertions using the deductions have any validity outside of themselves?

    When wandering into fields where no evidence can be obtained, one must wonder what use these fields have outside of enjoyment. There’s no way to tell. Yes, something like mathematics can be useful when modelling a theory, but mathematics doesn’t make theories about things outside of mathematics. It isn’t attempting to describe the universe even if it originally started with questions about the universe.

    Your request for evidence for mathematics caused me to raise an eyebrow and question if you really understand when and where evidence is useful.

    Even in fields which could possibly have evidence, one can use terms that are hard to define and test. For example, regarding the question of intelligence, what does it mean to “know” something? If the words used to answer this question don’t give clues allowing testing whether X “knows” Y, then the definition hasn’t defined anything. At the very least, it is a postponement and mere characterization. You can’t apply “know” as a criterion if you can’t tell when “X knows Y”.

    If the definition includes the ability to talk then perhaps it would be OK but, when it can’t be applied, you can only state “I don’t know” in response to the question of X’s knowledge. Using a definition like this is very much like saying “Fast cars are red. If the car isn’t red then it can’t be fast.”

    You are making unwarranted assertions. They appear to arise merely from the way you’ve chosen to define things and nothing else. You should replace these assertions with more qualified statements. You are being overconfident.

  105. Ye Olde Statistician

    Tying the definition of intelligence to language use

    Fairly direct, in the sense that language use is the surest empirical evidence of “intelligence,” for those who ask for empirical evidence. The word in its root meaning — inter legere — is “to read between [the lines].” This certainly implies language, and not merely making signs to refer to concrete objects. See Helen Keller’s account of the day at the well-house when she finally grasped the use of language, and how she expressed that realization. Also see the Underground Grammarian in Less Than Words Can Say, Ch. 2 “The Two Tribes.” http://www.sourcetext.com/grammarian/less-than-words-can-say/02.htm

    However, the issue is not simply “intelligence” as that word is [ab]used today, but “intellect,” the capacity to abstract universal concepts from particular percepts. The use of language is an indicator of this; so much so that symbol-mongers like ourselves have a very difficult time divorcing that abstracting from the act of perceiving. Hence, the rampant anthropomorphizing regarding animals and machines. We are projecting ourselves onto the others.

    The tying of “intelligence” to a dozen unrelated traits is even more arbitrary and runs the risk of building contradictions into the matter.

    we have never found a change that wasn’t preceded by a cause. While it may appear reasonable to assume this is universal, it is still a belief

    Change is due to a changer, not a “cause” in the restricted sense. It comes about from the actualization of a potential. This actualization must come from something already actual — because something that is not actual cannot do diddly-squat. Whether it “precedes” the change is not important. [I take it that like most Moderns, you mean “precede in time.”] When a book presses on the table, the table presses on the book, but one does not precede the other in time.

    When wandering into fields where no evidence can be obtained then one must wonder what use these fields have outside of enjoyment.

    Which brings us back to mathematics; and while I can attest that it holds a great deal of enjoyment — I once proved a new theorem in general topology, and it was quite lovely — it does have [I have been told by my former professor] some use in computer solvability.

    regarding the question of intelligence, what does it mean to “know” something?

    The term “know” is too vague. Generally, there are three levels of cognition: digestion, sensation, and intellection. In digestion, the matter of something is acquired while the form is discarded; in sensation, the form is acquired while the matter is discarded. In intellection, as we said, a universal is abstracted from a particular. Certainly, all animals have cognition of the sensitive sort, and with the estimative power of instinct can come very close to intellection. (The sheep esteems the wolf as an enemy and flees. It needs no prior experience with wolves to know this.)

    Using a definition like is is very much like saying “Fast cars are red. If the car isn’t red then it can’t be fast.”

    Actually, it’s very different. Why it is different is left as an exercise to the reader. But a clue appears in the first response.

  106. fderol

    Do you specialize in missing the point? Where and how is left to the interested reader.

    Cheers

  107. Joy

    “The term “know” is too vague. Generally, there are three levels of cognition: digestion, sensation, and intellection. In digestion, the matter of something is acquired while the form is discarded; in sensation, the form is acquired while the matter is discarded. In intellection, as we said, a universal is abstracted from a particular.”

    This is another example of the intellectualised description of human behaviour.

    It is A description. One of those “it is generally accepted that…” until someone comes up with another ‘more acceptable’ description, and so we go on. Mere onlookers, we watch and observe what really happens and are left shaking our heads.

    First, I really enjoy reading your blog. I’ve learned a lot about statistics. And I tend to agree with your point about the difficulty of simulating the brain, to a certain extent. But we can’t simulate writing on paper if we hold to your standards for simulation. Each pencil tip, examined microscopically, has a different molecular structure, which, combined with the ambient temperature, texture and substance of paper, all impact the way that molecules of pencil lead adhere to aforementioned paper. You would probably need to know the history of the universe in order to simulate the writing of a single sentence, even using a perfectly operating writing machine. But we can still read that writing, and produce indistinguishable copies given a nice enough camera.
    I would disagree with your assessment of Epstein’s paper, if only because it seems evident that either his understanding of computers is light years ahead of mine or pretty far below. I would guess pretty far below, given the way he uses various concepts. I think it is also REALLY important to notice that (to be slightly biblical) the blind can see, the deaf can hear, and the lame can type because we can directly and successfully connect a computer to a brain. Which means that our knowledge of cognition vis-à-vis computers is much greater than null. It also suggests that whatever metaphors we are using have some validity.

    Epstein gets tripped up on this “TRAPPED” in the computer metaphor idea. And there is probably some truth in that. But most of the concepts he mentioned were not computer metaphors. They are factory metaphors that people applied to computers. Input was input long before computers existed. As were terms like “storage” and “output” and “process”. They are general terms, and as such, very hard to “escape” from. This quote in particular “But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions. When called on to perform, neither the song nor the poem is in any sense ‘retrieved’ from anywhere in the brain, any more than my finger movements are ‘retrieved’ when I tap my finger on my desk. We simply sing or recite – no retrieval necessary.” is one of the dumbest things I’ve ever read. As is the dollar bill metaphor (haven’t you noticed the difference between an original picture and the fuzzy version shrunk down to email size?). If Epstein were fair, he would acknowledge that computers don’t store information either. There is no single bit or series of bits that stores a file. The hard disk just changes in a wide variety of locations in a completely unique fashion entirely dependent on the history of the computer and the events it is currently experiencing in such a manner that the next time someone expresses a wish to see that file, the computer is able to produce it.
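    The commenter’s point — that no single run of bits “is” the file, and that reading a file is reconstruction rather than fetching a stored object whole — can be sketched with a toy block-device model. The allocator, the block size, and the function names below are invented for illustration; this is not how any real filesystem is implemented:

```python
# Toy model: a "file" is scattered blocks plus bookkeeping that lets the
# system reproduce the content on demand; no single contiguous region of
# the disk "is" the file.
disk = {}        # block number -> bytes
file_table = {}  # filename -> ordered list of block numbers

def write_file(name, data, block_size=4):
    blocks = []
    for i in range(0, len(data), block_size):
        blk = max(disk, default=-1) + 1  # toy allocator: next unused block
        disk[blk] = data[i:i + block_size]
        blocks.append(blk)
    file_table[name] = blocks

def read_file(name):
    # "Retrieval" is reconstruction: reassemble the scattered blocks
    # in order to reproduce the original content.
    return b"".join(disk[b] for b in file_table[name])

write_file("song.txt", b"Happy birthday to you")
print(read_file("song.txt"))
```

    On a real disk the blocks land wherever free space happens to be, shaped by the machine’s entire history of writes and deletions — which is the parallel the commenter draws with the brain “simply changing in an orderly way.”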
