
Strong AI From Ping Pong Balls

[Embedded tweet: a ping-pong-ball computing machine in action.]

Practitioners of Strong AI believe that if we string enough of these ping-pong (or marble) machines together, rationality will emerge. By rationality, I mean intellect and will.

No, seriously. They do. This is not a jest!

Of course, the little ball movers have to be in a particular order, and be supplied with energy. The balls don’t get loaded into the hopper by themselves. But, in principle, since a contraption like this can be built, it is then a computer, which can (they say) be programmed to develop rationality. Strong AI.
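To see what kind of computation such a contraption performs, here is a minimal sketch in Python; the names (Rocker, drop_ball) are invented for illustration, on the assumption that the machine in the tweet is the usual marble adder built from see-saw toggles:

    # Each "rocker" is a mechanical toggle (a flip-flop). A ball
    # flips it; only on the 1 -> 0 flip does the ball roll on to
    # the next rocker. A row of rockers thus counts balls in binary.
    class Rocker:
        def __init__(self):
            self.state = 0          # 0 = tipped left, 1 = tipped right

        def hit(self):
            """A ball strikes the rocker; True means the ball rolls on."""
            self.state ^= 1
            return self.state == 0  # ball passes only on the 1 -> 0 flip

    def drop_ball(rockers):
        """Feed one ball into the least significant rocker."""
        for rocker in rockers:
            if not rocker.hit():    # ball absorbed; the carry stops here
                break

    rockers = [Rocker() for _ in range(4)]  # a four-bit counter
    for _ in range(11):                     # drop eleven balls
        drop_ball(rockers)

    print([r.state for r in reversed(rockers)])  # [1, 0, 1, 1] = eleven

Nothing in the sketch cares whether the rockers are wood or transistors; that indifference is exactly the point at issue.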

Why they say this and why it won’t work is covered in three articles, starting with this one (the other two are linked within). So I won’t repeat any of it here.

That’s how they are wrong. But why might they be wrong, or on the wrong track? That’s what I’m curious about here.

There is nothing mysterious about little balls sliding down ramps. And so it doesn’t seem like a mysterious power like rationality can develop from a huge ball-sliding machine. But electricity, particularly in its quantum aspects, is strange and wondrous. It even seems magical, which prompts, we might guess, the magical thinking that curve-fitting algorithms can think because they carry out operations quickly by shifting electrons (or whatever).

We can’t see electrons (etc.) tunneling and doing their things, which makes it easier to suppose that they can get away with creating intellects. Hey, they might be up to anything down there! We can see dumb little balls sliding down stupid inclined planes. They can’t possibly do anything extra.

This is, as should be clear, a psychological argument. It presupposes the philosophical one provided in the links, that no AI algorithm is ever going to reach rationality. That being so, we have to ask why people think it will.

One reason is progress. We’re cleverer now than a century ago at making gadgets. So, it is argued, we’ll eventually be clever enough to build a rational being. Out of some substance, it follows. A substance which is usually thought to be formed from silicon and wire. But it could easily (in our thought experiment) be made from ramps and balls, and pulleys, and maybe a wheel rotating in a stream.

Yet it seems absurd that a created rational being can be constructed from ping-pong balls. Since this should be possible if the goals of Strong AI are achievable, we have to ask ourselves why it seems absurd. It’s because we can’t think of a way sliding ping-pong balls can make rationality out of their movement. In this, we are correct.

But since these slippery balls are a computer, it must follow that we cannot extract rationality out of any computer.

Categories: Statistics


  1. Perhaps we could argue that computer systems appear to exhibit rationality because they store the rationality of the programmer. Perhaps, if enough of the programmer’s rationality is stored, the system can emulate the thinking of the programmer. In which case, how would we tell the difference? Perhaps humans only seem rational because they store the rationality of their programmer.

  2. Well, whether or not it is a computer is debatable. I would call it a single-program computer. The idea behind a programmable computer (and the one you describe isn’t one) is an unspecified (potential) machine. All that software (or even hardware) does is make it a specific machine. The machine you described won’t have much, if any, intelligence, nor is it capable of changing its programming.

  3. This leads to several interesting side tracks.

    Quantum computing is, in its current state, a gigantic fraud. It is merely analog computing. At best, it is a random number generator. Of course, you can make an electronic random number generator with an antenna or microphone hooked up to any computer. A real security company uses digital cameras pointed at lava lamps (see the first sketch below).

    Wooden rails and ping-pong balls can’t change their own programming. Electronic computers can, since programming and data are interchangeable (see the second sketch below).

    Machine learning has been both an incredible success (in small endeavors) and a massive failure (in large endeavors like AI and facial recognition). You simply can’t have any idea what the machine is actually “learning” to recognize – and that’s by design.
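    On the lava lamps: the real-world scheme (Cloudflare’s LavaRand is the well-known example) amounts to hashing unpredictable camera pixels into seed material. A minimal sketch in Python, with the camera capture stubbed out by OS randomness, since no particular camera API is assumed:

        # Sketch of lava-lamp-style entropy: hash noisy camera bytes
        # into a seed. read_camera_frame() is a stand-in for a real
        # capture call; os.urandom substitutes so the sketch runs.
        import hashlib
        import os

        def read_camera_frame() -> bytes:
            # Placeholder: a real system grabs raw pixels from a
            # camera pointed at lava lamps.
            return os.urandom(64 * 64 * 3)  # pretend 64x64 RGB frame

        def entropy_seed(frames: int = 8) -> bytes:
            h = hashlib.sha256()
            for _ in range(frames):
                h.update(read_camera_frame())  # fold each frame in
            return h.digest()                  # 32 bytes of seed

        print(entropy_seed().hex())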
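    And on programming and data being interchangeable: in a stored-program machine an instruction is just data that another instruction can overwrite. A toy illustration in Python; the machine and its opcodes are invented for the sketch:

        # Toy stored-program machine: the program is plain data, so
        # one instruction can rewrite another at runtime. Opcodes
        # are invented for illustration.
        def run(program):
            acc, pc = 0, 0
            while pc < len(program):
                op, arg = program[pc]
                if op == "ADD":
                    acc += arg
                elif op == "POKE":               # self-modification:
                    program[arg] = ("ADD", acc)  # overwrite an instruction
                elif op == "PRINT":
                    print(acc)
                pc += 1

        program = [
            ("ADD", 5),
            ("POKE", 3),      # rewrite instruction 3 using acc (now 5)
            ("ADD", 1),
            ("ADD", 0),       # becomes ("ADD", 5) before it runs
            ("PRINT", None),  # prints 11; a fixed program would give 6
        ]
        run(program)

    A row of wooden rails can’t do the POKE step; its “program” is nailed down.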

  4. Quantum computing has the potential to implement massively parallel computing, something that’s likely needed for AI.

    No, it’s not a random number generator.

    “You simply can’t have any idea what the machine is actually ‘learning’ to recognize”

    Not true. It’s no different than with human learning. You examine the mistakes. People have the added capacity to tell you their reasoning. Current AI designs aren’t yet complex enough for that.

  5. This appears to be a four-bit byte. It could be compared to a computer in the same way a neuron is a brain. If you string a few million of these together, and provide some mechanism for the various current states to be properly accumulated and used, you’d have a functioning computer.

  6. Yep Briggs, sounds like A.I. has become the new alchemy: only instead of trying to turn lead into gold, it’s trying to create conscious beings from unconscious matter. Quite doubtful that will ever come about.

  7. RE: “Practitioners of Strong AI believe that if we string enough of these ping-pong (or marble) machines together, rationality will emerge.”

    NO, practitioners of AI believe that if a neural network were ever manufactured of sufficient size it could gain rationality/self-awareness. Such a network would necessarily exclude one of the ping-pong-ball variety.

    People with a very strong belief (or need to believe) in religion find this unsettling. Many as a result will make statements such as Briggs’ remark, above (quoted), that takes the premise and twists it in a way that’s easy to dismiss (e.g., that “neural network” could equate to a “ping-pong-ball network”), and by extension to dismiss the larger real hypothesis about self-awareness arising out of a sufficiently complicated network.

    If anyone doubts that the human neural meat computer is the source of consciousness and intellect by virtue of its many complex interconnections, consider what happens to the human “mind” when parts of that neural meat computer get damaged and those neural connections fail to function (e.g., injury, stroke, drugs, etc.).

    Believers are fond of saying, correctly, that nobody knows HOW consciousness and intellect arise, but they routinely sidestep addressing that we at least DO KNOW WHERE particular intellectual capabilities of “mind” reside in the meat computer, based on the repeated, reproduced observations that damage to particular areas results in predictable losses in cognitive function.

    It’s rather obvious, but if the belief is to be sustained, the obvious must be dismissed or ignored, and dismissive ignorance is rampant. This is necessary to sustain the belief in an eternal soul, which cannot exist if the consciousness that can imagine an afterlife can be destroyed with the destruction, or temporary impairment, of a particular patch of the meat computer.

  8. Well, I’d say they are doing a good job of preparing people like Ken for the demon-infested robots that will turn everyone’s heads.
    Diabolus ex machina.

  9. “Practitioners of AI believe that if a neural network were ever manufactured of sufficient size it could gain rationality/self-awareness. Such a network would necessarily exclude one of the ping-pong-ball variety.”

    People with a very strong belief (or need to believe) in AI find this comforting.

  10. Ken’s got the right idea, though current neural networks are, for the most part, little more than weighting schemes. A successful network would likely need feedback. Kohonen networks, e.g., demonstrate unsupervised training (sketched below). They learn much in the same way postulated for humans. They are still a long way off from genuine AI.

    “People with a very strong belief (or need to believe) in AI find this comforting.”

    About as comforting as knowing gravity pulls things downhill.
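    For the curious, here is a bare-bones Kohonen self-organizing map in Python, to make the unsupervised-training point concrete; the grid size, rates, and random color data are arbitrary choices for the sketch:

        # Minimal Kohonen map: no labels. Each step pulls the winning
        # node, and (shrinking over time) its neighbors, toward the
        # input, so similar inputs end up near each other on the map.
        import numpy as np

        rng = np.random.default_rng(0)
        grid = rng.random((10, 10, 3))  # 10x10 map of 3-d weights

        def train(data, steps=2000, lr=0.5, radius=5.0):
            for t in range(steps):
                x = data[rng.integers(len(data))]
                d = np.linalg.norm(grid - x, axis=2)  # distance to each node
                bi, bj = np.unravel_index(d.argmin(), d.shape)  # best match
                decay = np.exp(-t / steps)  # cool the map over time
                r, a = radius * decay, lr * decay
                ii, jj = np.indices(d.shape)
                h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * r * r))
                grid[...] += a * h[..., None] * (x - grid)  # pull neighborhood

        train(rng.random((50, 3)))     # unlabeled data: 50 random colors
        print(grid[0, 0], grid[9, 9])  # opposite corners drift apart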

  11. Several articles saying the same thing, and Briggs still doesn’t know the difference between a calculator (counter, abacus) and a computer. No sense explaining anything to someone who has become ineducable and is just repeating what he thinks he knows to himself to gain comfort.

    That’s not to say that computers can become intelligent or gain minds, or that they can’t. To think that the answer to this is simple requires, well, a simple mind.

  12. Speaking of AI but somewhat digressing, here are some thoughts on legal issues with driverless cars:
    https://www.youtube.com/watch?v=u-UT4nxE-LE

    Note that these cars will not have intelligence since it has been claimed that intelligence can only exist in critters that talk. This may change when autonomous cars start giving each other hand gestures or exhibit other signs of road rage.

    Note also that none of these cars will be built with ping pong ball accumulators.

  13. To answer Lee’s comment, it appears a fully mechanical Turing machine has been built. Therefore, a computer can be built purely out of mechanical components, like Briggs says. (A minimal simulator in code appears at the end of this comment.)

    https://www.youtube.com/watch?v=40DkJ9vt5CI

    A Turing machine has also been made of LEGO. I don’t know how the sensor works, though, and it might be electronic. It’s just cool to look at, so I’m posting it too.

    https://www.youtube.com/watch?v=FTSAiF9AHN4

    You can also build a computer using a stream of water or air flowing through various chambers. This is called “fluidics”.

    https://en.wikipedia.org/wiki/Fluidics
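    A Turing machine is only a state table plus a tape; nothing in the definition cares whether relays, LEGO, or water enacts it. Here is a minimal simulator in Python, running a unary-increment program invented for the sketch:

        # Minimal Turing machine: a state table and an unbounded tape.
        # This program appends a '1' to a unary number.
        from collections import defaultdict

        def run_tm(table, tape, state="start"):
            cells = defaultdict(lambda: "_", enumerate(tape))
            head = 0
            while state != "halt":
                write, move, state = table[(state, cells[head])]
                cells[head] = write
                head += 1 if move == "R" else -1
            return "".join(cells[i] for i in sorted(cells)).strip("_")

        table = {
            ("start", "1"): ("1", "R", "start"),  # scan right over 1s
            ("start", "_"): ("1", "R", "halt"),   # write one more, stop
        }
        print(run_tm(table, "111"))  # -> 1111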

  14. The objection is not that what Briggs describes is mechanical. It’s just not a computer. It’s a counter.

    Babbage’s concept was a purely mechanical computer. It was never successfully built because of limitations in its construction, but it could have worked.

    Briggs’s concept is on the same level as looking at a model of the Wright Flyer and declaring such a thing would never carry hundreds of trans-Atlantic passengers. In a way, it would be correct though rather myopic.

  15. Thiago:

    I don’t see why that’s a response to my comment. Mechanical computers are old news. I had one when I was a child. You cycled it through instructions by pushing and pulling on a plastic handle.

  16. I personally know a few people here in SV who work in some of the top companies that develop AI/ML. All of them know very well that AI/ML/Neural Networks are just catchy marketing terms. There’s no ‘I’ in AI and there’s no ‘L’ in ML. It is just a bunch of regression models stacked on top of other regression models, which gives an impression of complexity to the uninitiated (example here: http://www.bbc.com/culture/story/20181210-art-made-by-ai-is-selling-for-thousands-is-it-any-good). All their models are deterministic and they admit it. (A bare-bones illustration of the stacked-regressions point appears at the end of this comment.)

    If they were lying to me, I’m lying to you.

    Former programmers/developers (rather than mathematicians and statisticians) recently re-branded themselves as ‘data scientists’ because they were already in place, but that’s a whole other story. The initial drive/motivation of those companies is not to provide an answer to age-old philosophical questions about consciousness, but to develop fancy automation.

    Now, a really scary part, my friends tell me, is that more than half of the current ‘jobs’ people hold can easily be replaced by automation, and nobody has a clue how the current capitalist system (especially here in the US) would cope with that.

    To have real AI/ML, one needs wetware, as software/hardware analogies won’t cut it. Instead of consulting with cognitive neuroscientists/geneticists, etc., a bunch of programmers are playing with their toys without a good grasp of what the current level of knowledge in real vision/perception/attention/cognition is. A human cell has about 20K genes and 100K proteins, and we don’t know how real neural networks work, let alone how to model one. And that’s looking at the bottom-up level. If we want to model the top-down processes of attention, vision and cognition, forget it.
    I think we can agree that there’s no consciousness without meat, but to have a true AI, one would need to invent a system that is self-aware and, more importantly, has a survival instinct. The rest will fall into place easily. I believe we are far from that point.

    Anyone interested, I suggest listening to this:
    https://www.youtube.com/watch?v=dadT-14FkSY&list=PL8AD2B712B1A0578F

    And this book (none of the usual AI hype you read about):
    https://www.amazon.com/dp/069116276X/?coliid=I1CF4L19YIGB5U&colid=3DRFG52M1HK5Q&psc=0&ref_=lv_ov_lig_dp_it
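    To see how literal the “regression models on top of other regression models” description is, here is a bare numpy sketch; the shapes and the tanh squash are arbitrary choices for the illustration:

        # A feed-forward net spelled out as stacked regressions: each
        # layer is squash(X @ W + b), a linear regression pushed
        # through a nonlinearity, feeding the next regression.
        import numpy as np

        rng = np.random.default_rng(1)

        def layer(x, w, b):
            return np.tanh(x @ w + b)  # regression, then squash

        x = rng.random((5, 8))  # 5 samples, 8 features
        w1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
        w2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

        hidden = layer(x, w1, b1)       # first regression's outputs...
        output = layer(hidden, w2, b2)  # ...feed the second regression
        print(output.ravel())           # deterministic given the weights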

  17. I meant to post this the other week,
    https://www.youtube.com/watch?v=njU4u2hMFnE

    A more reasonable look at the matter of AI. For me, the worry is rather the authorities and individuals blaming computers, and eventually artificial intelligence, for making bad choices. Passing the buck.
    It’s not all bad.
    That lecture is long and takes a while to get going, but some might find it interesting. I’ve got to listen to it again because I didn’t concentrate the first time.

    People should stop pretending to panic about machines and be more grateful to the people who helped make computers what they have become.
    There’s always an off button.
    Merry Christmas.
