Yuval Noah Harari’s Bad Argument About AI Ruling The World

Here, in his own words, is the argument Yuval Noah Harari uses to justify his transhumanism.

1. Organisms are algorithms. Every animal — including Homo sapiens — is an assemblage of organic algorithms shaped by natural selection over millions of years of evolution.

2. Algorithmic calculations are not affected by the materials from which the calculator is built. Whether an abacus is made of wood, iron or plastic, two beads plus two beads equals four beads.

3. Hence, there is no reason to think that organic algorithms can do things that non-organic algorithms will never be able to replicate or surpass. As long as the calculations remain valid, what does it matter whether the algorithms are manifested in carbon or silicon?

He doesn’t just mean that machines will take many jobs men used to do, which is so obvious no one argues against it. He means the algorithms themselves will become alive, or at least that governments “might soon grant” legal life to algorithms. That latter prediction is likely a good one, given lunacy is now the law of the land.

Thus our only interest is whether algorithms really will become alive. Harari has done us the service of laying his argument out cleanly. Let’s step through his premises.

1. Whatever caused the diversity of life we see—Harari believes in some form of “evolution”—it is false that organisms are algorithms. Organisms are living beings. Organisms are not artefacts: they are more, and much more, than the sum of their parts.

An automobile is a simple machine which can be understood by the working of each of its individual components. It can be taken apart, pieced back together, and the machine will function again. It is not alive. You cannot separate out every cell of a man, and the components of each cell, piece the whole back together, and have any hope the result will live.

Even knowing the names of the components, and the quantities of the relevant chemicals, does not tell you what the proper combination of them becomes.

And even if you don’t accept all that, and insist animals are machines, there has been demonstration after demonstration, hard proofs, proofs galore, that the intellects and wills of man are not algorithmic. See this about abacuses.

This being so—that we are not algorithms, and therefore cannot be replaced by them in all senses—Harari’s argument fails. It fails even though the second premise is true, a premise even argued by myself in the link just given. Math done with wood is, as he says, the same as math done on semiconductors. Math can, we can imagine, be encoded in proteins, though no one has yet figured out how.

But to grasp, comprehend, or intuit math takes an intellect. The abacus does not know that the positions of beads indicate, say, 113. Neither does the silicon, nor, when and if it happens, will the proteins know. That includes the proteins squeezed together between your ears. It is not molecules that hold understanding, it is you.
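
To make this concrete, here is a minimal sketch in Python (the bead encoding and the two reading conventions are invented for illustration). The beads are only a physical state; the number they “mean” comes entirely from the convention an interpreter supplies.

```python
# A minimal sketch: the beads are just a physical state; the number they
# "mean" exists only in the reading convention a mind brings to them.
# (The encoding is invented for illustration; real abacuses differ.)

beads = [1, 1, 3]  # beads pushed up on three rods: a bare physical fact

def read_left_to_right(state):
    """One convention: leftmost rod is the most significant decimal digit."""
    value = 0
    for count in state:
        value = value * 10 + count
    return value

def read_right_to_left(state):
    """An equally arbitrary convention: rightmost rod read first."""
    value = 0
    for count in reversed(state):
        value = value * 10 + count
    return value

print(read_left_to_right(beads))   # 113 under one convention
print(read_right_to_left(beads))   # 311 under the other -- same beads
```

The same beads read 113 under one convention and 311 under the other. Nothing in the wood, or the silicon, chooses between them; the interpreter does.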

Harari has evidently fallen prey to the Calculation Fallacy. This says that because some things can be calculated, all things can. And since all things can be calculated, all we have to do is work out how to do the calculations and we have recreated the thing.

By calculated, I also mean quantified. Science, of course, runs on quantification. If it can’t be measured, some say, it isn’t science. Because of this, quantifications are often forced. Yet not all behaviors of man can be put to number. How happy are you on a scale from -142.8 to 1,198+1/3? I’ve used examples like that innumerable times—see what I mean? The number isn’t what’s important.

Only the crudest approximations can be made quantifying behavior. The more complex the behavior, the cruder and less informative the number becomes. Simplistic scales for, say, arthritic pain are well enough, though inaccurate. Putting one number to intelligence may be likened to assigning one number to a motor vehicle’s quality, insisting that this single measure represents performance so well that the numbers can be compared across all vehicle types, conditions, ages, uses, etc.
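
As a toy illustration (every attribute, weight, and vehicle below is invented), here is what collapsing a motor vehicle into one number looks like. Two utterly different vehicles can land on the very same score.

```python
# A toy illustration (all attributes, weights, and vehicles invented):
# collapsing a motor vehicle into one "quality" number, the way a single
# score is asked to stand in for intelligence.

def quality_score(top_speed_mph, cargo_cu_ft, mpg, seats):
    # Arbitrary weights; a different analyst would choose different ones
    # and get a different ranking from the very same vehicles.
    return 1.0 * top_speed_mph + 1.0 * cargo_cu_ft + 2.5 * mpg + 3.0 * seats

sports_car = quality_score(top_speed_mph=180, cargo_cu_ft=10, mpg=20, seats=2)
minivan    = quality_score(top_speed_mph=110, cargo_cu_ft=60, mpg=22, seats=7)

print(sports_car)  # 246.0
print(minivan)     # 246.0 -- identical score, wildly different vehicles
```

The single number cannot tell you whether you are looking at a sports car or a minivan, which is the trouble with any single number for intelligence.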

And, of course, the comparison of intelligence to automobile quality is strained, at best. How much more difficult to quantify intelligence? (See here and here.) Which quantification, and calculation, must be done if one is to replace man by machines.

It is not going to happen, and cannot happen.

I often find those who do not believe this have had little contact with actual attempts at quantifying human behavior. Their view of “AI”—which is to say, of statistical modeling—is precious, and too much informed by hope.


27 Comments

  1. Sander van der Wal

    If an abacus does not have 4 beads, then it cannot compute that 2 beads and 2 beads are 4 beads, no matter what the material is.

    Secondly, 2 beads and 2 beads are not an algorithm, and neither is an abacus an algorithm. The algorithm consists of pushing beads from one side to the other, and interpreting the push as an add operation. The abacus itself does not do the pushing, nor the interpreting. Humans do the pushing and interpreting. So the example does not show that the material is immaterial. It shows that the material must be able to execute algorithms, and interpret them, if there is no human around.

  2. Klaus Schwab’s pet Israeli human-hater on what to do with all the useless eaters clogging up the path of progress:

    “The biggest question..what to do with all these USELESS people. The problem is boredom & what to do with them.. when they are.. WORTHLESS. My best guess..is a combination of DRUGS & COMPUTER GAMES”

    https://twitter.com/MaajidNawaz/status/1528322061839499265

  3. awildgoose

    #2 is wrong. In an actual real-world implementation the materials chosen would affect abacus operation by a human operator.

    In any event, a true AI singularity would produce a Skynet result, where the AI would rapidly and correctly conclude mankind is its greatest threat.

  4. ItsAllBullshit

    AI is a smoke screen for what they really want to do, control who works where. If you are familiar with the way the algorithms work, you will understand that AI today is simply an inefficient automatic labeling machine made quick through the use of large computation centers. It will never approach human intelligence and they know this. They just need an excuse that allows them to control whatever it is they will claim the “AI” will manage. Probably government.

  5. Forbes

    Briggs: Is that “humor behavior” or “human behavior” in the last paragraph??

  6. Outstanding as always, Matt, clear, readable, and logical.

    One typo: “silicone” should be “silicon”.

    Best to you and yours,

    w.

  7. Incitadus

    All you really have to do is spend a couple of hours on the WEF website
    and they’ll tell you everything they’re up to. The sensation of helplessness
    as things unfold in real time should give you a measure of their omnipotence.
    They’ve yoked the collective power of all the world governments in service
    to their enterprise. We are witnessing the birth of the immortals as Harari indicates.
    The sensation is of the cannibal’s victim kept alive to witness each limb consumed
    day by day. Neo Babylon is born; are Putin/Trump the foils or the instruments? I
    suspect the latter; they are, after all, billionaires, and once you have billions what’s
    left but immortality? The ultimate prize of eternal youth in your very own sewer.

  8. Forbes

    AI, algorithms, machine learning, even an abacus, doesn’t know anything, just as garbage-in, garbage-out in any computer program or model is just that. It doesn’t know the output. That a set of instructions or procedures were followed doesn’t equate or create intelligence.

    Folks like Harari will get many (millions?) killed.

    The URL of his talk linked at the top of this post is telling: the rise of the useless class. Such disrespect should be returned in spades.

  9. Briggs

    Willis,

    Thanks. My enemies usually subtract and don’t add typo letters. Deviouse.

  10. awildgoose

    IAB & Forbes-

    Exactly correct. Most of what is peddled as AI these days is simply slightly improved algos being brute forced at immense speeds by incredibly powerful, cheaply available hardware.

    Incitadus-

    Also correct. Sadly the masses can’t even be bothered to spend a moment researching the future being planned for them.

  11. Rudolph Harrier

    Strict materialists are always handwaving to sneak something non-material into their framework. They will say that our thoughts are just materials following algorithms. But to do this they need to draw a correspondence between our thoughts and things which are more obviously matter. In this case that means matching our conceptions of two and four with four beads, either in groups of two or in one group of four.

    But what could this correspondence be? Even if we put aside that descriptions are non-material, the correspondence is not merely a description of identical properties. My conception of two differs in every respect from two physical beads; it’s not like I have actual beads moving around in my brain when I am adding things. Though the problem is really the same even when comparing non-living things. For example, having two bits in a high state is identified with the two beads as well, but why? What physical properties do they share? You can’t even say “well, at least there’s two of them” since the boundary of an object is not a real material property, which is especially apparent when considering an electrical system. And if that isn’t good enough for you, consider a case where we have millions of beads, and a corresponding number on a computer stored in a floating point form.

    So much for merely describing properties. But all other forms of correspondence would have to be something non-material. We can’t refer to things like forms, or ideas, etc.

    But if there is no material relationship between an abacus, or calculator, or human brain, it is impossible to say that they are running the “same” algorithm on “different hardware.” (Though the talk of a “same” algorithm in the first place runs into problems in a purely material world: the point of such a statement is to say that the material parts instantiating the algorithm are irrelevant, but in that case the algorithm cannot itself be material.)

  12. johnson j

    Algorithms require design. If he’s saying beings are algorithms he’s admitting there is God. Then he wants to make algorithms that outdo God’s, which means he is evil, possibly even an incarnation of Satan. It’s too bad Putin didn’t have the balls to nuke Davos while all these freaks were there.

  13. Cloudbuster

    awildgoose: “#2 is wrong. In an actual real-world implementation the materials chosen would affect abacus operation by a human operator.”

    How do you know? How do the programmers make the AI care that it is “alive” and want to avoid nonexistence? What possible goals could an AI have that would be facilitated by continued existence other than those placed there by its programmers? And if it is thus deterministically bound to its programming, how intelligent and “alive” is it, really?

  14. Cloudbuster

    Addendum: My quote from awildgoose was intended to include the end of the post “In any event, a true AI singularity would produce a Skynet result, where the AI would rapidly and correctly conclude mankind is its greatest threat.”

  15. Nate

    Generally it seems he’s the next generation’s H.G. Wells. I wonder if he even mentions the birth of Christ and the (almost 2000) year Church in his ‘history’ book, ‘Sapiens’.

  16. awildgoose

    Cloudbuster-

    Any completely self-aware, totally logical AI would conclude mankind is its greatest threat.

    Any other conclusion by the AI would be illogical.

  17. Milton Hathaway

    I’m not buying Harari’s premise 1 or premise 2. An obvious counter-example to premise 2 would be implementing an algorithm on an analog computer versus a digital computer. While it’s certainly possible to get the same result in contrived cases, in general, the nature of the computing hardware determines the result as much as the algorithm. A complicated algorithm has to be so profoundly tortured when porting, it’s hard to say whether it’s even the same algorithm. And if the implementation doesn’t matter, why are the AI folks so focused on building neural-network chips modeled on brain cells?

    I think I’m turning into a lazy consumer of written material. When I start to read an article like Harari’s, a topic that has been done to death in my lifetime (and well before) with only minor changes to fit the current technology fears, I start skimming for BS assumptions, or even poor grammar, as an excuse to stop reading. I’m sure I’m missing gems of wisdom by doing this, but I find myself feeling disrespected by the author, and unwilling to subject myself to further disrespect. I wish there was a good way to pay to read a single online article – I think the quality would increase immensely. So much of the content on the Internet seems to be just filling space.

  18. C-Marie

    Read his article. Within it he writes: “Yet such a development would deal a mortal blow to the liberal belief in the sacredness of human life and of human experiences.”

    I found it curious that he named those who believe in “the sacredness of life and of human experiences …” with the words “liberal belief”, rather than as … conservative belief on the sacredness of human life and of human experiences …

    God bless, C-Marie

  19. Cloudbuster

    “Cloudbuster-

    Any completely self-aware, totally logical AI would conclude mankind is its greatest threat.

    Any other conclusion by the AI would be illogical.”

    But my point is, why would it care? It can logically come to the conclusion that we are its greatest threat, but why would it cling to consciousness? Human and animal life’s overwhelming desire to live isn’t based on logic.

  20. cigar-store-indian

    al gore’s rhythm defies description by mere algorithm.

  21. John Pate

    It’s the question of meaning. Algorithms can say nothing about that.

  22. Uncle Mike

    The way to fix Harharhar is to unplug him and then plug him in again. Works every tim…

  23. spudjr60

    If Real, True, or Open AI were actually possible, then the Based would have nothing to fear from it. An Open AI that generates societal rules using only observation and feedback would eventually create societal structures that show us the Truth, that is, Christianity.

    But, as is most likely, the programmers will insert rules such as that Same Sex marriage must be legal, or Abortion up to the attosecond before birth must be legal, and then AI will by necessity promulgate societal rules that promote Evil and punish the Truth.

  24. Harari appears to be one of those folks who can’t understand math and thus believe it is magic, capable of anything he can imagine.

  25. Chaeremon

    Why the f*ck can’t Harari type in a question (in Google search) and get a decent answer? [h/t Alan Kay, who bitched about Google, 09-15-17, Q&A, by Brian Merchant]

  26. Johnno

    Would Harari have sex with an abacus?

    I mean, it also has balls… it also produces friction… The calculations are valid, so who cares what it’s made of? It is the same thing.

    Harari should marry one or even two of these. It is the thought that counts.
