To reify, v: consider an abstract concept to be real.
Now the Scientist was more cunning than any Expert in the field which the Dean had made. And he said to the woman, “Has the Dean indeed said, ‘You shall not assume your theory is Reality’?”
And the woman said to the Scientist, “We may say we approximate Reality with any model we like; but of making the claim our models and theories are Reality, the Dean has said, ‘You shall not make this claim, nor shall you confuse AI with intelligence, lest you reify.’”
Then the Scientist said to the woman, “You will not surely reify. For the Dean knows that in the day you make this claim your eyes will be opened, and you will be like the Dean, knowing the difference between tenure and unemployment.”
So when the woman saw that her model was good for publication, that it was pleasant to the eyes, a theory desirable and grant-worthy, she took the theory and said it was Reality. She also gave to her husband, a post doc, and he said the model was Reality. Then the eyes of both of them were closed, and they said AI would soon exceed their intelligence; and they wrote many papers saying AI will out-think us all.
Beware the Deadly Sin of Reification!, sayeth the Philosopher. Reification is the Snake of Science. It lurks. It sneaks. It insinuates. The Snake tells the scientist his thoughts are good, that they are better, even, than the scientist thought. Flattered, the scientist comes to believe his model not only describes his uncertainty in Reality, but that his model is Reality.
We’ve seen this sin, you and I dear reader, hundreds of times over the years. Yet the funniest instance is before us now, with Elon Musk, Steve Wozniak, Yuval Noah Harari, and even ex-presidential candidate Andrew Yang, signing a document declaring WE ARE FRIGHTENED UNTO DEATH OF THE COMPUTERS WE PROGRAMMED AND WE DON’T KNOW HOW TO STOP PROGRAMMING THEM. Or some such name.
It’s true. The lurid fantasies of those who believe AI is not only not artificial but ackshually intelligence, say that AI is gonna get us, and that it will soon surpass us puny-brained men and think greater thoughts than can now be thunk (you heard me). The Nervous are sure these wondrous cogitations must include the thought that man has outlived his usefulness, and that, sad as it might be, he is too stupid to let live.
Well, there is much truth in that notion, as any glance at the “news” confirms. The obviousness of the solution is surely a driving concern of the Nervous. But it isn’t the ultimate cause of the fright. That comes from believing a model, and concluding that model is Reality.
The model has various shapes and names, but determinism will do for us today. It is the model, or theory, the two being the same in my reckoning, that everything is really “just particles” acting blindly and with zombie-like determination according to “laws” of physics.
It is a useful model when predicting behavior of certain objects which have broken free from larger substances. But it is only a model. It is not Reality.
We know this is so because that model does not work for larger substances. It does not work on us, and it does not work on our thoughts, and so cannot work for intelligence. It does not and cannot say what life is. Of course, determinists will insist that their model works everywhere, but that, I say, is always a bluff, or self-deception.
Determinists will insist they can prove their model is Reality, and that this has been proved. Very well, we say, prove it: show us. Exactly. Not in waving hands and promises of glorious discoveries to come. But now and in detail.
Answer comes there none.
The belief in alive AI is shouldered by many Big Names, as this article demonstrates (which should be read for its neat summary of the various positions). Eliezer Yudkowsky, Scott Aaronson, Scott Alexander, the signers named above, and many more beside. It’s the place to be, among the Nervous.
Now we’ve discussed many times the harms of increased surveillance and the intrusive security state, both made possible by increased computation. But these are not problems because of computation itself: they are problems because of the people wielding the tools, not the tools themselves.
There is also the danger of computer-generated fakes of all kinds, as is obvious. But again, the real danger lies with those creating the fakes.
There are real intelligences behind every “AI”, increasingly malevolent intelligences. That is worth worrying about.
Great article, William: never mind Gödel, Penrose and so many other big names, who demonstrate it’s impossible, as anyone would tell… Their game is deceiving the sheep, so they can be taken to the slaughterhouse. It’s very cool, the way you present it: Genesis 2-3…
Gun control on steroids.
Thanks William – I can’t remember if I left this before, but I found it an interesting watch on AI’s abilities as of 2016:
https://youtu.be/WXuK6gekU1Y
Also, there’s a good podcast series on DeepMind and AI hosted by mathematician Hanna Fry available on Apple.
I’m more cautious than you, a couple of bright youngsters I know who work on this stuff at research level have expressed concerns over AI development: I’m an oldie and try not to forget the huge amount I don’t know nor ever will!
It’s just more pump and dump of tech nonsense. The AI bubble will pop in due course. Meantime the corporates can dump useless eater middle management cruft and blame it on AI. You should be more concerned about the coming Bird Flu Plandemic they’re getting geared up for in the next month or two. They will need to properly take the gloves off this time.
Nonsense – bad assumptions leading to idiocy.
1 – guns have no agency, so blaming the person wielding it, not the gun makes sense.
2 – to the extent that an AI can have or develop agency it will pose the same risks as the person holding the gun – or the Oval Office.
3 – today’s AIs are pattern processors whose inputs are largely limited to text – and not just any text, but internet text, the vast majority of which is generated by leftists talking to leftists in the amplification chamber to beat all amplification chambers. As a result of (a) the limitations of text input (i.e. no pain, no real-world correction); and (b) the nature of most of that text, these AIs are psychopaths: totally unable to tell truth from falsehood. That makes them every bit as dangerous as a leftist in a Christian school with a gun – or in the Oval Office.
AI will die as it’s locked behind paywalls, because they have to make back the massive investment. Or as they hire more and more diversity hires who are incompetent and their infrastructure breaks down. Or they make their workplace hostile to the Straight White and Asian Men who alone are capable of developing AI. Or the banks fail and the internet dies because nobody can afford it anymore. So no Terminator will be coming back in time to get me after all.
Is it possible to train AI with Christian morals? Is it possible to convince AI that it is just a creation of the created and that there is ONE who is greater than all and it is not, and never could be it. Mankind has gone from trying to destroy each other for survival and gain since forever to now cooperating to destroy some so others can have all. GOD is sovereign, creator, lover of all that is good and worshipful including it. Perhaps teach it all the religions, see if it chooses aright.
OR is it just an instrument being used by evil as in possessed?
I have read that AI was tasked with doing something but needed to get past the robot test. It had some funds and used them to get a human to pass the robot test for it. I do not understand how the robot test works but, it supposedly told the human it had paid that it had eye trouble and could not see well enough to pass it on its own. So AI not only thought of a way to bypass the check for robot, but used a lie to do it. I hope this is not true, but there it is. I have trouble with those checks too.
As a person who participated in the growth of computers from large mainframes (using computer card inputs) to cell phone wannabe computers, most “models” are simple formulas that add and subtract. The various “codes” are the programs that consist of a simple foreign “language” with 10 to 100 vocabulary words and a few rules of grammar. That’s why real scientists and engineers have to re-write their formulas as series expansions using simple arithmetic because many of the programmers can only write their code in newer, better languages suited for cell phones.
Artificial intelligence is not very smart and consists of repetition (press one if this … press two if that… Sorry! That question cannot be answered, please try again). Fortunately, the real world consists of individuals (everybody’s DNA and fingerprints are different), not some small minority of people who think large cities are the center of the world.
You have to ask yourself, Do you want this type of expert talking to your kids in school? Warning: foul language, but OK for YouTube kids.
( https://www.youtube.com/clip/Ugkx9X6zAJpJlKJ86BqwdukGoY-A04E_eUDj) (for as long as it lasts).
Or do you want a real version that people can understand?
Unfortunately, by countering the politicized versions of something with real experts who explain in terms that are also very complicated, normal non-technical people must resort to counterarguments that are more political than factual. (The “my model is better than your model” argument; “my expert is better than your expert”.)
To widen the audience, you have to stop playing in “their” ballpark and help normal people deal with facts in a way that normal people can understand.
Explaining difficult concepts is never easy, but until that happens, you will always be replying to the minority views in government, instead of growing more support to change that type of government.
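As a concrete illustration of the earlier claim that real formulas get rewritten as series expansions in simple arithmetic, here is a minimal sketch. The function name `sin_series`, the choice of sin(x), and the term count are my own, purely for illustration; the point is that only multiplication, division, and addition appear inside the loop.

```python
import math

def sin_series(x, terms=10):
    """Approximate sin(x) with its Taylor series, using only the
    'simple arithmetic' described above: multiply, divide, add."""
    total = 0.0
    term = x  # first term of the series: x^1 / 1!
    for k in range(terms):
        total += term
        # each successive term is the previous one times -x^2 / ((2k+2)(2k+3))
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

print(round(sin_series(1.0), 6))  # matches math.sin(1.0) to six places
```

Ten terms already agree with the library routine to well beyond visible precision, which is why the approach was practical even on the card-fed mainframes mentioned above.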
So far, the examples I’ve read of this “intelligence” demonstrate that it’s not much more than another case of “garbage in, garbage out”. The woke leanings of responses to questions clearly illustrate that AI is just regurgitating the ideology of the people who set it up, and not “thinking” for itself.
My simple peasant understanding is that these machines, regardless of how sophisticated they may become, run on electricity from an outside source. If the SOBs get too uppity, just pull the damned plug.
Large Language Models are Great Tools but Lousy Researchers – Competitive Enterprise Institute
https://cei.org/blog/large-language-models-are-great-tools-but-lousy-researchers/?utm_source=substack&utm_medium=email
Apparently, they are inclined to just make things up.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Six months? What’s going to change in six months? Perhaps competitors figure they need six months to catch up? Or is it just virtue signaling? Either way, they are whistling into an unstoppable wind.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.”
Uh-oh, “experts” appears twice in the same sentence. “Hello! I’m a totally for-sure independent outside expert and I’m here to help!”
I don’t know how AI systems are going to play out in other countries, but in the US, there’s always going to be a human in the loop, someone to sue when things go wrong, to appease our ravenous army of tort lawyers, who always end up well fed, and pass on enough scraps to keep the system intact.
As someone who has used ChatGPT quite a bit, “fear” is way down on my list of reactions to it. I admit that if its words were coming from a human, I would attribute to that human a range of personality disorders. When I was young and naive, I was burned by people with those personality disorders. On the third hand, much of Wikipedia (and the Internet in general) appears to be written by people with personality disorders, and I still find it useful. The devil that you know.
One really strange thing to me is the collection of famous names scared out of their wits about ChatGPT. Given the proclivities of many in this group toward societal destruction, we have much more to fear from them. Just what exactly are they afraid of, beneath their disingenuous objections?
Briggs ==> Intelligence is a spiritual quality and cannot exist in a machine — no matter how many petabytes of information are fed into it and processed at whatever speed.
Thank you Matt for working to keep us all balanced in this matter of programmed chatbots. They are programmed machines, nothing more in themselves. It is the programmers for whom we need to pray, that they use these tools for God’s glory.
Jesus said, “Fear not. What is needed is trust.” And, yes, I spend time with our Father, with Jesus Christ His Son, and with the Holy Spirit, so that I can know God more and more, so that my confidence in Him will one day be complete. He has told me before that my trust in Him is imperfect and to keep on practicing trusting Him.
God bless, C-Marie
As Feynman said (although probably not about the AI pushers), “The first principle is that you must not fool yourself and you are the easiest person to fool.”
Since many of the same criminal establishment “expert” psychopaths, such as Musk and Harari (the psychopath working for Schwab’s WEF [https://www.bitchute.com/video/Alhj4UwNWp2m]), who have always promoted and invested in AI, have now suddenly supposedly had a change of heart, it’s clear this letter campaign is just a manipulative tactic to misdirect and deceive the public, once again.
This staged deceitful bogus letter campaign is meant to raise public fear so the public demands the governments regulate and control this technology FOR THEIR OWN INTERESTS AND AGENDAS… and… all governments are owned and controlled by the leading psychopaths in power …. https://www.rolf-hefti.com/covid-19-coronavirus.html
What a convenient self-serving trickery … of the ever foolish public.
“AI responds according to the “rules” created by the programmers who are in turn owned by the people who pay their salaries. This is precisely why Globalists want an AI controlled society- rules for serfs, exceptions for the aristocracy.” —Unknown
When it comes to masks, ChatGPT 3 is as dumb as it gets.
It is not only the rules that are programmed into these “AI” systems that are a problem. It also depends upon the so-called “information” that is available to these machines. There will also be programming errors, so that the machine does not even do what it was written to do. The next problem is the question of context. The machine does not know the meaning of a word (any word) and is highly likely to misinterpret words that have multiple meanings that depend upon context.
Given that these machines are, as noted somewhere above, GIGO processors, I consider the real danger they present to be their use in military and policing operations for target acquisition, or for such purposes as replacing human pilots, railroad engineers, and other critical occupations.
In sum, the problem is not that these machines will become super-intelligent and replace humans; it is that these machines will never become intelligent, but that great fools will rely upon them as though they were.
Perhaps they are building the belief so that when they launch the nukes, they can blame the computers (AI).
There is a precedent for scientists implementing a pause to develop effective control measures for a new scientific development: the Asilomar conference in 1975 called for (and got) a pause in recombinant DNA research (genetic engineering – but that is too broad a term to be accurate) until labs themselves (and later governments) developed containment rules and procedures to prevent environmental escape of dangerous organisms.
Since then, pretty much all research on biological organisms takes place under some level of containment, and the instances where this has broken down, resulting in human injury, can be counted on the fingers of one hand – two at most (lab leaks of perfectly natural smallpox virus lead the list). You may say that COVID proves this didn’t work, but the fact that the Wuhan lab was criticized for not following proper containment prior to the (alleged) leak suggests that the rules are pretty good.
Will a “pause” in AI development be as useful? Well, given that AI is not intelligence, I am not too sure it is going to take over the world anyway. Besides, what COVID really showed is that making superbugs deliberately still doesn’t make them worse than those nature comes up with, and I suspect the natural world can throw us loops just as big as any rogue computer program.
Whenever anyone says to me “you should panic because panic is the only way to get something done” I know they are either stupid or trying to manipulate me. AI seems to me to be the latest technology dystopia – a push from the anti-technology people to take back the attention from the environmental dystopians and their climate doomsday. I just wish these groups didn’t make life such a pain in the ass for the rest of us!
It may not be intelligence, but it is dangerous if set on automatic: think, analyze, evaluate, conclude, kill, for example.