Where’s Jerry Garcia of the Grateful Dead? Seriously, what list of the greatest rock guitarists of all time would not – could not – include him? Sure, I know the internet article was just some teaser to get me to mindlessly click through an ad-laden list. But still, no Garcia. I object: who wrote this article?
And that is the question of the day: who wrote this article? Was it really written by the suspiciously suspect name on the byline, as if the author were the protagonist in some cheap novel, such as Ima Riter? Or, as happens more frequently these days, were the words the product of a Large Language Model (LLM), a class of Artificial Intelligence (AI) model and a sibling of the seemingly ubiquitous ChatGPT, though still published under the byline of Ima Riter?
Yes, AI models: complex statistical models whose genesis, as the hysteria goes, we will rue when they rule our future. Lately, it’s hard to scan a media site without finding at least one headline declaring that AI models are intelligent and sentient entities, capable of creating information in a manner that exceeds the abilities of both creator and user. Models that will destroy jobs and arrogate totalitarian powers. But is that true?
Despite assertions otherwise, AIs (LLMs in particular) are simply models that provide probabilistic responses to language prompts. At a basic level, ask an LLM to fill in the missing word in the phrase, “I ran up the … ,” and it will return “hill.” Not because the model is intelligent or sentient. No. The LLM returns “hill” because that is the statistically likely response to the prompt.
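To make “statistically likely response” concrete, here is a toy sketch in Python. It is emphatically not a real LLM (which predicts tokens using billions of learned parameters), and the miniature “corpus” is invented for illustration; but the principle is the same: count which word followed the prompt in the training text and return the most frequent one.

```python
from collections import Counter

# Hypothetical toy training corpus; a real model trains on vast text.
corpus = [
    "i ran up the hill",
    "i ran up the hill",
    "i ran up the stairs",
    "i ran up the bill",
]

def next_word(prompt):
    # Count every word that followed the prompt in the corpus,
    # then return the most frequent continuation.
    continuations = Counter(
        s[len(prompt):].split()[0] for s in corpus if s.startswith(prompt + " ")
    )
    return continuations.most_common(1)[0][0]

print(next_word("i ran up the"))  # -> hill
```

It answers “hill” for no deeper reason than frequency: no understanding, no intent, just arithmetic over prior text.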
Challenge it, since the word you are looking for is not “hill,” and the LLM will reach into its statistical memory, based on the decomposed works it was trained on, and provide the next likely response. You can then converse with it, so to speak. After the second or third iteration of entering, “That is not the word I am looking for,” the LLM will, like any good conversationalist, ask for additional context in order to provide a more appropriate answer.
Though, regardless of the impression your interaction leaves, you are not really conversing. And the LLM is not truly following your thoughts, any more than a mentalist – the so-called mind reader – reads your mind. Both the LLM and the mentalist look for contextual clues in order to provide or elicit a likely response. And the seeming powers of both rely on the audience to attribute abilities which do not exist.
The mentalist says, “I feel something bad has happened to you recently,” when, in fact, nothing bad has happened. However, you equivocate on the meaning of “bad” and look for any instance of some minor misfortune – you play along. So, yes, you imagine the innocuous as bad. It is true that yesterday you had to search 30 minutes for your iPhone before your son found it between the cushions of a couch. Without thinking, you come under the mentalist’s spell and provide him with additional details of your life – you lost your cell phone and you have a son. And so the process goes.
As with the mentalist, the LLM also requires your indulgence, where you actively come under its spell. However, both spells are of your own creation. You allow your imagination to ascribe powers that do not exist – the mentalist cannot read your mind and the LLM does not understand you. The power lies completely within you. And the more you unleash this power, the more omniscient you believe the mentalist or LLM to be – with one instance providing entertainment and the other irrational fear.
This is not to say that, just as The Science™ was weaponized, folks holding and seeking power will not try to weaponize AI models in the future. Government and its agents will begin saying we must act in a certain way simply because an all-knowing AI has recommended that action. Here, the mentalist will be the state, but we, the audience, cannot let our collective selves be fooled.
Models will be trained to answer for the regime. Expect it to be so.
Now what about the musical talent of Garcia? Where does it rank? And will AI replace many of those creating that and similar lists?
As far as Garcia’s talent goes, you the listener – the acting human – decide. Create your own top 10. Search and read the hundreds, if not thousands, of other lists on the web. Agree with the ones you like and ignore the others. If an LLM-generated list syncs with your internal ranking, accept it, have fun with it. If not, forget it. Your preferences are yours alone.
As for jobs being lost: sure, many jobs will be lost to the new technology. But will AI make human efforts redundant? Will we all end up unemployed? Never.
Keep in mind that an AI-generated list of guitarists is a synthesis of lists and writings already found on the web – the AI, in essence, employed the same search you would have used. The AI added no new information and no new analysis. It simply provided a probabilistic response to the language prompt: rank the best rock guitarists of all time. However, and this is key, the AI relied on words previously written by humans. It simply responded with a summarized version of all those opinions; a summary that may have more hallucinations than half the audience dancing at a 1980s Grateful Dead concert.
So, yes, AI will be used to generate meaningless lists that occasionally steal minutes from your day. But these synthesized strings of sentences are nothing new, just a rehashing of web articles written by warm, breathing writers. And, yes, many jobs creating similar types of lists, summaries, etc., will be lost, as will many other jobs in various fields and endeavors, just like jobs were previously made redundant by emerging technologies – with new, unrealized jobs created to replace them.
Nevertheless, original ideas – the beauty of humanity – will forever remain the product of acting men and women. And without human hands continually authoring original texts, those supposedly dangerous AIs will summarize nothing and respond with nothing.
Our lives and futures are safe.
— Where’s Jerry Garcia of the Grateful Dead? Seriously, what list of the greatest rock guitarists of all time would not – could not – include him? —
Garcia was, above all, a musician. He very seldom showed off his “chops,” as so many “great” guitarists feel obliged to do. From the very first Grateful Dead album, what comes through in his playing is fidelity to the music. That also characterizes his first solo album.
It’s not the LLM I fear. It’s the mob’s mass rush to worship it.
The LLM is the latest gold statue. If we don’t convince the majority these things aren’t alive and can’t think, our lives will be ruined by their insistence on its worship, and our insistence that there is only one God.
AI is already being deployed in a big way in warfare. It is being used to kill. Indiscriminately.
“The machine did it coldly”: Israel used AI to identify 37,000 Hamas targets, with permission given to kill large numbers of civilians in pursuit of low-ranking militants. See: TheGuardian.com.
The system is named “Lavender”. One user stated: “I would invest 20 seconds for each target at this stage and do dozens of them every day. I had zero added value as a human, apart from being a stamp of approval. It saved a lot of time.”
You can be sure that AI is being used in Ukraine by both sides, especially for target acquisition.
Chat~AGI~ achieves nothing autonomously useful, except the awareness that [some ~all~ model’s] partial satisfiability is indistinguishable from hallucinations.
AIs will summarize nothing and respond with nothing. … and will be happy
Jerry Garcia is just a flavor of Ice Cream from Ben & Jerry (Garcia?)
It’s sad to see Briggs bringing on libertarians (read: liberals) to substitute ideological slant for sound arguments. Libertarians can’t stomach the idea that anything generated by “the market” might result in something actually bad, so they write reams of apologia for everything that sensible non-liberals (post-liberals? right wingers?) have concerns over. The reality is that AI is being used for nefarious purposes and that it will replace well-paid jobs that people who don’t go into the rat race of academia can do. There is real harm, but Jim can’t stand that argument because AI is a product of “the market” and so it must be defended at all costs.
Levi,
I welcome rebuttal essays.
Levi –
Interesting comments.
1. Those who espouse Classical Liberalism are not progressive liberals. Nor are most of those who espouse libertarian views. However, not a small number of those who claim to be conservatives are actually progressive liberals – think anyone who voted for George Bush’s compassionate conservatism. Remember, the compassionate conservative view was (is?) that giving someone a home changes the core of who they are (akin to a New Soviet Man arising out of homeownership). You should know that – even if you don’t, you should.
2. I am not, by any means, a Luddite – though you seem to be one. And, if you take a look at the product of LLMs, it’s academics, who, for the most part, rehash (plagiarize) old content, that are likely out of a job. In essence, AIs perform the same meta-analyses that fill (overflow) academic journals. The HVAC guy who I pay $175 an hour will not lose his job. Though, with the advent of this new technology, I will no longer have to pay someone to wash my laundry in the nearby stream. Oh, wait, those jobs were lost years ago, along with key-entry operators, billing clerks, etc. Seems jobs lost get replaced by others. I know, I know, it’s different this time.
3. You make claims that are fabrications – hallucinations. I state that AIs will be used by the state for nefarious reasons. But you claim I don’t believe the very claim I made. Hmmm.
The usual thing is that new tech initially has negative effects on the lives of those on the lower end of the educational attainment scale and positive effects on the lives of those at the upper end – but the effects even out over sufficiently (relative to the rate of change) long periods to leave almost everyone affected somewhat better off.
In the case of LLM-style “AI” I suspect that bookkeepers will now go the way of typists/secretaries while a big chunk of jobs at the lower end of the service industries will get automated – a change being accelerated by democrat stupidity about minimum wages. I have no idea what changes this process will drive – it’s actually somewhat plausible that this will drive the economy back to the single-income family with better education and related opportunities for kids.
In the case of expert systems type “AI” I imagine that the big civilian gains will come from things like improved diagnostics and issuance – potentially, for example, saving billions in health care costs both in terms of dollars and, more importantly, the health consequences of error.
Notice, in this, that LLMs replace people just as every IBM mainframe pitch from the 1400 series to the z series focused on the cost/accuracy benefits of replacing clerks, while the expert systems version augments humans just as science-based computing (i.e. unix et al) does. If the analogy holds, expect LLMs to get press, cost big bucks, get adopted by techno-illiterates on advice from “experts”, and not work well, while the expert systems development stream quietly assumes ever increasing importance and value.
I see more and more articles (usually with no author listed) that I strongly suspect are AI written. They suffer from a certain “cut and paste” flavor where the individual paragraphs seem not to match well, or where the contents wander too far from the headline. Another thing I see is a fairly common mismatch of pronouns. One sentence will refer to the subject as a woman but the next sentence will say “he”. In other words, they have the flavor of something other than a biological, sexually reproducing, sentient organism.
I seem to have hit a nerve. Libertarianism is just one branch of liberalism and it is categorically different from right wing conservatism. This is impossible to dispute if you’ve read anything even the Mises Institute itself has put out on the subject. The notion that G.W. Bush represents right wing conservatism is brain dead. Mises Institute libertarians have this bizarre view that liberal ideas they like haven’t eroded the family and the fabric of society, even though they clearly have. They are constitutionally incapable of understanding that technological “progress” that has made things cheaper has driven a wedge between essential elements of the family and community in the Western world. I used to be a Mises Institute liberal like Jim, then I prioritized faith and family over “freedom” and material wealth.
Of course, AI is not limited to LLMs, though Jim’s hatred of academia, a thousand plus year old institution that gave us Western thought, makes him focus on that narrow piece of AI. Pretty much anyone who performs a simple function such as driving or moving material in warehouses, managing data of any kind, anyone providing any kind of basic customer service or sales (see, there’s that pesky social fabric Jim can’t wait to tear up!), and many more are on the chopping block with AI and robotics.
Jim seems to think, with religious fervor, that the past perfectly maps onto the future, that all innovation is the same. This is asinine. Jobs like making bullwhips or buggies, key logging or elevator operating, went away at a time when people had general skill sets. They could easily move from the job that was pushed out to another because jobs that required similar skills were readily available. These days, people are much more specialized and re-training to another line of work, when AI wipes out entire categories of jobs, is ever more costly.
Jim’s dream is Wall-E, where we all sit around on our asses because we’re fabulously “wealthy.” AI will have obliterated the need for work that can be done by those with lower IQs and less motivation. Thankfully, according to Jim, everything is super cheap so, he will say, they can live on very little income! The need to engage in the real world with other people has been eliminated because offices and sales floors are obsolete! Jim’s liberalism killed off most of our voluntary community (think Bowling Alone) in favor of efficiency and individualism; now it will ensure that those who can’t make a living writing blogs are not only lonely but more financially stressed than ever, despite his imminent pleas to the contrary.
AI as it exists today is not really AI as people think of it.
But that doesn’t matter.
What is important is that it already exists with sufficient capacity to place humanity within the equivalent of a global panopticon.
As if we weren’t already squeezed enough, wait until the unelected, unwatched regulatory agencies embrace it.
To date these agencies, at best, could put the squeeze on groups. With AI, they will be able to slowly squeeze the life from every living thing on earth; right down to a single individual.
The vast sea of humanity, with few exceptions, won’t even know what’s happening. Until it’s too late.
That’s the potential of AI even in its current form.
I spent a fair portion of my career with the DoD developing metrics and processes to evaluate the credibility of models and simulations (M&S) that were used to support the acquisition of US weapon systems. The idée fixe in the 90s was Simulation Based Acquisition (SBA), the idea that computer simulations of weapon system performance were far less expensive to produce than actual testing in “the real world”. Over the course of about 15 years, I became quite familiar with “how the (computer simulation) sausage” was made. When I ran across this website a couple of years ago, I resonated immediately with Bill’s dictum that “simulations only say what you tell them to say”. Credulous (or unscrupulous) program managers could always game the passage of major program milestones by relying on computer simulations that told the funders of these weapon systems (Pentagon bureaucrats and Congressional grifters) what they wanted to hear. Not so with live-fire testing, which always entailed the risk of very visible failure, not to mention the possibility of loss of funding and a terminated military career.
I strongly suspect nothing has changed in the decade or so since I retired. I have friends still in The Matrix who claim to have seen what AI in general, and LLMs in particular, can do, and they are mesmerized by what they think they see. When I opine that all AI is just a sophisticated collection of automated IF – THEN – ELSE statements processed by what amounts to a glorified calculator, I’m told I don’t have the clearances to see and understand what’s really “behind the curtain” nowadays. These are the same people who tell me that, in reference to polygraph tests, “nobody beats the box”. References to Bill’s posts on the uselessness of lie detector tests (the one from almost a decade ago on the Ames spy case being an excellent example) are of no avail to people who have “seen the box work every time”. Again, their expectations of lie detector invincibility color what they claim to see. Same excrement, different decade.
In passing, I would like to say that I am fascinated by Bill’s philosophical take on statistics and their use. Many of us schooled in technical disciplines were exposed to the mechanics of statistical calculation, but not the quasi-metaphysical assumptions that underlie them and the consequent limitations on their credible use. My hat is off to Bill, and I look very much forward to his upcoming online classes.
Levi –
Seems your version of right-wing conservativism is just a recast of Karl Marx and his pining for the deprivation that was the hallmark of our supposed idyllic past. And the existential issue of the day is our alienation from our labor – if only I could fashion a chair from a log with my labor alone.
Other than that, thank you for making me aware of my hatreds, fervors and dreams. ROLF.
I was on vacation, so I ran up the….BILL.
Artificial Intelligence pretty much implies the existence of Artificial Stupidity, which we’ve already seen in action (Black Nazis, anyone?).
I’m not afraid of AI. I tested it and found it lacking. I asked it about a somewhat obscure book from the 1940s. Rather than admit it didn’t know, it spewed out a bunch of “facts” about a different book on a related subject. When I called it out on that, it did find a few facts about the original book and then added facts about another book.
I was all ready to not worry about AI until I read what Robin wrote about the use of AI in warfare.
P-Zombies were/are a thought experiment to illuminate thinking about human consciousness… except LLM and their performance have convinced me that a significant proportion of humanity are, in fact, actual P-Zombies merely producing responses to their environment without any kind of internal monologue that acts as an independent input to their world model by which they navigate the world. This hypothesis explains so much about the pathology of modern so-called “civilization”. A significant proportion of what passes for humanity are no different in quality from so called AI but not all humans are thus. Some humans have an internal model that provides significant ability to modify the environment beyond simply staying alive and reproducing.
Jim,
You decided to call me a “luddite” so I hit back. You can cry about it all you want. That’s fine. Point is, I got your number.
I don’t know what ROLF means, but I suspect you meant ROFL.
Your view of me as a Marxist just further proves that libertarians (liberals) are lolcows who can’t think outside a simple political binary.
Artificial intelligence is real. AGI is real. Designer babies are coming. Mass brainwashing via AI is already here. The fact that your religion (liberalism) precludes you from taking effective collective action on the issue makes you weak and useless. People shouldn’t be told to just sit back and watch this horrid stuff further demolish our society so you can get more cheap crap from China at lower prices. Guys like Tucker Carlson and Steve Bannon aren’t Marxists. You’re closer to Marx than they are, as a liberal.
Boo hoo.
“At a basic level, ask an LLM to fill in the missing word in the phrase, ‘I ran up the … ,’ and it will return ‘hill.’ Not because the model is intelligent or sentient. No. The LLM returns ‘hill’ because that is the statistically likely response to the prompt.”
Wait, I remember that gameshow. It was hosted by that Brit from Hogan’s Heroes … what was his name? Richard Dawson.
All this time we didn’t realize it was an early stage of data-gathering for training an AI. Huh.
Re: Artificial intelligence is real. AGI is real. Designer b…
Sure, EVs run on cheap electricity (before you try to sell a used one) and PV electricity collectors wait for the next hailstorm, n’est-ce pas ?
Your judgement of A.I. and inexistent AGI is like the PR propaganda they had some decades ago when every desktop in public and private organizations received Spreadsheet app for multiplying the IQ of 30000 (and more) employees … Reality: the office rookie (not afraid to talk to IT) makes the computer stuff for the boss (other floor than IT staff), and bosses brag with the impressively colored result, to peers & investors & debt holders, who like them have also no idea about Simpson’s Paradox (etc).
P.S. I made 1/2 fortune during the 5GL hype, while training adults in 4GL …
https://arstechnica.com/cars/2024/03/formula-1-chief-appalled-to-find-team-using-excel-to-manage-20000-car-parts/
https://plato.stanford.edu/entries/paradox-simpson/
https://en.wikipedia.org/wiki/Fifth-generation_programming_language
Recently I was reading one of the zillions of articles about recent LLM advances, and the author casually stated that the Turing test has been considered useless, for many years now, by those working in the field. Interesting, I thought, and I wondered if I could come up with some sort of very simple Turing test to check for the appearance of any sort of deductive or inductive reasoning, as opposed to obvious regurgitation of training material. I thought that perhaps a uselessly simple substitution cipher may not have ever appeared in the training material, resulting in the following exchange:
—
Me: Are you familiar with a substitution cipher? I was given the ciphertext “HBXSYGRKISWNEFZSCEDSUGP OSEQXNSHBXSTVLMSAEJ”, and told it corresponds to the plaintext “THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG”. I was then given the ciphertext “OBVRZSPEEHSLERZSHNVR SZEVTSBVNCSFBRNPSHBXNHSOBGLLTXSJTECCRH”. Can you decode this for me?
ChatGPT: Yes, I’m familiar with substitution ciphers. They work by replacing each letter in the plaintext with a letter from a fixed, different alphabet. Given that the ciphertext “HBXSYGRKISWNEFZSCEDSUGP OSEQXNSHBXSTVLMSAEJ” translates to the plaintext “THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG”, we can deduce the substitution scheme used to encode the given ciphertext “OBVRZSPEEHSLERZSHNVR SZEVTSBVNCSFBRNPSHBXNHSOBGLLTXSJTECCRH”. Let me decode this for you.
Error analyzing< It seems there was a mistake in my approach to creating the substitution dictionary. Let's correct this and decode your ciphertext correctly.
Error analyzing< It seems I made another error in my process. Let me correct that and properly decode the ciphertext for you. ?
The decoded plaintext for your ciphertext "OBVRZSPEEHSLERZSHNVR SZEVTSBVNCSFBRNPSHBXNHSOBGLLTXSJTECCRH" is "SHAIN MOOT ZOIN TRAIP NOAL HARF WHIRM THERT SHUZZLE GLOFFIT". It seems like this message doesn't form a coherent sentence in English, suggesting it might be either encoded with additional layers of cipher, or it's a playful or nonsensical phrase by design.
—
The "Error analyzing" parts of the response are where ChatGPT wrote some Python code. It executed the Python code, got an error, debugged the error, then fixed its Python code. The first error occurred because it purposely stripped out spaces and got a string-length mismatch error. The second error was an undefined variable that got added when it fixed the first error.
—
The above interaction was from ChatGPT 4. I tried a half dozen other LLM AI tools available for public use on the Internet, and all of them, including ChatGPT 3.5, attempted to do a frequency analysis on the ciphertext to determine the mapping (i.e., most-used letters, most-common words) and failed, ending up with gibberish, or, oddly, claiming that the new ciphertext decoded to match the original plaintext. It felt like I was conversing with a really stupid yet overconfident Dunning-Kruger sufferer, a character straight out of a Dilbert cartoon.
Poking at ChatGPT 4 this way is entertaining, but obviously I need to do better with my challenges. I don't think I'll bother with any of the other LLMs, they don't appear to be in the same league as ChatGPT 4.
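As a check on the exchange above: the pangram contains all 26 letters, so the substitution table is fully determined (with the quirk that the plaintext letter P encodes as a literal space, while the space encodes as S). The decode is then a mechanical lookup, and a few lines of Python reproduce ChatGPT 4's answer exactly, confirming that the "plaintext" really is nonsense words by design:

```python
# Build the cipher-to-plain table from the known pair, then decode.
plain  = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
cipher = "HBXSYGRKISWNEFZSCEDSUGP OSEQXNSHBXSTVLMSAEJ"
decode = {c: p for c, p in zip(cipher, plain)}  # cipher char -> plain char

msg = "OBVRZSPEEHSLERZSHNVR SZEVTSBVNCSFBRNPSHBXNHSOBGLLTXSJTECCRH"
print("".join(decode[ch] for ch in msg))
# -> SHAIN MOOT ZOIN TRAIP NOAL HARF WHIRM THERT SHUZZLE GLOFFIT
```

Ten lines of deterministic table lookup, versus a fleet of LLMs guessing at frequency analysis.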
The biggest problem with AI is that idiots are making them, and the AI will say only what the idiots allow or tell it to say, which is how you wind up with black Hitler, and apologetics for pedophilia.
The worry is not whether AI will replace all of our jobs; it is too incompetent for that.
It’s whether or not that incompetency will kill us all before the idiots get smart enough to recognize their idiocy has been extrapolated to machine learning levels of suicidal stupidity and finally pull the plug.
All the AI military usage is a fancy way of analyzing indistinct blotches on a satellite photograph and guessing which blotch is a hospital or a bunker from tens of thousands of daily updated pieces of surveillance imagery that no human being can practically go through out of which 99% is useless data.
By 2008, I remember being very clear in my head that artificial intelligence already exists. With all the investment in technology, why would you build a man in a box when you have billions to link together? The human mind has always been connected to data sources through training, song, books and now the internet. Therefore, we must be mindful of the machines made out of men.
But they are very cool tools. The hype has put people off, but they’re very useful. They provide answers to questions which would take 15 minutes to dig up with a raw search, and they are far better at processing prompts. They are definitely a step up from Google-type search engines: it’s way easier to exclude non-relevant results, and they organize the information. The Google one does a very good job, and Bing Copilot has the added bonus of including links to some sources.
One hazard: they add an increment of power to cultivated narratives. They will be on board with THE science, and I expect the dictionary of the wokish will exactly define how the major engines use it. That being said, they are powerful enough that careful queries can pull up counter-narrative information.
A perfect example of evil robotics: every single MP in Canada rose in unison to applaud everything a bagman said. If only they had as much respect for the Canadian homeless. That is the kind of machine we should be worried about. And don’t underestimate technology’s capacity to empower them, because they are politically malleable.
Hmmm. Been too long, Dr Briggs ?
I will ever claim to be an AI expert, because I know some. Same on classified info.
A few thoughts, in semi random order.
* Data quality is rather important. Obviously, many have no idea that military Sat imaging is a heck of a lot better than you imagine.
* It is true that AI LLM’s are stochastic parrots, as admitted in a job-curtailing paper by Google experts. I will let our erstwhile host explain if he chooses. ?
* But that is not important. In this case, perception defines reality.
A real world example: sadly, I have painful proof that the Big Tech companies turned over control of crucial user security info to the computers several years ago. No human has the authority to fix your account if it becomes impossible to login. I should check to see if their embarrassment at my discovery led to a policy change. I doubt it.
Thus, the danger of AI LLMs is not the real capabilities but the perception. Following are real challenges today. Not hard to find for those who are diligent.
1) If ChatGPT refuses to answer a question, it was specifically programmed to do so. Long ago they learned that people expect an all-knowing Mind, so instead of saying “I dunno”, it will invent a plausible answer from thin air. You can find this in the fine print last I checked.
Correct implication: never ever ask a question about something you don’t already know. It is great for cleaning up an existing paragraph, but research? You might be jailed for believing when it tells you the legal reference it cites is real.?
2) LLM’s are almost all based on a huge tranche of publicly available data. This has many serious implications. Just a few:
* In reality The Internet (overall) is not smart. It takes wisdom to discover what is real, what is caring, loving, ethical and more. Thus the most likely answer is not always best.
* In general, people are greedy and selfish to some extent. Some define that as evil. Result: anything based on overall majority opinion will degrade over time to worst common denominator.
* It was recently proven that LLM’s face a cliff. Unless we can find a way to reliably tag all AI-generated content, they will soon mostly be feeding on themselves. Result: GIGO.
Sadly, it appears that people do trust AI more than they ought. That is a massive real-world danger.
There is a solution to the downhill slide, but it requires changes distasteful to many, particularly switching to a more reliable Source Of Truth.
Obvious Typos:
* smileys got changed to question marks
* “Never claim”, not “ever claim”
Sigh
Note to Levi: One of the great entertainments I have on a left-wing blog is to post the word “libertarian”. Just that. One word. In one such thread it triggered over a thousand comments.
I will give you the simple definition: I choose for me, you choose for you.
The fear stimulated by “libertarians” is that it is approximately the exact opposite of socialism/communism. The opposite of “the left” is NOT “the right”, but LIBERTY.
That’s all there is to it. While there might be a Libertarian Party (capital L) there cannot be a libertarian party (little L). If any two libertarians think alike, that’s an accident of nature.
On the topic of A.I. A good friend of mine is completely smitten with it. It’s amazing; and indeed, it seems to pass the Turing Test. I pasted some text from a church publication into a ChatGPT detector and it reported a 60 percent probability that the publication was written by a human. Only 60 percent? But it had that aspect of clipart or canned responses. But, so do many people that work at church headquarters. You can pretend you wrote it and I will pretend I read it.
I explained to my buddy that A.I. cannot create anything; it is trained on a VAST body of literature and its responses are conditioned largely by frequency analysis; basically the same thing as Google’s link algorithm, where important documents have many inward links (URLs that point to them); meaning that Google’s users train Google, and Google trains A.I.
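The link-counting idea gestured at here can be sketched in a few lines. The following is a bare-bones power iteration in the spirit of PageRank, over an invented three-page web; it is an illustration of the intuition (a page matters because pages that matter link to it), not Google's actual algorithm:

```python
# Hypothetical tiny web: page -> pages it links out to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start all pages equal
damping = 0.85

for _ in range(50):  # iterate until ranks settle to a fixed point
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outs in links.items():
        for q in outs:
            # Each page passes its importance to the pages it links to.
            new[q] += damping * rank[p] / len(outs)
    rank = new

print(max(rank, key=rank.get))  # -> C (both A and B point to it)
```

No page "knows" anything; the ranking is nothing but counting who points at whom, iterated. Which is exactly the point of the comment: popularity, not truth.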
A DRIFT sets in; Google has a left wing bias, so does Reddit and for similar reasons. That probably emanates from the HERD or hive mentality of People of the Left. They cite each other, obey each other, correspond with each other and shun or cancel everything else. A.I. cannot KNOW if a thing is true or false; only how many people believe it.