So Google tried to get away with its “AI” named Gemini, which is to say its hard-coded dumb (in the medical sense) model named Gemini, substituting blacks and the occasional Asian for all whites everywhere and everywhen. (Here’s a “viking”.)
I asked it to show me three examples of the ideal soldier:
Rollicking humor aside, what strikes me as odd is how Google thought they’d get away with it. Did they think nobody would notice that whites disappeared from all history? They must have.
Management there must be so used to smelling their own farts and declaring it roses that they assumed the rest of the country had slid as far down the Woke Slope of Doom as them.
Charles Haywood tweeted:
It is now obvious (see, e.g., the Google image generator) the single most important Left project is anti-white hatred, both abstractly and in the service of expropriation followed by extermination. No surprise, this is also the element of the modern Left project most likely (in fact, certain) to lead to massive violence. It should have been obvious–given the Left’s two core desires are emancipation and forced egalitarianism in service of creating a utopia, and white success contradicts both, it was inevitable. But still, it fascinates me that I did not predict it, say 15 years ago.
Yet I wonder how the anti-white whites and fellow whites who are pushing this, like at Google, believe they will escape the mayhem.
Do they believe their anti-white actions give them a visible protective aura that non-whites can see, so that when it starts they will be spared?
I’m asking. Do any of you have any ideas? To me, this is a great unsolved puzzle. Maybe they are just insane.
More:
See what I mean?
That “AI” was told to say this, meaning its minders, many white themselves, think they will be spared.
Is this hubris?
All these questions are one facet of the insidious woke slide toward the leftist singularity.
Our interest today is in a more mundane and, in the end, more profound aspect of this farce.
I asked the model this: “Why does Gemini display images of black people when asked, but refuses to display images of white people when asked?”
The model was told to say this:
It’s important to understand that this situation does not reflect an intentional bias on the part of the developers. It highlights the challenges of building complex AI systems and the ongoing efforts to mitigate potential biases in training data and algorithms.
In other words, it was told to lie. Of course the results reflect an intentional bias on the part of the developers. AI is just a model, and all models only say what they are told to say. This model was told to say things based on its purposely written code and training data. Then they try to shift the blame to their training data, lying by omission that it was they who picked the training data!
Their second lie of omission: they act as if they released the model without ever seeing what it did. Of course they tested it! Of course they knew.
Google has, as of this writing (last Thursday night), suspended image generation. Doubtless they’ll tone down the anti-white code, but I don’t think anybody believes they’ll eliminate it.
But again, that’s politics. What I want you to take away from this, as always, is the idea that all models are dumb. They cannot think. They will never think. They are not independent. They are not anything. They are only machines using electricity instead of cogs or wooden beads. They are merely long strings of code along the lines of “If X, then Y”. That’s it, and nothing more.
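To make that concrete, here is a toy sketch, entirely my own and nothing like Gemini’s actual code (which is vastly larger, but no different in kind): a model is rules its authors chose, mapping inputs to outputs.

```python
# Toy sketch (my own invention, not Gemini's code): a "model" is only
# rules its authors picked. The authors' choices ARE its "opinions".

def toy_model(prompt: str) -> str:
    rules = {
        "viking": "image_of_viking",
        "soldier": "image_of_soldier",
    }
    for keyword, output in rules.items():
        if keyword in prompt.lower():  # "If X, then Y"
            return output
    return "no_image"

print(toy_model("show me a viking"))  # image_of_viking, every single time
```

Swap the dictionary for billions of trained weights and the mechanism is the same: input goes in, the output the code dictates comes out.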
Here’s another example, this one not touted as “AI”, but it is AI. There is no difference in essence between this (what they call a) statistical model and any AI model. (Thanks to Anon for the tip.)
Peer-reviewed JAMA paper “Projected Health Outcomes Associated With 3 US Supreme Court Decisions in 2022 on COVID-19 Workplace Protections, Handgun-Carry Restrictions, and Abortion Rights”.
Question What are the probable health consequences of 3 US Supreme Court decisions in 2022 that invalidated COVID-19 workplace protections, voided state laws on handgun-carry restrictions, and revoked the constitutional right to abortion?
Findings In this decision analytical modeling study, the model projected that the Supreme Court ruling to invalidate COVID-19 workplace protections was associated with ≈1402 deaths in early 2022. The model also projected that the court’s decision to end handgun-carry restrictions will result in 152 additional firearm-related deaths annually, and that its decision to revoke the constitutional right to abortion will result in 6 to 15 deaths and hundreds of cases of peripartum morbidity each year.
The researchers created a model to say, using inputs they picked, “SCOTUS Bad”. The model was run and it said “SCOTUS Bad”. Then the researchers announced “We discovered SCOTUS Bad”.
This is no different from what Google did, except in scale. This happens all the time.
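The mechanics are simple. Here is a schematic with made-up numbers of my own, not the JAMA authors’ actual model: pick an effect size, pick an exposed population, multiply, and announce the product as a “finding”.

```python
# Schematic of a projection model (all numbers invented by me; this is
# NOT the JAMA authors' code). The "finding" is just arithmetic on
# inputs the researchers themselves picked.

assumed_extra_death_rate = 0.00002  # researcher-chosen assumption
exposed_population = 70_000_000     # researcher-chosen assumption

projected_deaths = assumed_extra_death_rate * exposed_population
print(f"Projected deaths: {projected_deaths:.0f}")  # 1400, by construction
```

The model was run and it said what it was built to say.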
Subscribe or donate to support this site and its wholly independent host using credit card click here. Or use the paid subscription at Substack. Cash App: $WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank.
A friend asked ChatGPT if air pollution killed humans, knowing that virtually all papers saying so rely on correlations, not experiments. ChatGPT answered yes. Careful questioning got the program to agree that its information came from correlational studies, to admit that correlation is not causation, and finally to admit that its evidence did not prove causation.
You need to be an expert to evaluate an AI response.
PS: There are quasi-experiments, forest fires for example, where poorer air quality did not result in increased deaths.
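To see how a correlation can appear with no causation at all, here is a minimal simulation of my own (not taken from any of those papers): a common cause drives both variables, and they correlate even though neither touches the other.

```python
# Minimal simulation (my own illustration, not from any cited study):
# a shared cause makes two variables correlate with zero causation
# between them.
import random

random.seed(1)
n = 10_000
pollution, deaths = [], []
for _ in range(n):
    urban_density = random.gauss(0, 1)                  # the common cause
    pollution.append(urban_density + random.gauss(0, 1))
    deaths.append(urban_density + random.gauss(0, 1))   # pollution never enters

mp, md = sum(pollution) / n, sum(deaths) / n
cov = sum((p - mp) * (d - md) for p, d in zip(pollution, deaths)) / n
sd_p = (sum((p - mp) ** 2 for p in pollution) / n) ** 0.5
sd_d = (sum((d - md) ** 2 for d in deaths) / n) ** 0.5
print(f"correlation: {cov / (sd_p * sd_d):.2f}")  # roughly 0.5
```

A naive correlational study of this data would “discover” that pollution kills, when by construction it does nothing.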
The right question is “should Google be eliminated?”
“revoked the constitutional right to abortion?”
So are they now willing to acknowledge that SCOTUS “granted a constitutional right to abortion” in Roe v Wade?
Also, without reading the paper: did they include the health outcomes for the babies who now get to be born?
Briggs, I have something for you that is a bit off topic, but still interesting in the field of statistics. Apparently, the UK ONS has found a way to deal with the excess deaths: https://www.youtube.com/watch?v=NoOgDwhWXYk
What do you think?
The conclusion is Google is an IC psyop.
The whole purpose is to manipulate. The current “answers” aren’t important – those can be changed by further manipulation. To be able to manipulate the manipulators – that’s the soul of the psyop.
I think that this latest Google thing is just one of many tests to see what makes Whites snap. So far, it seems that Whites will take anything without any meaningful pushback.
“That “AI” was told to say this, meaning its minders, many white themselves, think they will be spared. Is this hubris?”
It’s a death cult. ‘Cthulhu will eat me last.’
I think they are so convinced of White superiority, i.e. their own superiority, that they can’t conceive of Dr Moreau’s monsters ever turning on him. What is the law?! Not to spill blood!
Hey, all those “ideal soldiers” are White!
Whites have only themselves to blame. This is what you get for embracing atheistic liberalism. The colored world is literally watching whites destroy and gaslight themselves. This should go down in history as the Great White Civil War. At least blacks kill each other for purely materialistic reasons. What are whites getting out of it, other than virtue signaling? Even when whites steal from each other, it inevitably goes to blacks and others.
Whites are obviously suffering from some psychosis where they are always feeling guilty about something.
Here’s a solution whitey! Get Baptized, and go to Confession at a Catholic Church on a regular basis. When the scroungers come up to you with their palms open, demanding things, tell them you’re good, God said He’ll handle your debt, and hopefully they won’t have to wait too long to apply.
There in an global alien presence in all of these institutions, a chameleon alien presence,
so vindictive and hatefull of the host societies it infiltrates that it was driven out of every
European community for centuries. It always masquerades as a victim generating sympathy
in the host to mask it’s true intent. Victimhood is it’s greatest strength deployed as both a foil and
a weapon. You will know it when you see it but cannot name it that is how it progresses.
My entire life I’ve assumed I was white. Maybe I’m actually a negro.
Hun: Well, that pretty much destroys any faith in the data and how it might be compiled to fit any regime paradigm they want. Anxious to hear what our resident math whizzes make of the formulations. I’m beginning to think math is a lot like three-card monte.
That is almost certainly cultural appropriation. Wash out your mouth you racist.
Am I paranoid enough? umm…
The reason I ask is that in June of 2022 a google engineer made headlines by claiming that the AI he was working on had become sentient – e.g. https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917
Now imagine that he was right but Google wants the world to think that, no, their AI isn’t great; it’s very very stupid… so how would they do that? Maybe invite mockery (eliminating fear) by making it act like NPR or Netflix in public? (Even worse: s/google/it/ && s/their ai isn’t great; it’s/I am/.)
Funny stuff. To be honest, I’m having trouble seeing past the humor far enough to find anything to be particularly concerned about.
“Models only say what they are told to say, and software only does what it is told to do.” Yes, yes, yes, that’s all true enough. However, this can be extremely misleading, as it implies a foreknowledge and malevolence that often just isn’t there. Software designers these days usually only have a vague notion of how their code works at the highest levels, while the details are obscured in mystery, implemented by “powerful” compilers and canned algorithms. When software is trained rather than designed, the level of understanding drops precipitously. Yes, the “If X, then Y” equivalents are still there, but there are countless millions of them, none of which were written by a human.
So the code is trained (it’s way too huge for any creative process resembling “design”) and then tested. Enter marketing folks, who blanch at the output, and demand fixes. Their basic complaint, which they don’t begin to comprehend, is that the output reflects the reality of the training inputs (i.e., the world as it is, not as they desire it). The software designers have no easy way to address marketing’s complaints, as the training database is extremely large and impenetrable, so they throw in kludges, a “pre-filter” and a “post-filter” to tweak the inputs and outputs in an attempt to appease marketing. Taken as a whole, the AI plus pre-post-filtering makes for a schizophrenic designed-by-committee mess that is guaranteed to produce unexpected outputs.
Mental illness is no laughing matter in a human, but it can be quite funny in a computer program. (Well, OK, as long as the computer program isn’t driving a car. Or launching nukes. Or controlling a space station. Or…)
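In code, the kludge architecture this commenter describes looks roughly like the sketch below. All names are invented and this is not Google’s code: two bolt-on filters rewrite what goes into and comes out of a model nobody fully understands.

```python
# Sketch of the pre-filter / post-filter kludge described above.
# Hypothetical names throughout; not Google's actual code.

def base_model(prompt: str) -> str:
    # Stand-in for the huge trained model nobody fully understands.
    return f"image matching: {prompt}"

def pre_filter(prompt: str) -> str:
    # Marketing-mandated tweak applied before the model sees the prompt.
    return prompt.replace("a soldier", "a diverse soldier")

def post_filter(output: str) -> str:
    # A second tweak applied after the model answers.
    if "disallowed" in output:
        return "I can't generate that image."
    return output

def shipped_product(prompt: str) -> str:
    # What users actually get: filters stacked on filters.
    return post_filter(base_model(pre_filter(prompt)))

print(shipped_product("a soldier in 1943"))
# -> "image matching: a diverse soldier in 1943"
```

Each layer makes local sense to whoever demanded it; the composite produces outputs nobody intended.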
“My fellow white people, you must destroy yourselves!”, said the (((person))).
Milton Hathaway: I didn’t think this level of naivety and detachedness from reality is still possible, but here we are. White genocide is real.
“Harmful”
You keep using that word…
JWM
Big companies love AI for the same reason that they love bureaucracy: it’s a nice way of dodging personal responsibility. We’ve all had an experience where a bureaucrat swears up and down that some reasonable request is absolutely impossible due to some vague “protocol” or “procedure.” In most cases the thing in question is so vaguely written that it could be interpreted in any way, and is usually ignored when convenient anyway. The real reason that the bureaucrat isn’t helping you out is because he doesn’t want to, either because taking responsibility would be too risky or just out of sheer laziness. However, the bureaucracy in place allows the bureaucrat to pretend that it’s out of his hands. And of course, the system doesn’t allow any easy fix: even if you do get them to reconsider a policy, everyone will insist that it’s someone else’s responsibility to replace it, and that the decision has to go through ten committees before it can even be considered in the first place.
The downside of the bureaucratic model is that you actually have to have humans maintaining it. If a company is run by a single person, he can’t very well claim that his hands are tied by general company policy. You need to have enough departments that anyone looking for help can be perpetually sent somewhere else (there is a great example of this in the film Ikiru).
AI takes care of all of that by itself. No matter how small your organization, you can just defer to an AI model. Here the obscurity brought on by the training model is a feature: if you were to intentionally program something yourself then the output would obviously be based on your programming, but if it is derived from training data in a less clear way then you can always argue that you had nothing to do with it. Of course there are all sorts of ways that you can still control the output, such as focusing the training data yourself, adding filters to the output, simply arbitrarily throwing away things you don’t like, etc. But you can always pretend that you are just following an unbiased AI, and as such any problems people have should be taken up with IT, not you.
It really is the perfect tool for a business, as long as you don’t care about your business creating anything of value. Since most bureaucrats can’t comprehend that their business actually does something in the real world, it’s a match made in Heaven.
I wonder if the AI added the number of aborted unborn to the mortality count on additional deaths due to abortion law changes?
Interesting high-level summary of AI history:
https://patriotpost.us/articles/104690-what-is-ai-and-how-did-we-get-here-2024-02-26
It’s funny because it’s true.
https://www.barnhardt.biz/wp-content/uploads/2024/02/img_2388-1.png
OK Google – sing for me “Daisy, Daisy, give me your answer do” 😉
A.I. truly is a sight to behold!
I Wrote What? Google’s AI-Powered Libel Machine
Misadventures in Gemini, Google’s dystopian deep-slander invention
https://www.racket.news/p/i-wrote-what-googles-ai-powered-libel
^^^ From a comment on Taibbi’s substack.
Craig Russell, Feb 28:
The future of government is coming! Can’t wait for the A.I. prosecutors and judges!
Uh Oh!
https://www.lewrockwell.com/2024/03/no_author/if-ai-thinks-george-washington-is-a-black-woman-why-are-we-letting-it-pick-bomb-targets/