Another AI Exaggeration: It Can Say Why There Is Religious Conflict

Dear reader, though you won’t believe it, there is such a thing as the Journal of Artificial Societies and Social Simulation. When you search its name, Google shows you this:

The Journal of Artificial Societies and Social Simulation is a quarterly peer-reviewed academic journal created by Nigel Gilbert. The current editor is Flaminio Squazzoni. The journal publishes articles in computational sociology, social simulation, complexity science, and artificial societies.

I don’t know, but I would not be surprised to learn the comically named Flaminio Squazzoni is (him-)itself the result of an AI prank.

Anyway, the journal has such papers as “An Agent-Based Model of Rural Households’ Adaptation to Climate Change”, “Innovation and Employment: An Agent-Based Approach”, “Methodological Investigations in Agent-Based Modelling”, and “Agent-Based Agent-Basing: An Agent-Based Approach.” I’m kidding about the last one.

A real one is “A Generative Model of the Mutual Escalation of Anxiety Between Religious Groups” by the beautifully named F. LeRon Shults and others. If F. LeRon Shults doesn’t double as a famous Baptist preacher, he’s missed a huge opportunity.

What about the paper?

We propose a generative agent-based model of the emergence and escalation of xenophobic anxiety in which individuals from two different religious groups encounter various hazards within an artificial society.

That’s some serious agent-basing.

Now academics are constantly inventing puzzles for themselves to solve. They have to have something to write about, and, in truth, it really is publish or perish. It doesn’t really matter if the puzzles have nothing to do with anything; it only matters if enough academics can be gathered together to call their puzzle a subject.

So we can’t find fault with painting what are glorified video games with a scholarly patina. We recognize artificial societies are not real societies, and we understand what happens in a video game is only interesting within that video game. We know that an artificial society being run on a computer and given the name Artificial Intelligence does not mean it has any bearing on, or relation to, non-artificial societies. We know this because whatever comes out of an algorithm is only what was put into it.

If, therefore, an algorithm is designed to show “video game religions will have conflict if these conditions hold”, then it will show that video game religions will have conflict if those conditions hold. Whether this idea applies to real religions is not a question that can be answered inside the algorithm.

The folks at Science Daily might not grasp this point. They say “AI systems shed light on root cause of religious conflict: Humanity is not naturally violent.”

Artificial intelligence can help us to better understand the causes of religious violence and to potentially control it, according to a new Oxford University collaboration. The study is one of the first to be published that uses psychologically realistic AI — as opposed to machine learning….

The study is built around the question of whether people are naturally violent, or if factors such as religion can cause xenophobic tension and anxiety between different groups, that may or may not lead to violence?

A truly psychologically realistic AI would be able to simulate, in a causal sense, the real intellects, wills, memories, and so on of human beings. Can the algorithm designed by Shults do that? No, sir, it cannot.

What it does instead is this:

At every time step, the model environment produces hazards that may be of four different types: natural hazards (e.g., earthquake or volcano), predation hazards (e.g., prowling predatory animal), social hazards (e.g., cultural other interpreted as a threat), and/or contagion hazards (e.g., out-group member with apparent contagious disease). The first two of these hazards have to do with nature, broadly speaking, while the latter two hazards are related to other human beings encountered in society.

These are not real hazards, you understand, but weights in a portion of the video game that are labeled hazards. There are no real people reacting to real hazards, even in a simulated sense. There are only equations interacting with inputs from other equations. There is nothing psychological about it.
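
To make the point concrete, here is a minimal sketch, in plain Python, of the sort of thing going on under the hood. None of this is the authors’ code; every name, weight, and rule below is invented for illustration. Notice that the “anxiety” each artificial agent accumulates is nothing more than the numbers the programmer typed in.

```python
import random

# Hypothetical hazard types, loosely following the four categories the paper names.
HAZARDS = ["natural", "predation", "social", "contagion"]

# Invented weights: how much each hazard type raises an agent's "anxiety" number.
# The modeller chooses these; the model can only hand back what they encode.
ANXIETY_WEIGHT = {"natural": 0.1, "predation": 0.2, "social": 0.4, "contagion": 0.3}


class Agent:
    def __init__(self, group):
        self.group = group      # "A" or "B": the two artificial religious groups
        self.anxiety = 0.0      # a single number standing in for "xenophobic anxiety"

    def encounter(self, hazard):
        # The "psychology" is just arithmetic: add the programmed weight,
        # plus a built-in escalation rule for hazards coded as coming from other people.
        self.anxiety += ANXIETY_WEIGHT[hazard]
        if hazard in ("social", "contagion"):
            self.anxiety += 0.1


def step(agents):
    # Each time step the environment "produces hazards": here, one random draw per agent.
    for agent in agents:
        agent.encounter(random.choice(HAZARDS))


if __name__ == "__main__":
    random.seed(0)
    society = [Agent("A") for _ in range(50)] + [Agent("B") for _ in range(50)]
    for _ in range(100):
        step(society)
    mean_anxiety = sum(a.anxiety for a in society) / len(society)
    print(f"Mean anxiety after 100 steps: {mean_anxiety:.2f}")
```

Run it and the artificial society grows ever more anxious, because the rules say it must; change the weights and you get a different headline.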

The SD people don’t appear to grasp this and actually believe the authors were able to code “how humans process information against their own personal experiences.”

Yet even the authors understand that they still have to show their model works: “Concerns about external validity arise when the results of the model cannot be generalized. The results of our current model cannot be generalized to explain specific occurrences of (or to forecast) mutually escalating xenophobic anxiety.”

They still want to believe, though: “However, this does not mean that the model bears no relationship to the real world.”

We’re going to see more of this sort of thing. People believe that because smart people built a model on a computer, the Artificial Intelligence model must be good, true, useful, and worthy. This blind faith is, of course, a form of scientism.

6 Comments

  1. They obviously never heard of the groundbreaking British study on what amount of difference is required between two groups to cause conflict.

    The researchers took a homogeneous class of English school boys, and randomly divided them into two groups. And that’s all it took to start the fighting.

    Whoever wrote the line “Humanity is not naturally violent” has obviously never met humans and has not the slightest familiarization with either history or video games.

  2. John B()

    McChuck:

    Was that an actual British study?

    Or was that “Lord of the Flies”? 😉

  3. Sander van der Wal

    Amazing. They program a social hazard, and are surprised that a programmed social hazard is indeed a hazard.

    Probably passing the hazard unit tests with flying colors, too.

  4. Jim Fedako

    Wait! Are you telling me I cannot learn real world tank tactics by playing capture the flag in World of Tank?!?

  5. Kathleen Reeves

    Anyone who believes humans are not violent and are just full of sweetness and light is not playing with a full deck. Even if you believe that the Fall, as described in Genesis, is a “myth,” the fact that the world has been full of Cains clobbering Abels since before the Neanderthals must tell you something. And reading a bit of History wouldn’t hurt either.

  6. Faith

    This is the agenda. Humans aren’t naturally violent therefore give up your right to bear arms. Meanwhile brutalize everyone who disagrees.
