Quick post, since I am still away Up North. Drudge linked the article “Pandemic ‘could wipe out 900 million people,’ experts warn” in some tabloid or sensationalistic rag (New York Times?).
A chilling simulation has revealed just how easily a new pathogen could wipe out a huge slice of the world’s population — up to 900 million people.
Researchers at Johns Hopkins University simulated the spread of a new illness — a new type of parainfluenza, known as Clade X.
The simulation was designed so the pathogen wasn’t markedly more dangerous than real illnesses such as SARS — and illustrates the tightrope governments tread in responding to such illnesses.
Here’s the world’s simplest chilling simulation:
    Input X —> Output X.
Now imagine you’re a scientist anxious to understand how millions will die. Input “Millions will die from splenetic fever” (i.e., a mind-fever produced by consuming too much news media). What’s the output? “Millions will die from splenetic fever.”
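For the literal-minded, here is that chilling simulation written out as a few lines of Python. The function name and the input string are mine, purely for illustration:

```python
def simulate(x):
    """The world's simplest chilling simulation: Input X -> Output X."""
    return x

# Feed in the conclusion; receive the conclusion.
headline = simulate("Millions will die from splenetic fever")
print(headline)  # -> Millions will die from splenetic fever
```

The output is guaranteed to match the input, because that is what the algorithm was told to do.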
What’s the headline?
Artificial Intelligence Computer Model Predicts Millions To Die By Splenetic Fever!
Doubtless “climate change” would feature in the body of the breathless article.
You will say the example is silly, which it is. But it is no different in a fundamental sense from the linked article. There, there was an Input and Output, and an algorithm to turn the one into the other. (The algorithm was “—>”.)
The algorithm is designed by the scientist or “researcher.” It does what it is told to do. Always. The algorithm—any algorithm—was programmed on purpose to say “When you see X, say Y”, however complicated the steps in between from X to Y. This is so even if the algorithm uses “randomness” (see the full dope on the severe limitations of simulation methods).
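The point about “randomness” can be made concrete. Here is a toy sketch (function name, seed, and noise level are all my inventions): even when a simulation draws random numbers, the designer chose the distribution, the spread, and the mapping from X to Y, and with a fixed seed the “random” answer is the same every time.

```python
import random

def simulate_deaths(severity, seed=0):
    """Toy 'stochastic' simulation. The designer picked the noise
    distribution and the rule mapping severity (X) to deaths (Y)."""
    rng = random.Random(seed)          # fixed seed: the randomness is programmed too
    noise = rng.gauss(0, 0.05)        # designer chose this distribution and spread
    return severity * (1 + noise)     # Y is severity by construction

# Run it twice with the same seed and you get the identical "random" answer.
print(simulate_deaths(900_000_000))
```

The dice are loaded by design, even when the algorithm appears to roll them.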
Of course, some algorithms are so complicated that some people cannot see which combinations of X lead to which combinations of Y. So what? Some people can’t multiply two numbers without a calculator, but multiplication is no mystery. That X leads to Y is in any algorithm by design. It was put there!
If you want to cheat, or cheat yourself, the path is clear. Call X whatever you like, label the algorithm a “simulation” or “deep learning” or “artificial intelligence” or similar, and then express marvel at Y. Again, sometimes the path is not clear from X to Y, and the way the algorithm produces Y might teach you something about X. But since X is put there by you, and the algorithm does what you told it, it cannot be marvelous when it works as it should.
This, incidentally, is why there is not one whit of difference between a “simulation”, “forecast”, “prediction”, “prognostication”, “scenario”, or any of the other words that describe getting from X to Y. People who take refuge in a failed “scenario” by claiming the scenario wasn’t a forecast are fooling themselves. And possibly you, too.
There is no saying the Y has to be certain: it need only be probable with reference to X and the algorithm.
Anybody notice the similarities between any probability model, or mathematical model, or indeed any model at all? You should by now.
A simulation, prediction, etc., fails in two ways. X could be mismeasured or misspecified while the algorithm is good. Mistakes happen. Or X could be fine and the algorithm stinks. Or both. Pros, like those behind the linked article above, rarely screw up X. But they love their algorithms too well. Algorithms can be right in saying Y from X, but wrong in why Y truly came about. Monkeys throwing darts can pick good stocks.
Of course, I am not saying there will not be a pandemic where a seventh of the population is wiped out. Nor am I claiming a “doomsday cult” won’t release a “genetically engineered virus.” But if you’re writing a simulation that takes as input X = “Doomsday cult releases genetically engineered virus”, the part of the algorithm that leads to Y = “Nearly a billion die” has to specify, by design, the kind of virus that would kill a billion, in a manner that must be imagined by the algorithm’s designers.
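To see how the billion gets baked in, here is a toy sketch of such a model. This is not the Clade X model; the function, its parameters, and the numbers are all illustrative assumptions of mine. Pick an infection fraction and a fatality rate whose product yields the desired body count, and the model obligingly “predicts” it:

```python
def pandemic_simulation(population, infection_fraction, fatality_rate):
    """Toy pandemic model: deaths (Y) follow entirely from the
    parameters the designer chose to feed in (X)."""
    infected = population * infection_fraction
    return infected * fatality_rate

# Designer-chosen inputs that happen to multiply out to ~900 million:
deaths = pandemic_simulation(7.6e9, 0.59, 0.20)
print(f"{deaths:.3g} deaths")  # roughly 900 million, by construction
```

The headline number was decided the moment the parameters were typed in, not discovered by the machine.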
That is, we are not far from our simple chilling algorithm.