# Naomi Oreskes Plays Dumb On Statistics And Climate Change

This post is one that has been restored after the hacking. All original comments were lost.

Remember how I said, again and again—and again—that everybody gets statistics wrong? Here’s proof fresh from the newspaper “of record”, which saw fit to publish prominently an odd article by Naomi Oreskes, who wrote:

> Typically, scientists apply a 95 percent confidence limit, meaning that they will accept a causal claim only if they can show that the odds of the relationship’s occurring by chance are no more than one in 20. But it also means that if there’s more than even a scant 5 percent possibility that an event occurred by chance, scientists will reject the causal claim. It’s like not gambling in Las Vegas even though you had a nearly 95 percent chance of winning.

This is false, but it’s false in a way everybody thinks is true. I hate harping (truly, I do), but here is what “significance” actually is: an ad hoc function of the data and parameters inside a model produces a p-value less than the magic number. Change the function or the model and, for the same data, “significance” comes and goes. Far from being “scant”, that 1 in 20 is trivially “discovered” given the barest effort and creativity on the part of researchers. As regular readers know, time and again nonsensical results are claimed real based on 1 in 20.
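How trivially is that 1 in 20 “discovered”? Easy to check by simulation. This sketch (an illustration of mine, not anything in Oreskes’s article) scores a hypothetical study that measures twenty independent outcomes on pure noise, i.e. with no real effects anywhere. Most such studies still turn up at least one “significant” p-value.

```python
# Sketch: a null "study" with 20 independent outcomes usually yields
# at least one p < 0.05 -- roughly 1 - 0.95**20, or about two thirds.
import random
import statistics

random.seed(1)

def z_test_p(a, b):
    """Two-sided p-value for a difference in means (large-sample z approximation)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2.0 * (1.0 - statistics.NormalDist().cdf(abs(z)))

trials, hits = 1000, 0
for _ in range(trials):
    # One "study": 20 outcomes, each two groups drawn from the SAME distribution.
    for _outcome in range(20):
        a = [random.gauss(0, 1) for _ in range(50)]
        b = [random.gauss(0, 1) for _ in range(50)]
        if z_test_p(a, b) < 0.05:
            hits += 1  # study claims a "finding"
            break

rate = hits / trials
print(f"studies with at least one 'significant' finding: {rate:.0%}")
```

And that is with honest, fixed analyses; letting the model or the function vary after seeing the data only makes the hunting easier.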

That’s the small error. The big one is where she says scientists “will accept a causal claim” when wee p-values are found. Here it isn’t Oreskes who’s wrong: scientists will indeed accept a causal claim in the presence of wee p-values. The problem is that they should not. A wee p-value does not prove causality. And a non-wee p-value does not, it absolutely DOES NOT, say that the results “occurred by chance”. No result in the history of the universe was caused by or occurred by chance. Chance and randomness are not causes. They are states of knowledge, not physical forces.

If I thought it’d do any good, I’d scream those last four sentences. It won’t. You’re too far away. Do it for me.

Oreskes goes on to discuss “Type 1” and “Type 2” errors (statistical terminology is usually dismal like this). “Type 1” is the false positive, accepting that which is false as true. Sociologists, educationists, psychologists, any person with “studies” in their title, and similar folk know this one well. It is their bread and butter. “Type 2” is the false negative, not accepting that which is true. Die-hard consensus lovers in changing fields know this one intimately.
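Both error rates can be put on the table with a simulation (again my own illustration, with an arbitrary effect size chosen for the example): under no effect, about 5 percent of tests falsely “find” one (Type 1); under a real but modest effect, a large fraction of tests miss it (Type 2).

```python
# Sketch: Type 1 rate under a true null, Type 2 rate under a real
# shift of 0.3 standard deviations, both at the usual alpha = 0.05.
import random
import statistics

random.seed(3)

def z_test_p(a, b):
    """Two-sided p-value for a difference in means (large-sample z approximation)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2.0 * (1.0 - statistics.NormalDist().cdf(abs(z)))

def rejection_rate(true_shift, trials=1000, n=50, alpha=0.05):
    """Fraction of trials with p < alpha when the real mean shift is true_shift."""
    rejections = 0
    for _ in range(trials):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(true_shift, 1) for _ in range(n)]
        if z_test_p(a, b) < alpha:
            rejections += 1
    return rejections / trials

type1 = rejection_rate(true_shift=0.0)      # no effect, yet "significant"
type2 = 1 - rejection_rate(true_shift=0.3)  # real effect, yet missed
print(f"Type 1 rate: {type1:.2f}, Type 2 rate (0.3 sd shift): {type2:.2f}")
```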

By far, and without even a skosh of a scintilla of a doubt, false positives are the larger problem. Most new ideas are wrong, for the same reason most mutations are bad. We can certainly understand somebody holding to a mistaken consensus, like those who disbelieved in continental drift, or those who believe the world will end in heat death unless the government is given orders of magnitude more control over people’s lives. Going against the flow is rarely encouraged. But if you’re rewarded for coming up with “unique” and politically favorable findings, as indeed scientists are, trumpets will be incorrectly sounded all too often.

Yet Oreskes embraces false positives for the good they will do.

> When applied to evaluating environmental hazards, the fear of gullibility can lead us to understate threats. It places the burden of proof on the victim rather than, for example, on the manufacturer of a harmful product. The consequence is that we may fail to protect people who are really getting hurt.

She next aptly uses the word “dumb” to describe thinking about this situation. No better term. Look: the manufacturer is guilty because it has made a harmful product. The poor victim can’t have justice for fear of false positives. Yet how do we know the manufacturer is guilty? According to Oreskes’s logic: because it is a manufacturer! That’s dumb thinking all right.

> What if we have evidence to support a cause-and-effect relationship? Let’s say you know how a particular chemical is harmful; for example, that it has been shown to interfere with cell function in laboratory mice. Then it might be reasonable to accept a lower statistical threshold when examining effects in people, because you already have reason to believe that the observed effect is not just chance.

So we know this chemical boogers up some mice. Is that proof it does the same in men? No, sir, it is not. Especially when we consider that the mice might have been fed a diet of nothing but the chemical in order to “prove” the chemical’s harmful effects.

And she misunderstands, again, the nature of probability. We want to know the probability that, given this chemical, a man will fall ill. That can be answered. But simply loosening the p-value requirement does nothing to help answer it. Lowering an evidential standard which is already a wide-open door can only mislead. Notice also the same mistake about the observed effect being “just chance.”

> This is what the United States government argued in the case of secondhand smoke. Since bystanders inhaled the same chemicals as smokers, and those chemicals were known to be carcinogenic, it stood to reason that secondhand smoke would be carcinogenic, too. That is why the Environmental Protection Agency accepted a (slightly) lower burden of proof: 90 percent instead of 95 percent.

Yes. The EPA misled itself, then us. What we wanted, but did not get, was this: given a person inhales a known amount of secondhand smoke (of such and such quality), what is the probability the person develops cancer? What we got were crappy p-values and preconceptions passed off as causes. We remain ignorant of the main question.
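And the arithmetic of that “lower burden of proof” is easy to check. On data with no real effect at all, moving the threshold from 95 to 90 percent roughly doubles the false positive rate, which answers nothing about cancer and smoke but does manufacture more “findings”. A simulation sketch of my own:

```python
# Sketch: among many null "studies" (two groups from the same
# distribution), count how many clear the 95% bar vs the 90% bar.
import random
import statistics

random.seed(4)

def z_test_p(a, b):
    """Two-sided p-value for a difference in means (large-sample z approximation)."""
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2.0 * (1.0 - statistics.NormalDist().cdf(abs(z)))

ps = []
for _ in range(2000):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    ps.append(z_test_p(a, b))

at_95 = sum(p < 0.05 for p in ps) / len(ps)
at_90 = sum(p < 0.10 for p in ps) / len(ps)
print(f"false positives at the 95% threshold: {at_95:.1%}")
print(f"false positives at the 90% threshold: {at_90:.1%}")
```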

Sigh. It’s reality and probability deniers like Oreskes and the EPA that give science a bad name.

Despite all evidence, Oreskes claims scientists are fearful of embracing false positives. Why?

> The answer can be found in a surprising place: the history of science in relation to religion. The 95 percent confidence limit reflects a long tradition in the history of science that valorizes skepticism as an antidote to religious faith.

Dear Lord, no. No no no. No. Not even no. If this were a private blog, I’d tell you the real kind of no, the sort Sergeant Montoya taught me in basic training. No. That rotten 95-percent “confidence” came from Fisher and quickly transmogrified into pure magic. That level is religion. It is in no way an antidote to it, nor was it ever meant to be. Good grief!

I stopped reading after this, having been reduced to an incoherent sputtering volcanic mass. This person, this misinformed and really quite wrong person, is feted, celebrated, and rewarded for being wrong, being wrong in the direction politically desired, while folks like Yours Truly are out in the cold for being impolitely right. Hard to take sometimes.

Update: The post by our friend D.G. Mayo (an unrepentant frequentist) on this subject is worth reading.