The end is probably not nigh.
This originally ran 28 May 2013, but given Shaun Lovejoy’s latest effort
in Climate Dynamics
to square the statistical circle, it’s necessary to reissue. See the Lovejoy update at the bottom.
My Personal Consensus
I, a professional statistician, PhD certified from one of the top universities in the land—nay, the world—a man of over twenty years hard-bitten numerical experience, a published researcher in the very Journal of Climate, have determined that global temperatures have significantly declined.
You read that right: what has gone up has come back down, and significantly. Statistically significantly. Temperatures, he said again, have plunged significantly.
This is so important a scientific result that it bears repeating. And there is another reason for a recapitulation: I don’t believe that you believe me. There may be a few of you who are suspicious that old Briggs, well known for his internet hilarity, might be trying to pull a fast one. I neither josh nor jest.
Anyway, it is true. Global warming, by dint of a wee p-value, has been refuted.
Which is to say that according to my real, genuine, mathematically legitimate, scientifically fabricated scientific statistical scientific model (calculated on a computer), I was able to produce statistical significance and reject the “null” hypothesis of no cooling. Therefore there has been cooling. And since cooling is the opposite of warming, there is no more global warming. Quod ipso facto. Or something.
I was led to this result because many (many) readers alerted me to a fellow named Lord Donoughue, who asked Parliament a question which produced the answer that “the temperature rise since about 1880 is statistically significant.” Is this right?
Not according to my model. So whose model, the Met Office’s or mine, is right?
Well, that’s the beauty of statistics. Neither model has to be right; plus, anybody can create their own.
Here’s the recipe. Grab, off the shelf or concoct your own with sweat and integrals, a model. The more scientific sounding the better. Walk into a party with “Autoregressive heteroscedastic GARCH process” or “Coupled GCM with Kalman-filtering cloud parameterization” on your lips and you simply cannot fail to be a hit.
Don’t despair of finding a model. They are as dollars to a bureaucracy: they are infinite! Thing is, all models, as long as they are not fully deterministic, have some uncertainty in them. This uncertainty is parameterized by a lot of knobs and switches which can be thrown into any number of configurations.
Statistical “significance” works by tossing some data at your model and hoping that, via one of a multitude of mathematical incantations, one of these many parameters turns out to be associated with a wee p-value (defined as less than the magic number; only adepts know this figure, so if you don’t already have it, I cannot tell you).
If you don’t get a wee p-value the first time, you keep the model but change the incantation. There are several, which practically guarantees you’ll find joy. Statisticians call this process “hypothesis testing.” But you can think of it as providing “proof” that your hypothesis is true.
Funny thing about statistics is that you can always find a model with just the right set of parameters so that one, in the presence of data, is associated with a wee p-value. This is why, for example, one scientist will report that chocolate is good for your ticker, while another will claim chocolate is “linked to” heart disease. Both argue from a different statistical model.
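How reliable is the recipe? Here is a toy sketch (Python; every number invented for illustration): regress pure noise on a battery of equally meaningless predictors and keep the smallest p-value. Even when nothing whatever is happening, “significance” turns up far more often than the advertised five percent.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, trials, n_tests = 100, 500, 10

hits = 0
for _ in range(trials):
    y = rng.normal(size=n)  # pure noise: no signal at all, by construction
    # Ten different "incantations": regress the noise on ten unrelated
    # random predictors and keep only the smallest p-value.
    pvals = [stats.linregress(rng.normal(size=n), y).pvalue
             for _ in range(n_tests)]
    if min(pvals) < 0.05:
        hits += 1

print(f"At least one 'significant' result in {hits / trials:.0%} of trials")
```

With ten shots at the magic number, roughly two trials in five produce a “discovery” from data that contain nothing at all.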
Same thing holds in global warming. One model will “confirm” there has been statistically significant cooling, another will say statistically significant warming.
The global temperature (as measured operationally) has certainly changed since the 1800s. Something, or some things, caused it to change. It is impossible—as in impossible—that the cause was “natural random variation”, “chance” or anything like that. Chance and randomness are not causes; they are not real, not physical entities, and therefore cannot be causes.
They are instead measures of our ignorance. All physical and probability models (or their combinations) are encapsulations of our knowledge; they quantify the certainty and uncertainty that temperature takes the values it does. Models are uncertainty engines.
This includes physical and statistical models, GCMs and GARCHes. The only difference between the two is that physical models tie our uncertainty of temperatures to knowledge of other physical processes, while statistical models wed uncertainty to mysterious math and parameterizations.
A dirty, actually filthy, open secret in statistics is that for any set of data you can always find a model which fits that data arbitrarily closely. Finding “statistical significance” is as difficult as the San Francisco City Council discovering something new to ban. The only evidence weaker than a hypothesis test is a raw assertion or a fallacious appeal to authority.
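The “arbitrarily closely” claim is not rhetoric; it is a one-liner. A sketch, assuming nothing but numpy and some invented data: a polynomial with as many knobs as there are data points threads every point exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 6)
y = rng.normal(size=6)  # six arbitrary data points: pure noise

# A degree-5 polynomial has six free parameters, so it can pass
# through all six points exactly: a perfect, and perfectly
# meaningless, fit.
coeffs = np.polyfit(x, y, deg=5)
fitted = np.polyval(coeffs, x)
print("max fit error:", np.abs(fitted - y).max())
```

The “fit” is exact to machine precision, and it tells you precisely nothing about the next data point.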
The exclusive, or lone, or only, or single, solitary, sole way to check whether any model is good is if it can skillfully predict new data, where “new” means as yet unknown to the model in any way—as in, in any way. The reason skeptics exist is because no known model has been able to do this with temperatures past a couple of months ahead.
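What a genuine skill check looks like can be sketched in a few lines (Python; the “temperature” series here is an invented random walk, not real data): fit on one stretch, predict the next, and compare against the dumbest possible forecast. A model without skill loses to “tomorrow will be like today.”

```python
import numpy as np

rng = np.random.default_rng(2)
# A made-up "temperature" series: a random walk, standing in for
# any real climatic record.
t = np.cumsum(rng.normal(size=200))

train, test = t[:150], t[150:]
x_tr = np.arange(150)

# In-sample trend model versus a naive "last value" forecast.
slope, intercept = np.polyfit(x_tr, train, 1)
trend_pred = intercept + slope * np.arange(150, 200)
naive_pred = np.full(50, train[-1])

mse_trend = np.mean((trend_pred - test) ** 2)
mse_naive = np.mean((naive_pred - test) ** 2)
print("trend MSE:", mse_trend, "| naive MSE:", mse_naive)
```

Only if the model beats the naive forecast, on data it never saw, has it earned the word “skill.”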
The Dramatic Conclusion
There isn’t a soul alive or dead who doesn’t acknowledge that temperatures have changed. Since it cannot be that the observed changes are due to “natural variation” or “chance,” that means something real and physical, possibly many different real and physical things, caused temperature to take the values it did.
If we seek to understand this physics, it’s not likely that statistics will play much of a role. Thus, climate modelers have the right instinct by thinking thermodynamically. But this goes both directions. If we have a working physical model (by “working” I mean “that which makes skillful predictions”) there is no reason in the world to point to “statistical significance” to claim temperatures in this period are greater than temperatures in that period.
Why abandon the physical model and switch to statistics to claim significance when we know that any fool can find a model which is “significant”, even models which “prove” temperatures have declined? This is as nonsensical as it is suspicious. Skeptics see this shift and rightly speculate that the physics isn’t as solid as claimed.
If a statistical model has skillfully predicted new temperatures, and of course this is possible, then it is rational to trust the model to continue to do so (for the near horizon; who trusts a statistics model for a century hence?). But there is not a lot that can be learned from the model about the physics, unless the parameters of the model can be married to physical concepts. And if we can do that, we should be able to create skillful physical models. Good statistical models of physical processes thus work toward their own retirement.
Ready for the punch line? It is shocking and deeply perplexing why anybody would point to statistical significance to claim that temperatures have gone up, down, or wiggled about. If we really want to know whether temperatures have increased, then just look. Logic demands that if they have gone up, then they have gone up. Logic also proves that if they have gone down, then they have gone down. Statistical significance is an absurd addition to absolute certainty.
The only questions we have left are—not whether there have been changes—but why these changes occurred and what the changes will be in the future.
Lovejoy Update To show you how low climatological discourse has sunk, in the new paper in Climate Dynamics Shaun Lovejoy (a name which we are now entitled to doubt) wrote out a trivially simple model of global temperature change, after which he inserted the parenthetical words “skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis”. In published comments he also fixated on the word “deniers.” If there is anybody left who says climate science is no different than politics, raise his hand. Anybody? Anybody?
His model, which is frankly absurd, is to say the change in global temperatures is a straight linear combination of the change in “anthropogenic contributions” to temperature plus the change in “natural variability” of temperature plus the change in “measurement error” of temperature. (Hilariously, he claims measurement error is of the order +/- 0.03 degrees Celsius; yes, three-hundredths of a degree: I despair, I despair.)
His conclusion is to “reject”, at the gosh-oh-gee level of 99.9%, that the change of “anthropogenic contributions” to temperature is 0.
Can you see it? The gross error, I mean. His model assumes changes in “anthropogenic contributions” to temperature, and then he had to supply those changes via the data he used (fossil fuel use was implanted as a proxy for actual temperature change; I weep, I weep). Was there then any chance of the data he supplied being rejected as “non-significant”?
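The circularity can be sketched in a few lines of Python (toy numbers only, not Lovejoy’s data or his actual model): build a “temperature” series that contains the anthropogenic proxy by construction, then “test” whether the proxy’s coefficient is zero. Rejection is guaranteed before a single real datum is examined.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 130
anthro = np.linspace(0.0, 1.0, n)  # stand-in "anthropogenic" proxy

# Build a "temperature" series that contains the proxy by construction,
# plus a little noise.
temp = 0.9 * anthro + rng.normal(scale=0.1, size=n)

res = stats.linregress(anthro, temp)
print(f"p-value for 'anthropogenic' coefficient: {res.pvalue:.2e}")
# Rejecting beta = 0 here is inevitable: the proxy was baked into the
# very series being "tested". The wee p-value tells us nothing new.
```

The test “confirms” exactly what the modeler put in, which is why a wee p-value here carries no evidential weight whatsoever.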
Is there any proof that his model is a useful representation of the actual atmosphere? None at all. But, hey, I may be wrong. I therefore challenge Lovejoy to use his model to predict future temperatures. If it’s any good, it will be able to skillfully do so. I’m willing to bet good money it can’t.