Models, theories, consistency, and truth

Ready? Put on your best straight face, recall that global temperatures have not increased for a decade, and that it’s actually been getting cooler, then repeat after Brenda Ekwurzel, of the Union of Concerned Scientists—what they’re concerned about, heaven knows; perhaps replace “concerned” with “perpetually nervous”—repeat, I say, the following words: “global warming made it less cool.”

Did you snicker? Smile? Titter? Roll your eyes, scoff, execrate, deprecate, or otherwise excoriate? Then to the back of the class with you! Because what Ekwurzel said was not wrong: it is true that the theory of global warming is consistent with cooler temperatures. The magic happens in the word consistent.

To explain.

While there might be plenty of practical shades of use and definition, there is no logical difference between a theory and a model. The only distinction that can be drawn with certainty is between mathematical and empirical theories. In math, axioms—which are propositions assumed without evidence to be true—enable strings of deductions to follow. Mathematical theories are these deductions; they are tautologies and, thus, are true.

Empirical theories, while they might use math, are not math; they instead say something about contingent events, which are events that depend on the universe being a certain way, outcomes which are not necessary, like temperature in global warming theory. Other examples: quantum mechanics, genetics, proteomics, sociology, and all statistical models; that is, all models of practical interest to humans.

Just as in math, empirical models start with a list of beliefs or premises; again, some of these might be mathematical, but most are not. Many premises are matters of observation, even humble ones like “The temperature in 1880 was cooler than in 1998.” The premises of empirical models might be true, or uncertain but taken to be certain; except in pedantic examples, they are never known to be false.

It is obvious that the predictions of statistical models are probabilistic: they say events happen with a probability different from 0 or 1, somewhere between certainly false and certainly true. Suppose the event X happens, where X is a stand-in for some proposition like “The temperature in 2009 will be less than in 2008.” Also suppose a statistical theory in which we have an interest has previously made the prediction, “The probability of X is very, very small.” An event which was extraordinarily improbable with respect to our theory has occurred. Do we have a conflict?

Global warming cools things off

No, we do not. The existence of X is consistent—logically compatible—with our theory because our theory did not say that X was impossible, merely improbable. So, again, any theory that makes probabilistic predictions will be consistent with any eventual observations.
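
A minimal sketch of “improbable but not impossible”; the predictive distribution and every number below are invented purely for illustration, not taken from any actual climate model:

    # Hypothetical predictive distribution for next year's temperature anomaly, in degrees C.
    from scipy.stats import norm

    theory = norm(loc=0.2, scale=0.1)   # invented numbers, purely illustrative

    observed = -0.15                    # a surprisingly cold outcome
    p_colder = theory.cdf(observed)     # probability of an anomaly this low or lower

    print(p_colder)        # about 0.0002: very improbable under the theory...
    print(p_colder > 0)    # ...but not zero, so the observation does not contradict the theory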

Global warming is a statistical theory. Of course, nowhere is a strict definition of global warming written down; two people will envision two different theories, differing typically at the edges of the models. This is not unusual: many empirical theories are amorphous and malleable in exactly the same way. This looseness is partly what makes global warming a statistical theory. For example, for nobody I know does the statement “Global warming says it is impossible that the temperature in any year will fall” hold true. The theory may, depending on its version, say that the probability of falling temperatures is low, and as low as you like without being exactly 0; but then any temperature that is eventually observed—even a dramatically cold one—is not inconsistent with the theory. That is, the theory cannot be falsified1 by observing falling temperatures.

It is worth mentioning that global warming, and many other theories, incorporate statistical models that give positive probability to events that are known to be impossible given other evidence. For example, given the standard model in physics, temperature can fall no lower than absolute zero. The statistical global warming model gives positive probability to temperatures lower than absolute zero (because it uses normal distributions as building blocks; more on this at a later date). But even so, the probabilistic predictions made by the model are obviously never inconsistent with whatever temperatures are observed.
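
To make the absolute-zero point concrete, a small sketch; the mean of 14°C and standard deviation of 0.5°C are placeholders, not figures from any real climate model:

    from scipy.stats import norm

    # Assumed (placeholder) distribution of yearly global mean surface temperature, degrees C.
    model = norm(loc=14.0, scale=0.5)

    # Log of P(T < -273.15 C): the result is finite, so the probability itself is positive,
    # even though temperatures below absolute zero are physically impossible.
    print(model.logcdf(-273.15))   # roughly -1.6e5, i.e. P is about exp(-165000): tiny, but not zero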

Incidentally, even strong theories, like, say, those used to track collisions at the Large Hadron Collider, which are far less malleable than many empirical models, are probabilistic because a certain amount of measurement error is expected; this ensures their statistical nature (space is too short to prove this here).

Now, since, for nearly all models, any observations realized are never inconsistent with the models’ predictions, how can we separate good models from bad ones? Only one way: the models’ usefulness in making decisions. “Usefulness” can mean, and probably will mean, different things to different people—it might be measured in terms of money, or of emotion, or by a combination of the two, or by how the model fits in with another model, or by anything else. If somebody makes a decision based on the prediction of a model, then they have some “usefulness” or “utility” in mind. To determine goodness, all we can do is see how our decisions would have been affected if the model had made better predictions (better in the sense that its predictions gave higher probability to the events that actually occurred).
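
One way to put a number on “gave higher probability to the events that actually occurred” is a log score; the sketch below invents two sets of forecast probabilities and a series of outcomes purely for illustration:

    import math

    # Probabilities two hypothetical models gave to "next year warmer than this year"
    # over five years, and what actually happened (1 = warmer, 0 = not). All invented.
    forecasts_a = [0.9, 0.8, 0.85, 0.9, 0.8]
    forecasts_b = [0.6, 0.55, 0.6, 0.65, 0.6]
    outcomes    = [1,   0,    1,    0,   1]

    def log_score(forecasts, outcomes):
        # Sum of log probabilities assigned to what actually occurred; higher is better.
        return sum(math.log(p if y == 1 else 1 - p) for p, y in zip(forecasts, outcomes))

    print(log_score(forecasts_a, outcomes))   # about -4.4: confident, but punished for its two misses
    print(log_score(forecasts_b, outcomes))   # about -3.4: hedged, and scores better on these outcomes

Whether a score like this counts as “usefulness” still depends on the decision at hand; it is just one common choice of measure.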

Unfortunately for Ekwurzel, while she’s not wrong in her odd claim, global warming theory has not been especially useful for most decision makers (those whose utility depends on temperature and not on the model’s political implications). It is trivial to say that the theory might eventually be useful, and then again it might not. So far, the safe bet has been on not.

———————————————————

1Please, God, no more discussions of Popper and his “irrational” (to quote Searle) philosophy. This means you, PG!

21 Comments

  1. Kevin

    So, a scientist has two competing hypotheses to test; one that says the value of X is 5 and the other that it is 10. Like good introductory stats books tell us to do, this scientist decides on a statistic, decides on an alternative hypothesis that X5 (two tailed), then collects data. These have a mean of 2 and a standard error of 1. So, our scientist has evidence against X=5, and should accept the alternative; but the data are even worse evidence for the proposition that X=10. I tell my intro stat students not to get themselves into a pickle like this in the first place. If one has two competing theories, then use a likelihood ratio (see the numerical sketch at the end of this comment).

    So under any reasonable (consistent) theory of global warming by mankind’s doing (GWbMD), the observation of declining temperature must have lower likelihood than would such an observation under the theory of no global effect to anything mankind does (!GE2AMD).

    Sure, declining temperatures might be exceptions to the truth of global warming over limited time and limited space, but, utility of good predictions aside, exceptions are not evidence in favor of a theory.
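
    A quick numerical version of Kevin’s likelihood-ratio point, assuming a normal likelihood and using his numbers (observed mean 2, standard error 1, hypotheses X = 5 and X = 10):

        from scipy.stats import norm

        obs, se = 2.0, 1.0
        like_5  = norm.pdf(obs, loc=5.0,  scale=se)   # likelihood of the data if X = 5
        like_10 = norm.pdf(obs, loc=10.0, scale=se)   # likelihood of the data if X = 10

        print(like_5 / like_10)   # about 9e11: the data favour X = 5 over X = 10 enormously,
                                  # even though a two-tailed test "rejects" X = 5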

  2. Kevin

    Oops. The system doesn’t allow the “not equal” sign, so “X5” in the prior post should be “X not equal to 5”.

  3. JJD

    By the way, a tautology is a statement which, because of its structure, is always true. For example, “if 5 is odd, then 5 is odd” is a tautology, as is “if 5 is even, then 5 is even.” Nontrivial mathematical theorems, such as the Pythagorean theorem (about right triangles) and Fermat’s Last Theorem (about integers), are proved from axioms by means of logical rules, but in general they are not tautologies.

    Regarding the consistency of a statistical theory with any observations, it makes sense that any single observation would constitute fairly weak evidence for or against such a theory, but a large body of the right kind of observations collectively can provide strong evidence for or against. That is why particle accelerators record tremendous numbers of events, and for the most part they are not looked at one at a time.

    Another consideration is that if a statistical theory is based upon quack statistics it may be a waste of time arguing about evidence for or against.

  4. Briggs

    JJD,

    Right on the tautology, Jim, of course. But philosophers often use the word in the sense I did for mathematical proofs; you’re right that it’s not the same lingo the mathematicians themselves use. But we agree that, conditional on the truth of the axioms (all of them, even those used to justify steps in a proof), the results are deductively proved, hence true. In that logical sense, a mathematical proof is a tautology.

    Also exactly so about how evidence lends credence to a theory; however, when you say “good” you have to define some measure of this credence or “goodness.” This is what I meant when I said that a model can be good for one person and bad for another—if those two people are putting the model to different uses, i.e., they are using it to make decisions that are either different, or the same but with different utilities attached to their outcomes.

    We lastly agree that junk statistical methods lead to junk models; however, the statements of those junk models are still not inconsistent with whatever obtains. That is the key point: consistency isn’t enough. And we are led back to the necessity of creating a measure of goodness. And not one, I must emphasize, which finds how well the model fits the known data, but one which finds how well it predicts data that were not used in any way to create the model. It is a trivial proof that for any set of data you care to imagine, a perfectly fitting (by whatever measure you choose) model can be found.
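
    A minimal sketch of that last point (the noise data and the degree-7 polynomial below are invented for illustration):

        import numpy as np

        rng = np.random.default_rng()
        x_train = np.arange(8, dtype=float)
        y_train = rng.normal(size=8)                  # pure noise: there is no signal to find

        coeffs = np.polyfit(x_train, y_train, deg=7)  # a degree n-1 polynomial through n points
        fit    = np.polyval(coeffs, x_train)

        print(np.max(np.abs(fit - y_train)))          # essentially 0: a "perfect" fit to the known data
        print(np.polyval(coeffs, 9.0))                # yet its prediction for a new point is typically wild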

  5. Kevin

    I failed to make my point all that clear, I suppose. The junk method I was alluding to in this specific instance is treating temperature alone as the defining characteristic of climate. Briggs, you smacked the issue obliquely when you spoke of those who see utility only in temperature and not in the political implications. I don’t think one has to bring in political implications to see what makes most people roll their eyes, legitimately, at statements like Brenda’s. As long as global temperatures are rising, people who believe in AGW are perfectly content to point to the temperature record alone. When counterintuitive trends in temperature become apparent, they now must add complexity to the argument. Either that or begin fiddling around with the temperature record. In either case it looks ad hoc.

    Jim’s mention of tautology explains well why I have always been a skeptic in this debate. Without some method (data and analysis in this case) capable of resolving between AGW and GW, all I am able to comprehend is GW. In fact, I am even becoming a little skeptical of GW. Climate change is another tautology.

  6. RW

    “recall that global temperatures have not increased for a decade, and that it’s actually been getting cooler”

    You claim to be a statistician and yet you make a statement like this? Dear me.

  7. Briggs

    Dear me, yes, RW. Do you have information the rest of us don’t?

  8. Kevin

    Yes, for the past decade, no temperature increase. The satellite data are here: http://www.ssmi.com/msu/msu_data_description.html (three panels of tropospheric temperature), the GISS surface data are here: http://data.giss.nasa.gov/gistemp/graphs/Fig.C.lrg.gif, and Hadley/Climate Research Unit data are here: http://www.cru.uea.ac.uk/cru/data/temperature/nhshgl.gif, although these particular graphs are hard as the dickens to read because the time scale is so compressed. There are other data sets as well, but you can pretty well take your pick and not find any temperature increase. Are the oceans warming? — probably not. Is sea ice vanishing? Well that is a complicated story, but this year is not shaping up as anything like 2007.

    Certainly this has revived my interest in climate change some. During the 1990s, particularly, I began to see the topic as little more than tracking temperature year by year or even month by month, and fretting over what catastrophe the latest 0.01 degree of temperature increase had wrought. Debate settled. It made the topic stale, other than watching people tie everything to a little warming.

    Now I think people have begun to see that climate is a lot more than just temperature, and that climate matters.

  9. Briggs

    Thanks, Mike. (Say, I wonder if you remember, but we did meet once, at an AMS meeting, back in ’95 or so.)

  10. Mike

    I do, and I am a big fan of your blog!

  11. JH

    I am trying to catch up with some blog reading.

    Good points on consistency in prediction using probability. There is also consistency in probability, on which the law of large numbers is based.

    Here is my view on consistency. Let’s consider the following simple example.

    Briggs has always been punctual in attending meetings. That is, there is little fluctuation or variability in his arrival time (AT) around the scheduled time. And you would expect (predict) him to appear within a small window of the scheduled time. If he hasn’t shown up 15 minutes after the scheduled time, you would wonder what has happened to him. Really, you have decided that the probability of his being late is slim, and such an event is inconsistent with the model you have in mind about his AT. So, is your model incorrect based on one such event? Maybe not. However, if he continues to be late, you would probably modify your model about his AT.

    Also, we all have a friend like JH who comes and goes as she wishes. There is large variability and uncertainty in her AT. Your prediction interval of her AT, due to the large variation, is probably wide. If she is too late or early for anything, no big deal, that’s the way she is. Her lateness is consistent with her usual erratic behavioral model. The probability of her being late or early is not significantly small enough to cause any concern.

    In more complex cases, building a mathematical or statistical model is not as simple and there are many issues to be considered such as data integrity and identification of important explanatory variables. However, some of these basic concepts on modeling, prediction precision, uncertainty, probability and decision making still carry through. I am not interested in GWA issues or debates at all, though I have read some of statistical methodologies employed in climate modeling. I can only imagine that there are certain events that can cause the inconsistency of the prediction. A prediction of a lower or higher temperature than observed is not a definite indication of an incorrect model and inconsistent prediction. Even if there are errors, modification of a model is a routine process of a scientific study.

  12. Briggs

    JH,

    Thanks. Of course, in the Law of Large Numbers, by “large” we mean “approaching the limit of infinity.”

    Sure, people are fluid in what they mean by “Model”, especially for models that are not written down. They will call the changes and progressions “part” of the Model. Technically, this is false. Make a change to a model and you have a new model. And in any case, the observations are not inconsistent with either the original or the new (modified) one.

    A prediction of hotter temperatures where cooler ones are observed is not inconsistent with the global warming model, because nobody expects the predictions to be 100% certain for any event. If the model said it was impossible for lower temperatures to obtain, and we saw them, the model would be proven faulty. But this almost never, ever, never never never occurs.

  13. David

    What I am taking away here is that AGW would have to specify beforehand what the inconsistent level would be, the level that makes the model falsifiable. And from what I understand, no such level is made available, so talking about consistency is off topic.

  14. Briggs

    David,

    No. Actually, the models’ outputs are in fact probabilistic (in hard and soft senses), so it is quite useful to talk about inconsistency.

  15. Richard

    RW says:
    3 August 2009 at 6:17 pm
    “recall that global temperatures have not increased for a decade, and that it’s actually been getting cooler”

    You claim to be a statistician and yet you make a statement like this? Dear me.

    Almost “incontrovertibly”, global temperature data has been trending down from 2002, that’s 9 years. The sea surface temperatures have been trending down too (which should ultimately have an effect on atmospheric temperatures).

    The following link gives the graphs for the 2 satellite temperatures UAH and RSS and one “surface” temperature record (Hadley, UK).

    http://woodfortrees.org/plot/uah/from:2002/to:2008/trend/plot/hadcrut3vgl/from:2002/to:2008/trend/plot/rss/from:2002/to:2008/trend/plot/hadsst2gl/from:2002/to:2008/trend

    The lone temperature data that stubbornly bucks the trend is GISS, maintained and “adjusted” (many tens of thousands of times) by James Hansen, who, though a “scientist”, is also an activist and a firm believer in his cause of anthropogenic global warming by CO2. (There are other acknowledged and non-controversial ways that we have influenced the local climate, such as urbanisation and irrigation.) An “activist scientist” sounds like an oxymoron to me, as you cannot then be objective in your views.

    That there has been no warming since 1998 is controversial as there was a big El Nino in 1998 which warmed the atmosphere.

  16. Richard

    oops sorry 7 years

  17. Richard

    PS even if the “surface temperature” data were not controversial (there is doubt about the urban heat island effect and whether it has been effectively “adjusted” for in the data, and about the lack of coverage and accuracy of the data in many areas), it still makes no sense as a meaningful global temperature record.

    It actually measures the atmospheric temperature over land and the sea water temperature over the sea and then averages the two.

    The satellite temperatures have a more even coverage and actually measure (or compute) the atmospheric temperatures. They however suffer from lack of historical data as they started only in 1979.

  18. Richard

    “..what Ekwurzel said was not wrong, .. it is true that the theory of global warming is consistent with cooler temperatures.”

    However, NOAA has made it easier for the layman to, if not disprove, at least doubt the Anthropogenic Global Warming (AGW) hypothesis merely by watching the temperature records.

    They ran 10 simulations spanning a period of 700 years and found 17 non-overlapping decades “with trends in ENSO-adjusted global mean temperature within the uncertainty range of the observed 1999–2008 trend (−0.05° to 0.05°C decade⁻¹)”.

    This is what they say “The simulations rule out (at the 95% level) zero trends for intervals of 15 yr or more, suggesting that an observed absence of warming of this duration is needed to create a discrepancy with the expected present-day warming rate.”

    If there is no positive temperature trend for at least 15 years, then AGW, we have a problem.

    http://www.ncdc.noaa.gov/oa/climate/research/2008/ann/bams/full-report.pdf
