Yesterday we looked at NCDC’s claim that the 13-month stretch of “above-normal” temperatures had only a 1 in 1.6 million chance of occurring. Let’s today clarify the criticism.

The NCDC had a list of premises, or evidence, or assumptions, or some model which they assumed true. Given that model (call it the Simple Model), they deduced there was a 1 in 1.6 million chance of 13-in-a-row months of “above-normal” temperatures. This probability, given that model, was true. It was correct. It was right. It was valid. Everybody in the world should believe it. There was nothing wrong with it. *Finis*.
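For the curious, here is how the figure presumably falls out of the Simple Model (my reconstruction; the NCDC did not show its work): each month independently has a 1-in-3 chance of landing "above normal," i.e. in the top third of the historical record.

```python
# My reconstruction (an assumption): "above normal" means the top third of
# the historical record, and the 13 months are treated as independent.
# That is the Simple Model.
p_above = 1 / 3   # chance any one month lands in the top third
months = 13
p_streak = p_above ** months
print(f"1 in {1 / p_streak:,.0f}")  # 1 in 1,594,323 -- roughly 1 in 1.6 million
```

That (1/3)^13 works out to almost exactly the quoted 1 in 1.6 million is some evidence this is the model they assumed.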

However, the intimation by the NCDC and many other folks was that because this probability—the *true* probability—was so small, that therefore the Simple Model was false. And that therefore rampant, tipping-point, deadly, grant-inducing, oh-my-this-is-it global climate disruption on an unprecedented scale never heretofore seen was true. That is, because given the Simple Model the probability was small, therefore the Simple Model was false and another model true. The other model is Global Warming.

This is what is known as backward thinking. Or wrong thinking. Or false thinking. Or strange thinking. Or just plain silly thinking: but then scientists, too, get the giggles, and there’s only so long you can compile climate records before going a little stir crazy, so we mustn’t be too upset.

Now something caused the temperatures in those 13 months to take the values they did. Some string of physics, chemistry, topography, whatever. Call this whatever the True Model; and call it that because that is what it is: it is the true cause of the temperature. Given the True Model, then, the probability of the temperature taking the values it did was 1—100%. We can only add *of course*.

The Global Warming model is a rival model held by many to be unquestionable (which is not to say *true*). Why not ask: given the Global Warming model, what is the probability of 13-in-a-row “above-normal” temperatures? Nobody did ask, but let’s pretend somebody did. There will be some answer, some probability. Save this and set it aside. This probability will also be true, correct, right, assuming we believe the Global Warming model is true.

Yet there also exist *many* other rival models besides the Global Warming and Simple Models. We can ask, for each of these Rival Models, what is the probability of seeing 13-in-a-row “above-normal” temperatures? Well, there will be some answer for each. And each of those answers *will be true*, correct, *sans reproche*. They will be right.

Now collect all those different probabilities together—the Simple Model probability, the Global Warming probability, each of the Rival Model probabilities, and so on—and do you know what we have?

A great, whopping pile of nothing.

What we have are a bunch of probabilities that aren’t the slightest use to us. Get rid of them. Consider them no more. They will do us no good. And why should they? All they are, are a group of *true* probabilities, each calculated assuming a different model was true.

But we want to know *which model is true*! The probabilities are mute on this question, silent as the tomb. We ask these probabilities to tell us which model is true (or closest to the True Model) but answer comes there none. Actually, the answer will be, “Why ask me? I’m just a valid probability calculated assuming my model was true. I have no idea whether my model, or any other model, is true.”

Here is what we *should* ask: Given we have seen 13-in-a-row “above-normal” temperatures, and given my understanding of all the rival models, what is the probability that any of these rival models is true?
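A sketch of what answering that question looks like, with priors and likelihoods that are entirely invented for illustration (nobody has computed these):

```python
# Illustrative only: these priors and likelihoods are invented numbers,
# not anything the NCDC (or anyone) computed. The point is the form of
# the question -- Pr(model | data), not Pr(data | model).
priors = {"Simple": 0.50, "GlobalWarming": 0.25, "OtherRival": 0.25}
# Hypothetical Pr(13-in-a-row "above normal" | model) for each model
likelihoods = {"Simple": 1 / 1.6e6, "GlobalWarming": 0.05, "OtherRival": 0.01}

# Bayes' theorem over the candidate models
evidence = sum(priors[m] * likelihoods[m] for m in priors)
posteriors = {m: priors[m] * likelihoods[m] / evidence for m in priors}
for m, p in posteriors.items():
    print(f"Pr({m} | data) = {p:.4f}")
```

Notice the answer depends on the full list of rivals and the priors over them—which is precisely the information the NCDC's lone 1-in-1.6-million number does not contain.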

So if somebody tried to answer that question with an “I don’t know. But I do know that if I assume the Simple Model is true, the probability of seeing the data is this-and-such” you would be right to find that person a comfortable chair and to lecture him gently on the advantages of decaffeinated coffee.

Last thrust: assume the Simple Model is the best model there is. Once more, the probability of seeing the data we saw is small. But so what? Rare things happen all the time (see yesterday’s example). People win the lottery, which has a smaller probability than seeing the temperatures we saw. If the Simple Model is the best we have, then all we can say is that we have seen a rare event. And this *should be cheering news!* Especially if you did not enjoy 13 months in-a-row of “above-normal” temperatures. For we have just learned that such events are rare, and that things almost certainly return to “normal.”

Beyond all the more philosophical claims, isn’t this mining your data for a hypothesis?

i.e., even if the “Simple Model” is completely true, there’s a sleight of hand in the data as it’s presented. Unless I’m missing something, the probability they did calculate was Pr(Temperature above average in SPECIFIC set of 13 months). But it’s presented as though it’s Pr(Temperature would ever be above average for 13 months).

Am I missing something?

I would like those Alarmist number crunchers to calculate the probability of observing alternating above and below average temperatures over 21 months using the same model. And what might that say about the likelihood that we are facing a catastrophic human-caused warming.

Would it strengthen our belief in an underlying model if it said “the next N months are going to be hotter than normal by measure X”, and then they were? I would say “Yes”. I would also say that every time it made such a successful prediction, my confidence in it would be strengthened, provided that there were few wrong predictions. Had I a model with a good track record of successful predictions, I would be interested in playing a guessing game involving money with those who are convinced that the future is not predicted by the past.

Given that there have been roughly 1.5 trillion days in the history of the planet, every day is an exceedingly rare event (if you consider a large enough event space). So what? What are you going to do differently with that thought in your mind than you might have otherwise done?

Call me dumb, but I don’t see the point of a probability calculation that I can’t use to guide my actions. A model is more interesting if it tells me about tomorrow than it is if it tells me about yesterday. The probability I care about is one that allows me to place a winning bet (using an expansive definition of the term that includes committing resources toward a future reward in a general sense).

Gecko:

Sure, but let’s carry on Speed’s example from yesterday. A guy comes up to you and offers to bet you on the result of tosses of what appears to be a fair coin that he has. He offers you $1.50 if it lands heads, you owe him $1.00 if it lands tails. That sounds pretty good, so the game starts. After 15 tosses, you’ve given $15 because it’s landed tails 15 straight times. Do you think “gosh, I’m very unlucky, but this outcome is no more unlikely than any of the other 32,767 possible HT sequences, let’s keep going?” or do you think “hmmm, this is not a fair game”?
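To put numbers on that instinct: a sketch comparing the fair-coin hypothesis against one invented alternative (a tails-weighted coin with Pr(tails) = 0.95, carried by an assumed 1 stranger in 100; every number here is mine, not Rob's).

```python
# A sketch, not Rob's actual reasoning: compare "fair coin" against one
# invented alternative, a tails-weighted coin with Pr(tails) = 0.95,
# assuming 1 stranger in 100 carries such a coin. All numbers are mine.
p_fair_prior = 0.99
p_tails_fair, p_tails_weighted = 0.5, 0.95
n = 15  # fifteen straight tails

like_fair = p_tails_fair ** n          # Pr(15 tails | fair) = 1/32768
like_weighted = p_tails_weighted ** n  # Pr(15 tails | weighted)

# Bayes: Pr(fair | 15 tails)
post_fair = (p_fair_prior * like_fair) / (
    p_fair_prior * like_fair + (1 - p_fair_prior) * like_weighted
)
print(f"Pr(fair | 15 tails) = {post_fair:.4f}")
```

Even starting 99% sure the coin is fair, fifteen straight tails drives the posterior on fairness below one percent—which is exactly the “hmmm, this is not a fair game” reaction.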

Rob/Gecko, if I may interject.

If you had predicted a different outcome (e.g. that the coin was fair), you might start to doubt your model. Simple statistics can quantify your level of doubt, if that matters to you. Of course, you might have questioned your model ex ante when you learned the odds you were offered in the game.

Your conclusion may be different if you knew that the same game played earlier with the same coin resulted in 15 straight heads.

It seems to me that the point at hand in the string-of-hot-weather discussion is really whether any particular model has shown any skill at predicting such events. With a model of generalized warming, how many failures (periods of unseasonably cool weather, or, in fact “average” weather) have occurred? If the generalized warming model is any good, it would predict the next occurrence of unseasonably warm weather before it happens.

So, when does the generalized warming model predict the current heat wave will end, and when and where will the next one occur? Oh, wait, that’s weather, and, as we hear during cold snaps, climate models do not predict weather.

Pointing to a past event and saying, in effect, that a certain model would have predicted it, if it were capable of making such a prediction, and thus the model is true, is as absurdly surreal as a judge saying “Congress could have said such a provision was a tax, so, even though it didn’t, we will assume it did.” Oh, wait…

Briggs said,

“People win the lottery, which has a smaller probability than seeing the temperatures we saw.”

Pedant alert … it is the probability of one specific set of numbers winning that is improbable (approx. 175 million to one against).
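The pedant's figure matches Powerball-style rules of the era—5 of 59 white balls plus 1 of 35 red—though that is my assumption, since the comment names no lottery:

```python
from math import comb

# My assumption: the ~175-million figure matches Powerball-era rules,
# 5 of 59 white balls plus 1 of 35 red. The comment names no lottery.
odds = comb(59, 5) * 35
print(f"one specific ticket: 1 in {odds:,}")  # 1 in 175,223,510
```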

It’s worse than we thought!

a) Starting at some point in time, the global ave temp will go up as more time passes.

b) Starting at some point in time, the global ave temp will go down as more time passes.

If you work with lots of significant figures, then either a) or b) will hold true.

It is a given that a) has been happening since the little ice-age (pre human-caused increase of the Annihilation Molecule), and is apparently still happening.

Hence, Hansen is a Satanist, who serves his master by increasing human misery and death (via locking-in of poverty).

If you are not stupid, you might declare that Hansen is ‘stupid’, or some other label that lets the pedophile keep raping the children. Throwing people into prison is, after all, such a… violent thing to do. Let’s rather all drink some tea, wot wot, be friends and debate: violence, after all, never like ever solved anything.

So we are left with two possibilities:

1. The heat wave was spectacularly unusual and therefore not plausibly a random event.

2. The NCDC — representing your government — is hell bent on convincing you of something that you must be convinced of in order for the NCDC and other government agencies and the multibillion dollar climate business to be preserved. It therefore deliberately chose a misleading probability model to help convince the great unwashed and doesn’t care if a few statistically inclined nerds see through the scam.

As good Bayesians, what is P(1 or 2 | prior knowledge)??

Suppose we had 1300 consecutive monthly readings, all at least 100 degrees above normal. It seems Briggs’ analysis would still apply. Rare events happen all the time, after all.

SteveBrooklineMA,

Yes! Exactly. If we had 1300, or 15000, or 1 trillion, consecutive monthly readings 100 degrees (or more) above normal, Briggs’s analysis still would apply. You’re finally getting the Bayesian way. You have finally realized the old way of thinking had things exactly backwards.

The old way was: You wanted to know what caused a thing, and had an idea what that complex thing was. But you ignored that thing; completely put it out of your mind. You then “believed” a different thing; a thing much simpler than the first. You then calculated the probability of observing some event assuming the simple thing was true. If that probability was small, you said the simple thing couldn’t be true.

And now you finally get that this is nuts. And what we wanted to know all along was whether the simple thing was true, or whether the other thing was true, given the data we saw. You finally got the order of the conditions in the conditional probability right. That is, you finally grasped that

Pr( complex thing | data, other evidence ) ≠ Pr( data, other evidence | complex thing )

I am so proud that I’m like to weep!

(Sincerely.)
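A toy numerical check that the two conditional probabilities need not be anywhere near each other (the counts are invented purely for the arithmetic):

```python
# Toy joint counts, invented purely for the arithmetic, showing the two
# conditional probabilities need not be anywhere near each other.
n_both = 1        # cases where the complex thing is true AND the data is seen
n_data = 100      # cases where the data is seen at all
n_complex = 2     # cases where the complex thing is true at all

p_complex_given_data = n_both / n_data     # 0.01
p_data_given_complex = n_both / n_complex  # 0.5
print(p_complex_given_data, p_data_given_complex)
```

Same event in the numerator, entirely different denominators: that is the whole confusion in one line of arithmetic.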

Steve,

In my joy, I forgot to mention that both Pr( complex thing | data, evidence) and Pr( data, evidence) of course have nothing to do with Pr( data | simple thing, evidence). Not even the same species! How anybody could have confused these things you now understand to be a very curious question.

“The Global Warming model is a rival model held by many to be unquestionable”

I am aware of at least 20 different climate models all producing different answers. If these modelers really knew what they were doing there would be only one model.

Suppose your child was an A student in middle school. Are you saying that if she has consistently received Bs (below normal) during the freshman year in high school, instead of finding out possible reasons for the change, you would just conclude that nothing out of the ordinary could have happened and that things would almost certainly return to “normal”? You are not talking about assigning a “new normal” for her, are you? I wouldn’t be so sure that she would return to the “normal” her, an A student.

Big Mike.

As Briggs points out, a particular run of outcomes is only “improbable” relative to the model being assumed.

A run of higher than average temperatures is “improbable” relative to the model assumed in the same way as an alternating above/below average run would be.

It is when you try to reach conclusions that the problems occur. Just because a run of higher than average temperatures is “improbable” under the assumed model it does not mean that the “correct” model is Dangerous Anthropogenic Warming.

Imagine we are talking cards, not temperatures. If someone managed to deal up 10 consecutive aces from a deck (an extremely “improbable” event if you assume there is a 4/52 chance of doing it once and each occurrence is independent) would you not only conclude that your naive model is incorrect, but also that the dealer is psychic, or some other specific cause is at work?

Much better Mr Briggs. I love this post. And I still think many posters here aren’t grasping it entirely. JH’s comments for example. He still believes that the 13 months prove anything just for the sake of being 13. Perhaps that’s because it’s a kind of a lucky number or some such. The problem is really how mankind “intuits” probability, which is to say, really really bad.

At the office we used to share a EuroMillions (how to name it?) ticket, where everyone would put in their own guess at the final lottery numbers (5 numbers between 1 and 49, plus two “star” numbers between 1 and 9). Because I take these things ironically and lightly, I once chose the sequence 1, 2, 3, 4, 5 and stars 6 and 7. My colleagues simply refused to accept these numbers of mine, and unless I took the exercise “seriously” (putting in what they called “random numbers”) they wouldn’t make a shared bet with me. I tried to argue back to no avail; they simply laughed at me (we are talking about university-degree personnel here, not plumbers). Only then did I realise how badly the human mind deals with probabilities.
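For the record, taking the rules as the commenter states them (5 of 49 plus two stars of 9), the “unserious” ticket is exactly as likely as any sober-looking one:

```python
from math import comb

# Taking the rules as the commenter states them (5 numbers from 1-49 plus
# two "stars" from 1-9): every specific ticket has the same chance.
tickets = comb(49, 5) * comb(9, 2)
print(f"any one ticket, 1-2-3-4-5 + 6-7 included: 1 in {tickets:,}")
```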

Small nitpick, Mr. Briggs:

Or, more to the point perhaps, that our “best model” still sucks big time.

They never even bothered trying to make a prediction on how “probable” or “improbable” such a 13 month streak is on that particular “model” (as if there was only one…). Would it really be surprising to find similar “improbable” numbers? Oh but we would then be starting to have a relatively rigorous discussion, and we clearly do not want such a thing, now do we?

Better just to throw a “1 in a 1.6 million” line in there in the most handwaving way possible, so that people may confuse it with the odds of “nothing happening” vs “global warming”, which are the only choices we should ever think about. All for the cause, you know.

(by gods the typos in that last comment!!)

“JH’s comments for example. He still believes that the 13 months prove anything just for the sake of being 13. Perhaps that’s because it’s a kind of a lucky number or some such. The problem is really how mankind “intuits” probability, which is to say, really really bad.”

Luis,

I am not sure if “He” means me (a female) or Mr. Briggs. I don’t believe that “13-in-a-row above-normal temperatures” has proved anything; however, it sends a signal that there *might* be something going on. Scientific progress requires curiosity.

Yes, we assume that all combinations of lottery numbers are equally likely, and there is only one winning combination. Mr. Briggs’ point is that the event of 13-in-a-row above-normal temperatures is nothing but a rare event. I am saying it may not be the case.

He uses the lottery analogy to demonstrate that we see people win the lottery and hence all we can say is that it’s a rare event.

Let me just point out that the events “your ONE ticket will hit the jackpot” and “one of the millions of lotto tickets sold will hit the jackpot” are different. The probability of the latter event is not small, which is the reason we see people win the jackpot.

JH, on rare events.

I’m in Australia, which means most every town is 200 years old or less. There are many “records” broken every year.

Let me be wistful and say that there are 10,000 towns in Queensland. For “any” of the following measures (hottest, coldest, wettest, driest, longest… etc.) there will be a record to be broken. So in any given year, there might be 200 all-time records broken. Perfectly common I say.

Given enough metrics, and enough locations, an all-time record will be broken somewhere for something every other week.
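A back-of-envelope version of this point: in an i.i.d. series with no trend at all, year n sets a new all-time record with probability 1/n, so the expected crop of records scales with the number of towns and metrics. The 10 metrics per town and 200-year history below are my own assumptions.

```python
# Back-of-envelope: in an i.i.d. series with no trend, year n sets a new
# all-time record with probability 1/n. Towns from the comment; the 10
# metrics per town and 200-year history are my own assumptions.
towns, metrics, years = 10_000, 10, 200
expected_records = towns * metrics / years
print(f"expected new all-time records this year: {expected_records:,.0f}")
```

Hundreds of fresh “all-time records” a year, under a model with no warming in it at all.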

What deserves our attention is not just the occurrence of a low probability event but the occurrence of a particular low probability event that we have specified in advance. For example, I would bet $100 that someone’s selection of numbers will win the lotto jackpot at least once in the coming year but I would also bet $100 that none of the selections *I* make will win.

Similarly, given that we have a theoretical reason to expect that there might be global warming, events such as having more than even just five *specified* current months (eg the next five starting now if month to month correlation can be ignored) in the “top third” bracket of the long-term historical record would make me suspicious of anyone who insists that the predicted warming is not happening.

Of course if those five aren’t all in the top third we wouldn’t be free to just keep trying until we found five that were, but there are definitely other ways of looking at the data which avoid the pitfalls of “cherry picking”.

The significance of that string of 13 high temperature months depends on whether it was specified in advance, and if it wasn’t then how many other strings of 13 months could have been chosen *without* a high proportion of high temp cases.

Even with no warming at all, if we wait long enough we’ll almost certainly eventually see a string of 13 high-temp months, and if we choose just those then the probability of them all being high is 1. But we *didn’t* have to wait forever, and so although the 1 in 1.6 million figure is bogus, I think the extent to which it needs to be discounted is not really so great after all.
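A rough check of how much the “many windows” effect actually discounts the figure, under the Simple Model's independence assumption (and my own guess of a roughly 118-year record):

```python
# Under the Simple Model (independent months, Pr(above normal) = 1/3 -- an
# assumption), the expected number of 13-month all-above-normal windows in
# an N-month record is about (N - 12) * (1/3)**13 while that number is small.
p, run = 1 / 3, 13
for years in (30, 118):  # 118 years is my guess at the record length
    n_windows = years * 12 - run + 1
    expected = n_windows * p ** run
    print(f"{years} years: expected 13-month windows ~ {expected:.5f}")
```

Even granting every possible starting month in more than a century of data, the expected count stays below one in a thousand: the 1-in-1.6-million figure overstates the rarity, but the discount is indeed not so great.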

What happens when we are dealing with nature which exhibits persistence, not uncorrelated events? e.g. see The Hurst phenomenon and climate, Y. Markonis, D. Koutsoyiannis, European Geosciences Union General Assembly 2008, Vienna, Austria, 13–18 April 2008, Session IS23-HS2.3/NH2.7/CL53

Hurst-Kolmogorov dynamics in paleoclimate reconstructions

Creationists use a very similar argument – the probability of life on Earth having evolved purely by chance is so infinitesimally tiny that the standard model cannot be right, and the alternate model (God created it) must therefore be true.

Peter317,

Exactly right. This is the wrong way to go about proving God’s existence.