Around the 4th of July, here in the States, there is a tendency for official weather forecasts to show a probability of precipitation that is lower than it should be. It rains more than the forecasters guess.

The inverse happens around December 25th (the Federally Recognized Holiday That Shall Not Be Named): the forecasts tend to give too high a probability of precipitation. It snows less than the forecasters guess.

This phenomenon is well recognized in meteorology, where it has long gone by the name of wishcasting; it is also found in many other areas of life, which I’ll talk about below. Wishcasting is the tendency of the forecaster to tilt his guess toward the outcome which he would like to see, or toward the outcome he knows his viewers would like to see.

Good weather forecasters, obviously, are aware of this tendency and do their best to lessen its influence. But even the best of them tend to get excited when a big storm is on its way, these being matters of great and evident importance, and sometimes issue forecasts which exaggerate the chance of severe weather. Still, the influence of wishcasting is small among professionals, mostly because of the routine evaluation of forecast performance and the criticism of peers. People like to pick on weather forecasters, but of all professional groups, I have not found any to be better or more reliable than the National Weather Service.

Before we go further, let me answer an objection which might have occurred to you. Why not exaggerate the probability of a storm causing damage, since “it’s better to be safe than sorry”? To do this takes the decision out of the hands of the person who will experience the storm and puts it into the hands of the forecaster. And that is the wrong thing to do: the forecaster does not know better than his audience which decisions are best. Every person in the path of a storm knows what losses he will face if the storm hits, and how much it will cost him to protect against them. If people are routinely given exaggerated forecasts, they will pay the cost of protecting more often than they should, and those costs are not insignificant (how much money is being lost by the shops of New Orleans during the protracted evacuation?). You cannot use the forecast as a tool to warn people of dangers which are unimportant to them; it will only make them less likely to believe forecasters when real dangers arise. The lesson of Chicken Little is pertinent.
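The trade-off here is the classic cost-loss decision problem from forecast economics. A minimal sketch in Python, with invented dollar figures: a person should pay to protect only when the forecast probability exceeds his own cost-to-loss ratio, so a systematically inflated forecast pushes him into paying for protection more often than his own circumstances warrant.

```python
# Cost-loss decision rule: protect when the forecast probability p
# exceeds C/L, where C = cost of protecting and L = loss suffered if
# the storm hits and you did not protect. All numbers are invented.

def expected_cost(p_true, p_forecast, cost, loss):
    """Long-run average cost per event when the decision is made from
    the forecast but the weather follows the true probability."""
    if p_forecast > cost / loss:   # forecast says: protect
        return cost                # you always pay the protection cost
    return p_true * loss           # gamble: pay the loss only when it hits

C, L = 100.0, 1000.0      # protect for $100, or risk a $1000 loss
p_true = 0.05             # the real chance of damaging weather

honest = expected_cost(p_true, 0.05, C, L)    # forecast matches reality
inflated = expected_cost(p_true, 0.20, C, L)  # wishcast "to be safe"

print(honest)    # about $50 per event: gambling is the cheaper choice here
print(inflated)  # $100 per event: the exaggeration forces needless protection
```

The point of the sketch is that the break-even threshold C/L belongs to the person facing the storm, not to the forecaster; inflating p overrides a decision that was, by the person's own accounting, already correct.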

While the Weather Service forecasters do a great job, the same is not true of reporters, who routinely and wildly overstate potential dangers, even when the danger has passed. Anybody who watched television coverage of Hurricane Gustav can attest to this. We saw fearless reporter Geraldo Rivera standing in the streets of New Orleans holding a small anemometer and shouting, “There’s a 60 miles per hour, Bob! Wait! A 61!” He bravely leaned into the stiff breeze and held his ground to bring us this breaking news. Of course, anybody who has driven a car and stuck his hand out the window knows that a 60 MPH wind is hardly life threatening.

Well, reporters shading the truth, embroidering facts, neglecting pertinent information, and at times outright lying is by now no surprise. People have learned to “divide by 10” any statement issued from a newsroom, so journalists cause less harm than they would if they were taken at face value.

Wishcasting is by no means restricted to weather predictions. I’ll ask you right now: who will be elected president, McCain or Obama? It is difficult to remove the prejudices you have for one candidate or the other and give a good guess. If you love McCain, you are likely to inflate the chance of his winning. If you fear Obama’s promised tax increases, and you are naturally pessimistic, that fear might inflate your guess of the chance of his winning. To carefully sift through all the evidence and arrive at an unemotional prediction is extremely difficult.

Gamblers often wishcast. “Red hasn’t come up in seven spins, so it’s more likely to now.” Part of this reasoning is due to misunderstanding or not knowing the rules of probability that govern simple games, but part is also due to the desire for the outcome. Wishcasting is prevalent in environmental circles, so much so that an “activist” who doesn’t embellish is an oddity. Brokers, financial planners, stock pickers, and similar professionals are no less prone to wishcasting.
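The roulette mistake is easy to check by simulation. A minimal sketch, assuming an American wheel with 18 red pockets out of 38: the frequency of red immediately after a run of seven non-reds comes out the same as the frequency of red overall, because the spins are independent.

```python
import random

# American roulette: 18 red pockets out of 38. Each spin is
# independent, so a run of non-reds tells you nothing about the next.
random.seed(1)
P_RED = 18 / 38
spins = [random.random() < P_RED for _ in range(200_000)]

# Collect the spin that immediately follows seven straight non-reds.
after_streak = [
    spins[i] for i in range(7, len(spins))
    if not any(spins[i - 7:i])   # the previous seven spins were all non-red
]

print(sum(spins) / len(spins))               # close to 18/38, about 0.4737
print(sum(after_streak) / len(after_streak)) # also close to 18/38: no "due" red
```

The wheel has no memory; the only thing a long dry streak changes is the gambler's mood.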

Wishcasting is somewhat different from the experimenter effect, although there is some overlap. The experimenter effect occurs when a scientist (or a group of them), consciously or not, sets up an experiment to demonstrate the effect he was looking for. A common example is a drug trial. One group is given a new drug, the other an old one or a placebo. If the patients are evaluated by a physician who knows which patient got which drug, it is likely the effects of the new drug will be exaggerated. This phenomenon is so well known that the government mandates blinding of medical trials, in which the physician who evaluates the patients has no idea which treatment each patient received.

Michael Crichton, physician and author, gave an example of this in testimony to Congress:

It’s 1991, I am flying home from Germany, sitting next to a man who is almost in tears, he is so upset. He’s a physician involved in an FDA study of a new drug. It’s a double-blind study involving four separate teams—one plans the study, another administers the drug to patients, a third assesses the effect on patients, and a fourth analyzes results. The teams do not know each other, and are prohibited from personal contact of any sort, on peril of contaminating the results. This man had been sitting in the Frankfurt airport, innocently chatting with another man, when they discovered to their mutual horror they are on two different teams studying the same drug. They were required to report their encounter to the FDA. And my companion was now waiting to see if the FDA would declare their multi-year, multi-million dollar study invalid because of this chance contact.

His point in this testimony was to show that researchers in global warming are nowhere near as careful as their colleagues in medicine:

[T]he protocols of climate science appear considerably more relaxed. In climate science, it’s permissible for raw data to be “touched,” or modified, by many hands. Gaps in temperature and proxy records are filled in. Suspect values are deleted because a scientist deems them erroneous. A researcher may elect to use parts of existing records, ignoring other parts. But the fact that the data has been modified in so many ways inevitably raises the question of whether the results of a given study are wholly or partially caused by the modifications themselves…

…[A]ny study where a single team plans the research, carries it out, supervises the analysis, and writes their own final report, carries a very high risk of undetected bias. That risk, for example, would automatically preclude the validity of the results of a similarly structured study that tested the efficacy of a drug.

Wishcasting meets the experimenter effect when the results from a non-blinded experiment are exaggerated to “raise awareness” of the potential horrors that await us if we do not heed the experimenters. Sometimes this exaggeration is done on purpose, as with the weather forecaster who feels his viewers would be “better safe than sorry”, and sometimes the overstatement is unconscious because the forecaster has not recognized his limitations. Scientists often feel they are special and able to avoid the frailties that plague the rest of us, but of course, they cannot; they are still human.

It is nearly impossible to disentangle the experimenter effect from wishcasting in any given situation, nor can we easily identify the facts a forecaster used, and the relevance he gave them, in producing his forecast. To do so essentially means producing a rival forecast, and that is a laborious process.

What we can do (this is my line of country) is check how good the actual performance of a forecast is. If the forecast routinely fails, we can say something has gone wrong. Just what requires more work: was it bad data, mistaken theory, wishcasting, or something else? If the forecast routinely fails, we are rational to suspect it will fail in the future, and that the theories said to underlie the forecast might be false. We are also right to question the motives of the forecaster, because it is these motives that influence the presence and amount of wishcasting.
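Checking performance is straightforward once you line up the forecast probabilities against what actually happened. A minimal sketch, with an invented ten-day record: the Brier score penalizes each probability by its squared distance from the outcome, so a forecaster who systematically inflates his probabilities scores measurably worse than an honest one, even if he still beats the no-skill “climatology” forecast that just issues the base rate every day.

```python
def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts; 0 is perfect."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)

# Invented record: did it rain (1) or not (0), and what was forecast?
outcomes = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]
honest   = [0.1, 0.2, 0.7, 0.1, 0.6, 0.2, 0.1, 0.3, 0.8, 0.1]
wishcast = [0.4, 0.5, 0.9, 0.4, 0.9, 0.5, 0.4, 0.6, 0.9, 0.4]  # inflated

base_rate = sum(outcomes) / len(outcomes)   # climatology: 0.3
climatology = [base_rate] * len(outcomes)

print(brier_score(honest, outcomes))       # about 0.05: the honest forecaster wins
print(brier_score(wishcast, outcomes))     # about 0.153: inflation costs accuracy
print(brier_score(climatology, outcomes))  # about 0.21: the no-skill baseline
```

A one-off bad score means little; it is the routine gap between a forecaster's score and the honest baseline that justifies suspicion of wishcasting.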

These cautions do not just apply to weather or climate forecasts, but to all areas where routine predictions are made. Could you be making more money in your stock portfolio or office football pool, for example? Generally, wishcasting takes place when forecasting complex systems, like the weather, the climate, or any area involving human behavior. It’s much less likely in simple situations, like how much this electron will move under a certain applied force, or what will happen when these two chemicals are mixed. But we’ll save complexity for another day.


  1. Excellent post, Mr. Briggs! I really like Crichton’s lectures and speeches. He’s very intelligent as well.

  2. Even in ‘simple situations’ there seem to be problems. From James Gleick’s book ‘Genius’ on the life of Richard Feynman:

    ‘[Feynman] was mercilessly skeptical. He loved to talk about the famous oil-drop experiment of Caltech’s first great physicist Robert Millikan, which revealed the indivisible unit charge of the electron by isolating it in tiny, floating oil drops. The experiment was right but some of the numbers were wrong–and the record of subsequent experimenters stood as a permanent embarrassment to physics. They did not cluster around the correct result; rather, they slowly closed in on it. Millikan’s error exerted a psychological pull, like a distant magnet forcing their observations off center. If a Caltech experimenter told Feynman about a result reached after a complex process of correcting data, Feynman was sure to ask how the experimenter had decided when to stop correcting, and whether the decision had been made before the experimenter could see what effect it would have on the outcome. It was all too easy to fall into the trap of correcting until the answer looked right.’

  3. I bet the companies offering bets on snow predictions do okay. Maybe we should look at the odds in the local betting shop as a real gauge! It would be impossible to gather the data, but comparing odds of snow at “CHRISTMAS!” against Met Office predictions and media predictions would be interesting; they should be the same. Money talks, so if it is known that over-prediction is a fact, which I do not doubt, the guys working out the odds will factor/cook this into the odds.

    Well how the wheel turns, “CHRISTMAS!” is now a rebellious word in the land of the free!

    Never mind America, in Portugal they don’t even believe in Santa! I know he exists, I saw him in Harrods!…twice!

  4. Dr. Briggs,

    If you haven’t read ‘The Black Swan’ by Nassim Nicholas Taleb, I really think you’d enjoy it. It kind of stands reality on its head, particularly for highly-educated knowledge workers who are used to being the smartest person in most given rooms. It has opened my eyes to the fact that what I do not know is far more important than what I do know.

    He gives the example of two friends, one an engineer and the other a street-smart businessman. He tells each of them that a coin has been flipped 100 times and come up heads every time, then asks the engineer what the odds are of getting heads on the next flip. The engineer gives the answer that I immediately thought of — 50% — the answer that most of us who’ve studied probability even briefly would give.

    The street-smart friend says, “Heads 100 times in a row? The next flip will be heads, because that coin is no good.” In retrospect that’s a much more reasonable conclusion; what blinds us to it is that what we know about probability in some ways blocks further thinking.

  5. Darren, I have read Taleb’s book and I mean to review it soon (well, soon-ish). His main message is good, but he spent a lot of time talking about himself. Could just be jealousy on my part, though. He gets a hundred grand a speech, and the best I can say is that nobody has (yet) asked me to pay for making mine.

    Sailor Ben, thanks very much.

  6. I think Feynman talked a lot about how you have to think about “what if I am wrong”. You pressure test your own assumptions. The more you do that (and it withstands), the more confident you should be. And at times, you will find something wrong with your thinking.

    This is very different from how typical advocates in politics or such will try to build a case. Or how people in business (e.g. McKinsey) have too much of a tendency to try to build a case for a hypothesis…rather than doing really tough tests to knock it down (and then watching if it survives).

    Similarly, I find that advocates of Steve at CA have too much of a tendency to believe him and cackle and enjoy him…rather than to double-check him. (Same thing on the opposite side, btw.) I have seen some good people like Burger or Zorita frustrated by Steve’s tendency to avoid questions about broad statements that he makes.

    Just a word to the wise. Since I am also an “anti-AGWer”. But mostly am someone who wants to use critical thinking. Whichever way it cuts. Rather than lawyer style reticence. Scientists need to put their own babies on the chopping block and try to kill them. What remains is more interesting that way.

  7. Dr Briggs,
    I think I’ve noticed the radio stations I listen to during drive time in Dallas exaggerate extreme temperatures. When the temp is between 35F and 95F, the radio temp is within a couple of deg F of my car’s thermometer.

    Outside of that range on the hi side, radio temp is markedly higher than car and on the low side, markedly lower.

    Because it’s so inconvenient to write the numbers down, this remains a suspicion.

    Bill Drissel
    Grand Prairie, TX
