The human race has prospered by relying on forecasts that the seasons will follow their usual course, while knowing they will sometimes be better or worse. Are things different now?
For the fifth time now, the Intergovernmental Panel on Climate Change claims they are. The IPCC assumes that the relatively small human contribution of carbon dioxide to the atmosphere will cause dangerous global warming. Other scientists disagree, arguing that the climate is so complex and so insufficiently understood that the net effect of human emissions cannot be forecast.
The computer models on which the IPCC reports rely are complicated representations of the assumption that human carbon dioxide emissions are now the primary factor driving climate change. The modelers have correctly stated that they produce scenarios. Scenarios are stories constructed from a collection of assumptions. Well-constructed scenarios can be very convincing, in the same way that a well-crafted novel or film can be. However, scenarios are neither forecasts nor the product of validated forecasting procedures.
The IPCC modelers were apparently unaware of the many decades of research on forecasting methods. Dr. Kesten Green and I conducted an audit of the procedures used to create the IPCC scenarios. We found that they violated 72 of 89 relevant scientific forecasting principles. (The principles are freely available on the Internet.) Would you go ahead with your flight if you overheard two of the ground crew discussing how the pilot had violated 80 percent of the pre-flight safety checklist?
Given the expensive policies proposed and implemented in the name of preventing dangerous man-made global warming, we are astonished that there is only one published peer-reviewed paper that claims to provide scientific forecasts of long-range global mean temperatures. The paper is Green, Armstrong, and Soon’s 2009 article in the International Journal of Forecasting.
The paper examined the state of knowledge and the available empirical data in order to select appropriate evidence-based procedures for long-range forecasting of global mean temperatures. Given the complexity and uncertainty of the situation, we concluded that the “no-trend” model is the method most consistent with forecasting principles.
We tested the no-trend model using the same data that the IPCC uses. We produced annual forecasts from one to 100 years ahead, starting from 1851 and stepping forward year-by-year until 1975, the year before the current warming alarm was raised. (This is also the year when Newsweek and other magazines reported that scientists were “almost unanimous” that Earth faced a new period of global cooling.) We conducted the same analysis for the IPCC scenario of temperatures increasing at a rate of 0.03 degrees Celsius (0.05 degrees Fahrenheit) per year in response to increasing human carbon dioxide emissions. This procedure yielded 7,550 forecasts from each method.
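For readers who want to see the mechanics, here is a minimal sketch of that rolling-origin test. The data structure, function names, and the use of mean absolute error are illustrative assumptions on my part, not the paper’s exact implementation; the observed annual global mean temperature series is assumed to be loaded into a dict mapping year to anomaly.

```python
# A minimal sketch of the rolling-origin test described above (assumptions
# noted in the text): a no-trend (persistence) forecast versus a fixed
# +0.03 C/year trend scenario, scored on the same observed series.

def rolling_origin_errors(temps, first_origin=1851, last_origin=1975,
                          max_horizon=100, trend_per_year=0.03):
    """Return {method: {horizon: [absolute errors]}} over all origins."""
    errors = {"no_trend": {}, "ipcc_trend": {}}
    for origin in range(first_origin, last_origin + 1):
        base = temps[origin]                  # last observed value at the origin
        for h in range(1, max_horizon + 1):
            target = origin + h
            if target not in temps:           # score only verifiable years
                break
            actual = temps[target]
            # No-trend (persistence): the forecast is the origin-year value.
            errors["no_trend"].setdefault(h, []).append(abs(base - actual))
            # Trend scenario: +0.03 C per year added to the origin-year value.
            errors["ipcc_trend"].setdefault(h, []).append(
                abs(base + trend_per_year * h - actual))
    return errors

def mae_by_horizon(errs):
    """Mean absolute error at each forecast horizon."""
    return {h: sum(v) / len(v) for h, v in sorted(errs.items())}
```

Running both methods over the same observed series and comparing the two mae_by_horizon tables, horizon by horizon and in aggregate, is what produces relative-error figures of the kind reported next.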
Overall, the no-trend forecast error was one-seventh the error of the IPCC scenario’s projection. The no-trend forecasts were as accurate as or more accurate than the IPCC scenario’s temperatures at every forecast horizon. Most important, the relative accuracy of the no-trend forecasts increased at longer horizons. For example, the no-trend forecast error was one-twelfth that of the IPCC temperature scenario for forecasts 91 to 100 years ahead.
Our research in progress scrutinizes more forecasting methods, uses more and better data, and extends our validation tests. The findings strengthen the conclusion that there are no scientific forecasts that predict dangerous global warming.
There is no support from scientific forecasting for a trend in temperatures, upward or downward. Without support from scientific forecasts, the global warming alarm is a false alarm and should be ignored.
Government programs, subsidies, taxes, and regulations proposed as responses to the global warming alarm result in misallocations of valuable resources. They lead to inflated energy prices, declining international competitiveness, disappearing industries and jobs, and threats to health and welfare.
Climate policies require scientific forecasts, not computerized stories about what some scientists think might happen.
—————————————————————————
Professor J. Scott Armstrong is a founder of the two major journals on forecasting methods, author of Long-Range Forecasting, editor of the Principles of Forecasting handbook, and founder of forecastingprinciples.com. When people want to talk forecasting, this is the guy they call.
In 2007, he proposed a ten-year bet to Mr. Albert Gore that he could provide a more accurate forecast than any forecast Mr. Gore might propose. See The Climate Bet for monthly updates on how the bet would have turned out so far.
I differ from my colleague only slightly: I claim that “scenarios” are forecasts, too, and that trying to re-label them to avoid responsibility for a busted prediction is no defense.
All forecasts are conditional (all probability is) on “stories”, i.e. evidence, some of which is tighter or of such quality as to make the prediction quantifiable. A prediction is a prediction. That is, if the conditions specified in the “scenario” obtain, then any statements the “scenario” makes are predictions and must be judged as such.
And, as Scott well knows, most of these predictions stink (my word, not his).
“any statements this ‘scenario’ makes are predictions and must be judged as such.”
Yeah, but if they obtain you can’t claim you called it. Kinda like saying “The Broncos might win their next game” and, if (or when, if you’re a Broncos fan) they do, proclaiming “See? I called it!”
DAV,
I’m more interested in people using “scenarios” to evade responsibility and have it both ways. “Well, the sky didn’t fall. I only said it would if such-and-such and such-and-such didn’t happen. >Sniff<.”
Briggs,
Well, I don’t see anything wrong with “if such-and-such and such-and-such then X”. I’ll go along with calling that a prediction. Saying it was a conditional prediction, with the conditions supplied after the fact, should be a no-no.
Clearly, Professor Armstrong confirms there is a vast difference between science performed to discover knowledge with an open mind and science performed to explain observed results with a politically driven, grant-furthering, predetermined conclusion. Kind of like assuming there is a bridge over a river and routing traffic over it before construction.
The “no change” model of Green, Armstrong and Soon (2009) suffers from logical shortcomings. Some of them are:
* The model is falsified by evidence, supplied by the authors, that the predicted temperatures fail to match the observed temperatures.
* The events which underlie the model are not independent, as they overlap in time.
* In view of the non-independence, the statistical ideas of frequency, relative frequency and information do not exist for the model.
* The model fails to supply information to a policy maker about the outcomes from his or her policy decisions.
For purposes of policy making, policy makers need a model that is not falsified by the evidence and that conveys information to them about the outcomes from their policy decisions. Neither the IPCC models nor the no-change model match this description.
Terry,
The GAS (if I may) model would be falsified only if its predictions were said to be 100% certain. That is an interpretation, but not one anybody makes. Instead, everybody adds a layer of “fuzz” to any prediction they hear, including this one, and it is very difficult, and in most cases impossible, to falsify a probabilistically stated model. On the strict criterion of the forecast everywhere matching the observation exactly, every physical model I have ever heard of is falsified.
Instead, the GAS was meant as a “nothing-to-see-here” kind of model, one which said things would continue much as they have been. This is perfectly understandable and even useful to “policy makers”, that dreaded species.
Why is it, incidentally, that it is never a “policy” to do nothing?
The questions of “independence” are technical and beside the point, and anyway I don’t agree with them. However a prediction is made, there it is. It has to be dealt with as it stands.
“However a prediction is made, there it is. It has to be dealt with as it stands.”
Rubbish. There is a clear and compelling reason why “Methods” appears before “Results”.
Briggs:
Thanks for taking the time to respond! In an examination of whether or not the conclusion of an argument is true, one necessarily relies upon the text that conveys this argument. In the description of their model given by Green et al. in their 2009 paper, I find no fuzz. To the contrary, they repeatedly state the model’s predictions to be in error and quantify this error. I’m forced to conclude that their model is falsified by the evidence.
Regarding the independence of the underlying events, the authors state that they obtained “58 error estimates for 100-year-ahead forecasts.” As the period from 1850 to 2007 contains at most one non-overlapping 100-year span, the periods of the 100-year-ahead forecasts must have overlapped, and thus the events occupying those periods could not have been independent.
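To make the overlap concrete, here is a toy enumeration; the particular origin years are my assumption, chosen only to fit the span from 1850 to 2007:

```python
# 58 hundred-year forecast windows launched from consecutive origins:
# each window shares 99 of its 100 target years with the next one,
# so the resulting errors cannot be statistically independent.
windows = [(origin + 1, origin + 100) for origin in range(1850, 1850 + 58)]
print(windows[0])   # (1851, 1950)
print(windows[1])   # (1852, 1951): overlaps the first window in 99 years
```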
conard,
Rubbish?! Well I’m right and you’re wrong, nyah, nyah, nyah.
Based on what scenarios/assumptions is such judgment/claim made?
I work in an industry that uses ‘models’ to operate physical infrastructure spanning multiple states, from the Gulf Coast into New England. Most of the time they work pretty well. So I don’t discount the creature out of hand.
But I deal with much smaller spans. It could be called a type of process control. One of the parameters I deal with is measuring temperature. Our usual process instruments for this cost about $900. Each is compared to and calibrated against one that runs about $6,000. The latter is accurate to about five hundredths of a degree. The minimum accuracy acceptable in a field device is 0.2 degrees Fahrenheit. Most do considerably better.
Said all that to say this.
Our local airport has two sophisticated, automated weather stations. The FAA operates one and NOAA the other. They are less than a hundred feet apart, located on the same large patch of mown grass, far from heat traps or radiant thermal masses. They often disagree by up to THREE degrees.
Seeing this kind of anomaly in data gathering really makes me take predictions based on exactly that data, flawed from the start, not quite so seriously.
I’ve offered my assistance, free of charge, to calibrate those two particular devices. There was no response from either agency.
My astrologer predicted a flippant reply.
“For purposes of policy making, policy makers need a model that is not falsified by the evidence and that conveys information to them about the outcomes from their policy decisions.”
Really? Wow, who’da thunk! But I suppose if the model in question has shown some skill in predicting how many votes the policy-maker in question will get in the next election, I can see your point.