William M. Briggs

Statistician to the Stars!


Daily Links & Comments

@1 The Loss of the Permanent Things in Higher Education: trivial, multicultural, relativistic, sexual and politically correct studies supplant what had been a focus on Western civilization. Link

@2 November 9th, Thomas Aquinas and Philosophical Realism. Among others, our main man Edward Feser on, “An Aristotelian Argument for the Existence of God.” Free! NYU Catholic Center. Link

@3 The Problem of Polygenism and the Theory of Evolution. Long live Monogenism! Link

@4 The Will Rogers phenomenon in statistics. “When the Okies left Oklahoma and moved to California, they raised the average intelligence in both states.” Think about it. Link

@5 University of Colorado Boulder tells students to avoid costumes including cowboys, indians, white trash or anything potentially deemed offensive. You racist! Link

@6 An interesting time series, plotted in just the right way. Onwards and upwards to a brave new future! Link

Please prefix your comments with “@X” to indicate which story you’re commenting on. I should hardly need to say that a link does not necessarily imply endorsement.



Scientific Forecasts, Not Scenarios, For Climate Policy—Guest Post by J. Scott Armstrong

J. Scott Armstrong.


The human race has prospered by relying on forecasts that the seasons will follow their usual course, while knowing they will sometimes be better or worse. Are things different now?

For the fifth time now, the Intergovernmental Panel on Climate Change claims they are. The IPCC assumes that the relatively small human contribution of carbon dioxide to the atmosphere will cause dangerous global warming. Other scientists disagree, arguing that the climate is so complex and insufficiently understood that the net effect of human emissions cannot be forecast.

The computer models that the IPCC reports rely on are complicated representations of the assumption that human carbon dioxide emissions are now the primary factor driving climate change. The modelers have correctly stated that they produce scenarios. Scenarios are stories constructed from a collection of assumptions. Well-constructed scenarios can be very convincing, in the same way that a well-crafted fictional book or film can be. However, scenarios are neither forecasts nor the product of validated forecasting procedures.

The IPCC modelers were apparently unaware of the many decades of research on forecasting methods. Dr. Kesten Green and I conducted an audit of the procedures used to create the IPCC scenarios. We found that they violated 72 of 89 relevant scientific forecasting principles. (The principles are freely available on the Internet.) Would you go ahead with your flight if you overheard two of the ground crew discussing how the pilot had violated 80 percent of the pre-flight safety checklist?

Given the expensive policies proposed and implemented in the name of preventing dangerous man-made global warming, we are astonished that there is only one published peer-reviewed paper that claims to provide scientific forecasts of long-range global mean temperatures. The paper is Green, Armstrong, and Soon’s 2009 article in the International Journal of Forecasting.

The paper examined the state of knowledge and the available empirical data in order to select appropriate evidence-based procedures for long-range forecasting of global mean temperatures. Given the complexity and uncertainty of the situation, we concluded that the “no-trend” model is the method most consistent with forecasting principles.

We tested the no-trend model using the same data that the IPCC uses. We produced annual forecasts from one to 100 years ahead, starting from 1851 and stepping forward year-by-year until 1975, the year before the current warming alarm was raised. (This is also the year when Newsweek and other magazines reported that scientists were “almost unanimous” that Earth faced a new period of global cooling.) We conducted the same analysis for the IPCC scenario of temperatures increasing at a rate of 0.03 degrees Celsius (0.05 degrees Fahrenheit) per year in response to increasing human carbon dioxide emissions. This procedure yielded 7,550 forecasts from each method.
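To make the validation procedure concrete, here is a minimal sketch in code. The data below are a synthetic placeholder (a random walk), not the real temperature record, and the function names are illustrative; only the procedure itself — step the forecast origin forward year by year, issue forecasts at every horizon from one to 100 years, and collect absolute errors — follows the description above. A trend of 0.0 gives the no-trend model; 0.03 °C per year mimics the scenario in the text.

```python
import random

random.seed(42)

# Synthetic stand-in for the annual global temperature record, 1851-2005.
# (A random walk is used purely to illustrate the validation procedure.)
years = list(range(1851, 2006))
temps = []
level = 0.0
for _ in years:
    level += random.gauss(0.0, 0.1)
    temps.append(level)

def rolling_origin_abs_errors(years, temps, last_origin, max_h, trend):
    """Step the forecast origin forward one year at a time. At each origin,
    forecast horizons 1..max_h as (last observed value) + trend * horizon,
    recording absolute errors wherever the target year has data.
    trend=0 gives the no-trend (persistence) model."""
    errors = []
    for i, origin in enumerate(years):
        if origin > last_origin:
            break
        base = temps[i]  # last observed value at the forecast origin
        for h in range(1, max_h + 1):
            j = i + h
            if j >= len(temps):
                break
            errors.append(abs(base + trend * h - temps[j]))
    return errors

no_trend = rolling_origin_abs_errors(years, temps, 1975, 100, 0.0)
warming = rolling_origin_abs_errors(years, temps, 1975, 100, 0.03)
print(len(no_trend), sum(no_trend) / len(no_trend), sum(warming) / len(warming))
```

With real data in place of the random walk, comparing the two mean absolute errors (overall and by horizon) is exactly the comparison reported in the next paragraph.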

Overall, the no-trend forecast error was one-seventh the error of the IPCC scenario’s projection. The no-trend forecasts were as accurate as, or more accurate than, the IPCC scenario temperatures for all forecast horizons. Most important, the relative accuracy of the no-trend forecasts increased for longer horizons. For example, the no-trend forecast error was one-twelfth that of the IPCC temperature scenarios for forecasts 91 to 100 years ahead.

Our research in progress scrutinizes more forecasting methods, uses more and better data, and extends our validation tests. The findings strengthen the conclusion that there are no scientific forecasts that predict dangerous global warming.

There is no support from scientific forecasting for an upward trend in temperatures, nor for a downward one. Without support from scientific forecasts, the global warming alarm is a false alarm and should be ignored.

Government programs, subsidies, taxes, and regulations proposed as responses to the global warming alarm result in misallocations of valuable resources. They lead to inflated energy prices, declining international competitiveness, disappearing industries and jobs, and threats to health and welfare.

Climate policies require scientific forecasts, not computerized stories about what some scientists think might happen.

—————————————————————————

Professor J. Scott Armstrong is a founder of the two major journals on forecasting methods, author of Long-Range Forecasting, editor of the Principles of Forecasting handbook, and founder of forecastingprinciples.com. When people want to talk forecasting, this is the guy they call.

In 2007, he proposed a ten-year bet to Mr. Albert Gore that he could provide a more accurate forecast than any forecast Mr. Gore might propose. See The Climate Bet for the latest monthly results as the bet would have unfolded so far.



(Most) Everything Wrong With Time Series

A fun time series.


It came out at my recent meeting that a compilation of everything wrong with time series was in order. A lot of this I’ve done before, and so, just as much for you as for me, here’s a list of some of the posts I found (many more are probably hiding) on this abused subject.

A Do not smooth time series, you hockey puck! Used to be scientists liked their data to be data. No more, not when you can change that data into something which is more desirable than reality, via the ever-useful trick of smoothing. Link.

B Do not calculate correlations (or anything else) after smoothing data. No, seriously: don’t. Take any two sets of data, run a classical correlation. Then smooth both series and re-run the correlation. It will increase (in absolute value). It’s magic! Link.
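You can check this for yourself. The sketch below (pure stdlib; the running mean and sample sizes are arbitrary choices, not anything from the original post) generates pairs of completely independent noise series, correlates them raw, then smooths both with a running mean and correlates again. Averaged over many replications, the smoothed correlation is larger in absolute value even though the series share nothing.

```python
import random

random.seed(0)

def running_mean(xs, w):
    """Trailing running mean with window w (a simple smoother)."""
    out = []
    for i in range(len(xs)):
        chunk = xs[max(0, i - w + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def corr(xs, ys):
    """Classical Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

raw_abs, smooth_abs = [], []
for _ in range(200):
    x = [random.gauss(0, 1) for _ in range(200)]
    y = [random.gauss(0, 1) for _ in range(200)]  # independent of x
    raw_abs.append(abs(corr(x, y)))
    smooth_abs.append(abs(corr(running_mean(x, 20), running_mean(y, 20))))

print(sum(raw_abs) / 200, sum(smooth_abs) / 200)
```

Smoothing induces autocorrelation inside each series, which inflates the sample correlation between them: the “magic” is spurious.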

C Especially don’t calculate model performance on smoothed data. This is like drinking. It’s bound to make your model look better than she is. Link.

D Time series (serieses?) aren’t easy. Let’s put things back in observational, and not model-plus-parameter, terms. Hard to believe, but nobody, not even the most earnest and caring, has ever seen or experienced a model or its parameters. Yet people do experience and see actual data. So why not talk about that? Link.

E A hint about measurement error and temperature (time series) using the BEST results. Link. Here’s another anticipating that data. Link. Here’s a third using the language of predictive inference. Link.

F You’ve heard about the homogenization of temperature series. Now read all about it! A thick subject and difficult. This is the start to a five-part post. Link.

G Lots of ways to cheat using time series. Example using running means and “hurricanes.” Did he say running means? Isn’t that just another way of smoothing? Why, yes; yes, it is. Link.

H A “statistically significant increase” in temperature is scarcely exciting. One person’s “significant” increase is another’s “significant” decrease. Link.

I I can’t find a favorite post on the “edge” effect. If you use classical statistical measures, you are stuck with the data you have, meaning an arbitrary starting and ending point. However, changing these just a little, often even by one time point, can turn conclusions upside down. Michael Mann, “The First Gentleman of Climate Science,” relies on this trick.
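The edge effect takes two minutes to demonstrate. Below is a toy series (hypothetical numbers, chosen for the purpose): the least-squares trend over the full record is negative, but start the record one time point later and the trend is positive.

```python
def ols_slope(ys):
    """Least-squares slope of ys against the time index 0..n-1."""
    n = len(ys)
    mx = (n - 1) / 2
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(range(n), ys))
    var = sum((x - mx) ** 2 for x in range(n))
    return cov / var

# A toy "temperature" series with one unusual first observation.
series = [10.0, 1.0, 2.0, 3.0, 4.0, 3.5, 4.5, 5.0]

full = ols_slope(series)         # trend including the anomalous start point
trimmed = ols_slope(series[1:])  # trend starting one time point later

print(full, trimmed)  # opposite signs: the "conclusion" flips
```

One data point moved the trend from “cooling” to “warming.” Pick your endpoints, pick your conclusion.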

Here’s the précis on how to do it right, imagined for a univariate temperature series measured via proxy at one location.

A proxy (say an O18/O16 ratio, tree rings, or whatever) is matched, via some parameterized model, to temperature in a sample where both series are known. Then a new proxy value is measured where the temperature is unknown, and the model yields a range for that temperature. Yes: the answer is never a single number, but a distribution.

This range must not be the range of the parameter, telling us which values the parameter might take; it must be the range of the temperature itself. In technical terms, the parameters are “integrated out.” The range of the temperature will always be larger than that of the parameter, meaning, in real life, you will always be less certain. As is proper.
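The gap between parameter uncertainty and observable uncertainty shows up even in the simplest possible model. The sketch below uses a plain Gaussian mean as a stand-in for the proxy-calibration model (not the real thing): under the usual normal approximation, the standard error of the estimated mean is s/√n, while the spread of a new observable value, with the parameter integrated out, is s·√(1 + 1/n), which is always larger.

```python
import random

random.seed(3)

# Toy calibration sample: estimating a Gaussian mean, standing in for a
# full proxy-temperature model.
sample = [random.gauss(15.0, 2.0) for _ in range(25)]
n = len(sample)
mean = sum(sample) / n
s = (sum((x - mean) ** 2 for x in sample) / (n - 1)) ** 0.5  # sample std dev

# Uncertainty in the parameter (the mean itself):
param_sd = s / n ** 0.5
# Uncertainty in a new observable value, parameter integrated out
# (normal approximation) -- always wider:
predictive_sd = s * (1 + 1 / n) ** 0.5

print(param_sd, predictive_sd)
```

Report param_sd as your error bar and you are claiming far more certainty about the world than you have; predictive_sd is the honest number.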

The mistakes at this level usually come in two ways: (1) stating the model point estimate and eschewing a range, (2) giving the uncertainty of the (unobservable) parameter estimate. Both mistakes produce over-certainty.

Mistake (1) is exacerbated by plotting the single point number. If the observable range were plotted (as noted above), it would be vastly harder to see if there were any patterns in the series. As is proper.

Mistake (2) plots the point estimate with plus-or-minus bars, but these (based on the parameter) are much too small.

Mistake (3) is then to do a “test”, such as “Is there a ‘statistically significant’ trend?” This means nothing, because it assumes the model you have picked is perfect, which (I’m telling you) it isn’t. If you want to know whether the temperature increased, just look. But don’t forget the answer depends on the starting and stopping points.

Mistake (4) is to ask whether there was a “statistically significant” change or increase. Again, this assumes a perfect model, perfect prescience. And again, if you want to know whether there was a change, just look!

Mistake (5) is putting a smoother over points as if the smoothed points were “real” and causative, somehow superior to the actual data. The data (with proper error bounds) is the data. This mistake is ubiquitous in technical stock trading. If the model you put over your data were any good, it would be able to skillfully predict new data. Does it, hombre?

Everything said here goes for “global” averages, with bells on. The hubris of believing one can predict what the temperature will be (or was) to within a tenth of a degree, or what the sea level will be (or was) to within a tenth of a millimeter, fifty years hence is astonishing. I mean, you expect it from politicians. But from people who call themselves scientists? Boggles the mind.

Stay tuned for an example!



We Gave Them The Willies

Yours Truly relaxing by a major body of water.


I have returned from my conference where I gave a half-stunned audience my lecture The Top Six Fallacies In Statistics. But though I labored long and hard, it wasn’t all work as the picture to the right proves.

There I am, relaxing by the beach. The meeting guidelines announced a “business casual” dress code, which I think you have to agree I nailed. I mean, I’m not even wearing a waistcoat. It’s nice to kick back and put on some old thing every now and then.

Fifty percent of my audience, as I say, was sympathetic, but it’s my fault the other fifty percent wasn’t. See, I gave my two favorite examples of bad statistics (linked on my Classic Posts page), which are (1) statistics “prove” that even brief exposure to an American flag is likely to turn one into a Republican, and (2) statistics “prove” that attendance at a Fourth of July parade turns one into a Republican.

There is a minor industry of these kinds of papers, all produced by sincere academics who, after polling their friends, colleagues and neighbors and failing to discover even one of these strange creatures, ask how it is they (the strange creatures) are created. Since everything that comes into existence has a cause, some thing must be causing people to turn into Republicans. But what?

I can report it isn’t brief exposure to flags nor parade attendance. More likely it’s exposure to over-confident over-egoed (yes, over-ego-ed) intellectuals who over-populate certain university departments.

Incidentally, Yours Truly is not a registered Republican. He is not a registered anything (though once, many years ago, he was briefly registered as a Democrat).

Anyway, my talk started on contested grounds, which would have been all right, but I happened to couple those prescient observations with several others the gist of which was that Leviathan had fallen into the bad habit of relying on evidence which accorded with its desires and not with the truth.

It turned out that a good chunk of my audience belonged to a company which makes its living by selling products and services to bring firms “into compliance” with certain of our beneficent government’s many and increasing regulations. They liked the evidence which caused Leviathan to call and rely on them.

This proves the maxim that capitalism is bound to fail when it becomes cozy with government (right, health insurance companies?). But I did make friends with the other half of the group, members of firms which were being forced to comply and weren’t well pleased with the expensive idea.

On the whole, I think a success. There were meals (a variety of aquatic life), cigars (Ashton for me), and whiskey (Maker’s Mark) with friends, entertaining talks, and quiet meetings where we planned our cabal’s next moves. Thanks very much to those who made it possible!



© 2015 William M. Briggs
