William M. Briggs

Statistician to the Stars!


Consensus in science

In 1914, there was a consensus among geologists that the earth under our feet was permanently fixed, and that it was absurd to think it could be otherwise. But in 1915, Alfred Wegener fought an enormous battle to convince them of continental drift, the forerunner of plate tectonics.

In 1904, there was a consensus among physicists that Newtonian mechanics was, at last, the final word in explaining the workings of the world. All that was left to do was to mop up the details. But in 1905, Einstein and a few others soon convinced them that this view was false.

In 1544, there was a consensus among mathematicians that it was impossible to calculate the square root of negative one, and that to even consider the operation was absurd. But in 1545, Cardano proved that, if you wanted to solve polynomial equations, then complex numbers were a necessity.

In 1972, there was a consensus among psychiatrists that homosexuality was a psychological, treatable, sickness. But in 1973, the American Psychiatric Association held court and voted for a new consensus to say that it was not.

In 1979, there was a consensus among paleontologists that the dinosaurs’ demise was a long, drawn out affair, lasting millions of years. But in 1980, Alvarez, father and son, introduced evidence of a cataclysmic cometary impact 65 million years before.

In 1858, there was a consensus among biologists that the animal species that surround us were put there as God designed them. But in 1859, the book On the Origin of Species appeared.

In 1928, there was a consensus among astronomers that the heavens were static, the boundaries of the universe constant. But in 1929, Hubble observed his red shift among the stars.

In 1834, there was a consensus among physicians that human disease occurred spontaneously, due to imbalanced humours. But in 1835, Bassi, and later Pasteur, introduced doctors to the germ theory.

All these are, obviously, but a small fraction of the historical examples of consensus in science, though I have tried to pick the events that were the most jarring and radical upsets. Here are two modern cases.

In 2008, there is a consensus among climatologists that mankind has caused and will cause irrevocable and dangerous changes to the Earth’s temperature.

In 2008, there is a consensus among physicists that most of nature’s physical dimensions are hidden away and can only be discovered mathematically, by the mechanisms of string theory.

In addition to the historical list, there are, just as obviously, equally many examples of consensus that turned out to be true. And, to be sure, even when the consensus view was false, it was often rational to believe it.

So I use these specimens only to show two things: (1) from the existence of a consensus, it does not follow that the claims of the consensus are true. (2) The chance that the consensus view turns out to be false is much larger than you would have thought.

This is not news, but these are facts that are often forgotten.

Do not calculate correlations after smoothing data

This subject comes up so often and in so many places, and so many people ask me about it, that I thought a short explanation would be appropriate. You may also search for “running mean” (on this site) for more examples.

Specifically, several readers asked me to comment on this post at Climate Audit, in which appears an analysis whereby, loosely, two time series were smoothed and the correlation between them was computed. It was found that this correlation was large and, it was thought, significant.

I want to give you what I hope is a simple explanation of why you should not apply smoothing before computing correlations. What I won’t do is give the full technical treatment of the consequence: if you do smooth first, you face the burden of carrying the uncertainty of that smoothing through to the estimated correlations, which will be far less certain than correlations computed from unsmoothed data. Put plainly, any classical statistical test you run on the smoothed correlations will give you p-values that are too small, confidence intervals that are too narrow, and so on. In short, you can easily be misled.

Here is an easy way to think of it. Suppose you take 100 made-up numbers, where knowing any one of them is irrelevant to knowing the value of any of the others. The only thing we do know about these numbers is that we can describe our uncertainty in their values using the standard normal distribution (the classical way to say this is “generate 100 random normals”). Call these numbers C. Take another set of “random normals” and call them T.

I hope everybody can see that the correlation between T and C will be close to 0. The theoretical value is 0, because, of course, the numbers are just made up. (I won’t talk about what correlation is or how to compute it here: but higher correlations mean that T and C are more related.)
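To make this concrete, here is a minimal R sketch of the setup just described (the names Tser and Cser stand in for the T and C of the text, since T is R's shorthand for TRUE; the seed is arbitrary):

```r
set.seed(42)          # any seed; only for reproducibility
Tser <- rnorm(100)    # the "T" series of made-up numbers
Cser <- rnorm(100)    # the "C" series of made-up numbers

cor(Tser, Cser)       # typically close to 0; the theoretical value is exactly 0
```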

The following explanation holds for any smoother, not just running means. Now let’s apply an “eight-year running mean” smoothing filter to both T and C. This means, roughly, taking the 15th number in the T series and replacing it by the average of the 8th, 9th, 10th, ..., and 15th numbers. The idea is that observation number 15 is “noisy” by itself, but we can “see it better” if we average out some of the noise. We obviously smooth each of the numbers, not just the 15th.

Don’t forget that we made these numbers up: if we take the mean of all the numbers in T and C we should get numbers close to 0 for both series; again, theoretically, the means are 0. Since each of the numbers, in either series, is independent of its neighbors, the smoothing will tend to bring the numbers closer to their actual mean. And the more “years” we take in our running mean, the closer each of the numbers will be to the overall mean of T and C.

Now let T' = 0,0,0,...,0 and C' = 0,0,0,...,0. What can we say about each of these series? They are identical, of course, and so are perfectly correlated. So any process which tends to take the original series T and C and make them look like T' and C' will tend to increase the correlation between them.

In other words, smoothing induces spurious correlations.
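Here is a minimal R sketch of the whole experiment described above (the eight-point trailing mean follows the description; the seed and the helper name run8 are my own):

```r
set.seed(42)
Tser <- rnorm(100)    # made-up "T" series
Cser <- rnorm(100)    # made-up "C" series

# Trailing 8-point ("eight-year") running mean: observation 15 becomes
# the average of observations 8 through 15, and so on down the series.
run8 <- function(x) as.numeric(stats::filter(x, rep(1/8, 8), sides = 1))

cor(Tser, Cser)                                    # near 0: the numbers are just made up
cor(run8(Tser), run8(Cser), use = "complete.obs")  # typically far from 0 after smoothing
```

Wrapping the last two lines in replicate() and repeating the experiment many times shows the smoothed correlations scattering far from zero, even though the true correlation is exactly zero.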

Technical notes: in classical statistics any attempt to calculate the ordinary correlation between T' and C' fails because that philosophy cannot compute an estimate of the standard deviation of either series. Again, any smoothing method will work this magic, not just running means. In order to “carry through” the uncertainty, you need a carefully described model of the smoother and the original series, fixing distributions for all parameters, and so on. The whole argument also works if T and C are genuine time series, i.e., when the individual values of each series are not independent. I’m sure I’ve forgotten something, but I’m sure that many polite readers will supply a list of my faults.

Global Warming Stress Syndrome Increasing, Psychologist Says

There has been a disturbing increase in Global Warming Stress Syndrome (GWSS, pronounced gwiss) according to Dr. Ron N. Hyde, a clinical psychologist at the prestigious McKitrick Center for the Especially Disturbed.

“Since April, there has been a 32.817% increase in public cases of GWSS,” he explained. “The rate now is almost double what it was this time last year.” He added that the trend was very worrying to his colleagues.

According to literature provided by the McKitrick Center, GWSS was at first a disease confined to academics, where it was thought to be controllable. But somehow it became public in the mid 1990s and struck those whose minds were weakest and easiest to influence, such as celebrities. Since GWSS is communicable, the next to be infected were those in the media in contact with celebrities.

“Entertainment news reporters have become increasingly integrated into ordinary news organizations, which made it easier to disseminate much-needed celebrity gossip and tittle-tattle. But it also meant that ordinary reporters soon became infected,” explained the brochure.

“After the mainstream media contracted GWSS, it was only a matter of time before politicians displayed symptoms of GWSS.”

Dr. Hyde described typical symptoms: “A belief that mankind causes every bad event, excessive hand-wringing, frequent bowel movements, a tendency to lurk on internet message boards and post things such as, ‘There is a consensus! There is a consensus!’, an irrational desire to measure one’s personal ‘carbon footprint.’” But the most worrying of all is the, “Urge to make idiotic comments in public tying global warming to any event.”

As examples, he cited Loch Ness Monster hunter Robert Rines, who has publicly claimed that global warming has killed the monster, which is why nobody can find it.

He also cited the recent comments of New York City Mayor Mike Bloomberg, who likened global warming to terrorism. Bloomberg said that “terrorists kill people” while global warming “has the potential to kill everybody.” “We should go after terrorists every place in this world, find them and kill them, plain and simple,” Bloomberg said.

Dr. Hyde explained, “All the classic manifestations are there. Mayor Bloomberg didn’t actually say—yet—that we should hunt down and kill those who exhale exorbitant amounts of carbon dioxide, but he implied it.” At the United Nations forum where Bloomberg spoke, also in attendance were film actress Daryl Hannah and Virgin Atlantic Airways founder Richard Branson. “It’s always the contact with celebrities that does it,” Hyde explained. Bloomberg’s statements are “strong evidence of a seriously addled mind.”

Dr. Hyde ended his statement on an ominous note, “So far, there is no known cure for GWSS.”

For the record, the only official program Mayor Bloomberg has announced so far is to reduce the use of hardwoods on city park benches.

Can having a mammogram kill you? How to make decisions under uncertainty.

The answer to the headline is, unfortunately, yes. The Sunday, 10 February 2008 New York Post reported the sad case of a woman at Mercy Medical Center in New York City. The young woman went to the hospital and had a mammogram, which came back positive, indicating the presence of breast cancer (she also had follow-up tests). Since other members of her family had experienced this awful disease, the young woman opted to have a double mastectomy and to have implants inserted afterwards. All of which happened. She died a day after the surgery.

That’s not the worst part. It turns out she didn’t have cancer after all. Her test results had been mixed up with some other poor woman’s. If she had never had the mammogram in the first place, she would never have made a radical decision based on incorrect test results, and she would not have died. So, yes, having a mammogram can lead to your death. It is no good arguing that this is a rare event (adverse outcomes are not so rare, anyway), because all I asked was whether a mammogram can kill you. One case is enough to prove that it can.

But aren’t medical tests, and mammograms in particular, supposed to be error free? What about prostate exams? Or screenings for other cancers? How do you make a decision whether to have these tests? How do you account for the possible error and potential harm resulting from this error?

I hope to answer all these questions in the following article, and to show you how deciding whether to take a medical exam is really no different than deciding which stock broker to pick. Some of what follows is difficult, and there is even some math. My friends, do not be dissuaded from reading. I have tried to make it as easy to follow as possible. These are important, serious decisions you will someday have to make: you should not treat them lightly.

Decision Calculator

You can download a (non-updated) pdf version of this paper here.

This article will provide you with an introduction and a step-by-step guide of how to make good decisions in particular situations. These techniques are invaluable whether you are an individual or a business.

The results that you’ll read about hold for all manner of examples—from lie detector usefulness, to finding a good stock broker or movie reviewer, to intense statistical modeling, to financial forecasts. But a particularly large area is medical testing, and it is these kinds of tests that I’ll use as examples.

Many people opt for precautionary medical tests—frequently because a television commercial or magazine article scares them into it. What people don’t realize is that these tests have hidden costs. These costs are there because tests are never 100% accurate. So how can you tell when you should take a test?

When is it worth it?

Under what circumstances is it best for you to receive a medical test? When you “Just want to be safe”? When you feel, “Why not? What’s the harm?”

In fact, none of these are good reasons to undergo a medical test. You should only take a test if you know that it’s going to give accurate results. You want to know that it performs well, that is, that it makes few mistakes, mistakes which could end up costing you emotionally, financially, and even physically.

Let’s illustrate this by taking the example of a healthy woman deciding whether or not to have a mammogram to screen for breast cancer. She read in a magazine that all women over 40 should have this test “Just to be sure.” She has heard lots of stories about breast cancer lately. Testing almost seems like a duty. She doesn’t have any symptoms of breast cancer and is in good health. What should she do?

What can happen when she takes this (or any) medical test? One of four things: the test can come back positive when she truly has cancer (a true positive), positive when she does not (a false positive), negative when she actually has cancer (a false negative), or negative when she does not (a true negative).
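To see how these four outcomes combine into a number you can act on, here is a small R sketch of the probability of actually having cancer given a positive test. The sensitivity, specificity, and base rate below are purely illustrative assumptions, not figures from this article or from any particular mammography study:

```r
sens <- 0.90   # assumed P(positive test | cancer): illustrative only
spec <- 0.93   # assumed P(negative test | no cancer): illustrative only
base <- 0.01   # assumed P(cancer) for a symptomless woman: illustrative only

p_pos <- sens * base + (1 - spec) * (1 - base)  # total probability of a positive test
ppv   <- sens * base / p_pos                    # P(cancer | positive test)
ppv                                             # about 0.11 with these made-up numbers
```

Even with a reasonably accurate test, a positive result in a low-risk woman is more likely to be a false alarm than a true finding, and that asymmetry is exactly the hidden cost discussed above.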

How to look at the RSS satellite-derived temperature data

It’s already well known that Remote Sensing Systems has released the January figures for its satellite-derived temperature data: the finding is that it’s colder this January than it has been for some time. I wanted to look more carefully at this data, mostly to show how to avoid some common pitfalls when analyzing time series data, but also to show you that temperatures are not linearly increasing. (Readers Steve Hempell and Joe Daleo helped me get the data.)

First, the global average. The RSS satellite actually divides up the earth in swaths, or transects, which are bands across the earth whose widths vary as a function of the instrument that remotely senses the temperature. The temperature measured at any transect is, of course, subject to many kinds of errors, which must be corrected for. Although this is not the main point of this article, it is important to keep in mind that the number you see released by RSS is only an estimate of the true temperature. It’s a good one, but it does have error (usually depending on the location of the transect), which most of us never see and few actually use. That error, however, is extremely important to take into account when making statements like “The RSS data shows there’s a 90% chance it’s getting warmer.” Well, it might be 90% before taking into account the temperature error: afterwards, the probability might go down to, say, 75% (this is just an illustration; but no matter what, the original probability estimate will always go down).

Most people show the global average data, which is interesting, but taking an average is, of course, assuming a certain kind of statistical model is valid: one that says averaging transects gives an unbiased, low-variance estimate of the global temperature. Does that model hold? Maybe; I actually don’t know, but I have my suspicions it does not, which I’ll outline below.

So let’s look at the transects themselves. The ones I used are not perfect, but they are reasonable. They are: “Antarctica” (-60 to -70 degrees of latitude; obviously not the whole south pole), “Southern Hemisphere Extratropics” (-70 to -20 degrees; there is a 10 degree overlap with “Antarctica”), “Tropics” (-20 to 20 degrees), “Northern Hemisphere Extratropics” (20 to 82.5 degrees, a slightly wider transect than in the SH), and “Arctic” (60 to 82.5 degrees; there is a 22.5 degree overlap with NH Extratropics). Ideally, there would be no overlap between transects, global coverage would have been complete, and I would have preferred more instead of fewer transects, which would have allowed us to see greater detail. But we’ll work with what we have.

Here is the plot of the transects; keep it handy so you can follow the discussion.
[Figure: RSS temperature anomalies by transect]
All the transects are in one place, making it easy to do some comparisons. The scale for each is identical, each has only been shifted up or down so that they all fit on one picture. This is not a very sexy or colorful graph, but it is useful. First, each transect is shown with respect to its mean (the small, dashed line). Vertical lines have been placed at the maximum temperature for each. The peak for NH-Extratropics and Tropics was back in 1998 (a strong El Nino year). For the Arctic, the peak was in 1995. For the Antarctic, it was 1990. Finally, for the SH-Extratropics it was 1981.
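If you want to reproduce this kind of summary, here is a hedged R sketch. It assumes the transect anomalies sit in a data frame called rss with a date column and one column per transect; all of those names are mine, not RSS’s:

```r
# rss: hypothetical data frame with columns date, antarctic, sh_extra,
#      tropics, nh_extra, arctic (monthly temperature anomalies)
transects <- c("antarctic", "sh_extra", "tropics", "nh_extra", "arctic")

op <- par(mfrow = c(5, 1))                        # one panel per transect
for (tr in transects) {
  x        <- rss[[tr]]
  centered <- x - mean(x, na.rm = TRUE)           # each series relative to its own mean
  plot(rss$date, centered, type = "l", main = tr)
  abline(h = 0, lty = 2)                          # the dashed mean line
  cat(tr, "peak at", format(rss$date[which.max(x)]), "\n")  # date of the maximum
}
par(op)
```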

You also often see, what I have drawn on the plot, a simple regression line (dash-dotted line), whose intent is usually to show a trend. Here, it appears that there were upward trends for the Tropics to the north pole, no sort of trend for the SH-Extratropics, and a downward trend for the Antarctic (recall there is overlap between the last two transects). Supposing these trends are real, they have to be explained. The obvious story is to say man-made increases due to CO2, etc. But it might also be that the northern hemisphere is measured differently (more coverage), or because there is obviously more land mass in the NH, and—don’t pooh-pooh this—the change of the tilt of the earth: the north pole tipped closer to the sun and so was warmer, the south pole tipped farther and so was cooler. Well, it’s true that the earth’s tilt has changed, and will always do so no matter what political party holds office, but effects due to its change are thought to be trivial at this time scale. Of course, there are other possibilities such as natural variation (which is sort of a cop out; what does “natural” mean anyway?).

To the eye, for example, the trend-regression for the Arctic looks good: there is an increase. Some people showing this data might calculate a classical test of significance (don’t get me started on these), but this is where most analysis usually stops. It shouldn’t. We need to ask what we always need to ask when we fit a statistical model: how well does it fit? The first thing we can do is to collect the residuals, which are the distances between the model’s predictions and the actual data. What we’d like to see is that there is no “signal” or structure in these residuals, meaning that the model did its job of finding all the signal that there was. The only thing left after the model should be noise. A cursory glance at the classical model diagnostics would even show you, in this case, that the model is doing OK. But let’s do more. Below are two diagnostics that should always be examined for time series models.
[Figure: residual diagnostics: autocorrelation function (top) and residual time series with loess smoother (bottom)]

The bottom plot is a time-series plot of the residuals (the observed temperatures minus the regression line). Something called a non-parametric (loess) smoothing line is over-plotted. It is showing that there is some kind of cyclicity, or semi-periodic signal, left in the residuals. This is backed up by examining the top plot, which is the auto-correlation function. Each time-series residual is correlated with the one before it (lag 1), with the one two before it (lag 2), and so on. The lag-one correlation is almost 40%, again meaning that the residuals are certainly correlated, and that some signal is left in the residuals that the model didn’t capture. (The lag-0 correlation is always 1; the horizontal dashed lines indicate classical 95% significance; the correlations have to reach above these lines to be significant.)
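Here is a minimal R sketch of that check for a single transect. The vector name arctic (monthly anomalies) and the time index are assumptions for illustration only:

```r
# arctic: hypothetical numeric vector of monthly temperature anomalies
time <- seq_along(arctic) / 12          # time in years
fit  <- lm(arctic ~ time)               # the simple trend regression
res  <- resid(fit)                      # observed values minus fitted values

acf(res)                                # top panel: autocorrelation of the residuals
plot(time, res, type = "l")             # bottom panel: residuals through time
lines(lowess(time, res), lty = 2)       # a loess-style smoother to reveal leftover signal
```

If the lag-one bar in the acf() plot pokes well above the dashed confidence band, as described above, the straight-line model has left signal behind and should not be trusted for trend statements.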

The gist is that the ordinary regression line is inadequate and we have to search for something better. We might try the non-parametric smoothing line for each series, which would be OK, but it is still difficult to ask whether trends exist in the data. Some kind of smoothing would be good, however, to avoid the visual distraction of the noise. We could, as many do, use a running mean, but I hate them and here is why.
[Figure: running-mean smoothing of a noisy pseudo-temperature series]
Shown in black is a pseudo-temperature series with noise; the actual temperature is the dashed blue line. Suppose you wanted to get rid of the noise using a “9-year” running mean: the result is the orange line, which you can see does poorly and shifts the actual peaks and troughs to the right. That is only the start of the troubles, but I won’t go over any more here, except to say that this technique is often misused, especially with hurricanes (two weeks ago a paper in Nature did just this sort of thing).
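The peak-shifting is easy to reproduce. Here is a small R sketch in the same spirit as the figure; the single-bump “true temperature” and the noise level are my own choices, not the ones used to draw the plot above:

```r
set.seed(1)
t_idx <- 1:120
truth <- exp(-((t_idx - 60) / 15)^2)              # smooth pseudo-temperature, peaking at t = 60
noisy <- truth + rnorm(length(t_idx), sd = 0.2)   # the observed, noisy version

# Trailing 9-point ("9-year") running mean
smooth9 <- as.numeric(stats::filter(noisy, rep(1/9, 9), sides = 1))

which.max(truth)     # 60: where the real peak is
which.max(smooth9)   # usually several steps later: the peak has been dragged to the right
```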

So what do we use? Another thing to try is something called Fourier, or spectral analysis, which is perfect for periodic data. This would be just the thing if the periodicities in the data were regular. They do not appear to be. We can take one step higher and use something called wavelet analysis, which is like spectral analysis (which I realize I did not explain), but instead of analyzing the time series globally like Fourier analysis, it does so locally. Which means it tends to under-smooth the data, and even allows some of the noise to “sneak through” in spots. This will be clearer when we look at this picture (again, just a thumb-nail: click for larger).
[Figure: wavelet-smoothed RSS transects]

You can see what I mean by some of the original noise “sneaking through”: these are the spikes left over after the smoothing; however, you can also see that the spikes line up with the data, so we are not introducing any noise. The somewhat jaggy nature of the “smoothed” series has to do with the technicalities of using wavelets (I’ll have to explain this at a later time: but for technical completeness, I used a Daubechies orthonormal compactly supported wavelet, with soft probability thresholding by level). Anyway, some things that were hidden before are now clearer.
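For the curious, here is roughly what that workflow looks like with the R wavethresh package. This is a sketch under assumptions: wd() needs a series of power-of-two length, so the hypothetical transect vector is trimmed to 256 points, and I use the universal soft threshold by level rather than the probability policy mentioned above, since the aim is only to show the shape of the procedure:

```r
library(wavethresh)

x <- arctic[1:256]                       # hypothetical transect series, power-of-two length

w    <- wd(x, filter.number = 2,
           family = "DaubExPhase")       # Daubechies compactly supported wavelet transform
wthr <- threshold(w, type = "soft",
                  by.level = TRUE,
                  policy = "universal")  # shrink small (noisy) coefficients, level by level
xs   <- wr(wthr)                         # invert the transform: the "smoothed" series

plot(x, type = "l", col = "grey")        # raw series
lines(xs, lwd = 2)                       # wavelet-smoothed series on top
```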

It looks like there was an increasing trend for most of the series starting in 1996 to 1998, but ending in late 2004, after which the data begin trending down: for the tropics to north pole, anyway. The signal in the southern hemisphere is weaker, or even non-existent at Antarctica.

This analysis is much stronger than the regression shown earlier; nevertheless, it is still not wonderful. The residuals don’t look much better than the regression’s, and are even worse in some places (e.g., early on in the Tropics). But wavelet analysis is tricky: there are lots of choices of the so-called wavelet basis (the “Daubechies” thing above) and choices for thresholding. (I used, more or less, the defaults in the R wavethresh package.)

But the smoothing is only a first step. We need to model this data all at once, and not transect by transect, taking into account the different relationships between each transect (I’m still working on a multivariate Bayesian hierarchical time-series model: it ain’t easy!). Not surprisingly, these relationships are not constant (shown below for fun). The main point is that modeling data of this type is difficult, and it is far too tempting to make claims that do not hold up upon closer analysis. One thing is certain: the hypothesis that the temperature is linearly increasing everywhere across the globe is just not true.

APPENDIX: Just for fun, here is a scatter-plot matrix of the data. You can see that there is little correlation between the two poles, and more correlation between bordering transects, though less than you might have thought.
[Figure: scatter-plot matrix of the transects]

How to read this plot: it is a scatter plot of each variable (transect) against every other. Pick a variable name. In that row, that variable is the y-axis. Pick another variable. In that column, that variable is the x-axis. This is a convenient way to look at all the data at the same time.
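A scatter-plot matrix like this takes one call in R. The sketch below reuses the hypothetical rss data frame and column names from earlier; they are assumptions, not RSS’s own labels:

```r
vars <- c("antarctic", "sh_extra", "tropics", "nh_extra", "arctic")

pairs(rss[, vars])                                 # scatter plot of every pair of transects
round(cor(rss[, vars], use = "complete.obs"), 2)   # the matching correlation matrix
```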

