February 15, 2008 | 58 Comments

Consensus in science

In 1914, there was a consensus among geologists that the earth under our feet was permanently fixed, and that it was absurd to think it could be otherwise. But in 1915, Alfred Wegener began an enormous battle to convince them of continental drift.

In 1904, there was a consensus among physicists that Newtonian mechanics was, at last, the final word in explaining the workings of the world. All that was left to do was to mop up the details. But in 1905, Einstein and a few others soon convinced them that this view was false.

In 1544, there was a consensus among mathematicians that it was impossible to calculate the square root of negative one, and that to even consider the operation was absurd. But in 1545, Cardano showed that, if you wanted to solve cubic equations, then complex numbers were a necessity.

In 1972, there was a consensus among psychiatrists that homosexuality was a psychological, treatable, sickness. But in 1973, the American Psychiatric Association met and voted for a new consensus, declaring that it was not.

In 1979, there was a consensus among paleontologists that the dinosaurs’ demise was a long, drawn-out affair, lasting millions of years. But in 1980, Alvarez, father and son, introduced evidence of a cataclysmic asteroid impact 65 million years before.

In 1858, there was a consensus among biologists that the animal species that surround us were put there as God designed them. But in 1859, the book On the Origin of Species appeared.

In 1928, there was a consensus among astronomers that the heavens were static, the boundaries of the universe constant. But in 1929, Hubble observed his red shift among the stars.

In 1834, there was a consensus among physicians that human disease arose spontaneously, due to imbalanced humours. But in 1835, Bassi, and later Pasteur, introduced doctors to the germ theory.

All these are, obviously, but a small fraction of the historical examples of consensus in science, though I have tried to pick the events that were the most jarring and radical upsets. Here are two modern cases.

In 2008, there is a consensus among climatologists that mankind has caused, and will continue to cause, irrevocable and dangerous changes to the Earth’s temperature.

In 2008, there is a consensus among physicists that most of nature’s physical dimensions are hidden away and can only be discovered mathematically, by the mechanisms of string theory.

In addition to the historical list, there are, just as obviously, equally many examples of consensus that turned out to be true. And, to be sure, even when the consensus view was false, it was often rational to believe it.

So I use these specimens only to show two things: (1) from the existence of a consensus, it does not follow that the claims of that consensus are true; and (2) the chance that a consensus view turns out to be false is much larger than you might have thought.

None of this is news, but it is often forgotten.

February 14, 2008 | 14 Comments

Do not calculate correlations after smoothing data

This subject comes up so often and in so many places, and so many people ask me about it, that I thought a short explanation would be appropriate. You may also search for “running mean” (on this site) for more examples.

Specifically, several readers asked me to comment on this post at Climate Audit, in which appears an analysis whereby, loosely, two time series were smoothed and the correlation between them was computed. It was found that this correlation was large and, it was thought, significant.

I want to give you what I hope is a simple explanation of why you should not apply smoothing before taking correlations. What I won’t fully explore here is the burden you face if you do smooth first: you must carry the uncertainty of that smoothing through to the estimated correlations, which will be far less certain than correlations computed from unsmoothed data. Any classical statistical test you run on the smoothed correlations will give you p-values that are too small, confidence intervals that are too narrow, and so on. In short, you can easily be misled.

Here is an easy way to think of it: Suppose you take 100 made-up numbers, where knowing any one of them tells you nothing about the value of any of the others. The only thing we do know about these numbers is that we can describe our uncertainty in their values using the standard normal distribution (the classical way to say this is to “generate 100 random normals”). Call these numbers C. Take another set of “random normals” and call them T.

I hope everybody can see that the correlation between T and C will be close to 0. The theoretical value is 0, because, of course, the numbers are just made up. (I won’t talk about what correlation is or how to compute it here: but higher correlations mean that T and C are more related.)
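You can check this yourself with a small simulation. Here is a sketch in plain Python (the seed and the sample size of 100 are arbitrary choices, made only so the example is reproducible):

```python
import random

def correlation(x, y):
    """Ordinary Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

random.seed(1)
T = [random.gauss(0, 1) for _ in range(100)]  # 100 "random normals"
C = [random.gauss(0, 1) for _ in range(100)]  # another, independent, set

print(correlation(T, C))  # typically close to 0, as expected for made-up numbers
```

Run it a few times with different seeds and the correlation will bounce around 0, sometimes positive, sometimes negative, but small.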

The following explanation holds for any smoother, not just running means. Now let’s apply an “eight-year running mean” smoothing filter to both T and C. This means, roughly, take the 15th number in the T series and replace it by the average of the 8th and 9th and 10th and … and 15th. The idea is that observation number 15 is “noisy” by itself, but we can “see it better” if we average out some of the noise. We smooth each of the numbers this way, of course, not just the 15th.

Don’t forget that we made these numbers up: if we take the mean of all the numbers in T and C we should get numbers close to 0 for both series; again, theoretically, the means are 0. Since each of the numbers, in either series, is independent of its neighbors, the smoothing will tend to bring the numbers closer to their actual mean. And the more “years” we take in our running mean, the closer each of the numbers will be to the overall mean of T and C.
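Here is what such a smoother looks like in code, together with a check that it really does pull the numbers toward their mean. This is a sketch; the window width of 8 matches the “eight-year” example, and the seed is arbitrary:

```python
import random

def running_mean(series, window=8):
    """Replace each value by the mean of itself and the window-1 values
    before it (fewer at the very start, where fewer values exist)."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(sum(series[lo:i + 1]) / (i - lo + 1))
    return out

def sd(series):
    """Population standard deviation."""
    m = sum(series) / len(series)
    return (sum((v - m) ** 2 for v in series) / len(series)) ** 0.5

random.seed(1)
T = [random.gauss(0, 1) for _ in range(100)]
print(sd(T), sd(running_mean(T)))  # the smoothed spread is much smaller
```

The standard deviation of the smoothed series is a fraction of the original’s: the smoothed values hug the overall mean, just as described above.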

Now let T' = 0,0,0,...,0 and C' = 0,0,0,...,0. What can we say about each of these series? They are identical, of course, and so are perfectly correlated. So any process which tends to take the original series T and C and make them look like T' and C' will tend to increase the correlation between them.

In other words, smoothing induces spurious correlations.
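Putting the pieces together, a small simulation shows the effect directly. This is a sketch; the 200 repetitions, the 8-point window, and the seed are all arbitrary choices:

```python
import random

def correlation(x, y):
    """Ordinary Pearson correlation of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def running_mean(series, window=8):
    """Replace each value by the mean of itself and its predecessors
    within the window."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        out.append(sum(series[lo:i + 1]) / (i - lo + 1))
    return out

def mean_abs_corr(smooth, reps=200, n=100):
    """Average |correlation| between freshly made-up T and C series,
    smoothed first when smooth=True. Reseeding means both calls see
    the very same made-up data."""
    random.seed(42)
    total = 0.0
    for _ in range(reps):
        T = [random.gauss(0, 1) for _ in range(n)]
        C = [random.gauss(0, 1) for _ in range(n)]
        if smooth:
            T, C = running_mean(T), running_mean(C)
        total += abs(correlation(T, C))
    return total / reps

raw, smoothed = mean_abs_corr(False), mean_abs_corr(True)
print(raw, smoothed)  # the smoothed correlations are systematically larger
```

The same made-up, unrelated numbers show correlations several times larger after smoothing than before it, which is the whole point.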

Technical notes: in classical statistics, any attempt to calculate the ordinary correlation between T' and C' fails, because the standard deviation of each series is 0, and the correlation formula requires dividing by it. Again, any smoothing method will work this magic, not just running means. In order to “carry through” the uncertainty, you need a carefully described model of the smoother and of the original series, fixing distributions for all parameters, etc., etc. The whole argument also works if T and C are time series, i.e. if the individual values of each series are not independent. I’m sure I’ve forgotten something, but I’m also sure that many polite readers will supply a list of my faults.

February 13, 2008 | 18 Comments

Global Warming Stress Syndrome Increasing, Psychologist Says

There has been a disturbing increase in Global Warming Stress Syndrome (GWSS, pronounced gwiss) according to Dr. Ron N. Hyde, a clinical psychologist at the prestigious McKitrick Center for the Especially Disturbed.

“Since April, there has been a 32.817% increase in public cases of GWSS,” he explained. “The rate now is almost double what it was this time last year.” He added that the trend was very worrying to his colleagues.

According to literature provided by the McKitrick Center, GWSS was at first a disease confined to academics, where it was thought to be controllable. But somehow it became public in the mid 1990s and struck those whose minds were weakest and easiest to influence, such as celebrities. Since GWSS is communicable, the next to be infected were those in the media in contact with celebrities.

“Entertainment news reporters have become increasingly integrated into ordinary news organizations, which made it easier to disseminate much-needed celebrity gossip and tittle-tattle. But it also meant that ordinary reporters soon became infected,” explained the brochure.

“After the mainstream media contracted GWSS, it was only a matter of time before politicians displayed symptoms of GWSS.”

Dr. Hyde described typical symptoms: “A belief that mankind causes every bad event, excessive hand-wringing, frequent bowel movements, a tendency to lurk on internet message boards and post things such as, ‘There is a consensus! There is a consensus!’, an irrational desire to measure one’s personal ‘carbon footprint.'” But the most worrying of all is the, “Urge to make idiotic comments in public tying global warming to any event.”

As examples, he cited Loch Ness Monster hunter Robert Rines, who has publicly claimed that global warming has killed the monster, which is why nobody can find it.

And the recent comments of New York City Mayor Mike Bloomberg who likened global warming to terrorism. Bloomberg said, “terrorists kill people” and global warming “has the potential to kill everybody.” “We should go after terrorists every place in this world, find them and kill them, plain and simple,” Bloomberg said.

Dr. Hyde explained, “All the classic manifestations are there. Mayor Bloomberg didn’t actually say—yet—that we should hunt down and kill those who exhale exorbitant amounts of carbon dioxide, but he implied it.” At the United Nations forum where Bloomberg spoke, also in attendance were film actress Daryl Hannah and Virgin Atlantic Airways founder Richard Branson. “It’s always the contact with celebrities that does it,” Hyde explained. Bloomberg’s statements are “strong evidence of a seriously addled mind.”

Dr. Hyde ended his statement on an ominous note, “So far, there is no known cure for GWSS.”

For the record, the only official program Mayor Bloomberg has announced so far is to reduce the use of hardwoods on city park benches.

February 11, 2008 | 15 Comments

Can having a mammogram kill you? How to make decisions under uncertainty.

The answer to the headline is, unfortunately, yes. The Sunday, 10 February 2008 New York Post reported this sad case of a woman at Mercy Medical Center in New York City. The young woman went to the hospital and had a mammogram, which came back positive, indicating the presence of breast cancer (she also had follow-up tests). Since other members of her family had experienced this awful disease, the young woman opted to have a double mastectomy and to have implants inserted afterward. All of which happened. She died a day after the surgery.

That’s not the worst part. It turns out she didn’t have cancer after all. Her test results had been mixed up with some other poor woman’s. If she had never had the mammogram in the first place, she would never have made a radical decision based on incorrect test results, and she would not have died. So, yes, having a mammogram can lead to your death. It is no good arguing that this is a rare event (adverse outcomes are not so rare, anyway), because all I asked was whether a mammogram can kill you. One case is enough to prove that it can.

But aren’t medical tests, and mammograms in particular, supposed to be error free? What about prostate exams? Or screenings for other cancers? How do you make a decision whether to have these tests? How do you account for the possible error and potential harm resulting from this error?

I hope to answer all these questions in the following article, and to show you how deciding whether to take a medical exam is really no different from deciding which stock broker to pick. Some of what follows is difficult, and there is even some math. My friends, do not be dissuaded from reading. I have tried to make it as easy to follow as possible. These are important, serious decisions you will someday have to make: you should not treat them lightly.

Decision Calculator

You can download a (non-updated) pdf version of this paper here.

This article will provide you with an introduction and a step-by-step guide of how to make good decisions in particular situations. These techniques are invaluable whether you are an individual or a business.

The results that you’ll read about hold for all manner of examples—from lie detector usefulness, to finding a good stock broker or movie reviewer, to intense statistical modeling, to financial forecasts. But a particularly large area is medical testing, and it is these kinds of tests that I’ll use as examples.

Many people opt for precautionary medical tests—frequently because a television commercial or magazine article scares them into it. What people don’t realize is that these tests have hidden costs. These costs are there because tests are never 100% accurate. So how can you tell when you should take a test?
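To see why imperfect accuracy matters so much, here is a sketch of the arithmetic, using Bayes’ rule. The accuracy and prevalence numbers below are illustrative assumptions, not measurements of any real test:

```python
def prob_disease_given_positive(prevalence, sensitivity, specificity):
    """Bayes' rule: probability of disease given a positive test.

    prevalence  : fraction of the population with the disease
    sensitivity : chance the test is positive when disease is present
    specificity : chance the test is negative when disease is absent
    """
    p_pos_given_sick = sensitivity
    p_pos_given_healthy = 1 - specificity
    p_positive = (prevalence * p_pos_given_sick
                  + (1 - prevalence) * p_pos_given_healthy)
    return prevalence * p_pos_given_sick / p_positive

# An assumed test that is right 90% of the time on both the sick and the
# healthy, applied where only 1% of those screened actually have the disease:
ppv = prob_disease_given_positive(0.01, 0.90, 0.90)
print(round(ppv, 3))  # about 0.083: most positives are false positives
```

Even a test that sounds accurate produces mostly false positives when the disease is rare, and each false positive carries its own costs, which is the hidden price of testing “just to be safe.”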

When is it worth it?

Under what circumstances is it best for you to receive a medical test? When you “Just want to be safe”? When you feel, “Why not? What’s the harm?”

In fact, none of these are good reasons to undergo a medical test. You should only take a test if you know that it’s going to give accurate results. You want to know that it performs well, that is, that it makes few mistakes, mistakes which could end up costing you emotionally, financially, and even physically.

Let’s illustrate this by taking the example of a healthy woman deciding whether or not to have a mammogram to screen for breast cancer. She read in a magazine that all women over 40 should have this test “Just to be sure.” She has heard lots of stories about breast cancer lately. Testing almost seems like a duty. She doesn’t have any symptoms of breast cancer and is in good health. What should she do?

What can happen when she takes this (or any) medical test? One of four things: the test can come back positive when she truly has the disease (a true positive); positive when she does not (a false positive); negative when she truly has the disease (a false negative); or negative when she does not (a true negative).