Example of how easy it is to mislead yourself: stepwise regression

I am, of course, a statistician. So perhaps it will seem unusual to you when I say I wish there were fewer statistics done. And by that I mean that I’d like to see less statistical modeling done. I am happy to have more data collected, but am far less sanguine about the proliferation of studies based on statistical methods.

There are lots of reasons for this, which I will detail from time to time, but one of the main ones is how easy it is to mislead yourself, particularly if you use statistical procedures in a cookbook fashion. It takes more than a recipe to make an edible cake.

Among the worst offenders are methods like data mining, sometimes called knowledge discovery, neural networks, and other methods that “automatically” find “significant” relationships between sets of data. In theory, there is nothing wrong with any of these methods. They are not, by themselves, evil. But they become pernicious when used without a true understanding of the data and the possible causal relationships that exist.

However, these methods are in continuous use and are highly touted. An oft-quoted success of data mining was the time a grocery store noticed that unaccompanied men who bought diapers also bought beer, a relationship in the data which, we are told, would have gone unnoticed were it not for “powerful computer models.”

I don’t want to appear too negative: these methods can work and they are often used wisely. They can uncover previously unsuspected relationships that can be confirmed or disconfirmed upon collecting new data. Things only go sour when this second step, verifying the relationships with independent data, is ignored. Unfortunately, the temptation to forgo the all-important second step is usually overwhelming. Pressures such as cost of collecting new data, the desire to publish quickly, an inflated sense of certainty, and so on, all contribute to this prematurity.

Stepwise

Stepwise regression is a procedure to find the “best” model to predict y given a set of x’s. The y might be the item most likely bought (like beer) given a set of possible explanatory variables x, like x1 = sex, x2 = total amount spent, x3 = diapers purchased or not, and so on. The y might instead be total amount spent at a mall, or the probability of defaulting on a loan, or any other response you want to predict. The possibilities for the explanatory variables, the x’s, are limited only by your imagination and your ability to collect data.

A regression takes the y and tries to find a multi-dimensional straight-line fit between it and the x’s (e.g., with two x’s the “straight line” is a plane). Not all of the x’s will be “statistically significant1”; those that are not are eliminated from the final equation. We only want to keep those x’s that are helpful in explaining y. In order to do that, we need some measure of model “goodness”. The best measure of model goodness is one which measures how well the model predicts independent data, which is data that was in no way used to fit the model. But obviously, we do not always have such data at hand, so we need another measure. One that is often picked is the Akaike Information Criterion (AIC), which measures how well the model fits the very data that was used to build it.

Confusing? You don’t actually need to know anything about the AIC other than that lower numbers are better. Besides, the computer does the work for you, so you never have to actually learn about the AIC. What happens is that many combinations of x’s are tried one by one; an AIC is computed for each combination, and the combination with the lowest AIC becomes the “best” model. For example, combination 1 might contain (x2, x17, x22), while combination 2 might contain (x1, x3). When the number of x’s is large, the number of possible combinations is huge, so some sort of automatic process is needed to find the best model.
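To make the search concrete, here is a minimal sketch of a forward stepwise search by AIC. It is written in Python and assumes numpy, pandas, and statsmodels are available; the function and variable names are mine, purely for illustration, and real stepwise routines are fancier than this (they also drop variables, for instance).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    def forward_stepwise_aic(y, X):
        """Greedily add the column of X that lowers the AIC most; stop when
        no remaining column improves (lowers) it. Returns chosen columns."""
        chosen, remaining = [], list(X.columns)
        best_aic = sm.OLS(y, np.ones(len(y))).fit().aic   # intercept-only model
        while remaining:
            trials = [(sm.OLS(y, sm.add_constant(X[chosen + [c]])).fit().aic, c)
                      for c in remaining]
            aic, c = min(trials)            # candidate with the lowest AIC
            if aic >= best_aic:             # nothing improves the model: stop
                break
            best_aic = aic
            chosen.append(c)
            remaining.remove(c)
        return chosen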

A summary: all your data is fed into a computer, and you want to model a response based on a large number of possible explanatory variables. The computer sorts through all the possible combinations of these explanatory variables, rates them by a model goodness criterion, and picks the one that is best. What could go wrong?

To show you how easy it is to mislead yourself with stepwise procedures, I did the following simulation. I generated 100 observations of a y and of 50 x’s (each x also with 100 observations, of course). All of the observations were just made-up numbers, each giving no information about the others. There are no relationships between the x’s and the y2. The computer, then, should tell me that the best model is no model at all.
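For anyone who wants to watch it happen, here is a sketch of that simulation using the stepwise function sketched above, including the univariate p-value < 0.1 prescreen described in footnote 2. No seed is set because nearly any run will do; the variable names are again mine.

    rng = np.random.default_rng()
    n, p = 100, 50
    X = pd.DataFrame(rng.normal(size=(n, p)),
                     columns=[f"x{i+1}" for i in range(p)])
    y = rng.normal(size=n)                  # pure noise: y ignores every x

    # Footnote 2: univariate prescreen, keep only x's with p-value < 0.1
    screened = [c for c in X.columns
                if sm.OLS(y, sm.add_constant(X[[c]])).fit().pvalues[c] < 0.1]

    chosen = forward_stepwise_aic(y, X[screened])
    final = sm.OLS(y, sm.add_constant(X[chosen])).fit()
    print(final.summary())                  # spuriously "significant" x's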

But here is what it found: the stepwise procedure gave me a best combination model with 7 out of the original 50 x’s. But only 4 of those x’s met the usual criterion for being kept in a model (explained below), so my final model is this one:

explan.   p-value   Pr(beta_x > 0 | data)
x7        0.0053    0.991
x21       0.046     0.976
x27       0.00045   0.996
x43       0.0063    0.996

In classical statistics, an explanatory variable is kept in the model if it has a p-value < 0.05. In Bayesian statistics, an explanatory variable is kept in the model when the probability of that variable (well, of its coefficient being non-zero) is larger than, say, 0.90. Don’t worry if you don’t understand what any of that means---just know this: this model would pass any test, classical or modern, as being good. The model even had an adjusted R² of 0.26, which is considered excellent in many fields (like marketing or sociology; R² is a number between 0 and 1, higher numbers are better).
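In the simulation sketch above, the classical checks are one-liners on the fitted model (final is my hypothetical variable from that sketch; the posterior-probability column would need a Bayesian fit, which I leave out):

    keep = final.pvalues.drop("const") < 0.05   # classical criterion
    print(final.pvalues.round(4))               # p-value for each retained x
    print(list(keep[keep].index))               # the x's that pass p < 0.05
    print(round(final.rsquared_adj, 2))         # adjusted R-squared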

Nobody, or very very few, would notice that this model is completely made up. The reason is that, in real life, each of these x’s would have a name attached to it. If, for example, y was the amount spent on travel in a year, then some x’s might be x7 = “married or not”, x21 = “number of kids”, and so on. It is just too easy to concoct a reasonable story after the fact to say, “Of course, x7 should be in the model: after all, married people take vacations differently than do single people.” You might even then go on to publish a paper in the Journal of Hospitality Trends showing a “statistically significant” relationship between being married and money spent on travel.

And you would be believed.

I wouldn’t believe you, however, until you showed me how your model performed on a set of new data, say from next year’s travel figures. But this is so rarely done that I have yet to run across an example of it. When was the last time anybody read an article in a sociological, psychological, etc., journal in which truly independent data is used to show how a previously built model performed well or failed? If any of my readers have seen this, please drop me a note: you will have made the equivalent of a cryptozoological find.

Incidentally, generating these spurious models is effortless. I didn’t go through 100s of simulations to find one that looked especially misleading. I did just one simulation. Using this stepwise procedure practically guarantees that you will find a “statistically significant” yet spurious model.

1I will explain this unfortunate term later.
2I first did a “univariate analysis” and only fed into the stepwise routine those x’s which singly had p-values < 0.1. This is done to ease the computational burden of checking all models by first eliminating those x’s which are unlikely to be “important.” This is also a distressingly common procedure.

Next prohibition: salt

Here is a question I added to my chapter on logic today.

New York City “Health Czar” Thomas Frieden (D), who successfully banned smoking and trans fat in restaurants and who now wants to add salt to the list, said in an issue of Circulation: Cardiovascular Quality and Outcomes that “cardiovascular disease is the leading cause of death in the United States.” Describe why no government or no person, no matter the purity of their hearts, can ever eliminate the leading cause of death.

I’ll answer that in a moment. First, Frieden is engaged in yet another attempt by the government to increase control over your life. Their reasoning goes “You are not smart enough to avoid foods which we claim—without error—are bad for you. Therefore, we shall regulate or ban such foods and save you from making decisions for yourself. There are some choices you should not be allowed to make.”

The New York Sun reports on this in today’s paper (better click on that link fast, because today could be the last day of that paper).

“We’ve done some health education on salt, but the fact is that it’s in food and it’s almost impossible for someone to get it out,” Dr. Frieden said. “Really, this is something that requires an industry-wide response and preferably a national response.”…“Processed and restaurant foods account for 77% of salt consumption, so it is nearly impossible for consumers to greatly reduce their own salt intake,” they wrote. Similarly, regarding sugar, they wrote: “Reversing the increasing intake of sugar is central to limiting calories, but governments have not done enough to address this threat.”

Get that? It’s nearly impossible for “consumers” (they mean people) to regulate their own salt intake. “Consumers” are being duped and controlled by powers greater than themselves, they are being forced to eat more salt than they want. But, lo! There is salvation in building a larger government! If that isn’t a fair interpretation of the authors’ views, then I’ll (again) eat my hat.

The impetus for Frieden’s latest passion is noticing that salt (sodium) is correlated—but not perfectly predictive of, it should be emphasized—with cardiovascular disease, namely high blood pressure (HBP). This correlation makes physical sense, at least. However, because sodium is only correlated with HBP, it means that for some people average salt intake is harmless or even helpful (Samuel Mann, a physician at Cornell, even states this).

What is strange is that, even by Frieden’s own estimate (from the Circulation paper), the rate of hypertension in NYC is four percentage points lower than in the rest of the nation: NYC is at about 26%, the rest of the country at about 30%. If these estimates are accurate, it means New York City residents are doing better than non-residents. If anything, this would argue for mandating that non-city companies emulate the practices of the restaurants and food processors that serve the city. It in no way follows that we should burden city businesses with more regulation.

Sanity check:

[E]xecutive vice president of the New York State Restaurant Association, Charles Hunt…said any efforts to limit salt consumption should take place at home, as only about 25% of meals are consumed outside the home.

“I’m concerned in that they have a tendency to try to blame all these health problems on restaurants…This nanny state that has been hinted about, or even partially created, where the government agencies start telling people what they should and shouldn’t eat, when they start telling restaurants they need to take on that role, we think it’s beyond the purview of government,” Mr. Hunt said.

Amen, Mr Hunt. It just goes to show you why creators and users of statistics have such a bad reputation. Even when the results are dead against you, it is still possible to claim what you want to claim. It’s even worse here, because it isn’t even clear what the results are. By that I mean, the statements made by Frieden and other physicians are much more certain than they should be given the results of his paper. Readers of this blog will not find that unusual.

What follows is a brief but technical description of the Circulation paper (and homework answer). Interested readers can click on.