William M. Briggs

Statistician to the Stars!


Example of how easy it is to mislead yourself: stepwise regression

I am, of course, a statistician. So perhaps it will seem unusual to you when I say I wish there were fewer statistics done. And by that I mean that I’d like to see less statistical modeling done. I am happy to have more data collected, but am far less sanguine about the proliferation of studies based on statistical methods.

There are lots of reasons for this, which I will detail from time to time, but one of the main ones is how easy it is to mislead yourself, particularly if you use statistical procedures in a cookbook fashion. It takes more than a recipe to make an edible cake.

Among the worst offenders are methods like data mining, sometimes called knowledge discovery, neural networks, and other methods that “automatically” find “significant” relationships between sets of data. In theory, there is nothing wrong with any of these methods. They are not, by themselves, evil. But they become pernicious when used without a true understanding of the data and the possible causal relationships that exist.

However, these methods are in continuous use and are highly touted. An oft-quoted success of data mining was the time a grocery store noticed that unaccompanied men who bought diapers also bought beer. A relationship between data which, we are told, would have gone unnoticed were it not for “powerful computer models.”

I don’t want to appear too negative: these methods can work and they are often used wisely. They can uncover previously unsuspected relationships that can be confirmed or disconfirmed upon collecting new data. Things only go sour when this second step, verifying the relationships with independent data, is ignored. Unfortunately, the temptation to forgo the all-important second step is usually overwhelming. Pressures such as cost of collecting new data, the desire to publish quickly, an inflated sense of certainty, and so on, all contribute to this prematurity.

Stepwise

Stepwise regression is a procedure to find the “best” model to predict y given a set of x’s. The y might be the item most likely bought (like beer) given a set of possible explanatory variables x, like x1 sex, x2 total amount spent, x3 diapers purchased or not, and on and on. The y might instead be total amount spent at a mall, or the probability of defaulting on a loan, or any other response you want to predict. The possibilities for the explanatory variables, the x’s, are limited only by your imagination and ability to collect data.

A regression takes the y and tries to find a multi-dimensional straight-line fit between it and the x’s (with two x’s, for example, that “line” is a plane). Not all of the x’s will be “statistically significant”¹; those that are not are eliminated from the final equation. We only want to keep those x’s that are helpful in explaining y. In order to do that, we need some measure of model “goodness”. The best measure of model goodness is one which measures how well the model predicts independent data, which is data that was in no way used to fit the model. But obviously, we do not always have such data at hand, so we need another measure. One that is often picked is the Akaike Information Criterion (AIC), which measures how well the model fits the data that was used to fit it.

Confusing? You don’t actually need to know anything about the AIC other than that lower numbers are better. Besides, the computer does the work for you, so you never have to actually learn about the AIC. What happens is that many combinations of x’s are tried, one by one, an AIC is computed for that combination, and the combination that has the lowest AIC becomes the “best” model. For example, combination 1 might contain (x2, x17, x22), while combination 2 might contain (x1, x3). When the number of x’s is large, the number of possible combinations is huge, so some sort of automatic process is needed to find the best model.
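(For the curious, and nothing below depends on it: the usual definition is AIC = 2k − 2 ln(L̂), where k counts the parameters the model estimates and L̂ is the maximized likelihood; the penalty on k is what stops the criterion from always preferring the model with the most x’s. Lower is still better.)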

A summary: all your data is fed into a computer, and you want to model a response based on a large number of possible explanatory variables. The computer sorts through all the possible combinations of these explanatory variables, rates them by a model goodness criterion, and picks the one that is best. What could go wrong?

To show you how easy it is to mislead yourself with stepwise procedures, I did the following simulation. I generated 100 observations for y’s and 50 x’s (each also with 100 observations, of course). All of the observations were just made up numbers, each giving no information about the other. There are no relationships between the x’s and the y.² The computer, then, should tell me that the best model is no model at all.
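For anyone who wants to try this at home, here is a minimal sketch of that sort of simulation in Python, using numpy and statsmodels with a simple forward search on AIC. It is only a sketch, not the original script, and the particular variables and numbers will change from run to run, but some pure-noise x’s will almost always survive with small p-values.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, p = 100, 50                        # 100 observations, 50 candidate x's
X = rng.normal(size=(n, p))           # the x's: pure noise
y = rng.normal(size=n)                # the y: pure noise, unrelated to any x

def ols(cols):
    # Fit y on an intercept plus the listed columns of X; return the fitted results.
    design = sm.add_constant(X[:, cols]) if cols else np.ones((n, 1))
    return sm.OLS(y, design).fit()

# Forward stepwise search: keep adding whichever x lowers the AIC the most,
# stopping when no addition helps.  (The univariate p < 0.1 pre-screen from
# the footnote is omitted; it only changes which noise variables survive.)
selected, remaining = [], list(range(p))
best_aic = ols(selected).aic
improved = True
while improved and remaining:
    improved = False
    aic, j = min((ols(selected + [j]).aic, j) for j in remaining)
    if aic < best_aic:
        best_aic, improved = aic, True
        selected.append(j)
        remaining.remove(j)

final = ols(selected)
print("x's chosen by AIC:", sorted(selected))
print("their p-values:", np.round(final.pvalues[1:], 4))    # intercept excluded
print("adjusted R^2:", round(final.rsquared_adj, 3))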

But here is what it found: the stepwise procedure gave me a best combination model with 7 out of the original 50 x’s. But only 4 of those x’s met the usual criterion for being kept in a model (explained below), so my final model is this one:

explan.   p-value   Pr(beta_x > 0 | data)
x7        0.0053    0.991
x21       0.046     0.976
x27       0.00045   0.996
x43       0.0063    0.996

In classical statistics, an explanatory variable is kept in the model if it has a p-value < 0.05. In Bayesian statistics, an explanatory variable is kept in the model when the probability of that variable (well, of its coefficient being non-zero) is larger than, say, 0.90. Don’t worry if you don’t understand what any of that means; just know this: this model would pass any test, classical or modern, as being good. The model even had an adjusted R² of 0.26, which is considered excellent in many fields (like marketing or sociology; R² is a number between 0 and 1, and higher numbers are better).

Nobody, or very very few, would notice that this model is completely made up. The reason is that, in real life, each of these x’s would have a name attached to it. If, for example, y was the amount spent on travel in a year, then some x’s might be x7 = “married or not”, x21 = “number of kids”, and so on. It is just too easy to concoct a reasonable story after the fact to say, “Of course, x7 should be in the model: after all, married people take vacations differently than do single people.” You might even then go on to publish a paper in the Journal of Hospitality Trends showing “statistically significant” relationships between being married and money spent on travel.

And you would be believed.

I wouldn’t believe you, however, until you showed me how your model performed on a set of new data, say from next year’s travel figures. But this is so rarely done that I have yet to run across an example of it. When was the last time anybody read an article in a sociological, psychological, etc., journal in which truly independent data is used to show how a previously built model performed well or failed? If any of my readers have seen this, please drop me a note: you will have made the equivalent of a cryptozoological find.

Incidentally, generating these spurious models is effortless. I didn’t go through 100s of simulations to find one that looked especially misleading. I did just one simulation. Using this stepwise procedure practically guarantees that you will find a “statistically significant” yet spurious model.

¹ I will explain this unfortunate term later.
² I first did a “univariate analysis” and only fed into the stepwise routine those x’s which singly had p-values < 0.1. This is done to ease the computational burden of checking all models by first eliminating those x’s which are unlikely to be “important.” This is also a distressingly common procedure.


Vegetarian Intestines

You know how it is. It’s dinner time, but you’re trying to cut back on the red meat. So what do you do? That’s right. You reach for a big ol’ bag of vegetarian intestines:
[Image: a bag of vegetarian intestines]

Look carefully at the bag. Two things are striking. The first is obviously the pile, the loops and loops, of fake intestines. You ask yourself: how did they ever get them to look so lifelike? Chinese attention to detail!

The second, noted by the caption “The picture is for reference only”, are the two exquisite bottles of wine, which, as everybody knows, go perfectly with boiled intestine.

Many of you by now want to know where to find this delicacy. Go to the Hong Kong Supermarket, frozen food aisle, in Elmhurst, Queens, right off the R, V, or G subway lines. Only $2.45, an exceptional bargain.

An excuse I hadn’t thought of

A few weeks ago I speculated about what would happen if significant human-caused global warming (AGW) turned out to be false. There might be a number of people who would refuse to give up on the idea, even though it is false, because their desire that AGW be true would be overwhelming.

I guessed that these people would slip into pseudoscience, and so would need to generate excuses why we have not yet seen the effects of AGW. One possibility was human-created dust (aerosols) blocking incoming solar radiation. Another was “bad data”: AGW is true, the earth really is warmer, but the data somehow are corrupted. And so on.

I failed to anticipate the most preposterous excuse of all. I came across it while browsing the excellent site Climate Debate Daily, which today linked to Coby Beck’s article “How to Talk to a Global Warming Sceptic”. Beck gives a list of arguments typically offered by “skeptics” and then attempts to refute them. Some of these refutations are good, and worth reading.

His attempt at rebutting the skeptical criticism “The Modelers Won’t Tell Us How Confident the Models Are” furnishes us with our pseudoscientific excuse. The skeptical objection is

There is no indication of how much confidence we should have in the models. How are we supposed to know if it is a serious prediction or just a wild guess?

and Beck’s retort is

There is indeed a lot of uncertainty in what the future will be, but this is not all because of an imperfect understanding of how the climate works. A large part of it is simply not knowing how the human race will react to this danger and/or how the world economy will develop. Since these factors control what emissions of CO2 will accumulate in the atmosphere, which in turn influences the temperature, there is really no way for a climate model to predict what the future will be.

This is as lovely a non sequitur as you’re ever likely to find. I can’t help but wonder if he blushed when he wrote it; I know I did when I read it. This excuse is absolutely bullet proof. I am in awe of it. There is no possible observation that can negate it. Whatever happens is a win for its believer. If the temperature goes up, the believer can say, “Our theories predicted this.” If the temperature goes down, the believer can say, “There was no way to know the future.”

What the believer in this statement is asking us to do, if it is not already apparent, is this: he wants you to believe that his prognostications are true because AGW is true, but he also wants you to believe that he should not be held accountable for his predictions should they fail because AGW is true. Thus, AGW is just true.

Beck knows he is on thin ice, because he quickly tries to get his readers to forget about climate forecasts and focus on “climate sensitivity”, which is some measure showing how the atmosphere reacts to CO2. Of course, whatever this number is estimated to be means absolutely nothing about, has no bearing on, is meaningless to, is completely different than, is irrelevant to the context of, the performance of actual forecasts.

It is also absurd to claim that we cannot know “how the human race will react” to climate change while (tacitly or openly) simultaneously calling for legislation whose purpose is to knowingly direct human reactions.

So, if AGW does turn out to be false, those who still wish to believe in it will have to work very hard to come up with an excuse better than Beck’s (whose work “has been endorsed by top climate scientists”). I am willing to bet that it cannot be done.

Statistics’ dirtiest secret

The old saying that “You can prove anything using statistics” isn’t true. It is a lie, and a damned lie, at that. It is an ugly, vicious, scurrilous distortion, undoubtedly promulgated by the legion of college graduates who had to suffer, sitting mystified, through poorly taught Statistics 101 classes, and never understood or trusted what they were told.

But, you might be happy to hear, the statement is almost true and is false only because of a technicality having to do with the logical word prove. I will explain this later.¹

Now, most statistics texts, even advanced ones, if they talk about this subject at all, tend to cover it in vague or embarrassed passages, preferring to quickly return to more familiar ground. So if you haven’t heard about most of what I’m going to tell you, it isn’t your fault.

Before we can get too far, we need some notation to help us out. We call the data we want to predict y, and if we have some ancillary data that can help us predict y, we call it x. These are just letters that we use as place-holders so we don’t have to write out the full names of the variables each time. Do not let yourself be confused by the use of letters as place-holders!

An example. Suppose we wanted to predict a person’s income. Then “a person’s income” becomes y. Every time you see y you should think “a person’s income”: clearly, y is easier to write. To help us predict income, we might have the sex of the person, their highest level of education, their field of study, and so on. All these predictor variables we call x: when you see x, think “sex”, “education”, etc.

The business of statistics is to find a relationship between the y and the x: this relationship is called a model, which is just a function (a mathematical grouping) of the data y and x. We write this as y = f(x), and it means, “The thing we want to know (y) is best represented as a combination, a function, of the data (x).” So, with more shorthand, we write a mathematical combination, a function of x, as f(x). Every time you see a statistic quoted, there is an explicit or implicit “f(x)”, a model, lurking somewhere in the background. Whenever you hear the term “Our results are statistically significant”, there is again some model that has been computed. Even just taking the mean implies a model of the data.
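To make that last point concrete: reporting nothing but the sample mean tacitly assumes a little model of the form y = μ + error, with the errors treated as centered at zero; the f(x) there is just the constant μ, and any statement you make about the mean is conditional on that little model being adequate.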

The problem is that usually the function f(x) is not known and must be estimated, guessed at in some manner, or logically deduced. But that is a very difficult thing to do, so nearly all of the time the mathematical skeleton, the framework, of f(x) is written down as if it were known. The f(x) is often chosen by custom or habit or because alternatives are unknown. Different people, with the same x and y, may choose different f(x). Only one of them, or neither, can be right; they cannot both be.

It is important to understand that all results (like saying “statistically significant”, computing p-values, confidence or credible intervals) are conditional on the chosen model being true. Since it is rarely certain that the model used was true, the eventual results are stated with a certainty that is too strong. As an example, suppose your statistical model allowed you to say that a certain proposition was true “at the 90% level.” But if you are only, say, 50% sure that the model you used is the correct one, then your proposition is only true “at the 45% level”, not at the 90% level, which is, of course, an entirely different conclusion. And if you have no idea how certain your model is, then it follows that you have no idea how certain your proposition is. To emphasize: the uncertainty in choosing the model is almost never taken into consideration.
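(In rough symbols, the arithmetic is Pr(proposition) = Pr(proposition | model) × Pr(model) plus whatever support the proposition gets from models you did not consider; dropping that last piece gives 0.90 × 0.50 = 0.45, the number used above.)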

However, even if the framework, the f(x), is known (or assumed known), certain numerical constants, called parameters, are still needed to flesh out the model skeleton (if you’re fitting a normal distribution, these are the μ and σ² you might have heard of). These must be guessed, too. Generally, however, everybody knows that the model’s parameters must be estimated. What you might not know is that the uncertainty in guessing the parameter values also has to carry through to statements of certainty about data propositions. Unfortunately, this is also rarely done: most statistical procedures focus on making statements about the parameters and virtually ignore actual, observable data. This again means that people come away from these procedures with an inflated sense of certainty.
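A concrete illustration: a 95% interval for a parameter such as the mean μ is typically far narrower than a 95% interval for the next actual observation, so conclusions framed in terms of parameters always sound more certain than conclusions framed in terms of observable data.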

If you don’t understand all this, especially the last part about parameters, don’t worry: just try to keep in mind that two things happen: a function f(x) is guessed at, and the parameters, the numerical constants, that make this equation complete must also be guessed at. The uncertainty of performing both of these operations must be carried through to any conclusions you make, though, again, this is almost never done.

These facts have enormous and rarely considered consequences. For one, it means that nearly all statistics results that you see published are overly boastful. This is especially true in certain academic fields where the models are almost always picked as the result of habit, even enforced habit, as editors of peer-reviewed journals are suspicious of anything new. This is why—using medical journals as an example—one day you will see a headline that touts “Eating Broccoli Reduces Risk of Breast Cancer,” only to later read, “The Broccolis; They Do Nothing!” It’s just too easy to find results that are “statistically significant” if you ignore the model and parameter uncertainties.

These facts, shocking as they might be, are not quite the revelation we’re after. You might suppose that there is some data-driven procedure out there, known only to statisticians, that would let you find both the right model and the right way to characterize its parameters. It can’t be that hard to search for the overall best model!

It’s not only hard, but impossible, a fact which leads us to the dirty secret: For any set of y and x, there is no unconditionally unique model, nor is there any unconditionally unique way to represent uncertainty in the model’s parameters.

Let’s illustrate this with respect to a time series. Our data is still y, but there is no specific x, or explanatory data, except for the index, or time points (x = time 1, time 2, etc.), which of course are important in time series. All we have is the data and the time points (understand that these don’t have to be clock-on-the-wall “time” points, just numbers in a sequence).

Suppose we observe this sequence of numbers (a time series)

y = 2, 4, 6, 8; with index x = 1, 2, 3, 4

Our task is to estimate a model y = f(x). One possibility is Model A

f(x) = 2x

which fits the data perfectly, because x = 1, 2, 3, 4 and 2x = 2, 4, 6, 8, which is exactly what y equals. The “2” is the parameter of the model, which here we’ll assume we know with certainty.

But Model B is

f(x) = 2x |sin[(2x+1)π/2]|

which also fits the data perfectly (don’t worry if you can’t see this—trust me, it’s an exact fit; the “2”s, the “1”, and the “π” are all known-for-certain parameters).

Which of these two models should we use? Obviously, the better one; we just have to define what we mean by better. Which model is better? Well, using any—and I mean any—of the statistical model goodness-of-fit measures that have ever, or will ever, be invented, both are identically good. Both models explain all the data we have seen without error, after all.

There is a Model C, Model D, Model E, and so on and on forever, all of which will fit the observed data perfectly and so, in this sense, will be indistinguishable from one another.
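If you doubt it, a few lines of Python (a quick check of my own, nothing more) will confirm that both formulas hand back exactly 2, 4, 6, 8 at x = 1, 2, 3, 4:

import math

def model_a(x):
    return 2 * x                                             # Model A: f(x) = 2x

def model_b(x):
    return 2 * x * abs(math.sin((2 * x + 1) * math.pi / 2))  # Model B

for x, y in zip([1, 2, 3, 4], [2, 4, 6, 8]):
    print(x, y, model_a(x), round(model_b(x), 12))           # identical, up to rounding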

What to do? You could, and even should, wait for more data to come in, data you did not use in any way to fit your models, and see how well your models predict these new data. Most times, this will soon tell you which model is superior, or if you are only considering one model, it will tell you if it is reasonable. This eminently common-sense procedure, sadly, is almost never done outside the “hard” sciences (and not all the time inside these areas; witness climate models). Since there are an infinite number of models that will predict your data perfectly, it is no great trick to find one of them (or to find one that fits well according to some conventional standard). We again find that published results will be too sure of themselves.

Suppose in our example the new data is y = 10, 12, 14: both Models A and B still fit perfectly. By now, you might be getting a little suspicious, and say to yourself, “Since both of these models flawlessly guess the observed data, it doesn’t matter which one we pick! They are equally good.” If your goal was solely prediction of new data, then I would agree with you. However, the purpose of models is rarely just raw prediction. Usually, we want to explain the data we have, too.

Models A and B have dramatically different explanations of the data: A has a simple story (“time times 2!”) and B a complex one. Models C, D, E, and so on, all have different stories too. You cannot just pick A via some “Occam’s razor²” argument (meaning A is best because it is “simpler”), because there is no guarantee that the simpler model is always the better model.

The mystery of the secret lies in the word “unconditional”, which was a necessary word in describing the secret. We can now see that there is no unconditionally unique model. But there might very well be a conditionally correct one. That is, the model that is unique, and therefore best, might be logically deducible given some set of premises that must be fulfilled. Suppose those premises were “The model must be linear and contain only one positive parameter”; then Model B is out and can no longer be considered. Model A is then our only choice: we do not, given these premises, even need to examine Models C, D, and so on, because Model A is the only function that fills the bill; we have logically deduced the form of Model A given these premises.

It is these necessary external premises that help us with the explanatory portion of the model. They are usually such that they demand the current model be consonant with other known models, or that the current model meet certain physical, biological, or mathematical expectations. Regardless, the premises are entirely external to the data at hand, and may themselves be the result of other logical arguments. Knowing the premises, and assuming they are sound and true, gives us our model.

The most common, unspoken of course, premise is loosely “The data must be described by a straight line and a normal distribution”, which, when invoked, describes the vast majority of classical statistical procedures (regression, correlation, ANOVA, and on and on). Which brings us full circle: the model, and the statements you make based on it, are correct given that the “straight line” premise is true; it is just that the “straight line” premise might be, and usually is, false.³
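Spelled out, that premise amounts to something like y = β0 + β1x + ε, with the ε assumed to follow a normal distribution; every p-value and confidence interval the standard machinery produces is conditional on that form being the right one.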

Because there are no unconditional criteria which can judge which statistical model is best, you often hear people making the most outrageous statistical claims, usually based upon some model that happened to “fit the data well.” Only, these claims are not proved, because to be “proved” means to be deduced with certainty given premises that are true, and conclusions based on statistical models can only ever be probable (less than certain and more than false). Therefore, when you read somebody’s results, pay less attention to the model they used and more to the list of premises (or reasons) given for why that model is the best one, so that you can estimate how likely it is that the model used is true.

Since that is a difficult task, at least demand that the model be able to predict new data well: data that was not used, in any way, in developing the model. Unfortunately, if you added that criterion to the list of things required before a paper could be published, you would cause a drastic reduction in scholarly output in many fields (and we can’t have that, can we?).

¹ I really would like people to give me some feedback. This stuff is unbelievably complicated and it is a brutal struggle finding simple ways of explaining it. In future essays, I’ll give examples from real-life journal articles.
² Occam’s razor arguments are purely statistical and go, “In the past, most simple models turned out better than complex models; I can now choose either a simple or complex model; therefore, the simple model I now have is more likely to be better.”
³ Why these “false” models sometimes “work” will be the discussion of another article; but, basically, it has to do with people changing the definition of what the model is mid-stream.

800 gram balls: Key words in my log files

Every now and then I have a glance at my log files to see what kinds of key words people type into sites like Google before being directed to my site. It won’t surprise you that I see things like briggs and bad statistics examples. But there is a class of keywords that I can only describe as odd, even, at times, worrying. Here are those keywords (all spellings are as they were found), split into rough categories. My comments, if any, appear in parentheses. Each of these keywords is real.

Statistics

  • don't forget about us model (I could never)
  • great statisticians (flatterer)
  • how to exaggerate (think big, think big)
  • i need to be statician (it can be a powerful force, it’s true; learning to spell it correctly will help)
  • some pictures of statistician (here’s somebody with a lot of time on their hands)
  • statisticians aviod doing things because other people are doing it (I think he has us confused with accountants)
  • statistician god exists (His name is Stochastikos)
  • virginity statistics (score: 0 to 0)
  • lifelong virginity statistics (score still tied)
  • what to look for in a statician (get one of the tall ones; we have a sense of humor)
  • why do statisticians love tables? (because we can’t help ourselves)
  • you cannot be a scientist if you are not a good mathematician (I have the feeling that this person desired a negative answer)

Zombies

  • factors that cause zombism (blogging…)
  • recorded zombie outbreaks
  • what year will zombies take over the earth? (has to be soon)
  • wild zombies (as opposed to domesticated?)
  • will zombie attacks happen
  • zombies can happen (he might have been trying to answer the other guy)
  • zombies in nature
  • zombies true or false

Miscellaneous

  • 800g balls (mine are only 760g–in petanque, of course!)
  • anything (I can see Google knows where to go…)
  • beer does not have enough alcohol (which is why I tend to stick with rum)
  • home is where the heart is william briggs (somebody’s trying to give me a lesson)
  • horizontal alcoholic (is there any other kind?)
  • how does pseudoscience effect the mind (badly)
  • lee majors george bush (you can’t go wrong aligning yourself with the six-million dollar man)
  • man's got his limits briggs (true enough; must be same advice giver as before)
  • purposely causing someone to get cancer (oh my…no murder tips here)
  • sentence with the word, "impossibility" (shouldn’t be hard to come by)
  • what can we do not to be poor (get a job)
