February 18, 2008

The old saying that “You can prove anything using statistics” isn’t true. It is a lie, and a damned lie, at that. It is an ugly, vicious, scurrilous distortion, undoubtedly promulgated by the legion of college graduates who had to suffer, sitting mystified, through poorly taught Statistics 101 classes, and never understood or trusted what they were told.

But, you might be happy to hear, the statement is almost true and is false only because of a technicality having to do with the logical word *prove*. I will explain this later.^{1}

Now, most statistics texts, even advanced ones, if they talk about this subject at all, tend to cover it in vague or embarrassed passages, preferring to quickly return to more familiar ground. So if you haven’t heard about most of what I’m going to tell you, it isn’t your fault.

Before we can get too far, we need some notation to help us out. We call the data we want to predict `y`, and if we have some ancillary data that can help us predict `y`, we call it `x`. These are just letters that we use as place-holders so we don’t have to write out the full names of the variables each time. Do not let yourself be confused by the use of letters as place-holders!

An example. Suppose we wanted to predict a person’s income. Then “a person’s income” becomes `y`. Every time you see `y` you should think “a person’s income”: clearly, `y` is easier to write. To help us predict income, we might have the sex of the person, their highest level of education, their field of study, and so on. All these predictor variables we call `x`: when you see `x`, think “sex”, “education”, etc.

The business of statistics is to find a relationship between the `y` and the `x`: this relationship is called a model, which is just a function (a mathematical grouping) of the data `y` and `x`. We write this as `y = f(x)`, and it means, “The thing we want to know (`y`) is best represented as a combination, a function, of the data (`x`).” So, with more shorthand, we write a mathematical combination, a function of `x`, as `f(x)`. Every time you see a statistic quoted, there is an explicit or implicit “`f(x)`”, a model, lurking somewhere in the background. Whenever you hear the term “Our results are *statistically significant*”, there is again some model that has been computed. Even just taking the mean implies a model of the data.
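Even the humble mean hides a model. Here is a small illustrative sketch (the income figures are invented, not from the article): the sample mean is exactly the constant model `f(x) = c` fitted by least squares.

```python
# Illustrative sketch: "just taking the mean" fits the constant model
# f(x) = c. The sample mean is the c that minimizes the squared error.
incomes = [30_000, 45_000, 52_000, 61_000]  # hypothetical income data

def sum_sq_error(c, data):
    """Total squared error of the constant model f(x) = c."""
    return sum((y - c) ** 2 for y in data)

mean = sum(incomes) / len(incomes)  # 47000.0

# The mean beats or ties every other candidate constant.
assert all(
    sum_sq_error(mean, incomes) <= sum_sq_error(c, incomes)
    for c in range(0, 100_000, 500)
)
print(mean)
```

So when someone reports an average, a model of the data has already been chosen, whether they say so or not.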

The problem is that usually the function `f(x)` is *not known* and must be estimated, guessed at in some manner, or logically deduced. But that is a very difficult thing to do, so nearly all of the time the mathematical skeleton, the framework, of `f(x)` is written down as if it *were* known. The `f(x)` is often chosen by custom or habit or because alternatives are unknown. Different people, with the same `x` and `y`, may choose different `f(x)`. Only one of them, or none of them, can be right; they cannot both be.

It is important to understand that all results (like saying “statistically significant”, computing p-values, confidence or credible intervals) are *conditional* on the chosen model being *true*. Since it is rarely certain that the model used *was* true, the eventual results are stated with a certainty that is too strong. As an example, suppose your statistical model allowed you to say that a certain proposition was true “at the 90% level.” But if you are only, say, 50% sure that the model you used is the correct one, then your proposition is only true “at the 45% level,” *not* at the 90% level, which is, of course, an entirely different conclusion. *And if you have* no *idea how certain your model is, then it follows that you have no idea how certain your proposition is.* To emphasize: the uncertainty in choosing the model is almost never taken into consideration.
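The discounting argument above is just multiplication. A two-line sketch (treating the proposition as having no support when the model is wrong, as the paragraph implicitly does):

```python
# The arithmetic above, made explicit: certainty in a proposition must be
# discounted by certainty in the model it is conditional on.
p_prop_given_model = 0.90  # proposition true "at the 90% level", given the model
p_model = 0.50             # how sure we are that the model itself is right

p_prop = p_prop_given_model * p_model
print(p_prop)  # 0.45 -- an entirely different conclusion
```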

However, even *if* the framework, the `f(x)`, is known (or assumed known), certain numerical constants, called parameters, are still needed to flesh out the model skeleton (if you’re fitting a normal distribution, these are the μ and σ^2 you might have heard of). These must be guessed, too. Generally, however, everybody knows that the model’s parameters must be estimated. What you might not know is that the uncertainty in guessing the parameter values also has to carry through to statements of certainty about data propositions. Unfortunately, this is also rarely done: most statistical procedures focus on making statements about the parameters and virtually ignore actual, observable data. This again means that people come away from these procedures with an inflated sense of certainty.

If you don’t understand all this, especially the last part about parameters, don’t worry: just try to keep in mind that two things happen: a function `f(x)` is guessed at, and the parameters, the numerical constants, that make this equation complete must also be guessed at. The uncertainty of performing *both* of these operations *must* be carried through to any conclusions you make, though, again, this is almost never done.

These facts have enormous and rarely considered consequences. For one, **it means that nearly all statistics results that you see published are overly boastful.** This is especially true in certain academic fields where the models are almost always picked as the result of habit, even enforced habit, as editors of peer-reviewed journals are suspicious of anything new. This is why—using medical journals as an example—one day you will see a headline that touts “Eating Broccoli Reduces Risk of Breast Cancer,” only to later read, “The Broccolis; They Do Nothing!” It’s just too easy to find results that are “statistically significant” if you ignore the model and parameter uncertainties.

These facts, shocking as they might be, are not quite the revelation we’re after. You might suppose that there is some data-driven procedure out there, known only to statisticians, that would let you find both the right model and the right way to characterize its parameters. It can’t be that hard to search for the overall best model!

It’s not only hard, but impossible, a fact which leads us to the dirty secret: **For any set of `y` and `x`, there is no unconditionally unique model, nor is there any unconditionally unique way to represent uncertainty in the model’s parameters.**

Let’s illustrate this with respect to a time series. Our data is still `y`, but there is no specific `x`, or explanatory data, except for the index, or time points (`x` = time 1, time 2, etc.), which of course are important in time series. All we have is the data and the time points (understand that these don’t have to be clock-on-the-wall “time” points, just numbers in a sequence).

Suppose we observe this sequence of numbers (a time series)

`y = 2, 4, 6, 8`; with index `x = 1, 2, 3, 4`

Our task is to estimate a model `y = f(x)`. One possibility is Model A

`f(x) = 2x`

which fits the data perfectly, because `x = 1, 2, 3, 4` and `2x = 2, 4, 6, 8`, which is exactly what `y` equals. The “2” is the parameter of the model, which here we’ll assume we know with certainty.

But Model B is

`f(x) = 2x |sin[(2x+1)π/2]|`

which also fits the data perfectly (don’t worry if you can’t see this—trust me, it’s an exact fit; the “2”s, the “1” and the “π” are all known-for-certain parameters).

Which of these two models should we use? Obviously, the better one; we just have to define what we mean by *better*. Which model is better? Well, using any—and I mean *any*—of the statistical model goodness-of-fit measures that have ever, or will ever, be invented, both are *identically* good. Both models explain all the data we have seen without error, after all.
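You can check the tie for yourself. This short sketch evaluates both models at the observed index points; at every integer `x`, |sin[(2x+1)π/2]| equals 1, so Model B collapses to Model A up to floating-point rounding.

```python
import math

# Evaluate Models A and B at the observed index points. At every integer x,
# |sin((2x + 1) * pi / 2)| = 1, so Model B reproduces Model A exactly
# (up to floating-point rounding).
xs = [1, 2, 3, 4]
ys = [2, 4, 6, 8]

def model_a(x):
    return 2 * x

def model_b(x):
    return 2 * x * abs(math.sin((2 * x + 1) * math.pi / 2))

for x, y in zip(xs, ys):
    assert model_a(x) == y
    assert abs(model_b(x) - y) < 1e-9
print("both models fit the data perfectly")
```

Every goodness-of-fit measure compares predictions to observations, and here both sets of predictions are the observations.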

There is a Model C, Model D, Model E, and so on and on forever, *all* of which will fit the observed data perfectly and so, in this sense, will be indistinguishable from one another.

What to do? You could, and even should, wait for more data to come in, data you did not use *in any way* to fit your models, and see how well your models predict these new data. Most times, this will soon tell you which model is superior, or if you are only considering one model, it will tell you if it is reasonable. This eminently common-sense procedure, sadly, is almost never done outside the “hard” sciences (and not all the time inside these areas; witness climate models). Since there are an infinite number of models that will predict your data perfectly, it is no great trick to find one of them (or to find one that fits well according to some conventional standard). We again find that published results will be too sure of themselves.

Suppose in our example the new data is `y = 10, 12, 14`: both Models A and B still fit perfectly. By now, you might be getting a little suspicious, and say to yourself, “Since both of these models flawlessly guess the observed data, it doesn’t matter which one we pick! They are equally good.” If your goal was *solely* prediction of new data, then I would agree with you. However, the purpose of models is rarely just raw prediction. Usually, we want to *explain* the data we have, too.
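A quick check of the claim, assuming (the article does not say so explicitly) that the new values arrive at the next index points `x = 5, 6, 7`:

```python
import math

# Both models also predict the new observations without error, assuming
# the new values land at the next index points x = 5, 6, 7.
new_xs = [5, 6, 7]
new_ys = [10, 12, 14]

for x, y in zip(new_xs, new_ys):
    a = 2 * x                                             # Model A
    b = 2 * x * abs(math.sin((2 * x + 1) * math.pi / 2))  # Model B
    assert a == y
    assert abs(b - y) < 1e-9
```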

Models A and B have dramatically different explanations of the data: A has a simple story (“time times 2!”) and B a complex one. Models C, D, E, and so on, all have different stories, too. You cannot just pick A via some “Occam’s razor^{2}” argument (that is, that A is best because it is “simpler”), because there is no guarantee that the simpler model is always the better model.

The mystery of the secret lies in the word “unconditional”, which was a necessary word in describing the secret. We can now see that there is no *unconditionally* unique model. But there might very well be a *conditionally* correct one. That is, the model that is unique, and therefore best, might be logically deducible *given* some set of premises that must be fulfilled. Suppose those premises were “The model must be linear and contain only one positive parameter,” then Model B is out and can no longer be considered. Model A is then our *only* choice: we do not, given these premises, even need to examine Models C, D, and so on, because Model A is the only function that fills the bill; we have logically deduced the form of Model A given these premises.

It is these necessary external premises that help us with the explanatory portion of the model. They are usually such that they demand the current model be consonant with other known models, or that the current model meet certain physical, biological, or mathematical expectations. Regardless, the premises are entirely external to the data at hand, and may themselves be the result of other logical arguments. Knowing the premises, and assuming they are sound and true, gives us our model.

The most common, unspoken of course, premise is loosely “The data must be described by a straight line and a normal distribution”, which, when invoked, describes the vast majority of classical statistical procedures (regression, correlation, ANOVA, and on and on). Which brings us full circle: the model, and statements you make based on it, are correct *given* that the “straight line” premise is true; it is just that the “straight line” premise might be, and usually is, false.^{3}
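To make the unspoken premise concrete, here is a minimal sketch (my illustration, with made-up data) of its workhorse: fitting the straight line `y = a + bx` by ordinary least squares.

```python
# Invented, roughly linear data; the point is the premise, not the numbers.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
x_bar = sum(xs) / n
y_bar = sum(ys) / n

# Closed-form ordinary-least-squares estimates of slope and intercept.
b = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
     / sum((x - x_bar) ** 2 for x in xs))
a = y_bar - b * x_bar
print(a, b)  # the "straight line" premise, made explicit
```

Every number this fit produces is conditional on the straight-line-plus-normal-errors premise being true; nothing in the arithmetic checks the premise itself.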

Because there are no unconditional criteria which can judge which statistical model is best, you often hear people making the most outrageous statistical claims, usually based upon some model that happened to “fit the data well.” Only, these claims are not *proved*, because to be “proved” means to be deduced with certainty from premises that are true, and conclusions based on statistical models can only ever be probable (less than certain and more than false). Therefore, when you read somebody’s results, pay less attention to the model they used and more to the list of premises (or reasons) given as to why that model is the best one, so that you can estimate how likely it is that the model used is true.

Since that is a difficult task, at least demand that the model be able to predict *new* data well: data that was not used, *in any way*, in developing the model. Unfortunately, if you added that criterion to the list of things required before a paper could be published, you would cause a drastic reduction in scholarly output in many fields (and we can’t have that, can we?).

^{1}I really would like people to give me some feedback. This stuff is unbelievably complicated and it is a brutal struggle finding simple ways of explaining it. In future essays, I’ll give examples from real-life journal articles.

^{2}Occam’s razor arguments are purely statistical and go, “In the past, most simple models turned out better than complex models; I can now choose either a simple or complex model; therefore, the simple model I now have is more likely to be better.”

^{3}Why these “false” models sometimes “work” will be the discussion of another article; but, basically, it has to do with people changing the definition of what the model is mid-stream.