
Last Class

It was the Nines last night. Pizza, beer, and wine, followed by sketchy music.

A definite conceit of statistics is its habit of declaring, with something approaching certainty, that this or that hypothesis is true. It relies far too much on quick and easy answers, or on mathematical cleverness. Nearly always lost are demonstrations that what was predicted actually occurred.

Ever read an academic paper in one of the fields that rely almost exclusively on statistics, such as sociology or epidemiology? The authors all “run” some statistics, present some p-values, and draw definite conclusions.

But how often do you see a follow-up article which says something like, “We tested the model we built in our last paper on data we had never seen before (in any way), and here is how it fared”? I’ll tell you how often: never.

These kinds of papers are common in more concrete fields, of course, areas in which the prediction of real things is fundamental.
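What would such a check look like? Here is a minimal sketch of one (mine, invented for illustration, not drawn from any paper): fit a too-flexible model to one batch of noisy data, then score it on a fresh batch it has never seen. The data, the weak true effect, and the degree-5 polynomial are all made up for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(42)

# "Training" data: 20 observations with a weak true effect buried in noise.
n = 20
x = rng.normal(size=n)
y = 0.2 * x + rng.normal(size=n)

# Fit a deliberately flexible model: a degree-5 polynomial.
coeffs = np.polyfit(x, y, deg=5)

def r_squared(x, y, coeffs):
    """Fraction of variance explained by the fitted polynomial."""
    pred = np.polyval(coeffs, x)
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# In-sample, the model flatters itself...
print("R^2 on the data we fit:", round(r_squared(x, y, coeffs), 2))

# ...on data it has never seen, the apparent skill largely evaporates.
x_new = rng.normal(size=n)
y_new = 0.2 * x_new + rng.normal(size=n)
print("R^2 on never-seen data:", round(r_squared(x_new, y_new, coeffs), 2))
```

Out of sample the R² drops sharply and can even go negative, which is the data’s polite way of saying the model memorized noise. That comparison is the follow-up paper that never gets written.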

It’s too easy to find the result you’re looking for with statistics. This point is echoed in a review, in today’s Wall Street Journal, of the book Wrong: Why Experts Keep Failing Us—And How to Know When Not to Trust Them.

But the current market creates the wrong kinds of incentives for doing good research or admitting failure. Novel ideas and findings are rewarded with grants and publication, which lead to academic prestige and career advancement. Researchers have a vested interest in overstating their findings because certainty is more likely than equivocation to achieve all of the above. Thus the probability increases of producing findings that are false. As the medical mathematician John Ioannidis tells Mr. Freedman: “The facts suggest that for many, if not the majority of fields, the majority of published studies are likely to be wrong.”

Look up Ioannidis’s name and read some of his papers. You’ll be glad you spent the time.
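Part of what Ioannidis formalizes is mechanical: test enough hypotheses on noise and the standard 5% threshold guarantees some of them will “succeed.” Here is a minimal simulation of that mechanism (my sketch, not his calculation; the study sizes are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One simulated "study": 50 subjects, 40 candidate predictors,
# every one of them pure noise with no relation to the outcome.
n_subjects, n_variables = 50, 40
outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_subjects, n_variables))

# Test each predictor against the outcome the usual way.
significant = []
for j in range(n_variables):
    r, p = stats.pearsonr(predictors[:, j], outcome)
    if p < 0.05:
        significant.append((j, round(p, 3)))

# At a 5% threshold, about 2 of 40 null tests will clear the bar by
# chance alone; a paper reporting only the hits looks like a discovery.
print(f"Spurious 'findings': {len(significant)} of {n_variables}")
print("As (predictor index, p-value):", significant)
```

Nothing in the simulation is fraud; it is simply what a 5% false-positive rate means when forty hypotheses are tried and only the winners get written up.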

That’s it for my warning. Class dismissed.

Regular service resumes tomorrow.

Categories: Statistics

4 replies

  1. So when referring to a group of statisticians, what is the preferred term? “A deviant of statisticians”? Hmmm. That could be a problem. How about “a sampling of statisticians”?

    Welcome back, professor. Remind us to tell you about the wisdom of speaking off-the-cuff to “Rolling Stone” journalists sometime.

  2. Nice post. I’d like to read “Wrong.”

    Have you seen “Expert Political Judgment” by Philip Tetlock? It has a similar theme, though it’s not statistical. Tetlock says political pundits are often no better at predicting the future than a random person on the street. In fact, they may be worse. They’re expected to form opinions quickly. They get airtime for being provocative, not for being prescient. Etc.

  3. I read the book Expert Political Judgment some years ago, and I thought it had good points to make. I recommend it too. As soon as our social scientists at the school learned I was reading it, they spent a lot of time panning the book and trying to “educate” me. The problem everywhere is that no one will introspect, review past performance, or admit to anything like being mortal.

  4. Kevin, your comment reminded me of this quote from President Hoover:

    The great liability of the engineer compared to men of other professions is that his works are out in the open where all can see them. His acts, step by step, are in hard substance. He cannot bury his mistakes in the grave like the doctors. He cannot argue them into thin air or blame the judge like the lawyers. He cannot, like the architects, cover his failures with trees and vines. He cannot, like the politicians, screen his shortcomings by blaming his opponents and hope the people will forget. The engineer simply cannot deny he did it. If his works do not work, he is damned.
