Prominent Academic Asks: When And How Did Statistics Lose Its Way?

The question is not mine. It belongs to Philip Stark, who asked it of Deborah Mayo at a Berkeley lecture. Stark plainly asked five good questions:

  1. When and how did Statistics lose its way and become (largely) a mechanical way to bless results rather than a serious attempt to avoid fooling ourselves and others?
  2. To what extent have statisticians been complicit in the corruption of Statistics?
  3. Are there any clear turning points where things got noticeably worse?
  4. Is this a problem of statistics instruction ((a) teaching methodology rather than teaching how to answer scientific questions, (b) deemphasizing assumptions, (c) encouraging mechanical calculations and ignoring the interpretation of those calculations), (d) of disciplinary myopia (to publish in the literature of particular disciplines, you are required to use inappropriate methods), (e) of moral hazard (statisticians are often funded on scientific projects and have a strong incentive to do whatever it takes to bless “discoveries”), or something else?
  5. What can academic statisticians do to help get the train back on the tracks? Can you point to good examples?

I have the answers.

1. Statistics went sour a century ago, at its beginning. The seeds of its becoming a “mechanical way to bless” were planted then, sown in large part by the bombastic efforts of one brilliant man, R. A. Fisher. It was he who invented the “P value.”

Which became magic, religion. A wee P blesses a study. A large P causes weeping and gnashing of teeth.

But it is absurd. It is ridiculous. It answers no questions anybody wants to know. None. Not one. Everything you think a P does, it does not.

What is the mighty magical P? The probability of seeing data you didn’t see, assuming something nobody believes.
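In symbols, the generic textbook rendering (nothing here is special to any one test):

$$ p = \Pr\!\big(T \ge t_{\text{obs}} \mid H_0\big), $$

the probability, computed under a null model $H_0$ nobody believes, that a test statistic $T$ comes out at least as extreme as the one actually observed.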

One example only, since I’ll be doing this to death in the Class: you want the probability your new drug beats the old one. You could just calculate this, using probability. But no. Let’s calculate a P instead! You assume the drugs aren’t any different, then you calculate the probability of the value of an ad hoc statistic (based on a parameter inside a model) you didn’t see. If that’s wee, you claim the new drug is better. A fallacy. Every time a fallacy. Every time.
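Here is a minimal sketch of the contrast, with made-up data, assuming normal outcomes and flat priors; everything in it is illustrative, not a prescription:

```python
# Illustrative only: made-up data, normal outcomes, flat priors assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
old = rng.normal(10.0, 3.0, size=40)  # responses on old drug (higher = better)
new = rng.normal(11.0, 3.0, size=40)  # responses on new drug

# The textbook move: assume the drugs are identical, then compute the
# probability of a test statistic at least as extreme as the one observed.
t_stat, p_value = stats.ttest_ind(new, old)
print(f"p-value: {p_value:.3f}")      # the probability of data you didn't see

# The direct question: how likely is it that the NEXT patient on the new
# drug does better than the next patient on the old one? With flat priors
# the posterior predictive of a single new observation is a scaled t.
def predictive_draws(x, n_draws=100_000):
    n, m, s = len(x), x.mean(), x.std(ddof=1)
    return m + s * np.sqrt(1 + 1 / n) * rng.standard_t(n - 1, size=n_draws)

pr_better = (predictive_draws(new) > predictive_draws(old)).mean()
print(f"Pr(next new-drug patient beats next old-drug patient) ~ {pr_better:.2f}")
```

The second number answers the question actually asked. The first does not.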

But a beloved fallacy, because why? Because Ps do the thinking for you. You are relieved of the burden of thought, and instead seek a score, which is all you need to claim your correlation turned into causation.

2. Statisticians are the direct cause of the crisis they brought upon us all. It was they who gifted us with the replication crisis, which will be with us until (a) P values are dumped, and (b) parameter-centric analysis is expunged.

Parameters are the tunable dials inside models, if you will. They are so mathematically delightful that statisticians talk only of them, and forget, they forget, why they were doing an analysis in the first place.

One example, and the same example. What is the probability your new drug is better than the old? You could just calculate it. But no. Let me instead amaze you with the estimate of a parameter inside a model, and its confidence interval, a parameter which I will mistakenly call the real thing of interest, and imply that reality is less important than the parameter. It isn’t.
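A sketch of what that mistake looks like in practice, again with made-up data and crude plug-in estimates, nothing more:

```python
# Illustrative only: as n grows, the confidence interval for the PARAMETER
# (difference in means) tightens and eventually excludes zero, while the
# probability that one actual patient beats another barely moves.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
for n in (20, 200, 2000, 20_000):
    old = rng.normal(10.0, 3.0, size=n)
    new = rng.normal(10.2, 3.0, size=n)  # a tiny difference in means
    diff = new.mean() - old.mean()
    se = np.sqrt(new.var(ddof=1) / n + old.var(ddof=1) / n)
    ci = (diff - 1.96 * se, diff + 1.96 * se)  # normal-approx CI for the parameter
    # Plug-in predictive probability that a single new-drug patient beats a
    # single old-drug patient: new - old ~ N(diff, s_new^2 + s_old^2).
    s = np.sqrt(new.var(ddof=1) + old.var(ddof=1))
    pr = 1 - stats.norm.cdf(0, loc=diff, scale=s)
    print(f"n={n:6d}  parameter CI: ({ci[0]:+.2f}, {ci[1]:+.2f})  Pr(beat) ~ {pr:.2f}")
```

The interval around the parameter becomes ever more “significant”; the probability that the new drug helps any actual patient stays a hair above a coin flip.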

3. Things got worse with the introduction of statistics software, which anybody could use, and which everybody did. The mad hunt for the wee P was on.

4. The problem indeed is one of instruction. It is the failure to emphasize philosophy, prioritizing instead the blind use of mathematics. Probability is no more a branch of mathematics than physics is. Both make heavy use of math, and should, but the true interest in both subjects is what is going on, not the math. An equation doesn’t mean anything: the idea behind the equation does.

Physics uses equations in service of ontology, about the way the world works. Probability uses equations in service of epistemology, about our knowledge of the world. The two, epistemology and ontology, are not the same. Both need each other.

Do not teach casual students the math. Teach them the ideas. Do not teach serious students the math until they damn sure understand the ideas.

(I would never be allowed to teach my Class in a statistics department, for instance, without also agreeing to teach introductory classes which insist that what I say is false is true.)

5. Academics could help by trashing the old bad ideas. But it won’t happen. It almost never does. What does happen, happens one funeral at a time. New subjects, like “AI” and “machine learning”, rise up and take the place of decrepit, flawed, and moribund subjects.

There are many excesses and lacunae in AI, which are nothing but probability models after all, married to wonderful increases in storage and processing. These excesses and gaps will need correction, too, but the subject is young enough that many are still ignoring them.


11 Comments

  1. Tim

    So long as vested interests exist, they will seek to capture boatloads of loot using twisted stats to do their master’s sleight-of-hand tricks…with LOTS of bright, shiny statistical objects used to impress and then “de-treasurize” their loot-bearing victims. WHEEE! Just like anything humans touch, it can and will be P-bent to produce custom-made truths of astounding bogus revelations…to serve the unsatisfiable greed of sons-of-unwed mothers.

  2. yes – and..

    My first job in Alberta was as the math geek in a major consulting firm. What I found was that the client’s ability to understand the project proposal tightly limited what we could do, as did the comfort zone of the partner making the sale. Ops research? No way. Do a poll? Use n=396, because the z score is easy and 5/20 sells. Stats analysis? They understood cross tabs, nothing else.

    Bottom line: when preparing a grant proposal (and papers are usually just the outcomes of those) you cater to the audience, so you promise what they know, and that means wee-P analysis. Want to fix a lot of things in academia? Change the criteria under which research is funded.

  3. JH

    There are many excesses and lacunae in AI, which are nothing but probability models after all, married to wonderful increases in storage and processing. These excesses and gaps will need correction, too, but the subject is young enough that many are still ignoring them.

    To be precise, at least 98% of the population ignores this issue, while the top 2% (or even less) are actively working to improve AI.

    AI is nothing but a heap of probability models (statistical models), which can be reduced to calculus after all, which ultimately boils down to small pebbles. Even if one argues that numerical optimization is at AI’s core, numerical optimization is nothing but calculus after all, which again can be simplified to small pebbles.

    See… I can oversimplify too.

  4. JH

    One example, and the same example. What is the probability your new drug is better than the old? You could just calculate it.

    Show me an example that goes beyond simple summary statistics. Just calculate it!

  5. Paul Fischer

    Fisher.

  6. Jorge Gonzales

    I got A’s in statistics but always hated it. Statistically speaking it was 80% lying with numbers and 20% proving theorems that presumably were already proven so why do I have to prove them on the test?

  7. Briggs

    All,

    I have done examples so many times I lost count.

    Here’s one:

    https://www.wmbriggs.com/post/36507/

    We’ll be doing this in Class very carefully. But only after I’m sure you are all sure you understand what probability is, and what it is not.

  8. Hagfish Bagpipe

    Wonderful graphic. That’s what I would say if the bloody bouncer dropped the velvet rope and let me enter Club Briggs. First he didn’t like my polka dot tie. “Against regulations,” says the beetle-browed ox. I don my Club Briggs rep tie, okay? “Sorry sir, nice tie, but your shoes need polishing.” Mind you this is from a bouncer with facial tats and a nose ring. I polish my shoes, present myself again at the clubhouse door for inspection. “Sorry sir, you need to be six foot one to enter.” Bloody hell. I used to be six-one and five-eighths, but I’ve shrunk, owing to the nefarious machinations of Briggs’ enemies, and age. I go home and pad my shoes, show up at the clubhouse door, and the tatted blighter manning the door waves me through. So here I am. Did I miss anything? What’s Briggs banging on about? Gin and tonic, please.

  9. Briggs

    Hagfish,

    Nose ring?

  10. JH

    Mr. Briggs,

    So, you’ve got a made-up example here, where the parameter (effect size) is included in the regression and the flaws of p-values are demonstrated; specifically, a larger sample provides more precise estimation, making it easier to detect subtle differences (i.e., to produce small p-values). The assumptions of no effect and normality are required to produce the so-called predictive probability of 50%.

    What is the probability your new drug is better than the old? You could just calculate it.

    Maybe my question is not clear. I will provide you with data if you would like to directly answer the question. And the decision I’d like to make is whether the drug should be approved for manufacturing.

    Well, I know you are very good at providing non-answer answers.

  11. You’re a bit hard on old R.A. I vaguely remember reading one of his books. What killed statistics was software, especially Excel macro packages, which meant you didn’t need to make any attempt to understand what was going on. That and the general dumbing down of higher education.
