Stop Using P-values & Parameter-Centric Methods

P-values should be banned. Every use of them involves a fallacy or mistake in thinking.

“P-values have some good uses.”

No, they don’t. I used every as in every.

“P-values are fine if used properly.”

I’m not getting through. P-values have no proper use.

“P-values have some good uses.”

I wrote two papers with about a dozen or two arguments proving my contention that every use of a p-value is fallacious or mistaken. Here is one, here is the other.

“P-values are fine if used properly.”

Did you read the papers?

“P-values have some good uses.”

Which of the arguments do you think is flawed, and how is it flawed?

“P-values are fine if used properly.”

So you’re saying you didn’t read the papers, or that perhaps you scanned them hurriedly, or that you did read them but can discover no flaws in the arguments. Right?

“P-values have some good uses.”

What you’re trying to say is that, even though it’s been proved every use of p-values is fallacious or mistaken, they still have good uses, as long as those uses are proper?

“Yes. P-values are fine if used properly.”

It’s not only p-values that have to go. Parameter-centric methods cause vast, mighty over-certainty.

“Everybody uses parameter-based methods.”

The idea is that people should replace the certainty they have in parameters, which do not exist and which are therefore of no interest to man or beast, with certainty in observables.

“Everybody uses parameter-based methods.”

People start all analyses by asking about what happens to an observable—what happens to the uncertainty in its value, that is. They say, “If we change this X, how does it affect our uncertainty in Y?” Grand question, that. But they end by saying, “The parameter in this model takes this value, plus or minus something.” What does that have to do with the price of cookies in Byzantium?
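
To make the contrast concrete, here is a minimal sketch with made-up data and a simple normal regression (numpy and scipy assumed; none of these numbers come from any real analysis). The parameter gets a tight interval; the observable does not:

```python
# Minimal sketch with simulated data: parameter-centric vs observable-centric.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = 1.0 + 0.5 * x + rng.normal(scale=2.0, size=200)  # a noisy observable

# Parameter-centric summary: a statement about a model coefficient.
fit = stats.linregress(x, y)
print(f"slope = {fit.slope:.2f} +/- {1.96 * fit.stderr:.2f}")

# Observable-centric summary: a statement about future Y at a new X.
# (Approximate: it ignores parameter uncertainty, which would only widen it.)
x_new = 1.0
resid_sd = np.std(y - (fit.intercept + fit.slope * x), ddof=2)
y_hat = fit.intercept + fit.slope * x_new
print(f"95% of future y at x = {x_new}: "
      f"[{y_hat - 1.96 * resid_sd:.2f}, {y_hat + 1.96 * resid_sd:.2f}]")
```

The predictive interval for the observable comes out many times wider than the interval for the parameter. Report the second as if it answered the first question and you have manufactured certainty.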

“Everybody uses parameter-based methods.”

If we changed practice and eliminated all parameter-based methods, then we’d have a much better understanding of how much we don’t know. We couldn’t then go around so cocky and claim we knew much more than we really do.

“Everybody uses parameter-based methods.”

It’s worse. For if these non-existent parameters take certain values, cause is said to have been discovered. This is the curse of null hypothesis significance testing.

“P-values are fine if used properly.”

Everybody says “Correlation is not causation.” Every authority swears to this, and for good reason. It is true. It is as solid a piece of philosophy as we have in science. Yet if a p is wee, correlation becomes causation.
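
If you doubt a wee p can appear without a whisper of causation, here is a minimal sketch, assuming nothing but two independent simulated random walks (numpy and scipy; the classic spurious-regression setup):

```python
# Minimal sketch: two causally unrelated series, one wee p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=500))  # random walk
y = np.cumsum(rng.normal(size=500))  # an independent random walk

fit = stats.linregress(x, y)
print(f"p-value = {fit.pvalue:.2e}")  # usually astronomically wee
```

No cause, by construction. The p is wee anyway.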

“P-values have some good uses.”

It’s not only frequentists, of course. So-called Bayesians with their Bayes factors commit the same fallacy at the same rate as frequentists.

“Bayes factors are well accepted.”

Same rate as frequentists. Nothing? That’s a joke, son.

“Bayes factors are well accepted.”

Say it. Say it with me: correlation isn’t causation.

“Bayes factors are well accepted.”

Correlation isn’t causation when the p is wee.

“P-values are fine if used properly.”

Correlation isn’t causation when the Bayes factor is big, either.

“Bayes factors are well accepted.”

Tell me. If correlation isn’t causation, then just what does it mean when a p is wee? What has been proved? If the Bayes factor is a whopper, what does it mean, exactly? Not in terms of a model, but of reality. Of the observable. Of cause.

“Statistical significance has been reached.”

And what does “statistical significance” mean except that it is a restatement that the p-value was wee?
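
For the record, here is all a wee p is, in a minimal sketch with made-up data (numpy only): the chance, assuming the null model holds, of a statistic at least as extreme as the one seen. A statement about a model, not about cause, not about reality:

```python
# Minimal sketch: a p-value computed by brute force under the null model.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.normal(loc=0.3, size=30)  # hypothetical sample
t_obs = observed.mean() / (observed.std(ddof=1) / np.sqrt(30))

# Simulate the null (mean 0) many times and ask how often a t-statistic
# at least as extreme as ours turns up.
t_null = np.array([
    s.mean() / (s.std(ddof=1) / np.sqrt(30))
    for s in rng.normal(size=(10_000, 30))
])
p = np.mean(np.abs(t_null) >= abs(t_obs))
print(f"p = {p:.3f}")  # Pr(|T| >= |t_obs| given the null model); nothing more
```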

“P-values have some good uses.”

You’ve read this award-eligible book, yes? Now at a very affordable $40, or thereabouts. That magnificent work has a long and detailed discussion of cause, of why probability models can’t identify cause. Of what cause means. What do you say to those arguments?

“P-values are fine if used properly.”

So you’re saying everything is fine, that nothing need change. That the philosophy of probability infecting statistics now is not only benign but beneficial. That we needn’t answer any of these hard questions about cause and probability. Right?

“Who are you anyway? Just some guy on the internet.”

6 Comments

  1. DAV

    This person is undoubtedly in favor of wee pee. https://tinyurl.com/jhbszn9
    In fact, all of these are supporters of Pee-Wee: Cyndi Lauper, Annette Funicello, Zsa Zsa Gabor and Valeria Golino, Bill Cosby … the list goes on.

    Oh, wait! Bill Cosby is on the list. Maybe P’s Wee ain’t so great.

  2. Anon

    Not just some guy but many some guys and for quite some time… “Statisticians have been talking about the deficiencies of P values as evidence for at least 70 years, but nobody listens,” says one David Colquhoun, a professor of pharmacology at University College London. https://www.bmj.com/content/364/bmj.l1374

    “Nobody” still not listening, and nobody must be legion.

  3. brad tittle

    But there is a good use of p-values.

    1. If someone has p-values/confidence intervals in their analysis, I can without thinking too hard throw the paper in the trash knowing that whatever they were talking about is not going to actually affect me. It might affect me secondarily because everyone else jumped on the bandwagon, but changing my behavior isn’t going to do much.

    2. If the p-value is connected to a study that refutes another study and throws it into question, that’s a good use.

    3. I am still convinced that the p-values we should be looking at are the ones that are > 0.05. We can pile all of those correlations into the pile of things that don’t need to be worried about. They are the real pieces of information. All of the results that end up in the waste bucket are the ones that should be published. Can you imagine the interest generated by stories like “Sea water doesn’t cause people in Indiana to sit down”? Piles and piles of papers telling you not to worry about people walking in Los Angeles causing bad breathing in people in Bangladesh, because there is no connection.

    People running around doing nothing does not keep people fed.

  4. Anon

    A broken clock is right twice a day. You have to keep in mind that P-values were in vogue in the olden days, when analysis had to be done by hand using the tables that bookended your basic Stats 101 textbook. So it is true that the old way had limited use, and a few results may be considered “right” –that is, aligned with results arrived at by using newer techniques. But why not always and consistently use the newer and better techniques that are more apt at dealing with uncertainty? It is better to be right more than twice a day.

  5. If you are ever interested in writing a ‘for-dummies’ explanation, I would be much indebted to you. In the meantime I will pore over your papers and see what I can gather. This is extremely fascinating to me, but I am unfortunately limited by my own education and am working to rectify that.
