Thanks to reader Frank Kristeller we learn that the far left *New York Times* yesterday ran an article by F.D. Flam praising the rise of Bayesian statistics: "The Odds, Continually Updated."

The replacement of frequentist statistics is, if true, moderately cheering news. Bayes is the next step in the removal of magical and loose thinking from statistics. But it is far from the destination. That, I argue, is logical probability, which you can think of as Bayes *sans* scientism and subjectivism.

However, baby steps:

> Bayesian statistics are rippling through everything from physics to cancer research, ecology to psychology. Enthusiasts say they are allowing scientists to solve problems that would have been considered impossible just 20 years ago. And lately, they have been thrust into an intense debate over the reliability of research results.

Nothing like a little hyperbole, eh? I don’t think our frequentist friends would agree they couldn’t solve the same problems as Bayesians. And of course they can. But so can storefront psychics solve problems. What we’re after is *good* solutions.

Flam got this right:

> But the current debate is about how scientists turn data into knowledge, evidence and predictions. Concern has been growing in recent years that some fields are not doing a very good job at this sort of inference. In 2012, for example, a team at the biotech company Amgen announced that they’d analyzed 53 cancer studies and found they could not replicate 47 of them.

This is what happens when you base your decisions on p-values, little mystical numbers which remove the responsibility of thinking. P-values aren’t the only scourge, of course: willful transgressive thinking (especially in fields like sociology) and false quantification are just as degrading, and probably more so.
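To see the scourge in action, here is a toy simulation of my own devising (not anything from Flam's article): a thousand "studies" of pure noise, each run through a simple z-test against the usual p < 0.05 threshold. All names and numbers are illustrative.

```python
import math
import random
from statistics import NormalDist, mean, stdev

random.seed(42)  # fixed seed so the sketch is reproducible

def two_sided_p(sample):
    """Two-sided z-test p-value for the hypothesis 'true mean is 0'."""
    n = len(sample)
    z = mean(sample) / (stdev(sample) / math.sqrt(n))
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 1000 "studies", each 50 observations of pure noise: no real effect anywhere.
p_values = [two_sided_p([random.gauss(0, 1) for _ in range(50)])
            for _ in range(1000)]

false_positives = sum(p < 0.05 for p in p_values)
print(false_positives)  # about one study in twenty clears the bar by chance alone
```

Publish only the studies that cleared the bar and you have a literature that cannot be replicated, no fraud required.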

False quantification? That’s when numbers are put to non-numerical things, just so statistics can have a go at them. Express your agreement with that statement on a Likert scale from 1 to 5.
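Concretely, here is what that quantification throws away (a made-up survey, assuming the usual 1-to-5 coding):

```python
from statistics import mean

# Hypothetical survey: 1 = "strongly disagree" ... 5 = "strongly agree".
polarized = [1] * 50 + [5] * 50   # everyone holds an extreme view
lukewarm = [3] * 100              # everyone is indifferent

# The averages are identical: the numbers pasted onto the opinions
# erase the difference between a split room and a bored one.
print(mean(polarized), mean(lukewarm))
```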

Again:

> “Statistics sounds like this dry, technical subject, but it draws on deep philosophical debates about the nature of reality,” said the Princeton University astrophysicist Edwin Turner, who has witnessed a widespread conversion to Bayesian thinking in his field over the last 15 years.

This is true. But just try to get people to believe it! Most academics, even the Bayesian variety, feel the foundations are fixed, that most or all that needs to be known about our primary premises is already known. Not true. Philosophy comes last in a statistician’s education, if it comes at all. The error here is to assume probability is only a branch of mathematics.

> One downside of Bayesian statistics is that it requires prior information — and often scientists need to start with a guess or estimate. Assigning numbers to subjective judgments is “like fingernails on a chalkboard,” said physicist Kyle Cranmer, who helped develop a frequentist technique to identify the latest new subatomic particle — the Higgs boson.

This isn’t really so. The problem here is blind *parameterization*, which is the assigning of probability models for the sake of convenience without understanding where the parameters of those models arise. This is an area of research that most statisticians are completely unaware of, so used are they to taking the parameters as a given. Logical probability removes the subjectivism and arbitrary quantification here, so that the true state of knowledge at the beginning of a problem is optimally stated.
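To see why the "fingernails" complaint has bite, consider the standard conjugate Beta-Binomial update (a textbook formula, not anything from the article): identical data, different priors, different answers.

```python
# With k successes in n trials, a Beta(a, b) prior updates to
# Beta(a + k, b + n - k), whose mean is (a + k) / (a + b + n).
def posterior_mean(a, b, k, n):
    return (a + k) / (a + b + n)

k, n = 7, 10  # the same data for every analyst

# Three common "default" priors give three different posterior means.
for a, b in [(1, 1), (0.5, 0.5), (10, 10)]:
    print(f"Beta({a}, {b}) prior -> posterior mean {posterior_mean(a, b, k, n):.3f}")
# Beta(1, 1)     -> 0.667
# Beta(0.5, 0.5) -> 0.682
# Beta(10, 10)   -> 0.567
```

Logical probability’s claim is that the prior should be deduced from the stated evidence, not chosen to taste; the arithmetic above is what "chosen to taste" looks like.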

> Others say that in confronting the so-called replication crisis, the best cure for misleading findings is not Bayesian statistics, but good frequentist ones. It was frequentist statistics that allowed people to uncover all the problems with irreproducible research in the first place, said Deborah Mayo, a philosopher of science at Virginia Tech. The technique was developed to distinguish real effects from chance, and to prevent scientists from fooling themselves.

Mayo (our friend) is wrong. It was the discordance between scientists’ commonsensical knowledge of causality and the official statistical results that allowed us to see the mistakes. Statisticians do causality very, very badly. Indeed, frequentism is based on a fallacy of mixing up ontology (what is) with epistemology (our knowledge of what might be). Bayes does slightly better, but errs by introducing arbitrary subjective opinion.

> Uri Simonsohn…exposed common statistical shenanigans in his field — logical leaps, unjustified conclusions, and various forms of unconscious and conscious cheating.
>
> He said he had looked into Bayesian statistics and concluded that if people misused or misunderstood one system, they would do just as badly with the other. Bayesian statistics, in short, can’t save us from bad science.

Simonsohn (whom I don’t know) is right, mostly. The problems are deep. But you notice he left out p-values.

Flam missed that resistance to Bayes is still strong in many traditional fields, like medicine, where p-values are demanded. Still, that Bayes is becoming more available is good. But since we’re at the start, let’s try to do it right and not, say, re-introduce old notions (like p-values!) into the new theory.