@A_NEVATHIR @Noahpinion @mattstat Sticking to logical probability doesn't tell us about the world, where do initial assignments come from?.

— Deborah G. Mayo (@learnfromerror) September 1, 2015

Short note today, because today, and most days for the next couple of weeks, are book days. I am nearly finished with the damned thing. I’m now handling drudgery like the bibliography and rooting out typos (yes). And also tying the thing together so that all my terms and examples are consistent.

It’s nearly there. I think the editor I contacted, who was initially enthusiastic, must have looked me up online and discovered my, um, academic non-conformities, because none of my emails to her have been answered in over eight months. So once I’m done done, I’ll go searching for another. But I’ll also send a draft copy of the book to selected colleagues for comment.

In the book, incidentally, there is no global warming, no ethics, no (let us call them) social questions. It’s pure philosophy of probability and statistics; it’s all applied epistemology.

As you can see from the tweet above (if you can’t see it, click here), I was dragged into a discussion about p-values. Mayo is a respected philosopher of probability, but she takes the frequentist line and objects to Bayesian procedures because, she claims, priors are *ad hoc*.

She’s right: they are. But then, with even greater force, so are frequentist models, which raise *ad hocness* to an art form—but one resembling the graffiti at the backs of grocery stores. Seems to me, if you can swallow regressions—used for *everything*—you can buy flat priors for the parameters of that regression.

And you should. Swallow them, I mean. Because, as Mayo knows, a regression with flat priors gives the same numerical results as frequentism. The same.
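A minimal sketch of that equivalence, with made-up data (the variable names and the one-parameter model are mine, for illustration only): with a flat prior the posterior is proportional to the likelihood, so the posterior mode lands on the same number as the ordinary least squares estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)

# Frequentist answer: ordinary least squares slope (no intercept, one parameter).
ols = (x @ y) / (x @ x)

# Bayesian answer with a flat prior: the posterior is proportional to the
# likelihood, so the posterior mode is found by maximizing the Gaussian
# log-likelihood directly. Here a fine grid search stands in for the algebra.
betas = np.linspace(ols - 1, ols + 1, 100001)
loglik = np.array([-0.5 * np.sum((y - b * x) ** 2) for b in betas])
map_beta = betas[np.argmax(loglik)]

print(abs(ols - map_beta) < 1e-4)  # the two answers coincide
```

The grid search is only there to make the point visible; the log-likelihood is quadratic in the slope, so its maximum sits exactly at the OLS estimate.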

Thus the battle between frequentists and Bayesians is a fight over territory we logical probabilists abandoned long ago. This is why I advocate (the title is goofy) The Third Way Of Probability & Statistics: Beyond Testing and Estimation To Importance, Relevance, and Skill.

This is logical probability, where models can be *deduced*, where the origin of parameters is made clear, and where parameters don’t even exist, unless one heads out to the limit. The Third Way recognizes that we will never be able to eliminate all *ad hocness*, so attention is turned from an arbitrary model’s innards to its actual performance.

This has the added benefit of being natural and easy to explain to users of our models. We speak in plain English. And since our models are exposed to the world, they can be verified by anybody. Plus, some of our models are deduced, and therefore impeccable.

The focus in the Third Way is on understanding cause, and on understanding that probability isn’t cause. Now that sounds mysterious, because cause is deeply misunderstood in probability and statistics. The old way—hypothesis testing and Bayes factors—confuses decision and probability and thus mixes up cause.

Best part of the Third Way is the dramatic reduction of over-certainty, which is now at pandemic levels. Hence the “replication crisis”, among other calamities. The Third Way, i.e. logical, observable-and-not-parameter-based probability, reduces that over-certainty.

Notice I said “reduction” and not “elimination.” No program can do that. The old methods pretend they can, though. They claim to have discovered truth or the “optimal” action (hence the confusion of decision and probability implicit in hypothesis testing etc.), but logical probability *forces* due consideration of all uncertainties.

Anyway, there you are. You can read my Arxiv papers for a small taste of the book, but only a very small taste.

Incidentally, I’m available to speak on these topics. Even, possibly, for no cost.

Briggs: You’re trying to kill the psychic scientists here. If science can’t use magical formulas to divine the importance of things and predict the future—without that pesky verification requirement because no non-scientist understands p-values anyway—what ever shall the shamans of science do with their time? Statistics as practiced offers a way to be scientific and psychic at the same time, using tiny bits of data, exploding that volume with infilling and estimates (SWAGs, of course) and pronouncing the future. If one must use “importance, relevance and skill”, that’s really going to mess with the magic and superiority of those who can do advanced mathematics and choose the variables and methods that yield the desired outcome. In other words, you may be killing the Wizard of Oz.

Sheri: the Wizards are humbugs.

And their little dogs too!

Gary: NO!!!!!!!!! 🙂

Sheri,

“because no non-scientist understands p-values anyway”

No scientist understands p-values either.

MattS: Good point.

The URL http://wmbriggs.com/public/sat.txt given in the paper continues to give a 404 error.

Heigh-ho.

I never got why ad hoc is primarily a bad thing. I guess it has a bad connotation now, but all it means is made for a particular purpose. Seems like there are all kinds of particular purposes out there that might like things made for them.

That being said, I do think that the selection of priors is more ad hoc than frequentism. Frequentism’s main assumption is about infinite sampling/observation. Sure, frequentism involves ad hoc choices about what model to use, but that’s not exclusive to it. Still, I don’t see how ad hoc is a negative.

Priors may be ad hoc, but that doesn’t mean they can be anything. Their purpose is to convert Pr(C|R) into Pr(R, C), which is what is needed to get Pr(R|C).

One way of looking at Bayes theorem is that it defines how to go from row probabilities to column probabilities (assuming two variables).

They are subjective only in the sense that your information may be different than mine. But, given the information at hand, they have fixed values.
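That row-to-column reading of Bayes’ theorem can be sketched numerically. The joint table below is made up purely for illustration: rebuild Pr(R, C) from the row conditionals Pr(C|R) and the marginal Pr(R), then divide by Pr(C) to get the column conditionals Pr(R|C).

```python
import numpy as np

# Hypothetical joint table Pr(R, C) for two binary variables: rows R, columns C.
joint = np.array([[0.30, 0.10],   # R = 0
                  [0.20, 0.40]])  # R = 1

pr_R = joint.sum(axis=1)  # marginal over rows,    here [0.4, 0.6]
pr_C = joint.sum(axis=0)  # marginal over columns, here [0.5, 0.5]

# Row conditionals: Pr(C|R) = Pr(R, C) / Pr(R).
pr_C_given_R = joint / pr_R[:, None]

# Bayes: Pr(R|C) = Pr(C|R) Pr(R) / Pr(C), i.e. rebuild the joint from the
# row conditionals and Pr(R), then normalize by the column marginal.
pr_R_given_C = (pr_C_given_R * pr_R[:, None]) / pr_C[None, :]

print(pr_R_given_C)
```

Given the same joint table (the same information at hand), these conditional values are fixed, which is the sense of “subjective” in the comment above.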

“I’ll also send a draft copy of the book to selected colleagues for comment.”

“selected colleagues” for peer review…hhhmmmm…wonder how that stacks up to http://www.nature.com/nature/peerreview/debate/ and so forth.

>”I think the EDITOR I contacted…”

I hope to God that means “copy editor”.

Matt’s papers at Arxiv speak loudly about two things, the first one “awful” in the sense of inspiring genuine awe, and the second, well, awful:

1. this book is going to make an important, lasting, and urgently needed contribution to epistemology and thence to statistics and thence to science

2. Matt is a terrible, horrible, not very good editor of his own words

We knew you when.

Good luck, Matt!

Rich,

Rats. Thanks. Link old and prior to hacking. I’ll fix tomorrow.

Rich,

Try http://wmbriggs.com/public/sat.csv

For some reason, the (new) server won’t let you have the text version, which is certainly there. Even I can’t download it through the browser. The CSV is the same file. The only difference is that each time I generate new study hours with the R code:

round(abs(10 * rnorm(nrow(x))))  # non-negative integer study hours, one per row

where ‘x’ is the data frame. I’ll update the Arxiv paper when I get the chance.

Thanks.

Got it. Thanks.