Title stolen from article of same name by Leland Teschler in the trade journal Machine Design.
Update Statisticians often screw up statistics, too. See below.
The article is the result of an interview I gave Teschler a month ago. He called me up and asked about bad statistics, and I became that obnoxious guy in the bar who grabs your elbow and won’t let go until you understand his theory of life, the universe, and everything. Poor Teschler was panting by the time I finished with him.
Yet he must have recovered sufficiently to write:
Briggs’ argument for such a radical stance is that most nonexperts misapply these ideas and often use them to leap to bad conclusions. “The technical definition of a p-value is so difficult to remember that people just don’t keep it in mind. Even the Wikipedia page on p-value has a couple of small errors,” Briggs says. “People treat a p-value as a magical thing: If you get a p-value less than a magic number then your hypothesis is true. People don’t actually say it is 100% true, but they behave as though it is.”…
“P-values can and are used to prove anything and everything. The sole limitation is the imagination of the researcher,” he says. “To the civilian, the small p-value says that statistical significance has been found, and this, in turn, says that his hypothesis is not just probable, but true.”
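The point is easy to demonstrate. Here is a toy simulation (my own sketch, not from the article or interview): run many "studies" whose data are pure noise, test each against a null of zero mean, and watch a steady stream of them come out "statistically significant" anyway.

```python
import math
import random
import statistics

random.seed(42)

def fake_experiment(n=30):
    """Draw n observations from a pure-noise null (true mean 0) and
    return the two-sided p-value of a one-sample z-test against 0."""
    data = [random.gauss(0, 1) for _ in range(n)]
    z = statistics.mean(data) / (statistics.stdev(data) / math.sqrt(n))
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

trials = 2000
false_hits = sum(fake_experiment() < 0.05 for _ in range(trials))
print(f"{false_hits} of {trials} pure-noise 'studies' were "
      f"significant ({false_hits / trials:.1%})")
```

By construction every hypothesis here is false, yet roughly one run in twenty clears the magic 0.05 bar. A researcher who tests enough noise is guaranteed to "find" something.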
Why not eliminate frequentist statistics for all but math PhD students and teach Bayes or, my preference, logical probability?
Nevertheless, there is only a slim chance a Bayesian revolution will sweep through statistics classrooms. The problem is one of inertia. “Most statistics classes are taught by nonstatisticians. They can’t teach Bayesian statistics because a lot of them have never heard of it,” says Briggs. Even worse, “Peer-review journal editors still want to see p-values in the papers they publish.”
Update The interview I had with Teschler was wide ranging and did not focus on who was king of the statistical hill. I frankly do not care. The main complaint against me was that I am an academic. Ouch. I am so, it’s true, but only for two weeks of every year. The rest of the time I am on my own. Because why? Because the crazy ideas I espouse do not endear me to professional academics.
I didn’t appreciate that some people might take exception to the claim that professionals would be better at statistics than non-professionals. Of course, it is always possible that any non-trained person would do better than a trained one in statistics, or in any field.
My main point with Teschler was that statistics as a field is broken. Regular readers will understand just what I mean by this. Countless times I have shown that the further a field gets from the simple, the worse the evidence is handled. Most engineering is simple, and subject to much feedback, at least compared to the monstrous complexity which is human behavior.
If you’re new here, have a look around and you’ll see quickly what I mean.
Did you see the comments over there on that? Kind of funny how it’s split and you get some like:
Yeah! Stop taking probable actions, Briggs. 😉 lol
Nate,
Nope. I was working with an early version before any commenters dropped by. Notice that most of the detractors are anonymous? Brave, brave.
Of course, you can do an entire theory of statistics in 600 words so there is no possibility of confusion. But neither Teschler nor I have figured out how to do so.
All,
Got this email from JT in response to article.
JT,
Take a look at some of the articles here and you will see that we are simpatico.
That non-statisticians screw up statistics is likely true. But what is the extent of the problem? Citing examples of bad statistics, as you do on this website, might give a biased picture of the prevalence of this problem. There might very well be statistics done by non-statisticians that are perfectly fine. I suspect that many of the statistics you have a gripe with are produced by statisticians themselves. Page through any medical journal. Chances are that a statistician was involved in the particular paper that you have a problem with.
Cheers
Francsois
Francsois,
We agree. See the main text (soon).
I am new here. And you are right. I teach statistics, being an economist. I do not screw up statistics; I do not have such power. And I do not trust statistics (econometrics).
But, give me more, please, show me your best post about this.
Best regards,
Pedro Erik
Pedro,
Appreciate your stopping by. Take a look at the post from the day before, Johnson’s Revised Standards For Statistical Evidence. And then surf to the Classic Posts pages (top of site) for a slew of entries, categorized by topic.
Thanks, William. I just read one on p-values. Fantastic. Congratulations. I will read that on Johnson and follow your classic posts.
“But in a lot of real-life situations, there is no hard number there, particularly when the evidence and data are so complicated that they can’t be quantified….”
I am one engineer who has attempted to use statistics to validate design approaches and failed, more than once, and enough to learn to be wary of all things statistically predicted. I recall hours and hours of learning then trying to apply “Design of Experiments” to my work.
My problem was (is) dealing with complicated data that can’t be quantified with certainty. Examples: root cause failure analysis of a structure, design standards for a flood control structure, structural safety margins for multiple cycles of varying high-frequency vibration. These all contain very complicated factors for analysis, with multiple variables and attached variable outcomes, and my final design always risks my professional competence. So I once sought the path of predictive statistics but found it did nothing to improve my design except making it more appealing to my customer. Until it wasn’t, because the statistics did nothing to change or improve my design, and thus nothing to solve the problem the design was intended to solve.
I hate uncertainty! I disdain the waste of over-specification to accommodate risk. I long for a more elegant way of reducing uncertainty, thus reducing waste while not increasing risk. Even with powerful computer-aided tools, design is still a design-build-test-fix, redesign-rebuild-retest process. It iterates, and statistics are not a shortcut to iteration, in my opinion.
I suggest that all non-professional statistician or mathematician newcomers start out with your excellent book “Breaking the Law of Averages”:
https://www.wmbriggs.com/public/briggs_breaking_law_averages.pdf