William M. Briggs

Statistician to the Stars!

A Statistician’s Lament

This is about the best vision we should be claiming.

Regular readers (those wanting to catch up, click here) will know that the world is far too sure of itself, especially in areas which touch on human behavior, and most particularly in judgments of behavior gleaned through statistics.

I and a few like-minded folks have written many times of the over-certainty which is all but guaranteed using classical statistical methods. By “classical” I mean the ubiquitous frequentist p-value-centric “hypothesis testing” framework. But I also mean parameter estimation-focused frequentist and Bayesian methods.

Both testing and estimation take far too much for granted. Every analysis begins by assuming more than is warranted, a predicament explained by the impulsive rush to quantify that which is unquantifiable because it is felt that only quantification is scientific, and the analysis ends with a result in which too much credence is granted and too much faith is placed.

I won’t here rehearse the multitude of arguments and examples against classical approaches, nor will I outline the superior alternative approach (which many call “predictive statistics”). I will only state that this method is designed to better state the level of certainty one should have in a problem, and that this certainty is always less than under the traditional scheme.
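That the predictive answer is always less certain than the parametric one can be seen even in the simplest normal-data case. Here is a minimal sketch (the measurements are invented, and the t critical value is the standard table value for 7 degrees of freedom): the classical interval quantifies uncertainty in the mean parameter alone, while the predictive interval quantifies uncertainty in the next observation, and the latter is always wider.

```python
import math
import statistics

# Hypothetical measurements, made up for illustration
data = [9.8, 10.1, 10.4, 9.9, 10.2, 10.0, 9.7, 10.3]
n = len(data)
xbar = statistics.mean(data)
s = statistics.stdev(data)

t_crit = 2.365  # two-sided 95% t critical value, n - 1 = 7 degrees of freedom

# Classical 95% confidence interval half-width for the parameter (the mean)
ci_half = t_crit * s / math.sqrt(n)

# 95% predictive interval half-width for the NEXT observation,
# which inflates the spread by a factor of sqrt(1 + 1/n)
pi_half = t_crit * s * math.sqrt(1 + 1 / n)

print(f"parameter interval:  {xbar:.2f} +/- {ci_half:.2f}")
print(f"predictive interval: {xbar:.2f} +/- {pi_half:.2f}")
```

The ratio of the two half-widths is sqrt(n + 1), so with eight observations the predictive interval is exactly three times wider than the parametric one: same data, much less certainty.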

One would have wagered that, since the predictivist philosophy is superior, it would have been adopted. It has not been, because it is hard to “sell” somebody on the idea of less certainty. “Sure, you can use the classical stuff, which implies you should be 95% sure of your result. Or you could use the predictive method, which says your belief should be just higher than a coin flip, and maybe even less than that.”

Who the hell wants to buy a product which claims it will deliver less? The urge to “be sure”, to have a decision made for you by “objective” criteria is too strong. Besides, “everybody else” is using the other stuff. Why shouldn’t you?

Whereas classicists promise clear skies, predictivists forecast fog. Classicists offer resolution, predictivists blurred vision. The classicist wants to get on with it; the predictivist says hold on a minute. The illusion of certainty often trumps the promise of honesty.

It’s not like predictive methods haven’t made inroads. Casinos use them and always have. Automated data processors like license plate readers, handwriting recognition, and barcode scanners are so routine they don’t even seem like statistics, but they are.1 These triumph because they are simple. Saying whether a scribble is an “f” or an “h” is trivial next to explaining (say) why a woman has an abortion. Even voice recognition—a notoriously “difficult” problem—is tinker toys when compared to saying how much the economy will expand or contract next quarter, let alone in a decade.
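The “f” versus “h” decision really is just probability in miniature. A toy sketch, with entirely made-up likelihoods for a single invented feature (whether the scribble shows a crossbar), reduces the recognizer to a one-line application of Bayes’ theorem:

```python
# Made-up likelihoods: probability a scribble shows a crossbar,
# given which letter the writer intended
p_crossbar = {"f": 0.9, "h": 0.1}
prior = {"f": 0.5, "h": 0.5}  # equal prior belief in either letter

def posterior(crossbar_seen):
    """Bayes' theorem over the two candidate letters."""
    likelihood = {c: (p if crossbar_seen else 1 - p)
                  for c, p in p_crossbar.items()}
    unnorm = {c: likelihood[c] * prior[c] for c in prior}
    total = sum(unnorm.values())
    return {c: v / total for c, v in unnorm.items()}

# A crossbar is seen: belief shifts heavily toward "f"
print(posterior(True))
```

With richer features the bookkeeping grows, but the logic never changes; contrast that with writing down the likelihoods for why a given economy grows or shrinks.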

Yet there is no shortage of economists (folks somebody once called “statisticians without personalities”) willing to tell you exactly what the GDP will be on October 10th, 2022. Just as there are a plethora of “soft” scientists convinced that their theory of ___ism (where the blank may be filled in with the political concern of the day) is verified by a regression model—which the press will call “a computer model.”

Some say “soft” scientists—educationists, sociologists, psychologists, and so on—are envious of the prestige of mathematicians and physicists, the two professions (in that order) which can rightly boast of confidence in their results. The certainty “quants” enjoy, as we talked about before, comes from these professions having picked easy subjects.

Saying why a proposition is true because certain others are, once you’ve identified the new proposition, is a matter of mental elbow grease. And explaining why a certain particle moves in a field where all the variables are precisely known and controlled takes almost no brain power. Not compared to saying what a person—even worse, what people—will do and why he or they do it six months from now.

I don’t think it is envy, but habit which drives the “soft” scientist (or other typical statistics user) to his over-confidence. Everybody does the same thing he is doing, and from that he develops his confidence. “It can’t be wrong if so many people are winning so many grants and publishing so many papers.” It’s not easy to change a custom, especially a beloved one.

—————————————————————–

1Yes, it’s all statistics, that is, all probability, even though it sometimes goes by other labels and is done by (say) computer scientists. See this post for an explanation. Or see many posts on the Classic Posts page.

12 Comments

  1. It helps to separate academic and commercial consumers of statistics.

    Academics are not paid to be correct. They’re paid to publish. Overconfidence means publications and success. Humility means fewer publications and failure.

    People outside the ivory tower also desire certainty and have a tendency to believe what they want to believe. However, the incentives are reversed. Clear-eyed assessment of uncertainty can save your company, and overconfidence can be fatal. Business people are often more scientific than scientists because they are held accountable for their decisions. (At least when they’re not shielded from consequences by politics, but that’s another topic.)

  2. Habit plays an important role, no doubt, but more fundamentally, soft scientists hardly get any real feedback: their claims are not likely ever to be falsified; the worst that can possibly happen is that they are later “put in context”. Whereas nature and logic are unforgiving and ruthless. Experiencing that, mathematicians and physicists tend to develop a cautious attitude which soft scientists are not likely to acquire.

  3. One thinks of Long-Term Capital Management and the Black-Scholes model, which has a statistical core.

  4. Oh but it is soooo warm and fuzzy when you can say “the computer says….” That way you don’t have to accept responsibility for what you say. If what the computer says turns out to be wrong, the programmer made the mistake or the program is too hard to understand. It’s rather like saying the cause of the crash was “pilot error” when the pilot was killed in the crash. There is no one who can prove it wasn’t.

    Reason, Reality, and Logic? All piffle. What matters these days is what you can get away with. All the while civilization is in a tailspin and the pilot (a.k.a. Ethics) is unconscious. Quite obviously it is “pilot error”. Unfortunately, there will not be much worth having when civilization crashes.

    If you don’t have the qualitative aspects clearly connected to reality and fully consistent with all the other qualitative aspects you know are clearly connected to reality, any analysis based upon quantitative computation will be right only by accident. Since there are massively far more ways to be wrong than right, it is quite sensible to assume all such results are wrong. But…but…but the computer said that…. Meanwhile, civilization continues its tailspin ever more rapidly downward.

  5. My “theory” (using the term somewhat loosely) is that science became certain of itself because it was the only way to compete with religion. When evolution was attacked by religion, science had to become its own “god” and elevate itself to a level of certainty that was not warranted by facts or statistics, but necessary to sell the theory. Then there was climate change, medications, disease–for all of these science had to be “god” and certain of itself to make the sale. Thus, science now claims consensus and certainty in much the same way any religion does.

  6. You repeat the oft spoken claim that physics deals with subjects which are easier than those dealt with by sociology etc. This claim does not stand up to scrutiny as it is not based on anything objective. There are many topics in physics that are too difficult to understand due to their inherent complexity just as there are in sociology. Where do you find a common degree of complexity in the two subjects so that you can make a comparison? What criteria would you use?

    Maybe the claim is that physics has produced more useful results than sociology, but it is not clear that this is due to a difference in complexity or ease of subject matter. For example astrology has not produced any useful results, but few claim that this is because it is a harder subject than physics. Note that I am not making a comparison between astrology and sociology or implying that the latter is not a legitimate subject of study.

    Different subjects might allow a differing degree of depth of study (however defined) but this does not obviously translate into a difference of degree of difficulty. Subjects that allow only a superficial degree of study can be said to be easier than those that allow in-depth analysis. To say otherwise is to make the claim that you can define what would be required to completely understand reality.

  7. This post, along with quite a few others, has a clarity that borders on brilliance. The post, with the unfortunate title, “Anthropogenic Forcing Not Significant? …” is another example. I sent it to my daughter and told her to ignore the title but to read the main body of the post ten or more times and once she had it would save her much grief in her professional life.

    So thanks for the clarity.

    From my experience many people have trouble admitting to uncertainty, statistical or otherwise. They would rather claim to know something or to be certain of something than admit that they don’t know or are uncertain.

    I’m not sure why this is so but it seems to be so.

  8. Ye Olde Statistician

    20 January 2013 at 1:54 pm

    Physics deals with objects; sociology with subjects. It is much harder to discern laws when subjects have wills of their own. Cf. Astrology, which in its mechanical parts, is rigorous science – in fact, it is observational and mathematical astronomy – but in its applications to the behavior of people fails miserably.

  9. Lionell Griffith

    20 January 2013 at 4:03 pm

    It is very easy to be certain and it is even easier to make predictions. The challenge is in being right. Being right takes a lot more work, which few seem willing to do.

  10. Sander van der Wal

    21 January 2013 at 2:58 am

    Even if some economist is willing to tell you what the GDP on a certain date in the future is, it is implied that that number is computed in accordance with some model. And if you ask, he will tell you that. The economist is very sure that the computation is done correctly. He will also tell you that he has no idea whether the GDP will be that number on that date. But you want a number, so you get a number. And because you believe that a number computed by a computer is better, you get a number computed by a computer.

    This way the economist proves again and again that given a demand, the market will respond by filling that demand.

    I wish Physics was that easy.

  11. I really think that you undermine your case here when you attack “classical statistics” itself rather than its abuse. Such abuse is definitely all too common in many fields but it is just not true that “*Every* analysis begins by assuming more than is warranted”(emphasis added). And on the other side of your argument the superior alternate methods you advocate are just as susceptible to abuse as the old – perhaps more so as they are arguably harder to explain than the classical theory and so people are more inclined to take the word of an “expert” on faith rather than really understand the assumptions that it comes from.

  12. Matt, I don’t know if you’ve seen “Zero Dark Thirty”, but there’s a scene near the end where the CIA Director (played by James Gandolfini) is asking the CIA analysts what the probability is that UBL is living in a particular compound in Pock-i-stahn.

    I won’t ruin it for you, but it sums up the dilemma you describe perfectly. It’s hilarious, too.


© 2014 William M. Briggs
