Statistics Compared To Ladies Of Ill Repute?

While Theoretical Statistics is (mainly) a decent albeit rather boring mathematical discipline (Probability Theory is much more exciting), so called Applied Statistics is in its big part a whore. Finding dependence (true or false) opens exciting financing opportunities and since the true dependence is a rare commodity many “scientists” investigate the false ones.

So says Victor Ivrii, a mathematical physicist at the University of Toronto. Ivrii was goaded, but gently, into making his remarks by reporter Joseph Brean of the National Post for Brean’s piece, “How one man got away with mass fraud by saying ‘trust me, it’s science.’”

Brean is joining in on the laughs we’re having after discovering that Diederik Stapel lied, cheated, and bamboozled his way through a slew of social psychology papers. Stapel got away with it for so long because he had a keen awareness of what his audience hoped to see. His “findings” include claims that advertising makes women feel bad about themselves, that white men are homophobic, that messiness induces racism, and so forth.

Stapel used statistics to “prove” theories which he and his colleagues hoped were true or that would play well with reporters anxious to write “Stunning new research shows…” The shock to the system after his shenanigans were discovered was so great that even the New York Times was forced to admit that the field of social psychology “badly needs to overhaul how it treats research results.”

Also common is a self-serving statistical sloppiness. In an analysis published this year, Dr. Wicherts and Marjan Bakker, also at the University of Amsterdam, searched a random sample of 281 psychology papers for statistical errors. They found that about half of the papers in high-end journals contained some statistical error, and that about 15 percent of all papers had at least one error that changed a reported finding — almost always in opposition to the authors’ hypothesis.

Stapel did statistics the easy way: he made them up. But that ploy is used (we hope) by only a minority. As the Times discovered, many others make mistakes or fall prey to the various misinterpretations which plague statistics, particularly frequentist, p-value-based statistics. Regular readers will know how easy it is for findings-mad, paper-crazy scientists to “prove” something using statistics.

Which brings us to Ivrii equating the great field of statistics to scientific sporting ladies. Is he right? Can we, by the proper application of money, get statistics to do whatever we want? (First one to quote Disraeli/Twain gets shot.)

As probative evidence, Brean quotes from Simmons, Nelson, and Simonsohn’s paper “False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant.” The group says, “In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not.” Brean summarizes “that modern academic psychologists have so much flexibility with numbers that they can literally prove anything. False positivism, so to speak, has gone rogue.”
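The flexibility Simmons and company describe is easy to demonstrate. Here is a minimal sketch, my own toy illustration rather than anything from their paper, of a single researcher degree of freedom: peeking at the data and stopping as soon as the p-value dips below 0.05. Both groups are drawn from the same distribution, so every “finding” is false.

```python
# Toy simulation of optional stopping: test, peek, collect more data,
# repeat. No real effect exists, so any "significant" result is false.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def peeking_trial(n_start=20, n_max=100, step=10, alpha=0.05):
    a = list(rng.normal(size=n_start))
    b = list(rng.normal(size=n_start))
    while len(a) <= n_max:
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            return True                      # stop and declare a "finding"
        a.extend(rng.normal(size=step))      # not significant? add subjects
        b.extend(rng.normal(size=step))
    return False

trials = 2000
hits = sum(peeking_trial() for _ in range(trials))
print(f"False-positive rate with peeking: {hits / trials:.1%}")
```

Run it and the rate comes out well above the advertised 5 percent, with no effect anywhere in the data.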

Critics point to the prevalence of data dredging, in which computers look for any effect in a massive pool of data, rather than testing a specific hypothesis. But another important factor is the role of the media in hyping counter-intuitive studies, coupled with the academic imperative of “publish or perish,” and the natural human bias toward positive findings — to show an effect rather than confirm its absence.
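Data dredging is just as easy to mimic. In this sketch, again my own made-up numbers rather than anyone’s real study, the outcome and all one hundred candidate predictors are pure noise, yet “significant” correlations surface at roughly the rate the p-value cutoff guarantees:

```python
# Dredge pure noise: correlate one outcome with many predictors and
# keep whatever clears p < 0.05. Nothing here measures anything real.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_predictors = 50, 100

outcome = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_predictors, n_subjects))

hits = [i for i, x in enumerate(predictors)
        if stats.pearsonr(x, outcome)[1] < 0.05]   # [1] is the p-value
print(f"'Significant' predictors found in noise: {len(hits)} of {n_predictors}")
```

On average about five of the hundred clear the bar, each one a press release waiting to happen.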

All too true. And then there’s the National Institute of Statistical Sciences’ Stanley Young and Alan Karr’s “Deming, data and observational studies”. Here’s their abstract:

“Any claim coming from an observational study is most likely to be wrong.” Startling, but true. Coffee causes pancreatic cancer. Type A personality causes heart attacks. Trans-fat is a killer. Women who eat breakfast cereal give birth to more boys. All these claims come from observational studies; yet when the studies are carefully examined, the claimed links appear to be incorrect. What is going wrong? Some have suggested that the scientific method is failing, that nature itself is playing tricks on us. But it is our way of studying nature that is broken and that urgently needs mending.

Part of what’s broken is statistics.

Brean’s article, incidentally, says something which is not true, “Science, at its most basic, is the effort to prove new ideas wrong.” Science is the effort to prove which ideas are right.

—————————————————————-

Thanks to Vincent Lee for suggesting Brean’s article.

14 Comments

  1. Speed

    Part of what’s broken is statistics.
    Part of what’s broken is the application of statistics.

  2. Applied statistics is, to me, at heart about how best to test ideas by observation and experiment. As GEP Box has argued, it can also be a catalyst for discovery and invention since the discipline of seeking verifiable ideas and ways to test them can contribute to a very productive environment for coming up with new ones.

    But clearly there are those who do not really want their ideas tested in this way, whether because they deeply want them to be true, suspect them to be false, or merely can’t be bothered with such distractions. Statistics as a subject is not broken merely because such people use its terminology to decorate their work and mislead their readers. It is simply a subject open to abuse. An abuse which, happily enough, can be exposed using statistical methods.

    What may be broken is the authority given by many to peer-reviewed journals. This automatic trust increasingly looks naive, although I suspect it is part of the generally high esteem in which science and scientists seem to be held. An esteem very effectively exploited by the IPCC, for example …

  3. Rockhound

    The correctness, or incorrectness, of statistical results almost always relies on one fundamental principle – you must ask the right question. Getting a good answer does not mean you asked a good question… even if it gets your papers published 😉

  4. GoneWithTheWind

    What Stapel did for social psychology and the field of statistics, the IPCC and pseudo-scientists have done for science in general. I am now skeptical of all science and the process of peer review. I once read Scientific American to see what was new and useful in science, but for a few years now I have considered it on par with supermarket tabloids. I doubt that science will be able to redeem itself in my lifetime. It has been hijacked by politics.

  5. pip

    “Brean’s article, incidentally, says something which is not true, ‘Science, at its most basic, is the effort to prove new ideas wrong.’ Science is the effort to prove which ideas are right.”

    I don’t think it’s fair to call what Brean says “not true.” It’s more a matter of semantics. The ultimate goal of science is to reach a better understanding of the processes at play in the world around us. If done with complete honesty (to one’s self as much as anyone), then the effort of proving an idea right is equivalent to the effort of proving it wrong. In reality (absence of 100% honesty), I think it’s safer that new ideas should be met with skepticism, and the effort should be to disprove them.

    Also, when it was “found” that “white men are homophobic”, I can’t help but wonder how many homosexual Caucasian men began questioning their own racial identity… :-p

  6. DAV

    Statistics don’t cause bad science any more than guns cause bank robberies. People have applied mathematics poorly before. The difference now is that it no longer seems to be a career breaker. You only need to observe the antics of a certain Penn State entity to see that.

    Eisenhower was right.

  7. JohnK

    First, I am most grateful for all our host’s recent discussions. He deserves better than us.

    To the post at hand: If ‘Science, at its most basic, is the effort to prove new ideas wrong’ implies Popperian falsifiability, then that’s wrong, as our learned host has previously demonstrated/yelled about.

    However, from other of my betters, I have learned that it ain’t necessarily necessary to the avoidance of relativism for science to absolutely, positively prove some ideas ‘right’. All that is strictly necessary to avoid relativism is to prove (actual, extant Theory A) MORE right than (actual, extant Theory B).

    Moreover, I have also learned to understand that the methodologically time-encased or historical version of science’s basic project presented in the previous paragraph may also be preferable on other grounds. (‘Methodological’ in the strict sense, meaning you never, ever remove yourself from blatantly historical comparisons of actual existing theories).

    To somehow wish to ‘escape’ from a history in which real Theory A is compared to real Theory B might not just be a fool’s errand. The timeless sterility implied in actually finding Theory A right, above all possible theories, including theories that exist in time that hasn’t even happened yet, might actually be kind of depressing.

  8. Gordon

    When I encounter a “scientific” paper I first study the title, the names of the authors, where they work, and who funded it.

    Sadly, this is often quite sufficient to tell me what I will find in the Abstract!

    I think we will be able to say that scientific publishing has become less tainted if we begin to see publications of failed experiments or studies too. This has significant practical implications, particularly in the medical field.

  9. Ray

    “Stapel did statistics the easy way: he made them up.”
    Looks like he should be working for the EPA. They are famous for decision-based evidence making.

  10. Agesilaus

    It is not science that is going astray, but rather fields that lay claim to being scientific but are not. Social ‘sciences’ are not scientific. They have no theoretical underpinnings and no predictive ability. At this point they are akin to eleventh-century alchemy. They are just too hard for us to currently deal with, especially with the third-class minds that are attracted to these fields.

    Physics and Chemistry are healthy along with parts of Biology that have been taken over by Chemists. Derivative sciences like meteorology and geology do well so long as they adhere to physics to describe their observations. When they wander from physics they also fall into error–climatology is a prime example.

  11. Joe

    In medical testing, the “double-blind” model is usually followed. The researchers participating in the study do not know who is getting the new medication versus the old medication or the placebo. The results are tabulated before the “big reveal.”

    Perhaps the social sciences, climatology, et al., would benefit from a similar approach. Statistical analysis could be carried out on the basis of showing P versus not-P by using masked data which doesn’t tell the statistician what the underlying data represent. [A sketch of such masking appears after the comments.]

  12. My heart plummets like a pelican considering the implications for Luis Dias here. But hope springs eternal.

  13. DEEBEE

    49er, 49er, 49er.
    As I was scanning down the comments, seeing the lack of a mention of LD’s General Theory of Relativity-(ism), I was formulating a similar quip. You beat me to it.

  14. DEEBEE

    Almost a generation ago, my thesis advisor schooled me in the distinction between an MS and a PhD; namely, in an MS you can say “you tried and it did not work”, but in a PhD you have to “make it work”. Perhaps this creates a built-in bias we learn early, and some (a lot?) then add other “tools” to make it so.
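A footnote to Joe’s suggestion in comment 11: a minimal sketch of how such masking might look in code. The helper below is hypothetical, invented for illustration and not an established protocol; column labels are scrambled before the statistician sees the data, and the key stays sealed until the analysis is locked down.

```python
# Hypothetical masking helper: rename and shuffle columns so the analyst
# cannot tell treatment from control until the analysis is finished.
import numpy as np
import pandas as pd

def mask_columns(df: pd.DataFrame, seed: int = 0):
    """Return (masked_df, key); key maps blind names back to real ones."""
    rng = np.random.default_rng(seed)
    cols = list(df.columns)
    rng.shuffle(cols)                          # hide which column is which
    key = {f"v{i + 1}": name for i, name in enumerate(cols)}
    masked = df[cols].copy()
    masked.columns = list(key)                 # analyst sees only v1, v2, ...
    return masked, key                         # seal `key` until the reveal

# Usage: masked, key = mask_columns(trial_data); hand only `masked` over.
```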
