British Psychologists Cry “BS!” Over Research Practices. Or, Die P-Value, Die Die Die

This is a bull. And what comes out of this animal?

In The British Psychological Society’s official organ The Psychologist, two gents, Tom Farsides and Paul Sparks, call BS on standard research practices.

There is a worrying amount of outright fraud in psychology, even if it may be no more common than in other disciplines. Consider the roll call of those who have in recent years had high-status peer-reviewed papers retracted because of confirmed or suspected fraud…It seems reasonable to expect that there will be further revelations and retractions.

That’s a depressing list, but out-and-out lies in psychology may be the least of our worries. Could most of what we hold to be true in psychology be wrong (Ioannidis, 2005)? We now turn to several pieces of evidence to demonstrate compellingly that contemporary psychology is liberally sprayed with bullshit…

Leading this evidence is…drum-roll, please…“Lies, damned lies and statistics”.

Almost all published studies report statistically significant effects…

Which is why the term “statistically significant” ought to be banned, purged, stabbed through the belly, gutted, and left to rot on the street.

But maybe this isn’t harsh enough. Statistical “significance” is pure magical thinking, and nothing else. Results which tout it aren’t science, they’re magic.

There. Is that harsh enough? No scientist wants to be accused of being irrational. And speaking of magic, ladies and gentlemen, I give you…more drums, please…The Magic Number!

So-called ‘p hacking’ also remains rife in psychology. Researchers make numerous decisions about methods and analysis, each of which may affect the statistical significance of the results they find (e.g., concerning sample size, sample composition, studies included or omitted from programmes of research, variables, potential outliers, statistical techniques). Simmons et al. (2011) vividly illustrate this by reporting a study that ‘revealed the predicted effect [that] people were nearly a year-and-a-half younger after listening to When I’m 64 than they were after listening to a control group tune that did not mention age’…

If that isn’t asinine enough for you, I don’t know what is. Maybe the hundreds of similar “findings” you and I, dear reader, have dissected over the years.

For example, evidence is increasingly revealing that alarming numbers of psychologists are willing to admit having engaged in questionable research practices…Many published studies have selectively included or omitted evidence to support claims that authors must know are far from accurately representing the truth, the whole truth and nothing but the truth…

Unconvinced readers can discover for themselves how easy it is to ‘Hack your way to scientific glory’ by visiting an online tool and selecting different sets of variables from a genuine database to find (or ‘fail’ to find) a significant relationship between the US economy and a particular party being in office.

I’m stealing that line: Hack your way to scientific glory!
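The inflation the authors describe is easy to reproduce for yourself. Here is a minimal sketch (my own illustration, not from the paper): simulate “studies” in which no effect exists at all, then compare an honest single pre-specified test against an analysis that shops among ten outcome variables and reports whichever comes up “significant”.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def noise_p(n, rng):
    """p-value from testing the mean of pure noise against zero
    (large-sample z-test; the null hypothesis is true by construction)."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return two_sided_p(mean / (sd / math.sqrt(n)))

rng = random.Random(42)
trials, n, outcomes = 2000, 50, 10

# Honest analysis: one pre-specified outcome per "study".
honest = sum(noise_p(n, rng) < 0.05 for _ in range(trials)) / trials

# Hacked analysis: try 10 outcomes, declare victory if ANY is "significant".
hacked = sum(
    any(noise_p(n, rng) < 0.05 for _ in range(outcomes))
    for _ in range(trials)
) / trials

print(honest)  # near 0.05, the advertised error rate
print(hacked)  # near 1 - 0.95**10, i.e. roughly 0.40: eight times worse
```

Remember, there is no effect anywhere in these data; every “finding” is noise. Give yourself enough outcome variables, subgroups, or outlier rules, and the wee p will come.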

Many researchers and reviewers simply do not have the methodological or statistical expertise necessary to effectively engage in science the way it is currently practised in mainstream psychology…Scientists and reviewers also increasingly admit that they simply cannot keep up with the sheer volume and complexity of things in which they are allegedly supposed to have expertise…

That’s because there are too many scientists and too much science, most of it now very poor grade stuff. Having to sort through it all sucks up time that would be better spent doing something useful. Solution: massively cut back on government funding of science.

Who wants to bet that this will happen? I mean, before the collapse.

Few successful attempts have been made to rigorously replicate findings in psychology. Recent attempts to do so have suggested that even studies almost identical to original ones rarely produce reassuring confirmation of their reported results…

Why? P-values, of course, and all the other standard sloppiness already discussed.

And now comes my favorite line in the paper (in bold in the original):

The system is screwed

To which we can only say Amen. Preach it.

Most prestigious journals also have a strong preference for novel and dramatic findings…

Whatever brings the press in, eh, boys? And the money. Don’t forget the money. The authors didn’t:

[I]t is in the individual researcher’s best economic interest to downgrade the importance of truth in order to maximise publications, grants, promotion, media exposure, indicators of impact, and all the other glittering prizes valued in contemporary scientific and academic communities…

I wept when I read this. Tears of joy. Among their solutions, these:

Psychologists and their institutions should do everything within their power to champion truth and to confront all barriers to it…

Be honest. Championing truth requires honesty about ignorance, inadequacies, and mistakes…Denying flaws helps no one, especially if our denials are accompanied by poorly received assertions of invincibility and superiority…

Important as they are, experiments are neither necessary nor sufficient for empiricism, scholarship or ‘science’…

Experiments within psychology are usually (at best) little more than demonstrations that something can occur. This is usually in service of rejecting a null hypothesis but it is almost as often misreported as suggesting (or showing or, worst of all, ‘proving’) something much more substantial — that something does or must occur.

Well, universities are no longer the best places to search for and defend truth, so if psychology wants to prosper, it will have to flee the confines of political correctness.

The philosophical point about the value of experiments is spot on. Trumpet it everywhere, neighbors and friends. The epistemological point about what experiments show is also correct and important. Repeat it to yourself often until it sticks.


  1. From the paper:
    4. … To study important phenomena well, we need first to identify what they are and what central characteristics they have (Rozin, 2001). To study things thoroughly, we need to identify processes and outcomes other than those derived from our pet ‘theories’. Evaluating the research literature may well require skills different from those that have been dominant during much of its production (Koch, 1981). In particular, we have found particularly effective accurately describing others’ procedures and outcomes in ordinary language and then examining how well these justify the usually jargonistic ‘theoretical’ claims supposedly supported by them (cf. Billig, 2013).

    I find this a most interesting suggestion. Why shouldn’t there be a formal division of labor in research as in most other complex endeavors? Rare is the person who possesses all the skills to do a complete job excellently in all the parts. Those with the appropriate talents and experience ought to frame the questions; others design the investigations; still others conduct and describe them; and finally independent reviewers criticize the results which then cycle back to the question framers to correct and improve the advance of knowledge.

    I’ll leave incentivising and funding implications to others more skilled than I am at figuring out the logistics.

  2. Gary, but but but it’s not science if you don’t use big words! “the professor had students fill out bubbles on a test” is not nearly as scientific as “an instrument was used to measure students’ rate of x”.

  3. “it will have to flee the confines of political correctness” This translates to “it’s doomed”. You’re talking about highly social researchers who value public adoration. They often seem to feel like messiahs, bringing us the truth about who we are. Not likely they can withstand being on the outside (of PC), even if that’s where the truth is.

    Gary’s comment on division of labor might help. However, we could end up with something similar to global warming, where scientists run the tests, collect data, make models and then their evangelists spread the word. If we cut out the evangelizing…..and the politics. Could work—should help anyway.

  4. Oh, yeah?

    Well, to combine two issues–“Science” has spoken about bathroom use, and it’s supported by wee p-values (or something). The results? You’re a bigot!

    A Time editor waved his hands, recounted some anecdotes about cross-dressers in Samoa, and wrapped it all up in the ultimate “Scientific” belief-system, evolution. All of this Sciencey babbling concluded with the Scientific conclusion that you’re a bunch of bigots, shut-up and let that guy in a dress into the girls room! Science has spoken!

  5. Kent,

    Good find. Evolution also “causes” or “creates” murderers, rapists, pedophiles, woofies, car alarm inventors, and hip hop musicians. Therefore all these things are good.

  6. We are maggot-infested with quacks, frauds, and charlatans posing as “experts”, not just in psychology but in every scientific field.

    Quackery is king. Science is dead, dead, dead. The university has crumbled to dust, the children are starving, and society has devolved into madness and savagery.

  7. Freude, schöner Götterfunken, …

    But our joy will likely be short lived. This makes perfect sense to the folks who hang out on wmbriggs, junkscience, and numberwatch (to most of us anyway).

    I have some serious doubts about main line skeptics. Science is science. People who oppose science are deniers.

  8. Sheri,
    Of course there “could” be problems with a division of labor. Industrialization isn’t the answer to everything. Think of how Ford’s assembly line innovation made auto workers just cogs in a grander machine.

    However, we already have something of an informal division of labor now, but it isn’t working very effectively to advance knowledge. The present operation: politicians set the agenda, researchers chase the grants available in a restricted political arena, ill-informed journalists publish stories for the politicians to advance their agenda. It’s the same basic structure I proposed, just one co-opted for a different purpose.

  9. THERE’S THIS (about fraud):

    “There is a worrying amount of OUTRIGHT FRAUD in psychology, even if it may be no more common than in other disciplines.” (EMPHASIS added)

    AND ALSO THIS (about quality control, or lack thereof):

    “Scientists and reviewers also increasingly admit that THEY SIMPLY CANNOT KEEP UP with the sheer volume and complexity of things in which they are allegedly supposed to have expertise…
    “That’s because there are too many scientists and too much SCIENCE, MOST OF IT NOW VERY POOR GRADE STUFF.” (EMPHASIS added)

    Literally yesterday the problem was, per Briggs, with “science.”
    Today, the problem is correctly presented as lying with some people, who don’t do “science” right.

    Huge distinction, but certainly a proper distinction with a substantial difference.

    (Note: The very “science” so routinely slandered is the very same “science” that brought to light the misbehavior of some of the self-proclaimed practitioners and the inability of others to do it with the desired thoroughness — an example of the [eventually] self-correcting intrinsic nature of science done right)

    That looks like real progress!!!

    ….and then we read the comments & realize it was illusory… no acknowledgment of the ‘bad science’ that even credible scientists reject (or have never accepted). Any bad example remains a basis to condemn the entire discipline…

  10. The reward system in academia practically incites fraud. Every faculty member is evaluated according to how much research money he obtains and how many refereed publications he has, especially publications in high impact journals, i.e., those with the most citations.

    The actual quality of the work is never an issue. In this age of extremely narrow specialization, promotion and tenure committees (I chaired one for over 10 years) and the general faculty are not competent to judge candidates’ work, and so they use the prestige of the funding agencies and journals as a surrogate indicator.

    University administrators and faculty and students (especially graduate students) benefit from being associated with high prestige institutions, and they have no interest in reform.

  11. bob sykes,
    I suspect the reward system may be changing (albeit slowly) for some of the USNews-ranked 3rd and 4th tier schools that aren’t so wed to high-powered research. The confluence of several factors — demand for institutions to demonstrate that students actually are learning, the squeeze on federal dollars, the increasing real expense of college, the push for everybody to go to college — is causing schools to emphasize teaching more than in the past. At my institution, we created an Office of Teaching and Learning specifically charged with helping both students and faculty to succeed in the primary pedagogical mission. Research is still a significant budget factor, but I see the culture changing, and not only among the Arts & Humanities faculty who didn’t attract many research dollars anyway, but also among some of the STEM faculty. Not everybody can latch onto the prestige bandwagon, so they need to look elsewhere for providing quality.

  12. Brad Tittle: Those who oppose bad science are just very rational, concerned people. Those who oppose good science may be deniers. Figuring out which is which is the challenge.

    Gary: I wasn’t disagreeing with the division of labor, just commenting that there are challenges there also. When I was in college, we were divided into groups of six and assigned a research subject for the semester. We did the paper by having each person do what they did best—collect the research, do the math, type the paper. It worked very well. I think I’m just skeptical of making it work large scale. Perhaps I’m overly skeptical.

    Ken: How long do we wait before stepping in and complaining—until there are 100 good scientists left? Fifty? Do we just let the bad go on and pretend it’s not happening? Like with gay marriage, illegal immigration, bathroom laws? Perhaps condemning the whole discipline is not fair, but at this point, praise gets granted to all the science if you praise one. Only the praise is heard—which may be reason enough to leave it out. Just saying.

  13. This is good news. It surprises me that it has taken so long.
    The beginning of a virtuous circle, I hope. In physiotherapy, huge amounts of money could be saved, and that is what will drive it. Not just the funding of the ‘science studies’ but the provision of treatment which is little better than placebo and rests on the ‘science’ that proves its worth. I have been at odds with most, not just many, of my colleagues for years over this.

    I used to have a saying about ‘passive treatment modalities’. In 2011 I worked at a private hospital, a lovely place, and I enjoyed it very much, but in my first two weeks I nearly left because I “wasn’t doing enough passive treatment on patients” and some only had two sessions when their insurance or the contract covered four or six.
    When I said that I would finish my week and leave, the supervisor rang from holiday and said “nobody is given treatment they don’t need at this hospital”. Here’s hoping for more truth about what works and what doesn’t.

    It really is a problem, and it shouldn’t be, because there’s no shortage of work, no shortage of challenges. Nobody actually wants to do the work, though; everybody just wants to talk about it. Hence ‘studies’ and ‘research’. You don’t have to go near a patient, you’ll be paid more, respected more, and do less work. (I’m speaking for my field, not trying to insult all researchers or all scientists.)

  14. Using the phrase ‘Hack your way to scientific glory’ makes it seem like the problem could be a few bad apples cheating in order to gain outsized rewards.

    My impression is that it is more ‘Hack your way to scientific employability’. You do it or you quit. Those are the choices.
