How Paranormal Research Differs From Normal Research

Cornell’s Daryl Bem case is instructive. He’s an academic who has published several notable peer-reviewed articles claiming that ESP (in several different versions) is real. Trouble is, despite the prominence of the journals and the peer review, almost none of his peers believe his results.

They publish his papers anyway because the papers meet the statistical criterion of success, which is to say they contain wee p-values: p-values less than the magic number, the conventional 0.05. Bem always finds, at least in the papers he submits for publication, publishable p-values. In his latest work he touts, “all but one of the experiments yielded statistically significant results.” This is code for “p-values less than the magic number.”
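
To see what clearing the magic number looks like in practice, here is a minimal sketch in Python. The setup is hypothetical, invented for illustration and not taken from Bem’s papers: subjects guess which of two hidden positions holds an image, so chance performance is 50%, and a one-sided binomial test asks whether the hit rate beats chance.

```python
# A hypothetical ESP-style guessing experiment (numbers invented for
# illustration): two choices per trial, so pure chance gives 50% hits.
from scipy.stats import binomtest

hits, trials = 527, 1000

# One-sided exact binomial test: is the hit rate above chance?
result = binomtest(hits, n=trials, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.3f}")  # roughly 0.047

# 0.047 < 0.05: a wee p-value, hence "statistically significant,"
# hence publishable, whatever one thinks of ESP as the explanation.
```

Change the 527 to 520 and the p-value climbs above 0.05; by the usual criterion the result becomes unpublishable, though the evidence has barely moved.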

This sets up a conflict in the mind of the researcher. Small p-values are thought to be the proof definitive. Yet it is clearly absurd, or at least extraordinarily unlikely, that people can read minds through time and over vast distance, or that they can, by grimacing and grunting, bend spoons using only the power of thought.
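
Part of the absurdity is easy to demonstrate: wee p-values show up reliably even when nothing but chance is at work. Here is a quick simulation, again only a sketch with made-up settings, in which no subject has any ability at all:

```python
# Simulate many experiments in which ESP certainly does not operate:
# every "subject" guesses at pure chance. Count the wee p-values.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n_experiments, n_trials = 200, 100

wee = 0
for _ in range(n_experiments):
    hits = int(rng.binomial(n_trials, 0.5))  # chance-level guessing
    p = binomtest(hits, n_trials, 0.5, alternative="greater").pvalue
    wee += p < 0.05

print(f"{wee} of {n_experiments} chance-only experiments were 'significant'")
# Expect a bit under 5% (the binomial being discrete): wee p-values
# arrive on schedule even when the hypothesis is certainly false.
```

A wee p-value, by itself, cannot distinguish telepathy from luck; something other than the p-value must decide.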

The obvious answer—ignore the small p-values and substitute for them a stronger form of evidence—never occurs to the skeptical researcher. Well, it couldn’t really, because the researcher has never been taught any other form of statistics. And that is the fault of people like me.

But what the researcher can do, and does, is to question Bem’s experimental protocols. He picks these protocols apart. He shows how other, non-paranormal explanations are just as, or even more, likely to have caused the results. He shows where “sensory leakage” could have crept in and masqueraded as extrasensory perception.

In short, disbelieving the theory behind Bem’s statistics, the skeptical researcher picks Bem’s experiments apart. Or he ignores the statistics altogether, knowing that some explanation other than the paranormal must exist. And all this is good.

Put it another way. The researcher reading Bem’s papers acts as a scientist should, asking himself, “What else could have caused these results?” There must be an end to this question, of course, for it is always possible that an infinite number of things could have caused a certain set of results. But there will, given the evidence available, be a finite list of plausible causes which should receive scrutiny—and which are preferable explanations over ESP.

Now wouldn’t it be nice if researchers in these “softer” fields did this routinely? Not just for extraordinary claims like Bem’s, but for all claims, especially preposterous ones (we’re sick of these examples, I know) like the claims that exposure to the American flag turns one into a Republican, that exposure to a 4th of July parade does the same, or that fMRIs can tell the difference between Christian and non-Christian brains.

These absurd hypotheses never receive the scrutiny Bem endures, not because the claims are any more likely, but because they are more likely to match the political and emotional biases of researchers. About the fMRI they might think: Christians are different from us, aren’t they? They at least believe different things. Therefore, their brains must be wired differently, such that the poor souls were forced into believing what they do. Besides, just look at those small p-values! The results must be true.

So today a toast to alternate explanations. May they always be sought.

Categories: Statistics

5 replies »

  1. I can bend, and have bent, spoons using the power of thought.

    My thoughts direct my hands to grasp the spoon and bend it. I take this as proof that mind can surely affect matter, and that it is, in fact, a commonplace phenomenon.

    More to your point, surely the heart of the problem is the “procedural” approach to teaching almost all mathematics. We’re programmed from a young age to arrive at the right answer, and the focus on the mechanics overwhelms the understanding of the model.

    In the case of researchers with only a course or two in statistics under their belts, the small p-value is seen as the Holy Grail — the “right answer”, and in many cases it is clear that the researchers have very little insight into the meaning of their statistical manipulations (or they do and choose to ignore it). If you’re a hammer, everything looks like a nail.

    What mystifies me is the willingness to “do research” and then fumble the most important part — understanding the outcome. Actually, it doesn’t really mystify me: the imperative is to publish, not advance understanding. These folks are going through the motions so they may keep their jobs and, most importantly, keep the grant money flowing. A tiny p-value is an important hurdle on the way to publication, regardless of the vacuity of the statistical test applied.

  2. Dr. Briggs,

    You mentioned “other form of statistics” in contrast to p-value usage; how would a non-mathematician identify usage of these other forms?

    Do these other forms provide results that are more reliable than p-value usage coupled with an assessment of alternate explanations? 

    Why are these forms not commonly in use? For example, are these forms more difficult to interpret and validate?

    Finally, how would you recommend promoting these other forms in the wider technical community?

    Thanks!

    V/r.

  3. I expect any sort of research that questions an established research paradigm will be heavily scrutinised. ESP would insert cracks into the physicalist program that has been dominant since the time of Descartes.

  4. Is Dr Bem’s problem that he was over-reliant on his p-values? Or is his problem that he has failed to consider what may be driving his results other than ESP? It seems that his problem is more the latter. I am sceptical that better use of statistics would solve the problem.

  5. Just a point, not statistical.

    The history of parapsychology has created a wealth of research, much of which shows amazingly positive statistical results for ESP, clairvoyance, and many other strange and mysterious things (probably the existence of dragons).
    Without exception, these studies, when subjected to rigorous scientific conditions, have failed to replicate their results. Strangely, when a professional magician is brought in to oversee the experimental conditions, the phenomena disappear as well, almost as though the forces of the “other side” are scared of mere fake trickery? Often (I’m ashamed to reveal) there has been a bit of data fiddling on the researchers’ part, leading to some embarrassing revelations.

    Love of theory perhaps?
