Statistics

Can fMRI Predict Who Believes In God? Part I

Read Part I, Part II, Part III, Part IV, Part V, Part VI, Part VII

Did you know that “regions” of your brain “light up” when you think about Santa Claus or God? And that these “regions” are thought to be “associated” with various behaviors like excess emotion, schizophrenia, and other, gentler forms of nuttiness?

It’s all true. Scientists regularly stick people’s heads inside machines, ask the people to think of this or that, and then watch as the machines show “regions” of the brain glowing orange. The scientists then employ statistical methods guaranteed to generate over-confidence, but which allow the scientists to write papers which contain broad, even bracing, claims about all of humanity and of how everybody’s brain functions.
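To see why voxel-by-voxel testing breeds over-confidence, here is a minimal sketch (my own construction, with made-up but plausible numbers; it is not the analysis from any particular paper): generate pure noise for tens of thousands of "voxels" across two groups of subjects and count how many clear an uncorrected significance threshold.

```python
# Illustration only: pure-noise "voxel" data still "lights up" when
# many uncorrected tests are run. All numbers here are hypothetical.
import random
import math

random.seed(1)

def t_stat(a, b):
    """Two-sample t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / (sp * math.sqrt(1 / na + 1 / nb))

n_voxels = 50_000      # rough order of magnitude for a whole-brain scan
n_per_group = 15       # small groups, as is typical in imaging studies
threshold = 3.674      # approx. t(28) critical value for two-tailed p < 0.001

false_hits = 0
for _ in range(n_voxels):
    group1 = [random.gauss(0, 1) for _ in range(n_per_group)]
    group2 = [random.gauss(0, 1) for _ in range(n_per_group)]
    if abs(t_stat(group1, group2)) > threshold:
        false_hits += 1

print(false_hits)  # dozens of "significant" voxels from pure noise
```

With roughly 50,000 voxels tested at an uncorrected p < 0.001, on the order of fifty voxels are expected to "light up" by chance alone, which is why corrections for multiple comparisons matter so much in imaging work.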

This sort of thing is all the rage, so much so that hardly a week passes without new headlines about what secrets the Whitecoat Brigade have uncovered in the brain (this week: Study shows how scientists can now ‘read your mind’).

It is therefore of great interest to us to examine this phenomenon and see what it means. I have chosen one paper which I believe is representative of the worst excesses of the field. My goal is to show you that the conclusion, as stated by the authors, and one the authors believe they have proved, is actually far from proved, is in fact scarcely more likely to be true given the experiment than it was before the experiment, and that what was actually proved was how likely scientists are to find their own preconceptions in their data.

Warning: I mean this critique to be exhaustive, so while I run the risk of exhausting your patience, my excuse is that the length of this piece is necessary to do a thorough job.

fMRIs and God

The paper is “The neural correlates of religious and nonreligious belief,” published in PLoS One by Sam Harris and others in association with UCLA’s Staglin Center for Cognitive Neuroscience.

The idea is plausible: perhaps different areas of the brain are used when thinking “religiously” than when thinking non-religiously. After all, different areas of the brain are activated when, say, playing baseball versus working through mathematical proofs. All we have to do is hook people up to functional magnetic resonance imaging (fMRI) machines and their beliefs will be revealed to us.

Of course “thinking religiously” is a term so broad that it would be next to impossible to distinguish it from “thinking non-religiously”, so in order to have any hope of answering the main question these terms have to be rigorously quantified. How about dividing people into two groups, those who agree that “Santa Claus is a myth” versus those who hold that “Santa Claus really exists”?

No, I’m only kidding. Though Harris did ask his experimental charges these questions, as a way of calibrating his newfangled phrenological device. His actual definition was to divide the sheep and goats via statements like “The [Christian] Biblical God really exists” versus “The [Christian] Biblical God is a myth.”

Both sheep and goats were asked a bunch of questions about the Christian religion, and their brains mapped via fMRI. Differences between the brains as groups were then used to claim this result:

A comparison of both stimulus categories suggests that religious thinking is more associated with brain regions that govern emotion, self-representation, and cognitive conflict, while thinking about ordinary facts is more reliant upon memory retrieval networks…Our study compares religious thinking with ordinary cognition and, as such, constitutes a step toward developing a neuropsychology of religion.

Let’s examine the study and discover whether these bold statements are backed by the evidence.

Harris gathered 54 people who were “free of obvious psychiatric illness or suicidal ideation” (which does not imply free from psychiatric illness or suicidal ideation). The 54 were called on the phone and asked questions which “allowed us to isolate the variable of religious belief, in an effort to admit only dedicated Christians and nonbelievers into the protocol.” It would have been interesting to know what these questions were, but Harris never says (perhaps they were the same as those eventually used in the experiment).

The culling began:

Once we had two groups of subjects (Christians and Nonbelievers), we attempted to balance these groups with respect to 1) general reasoning ability, 2) age, and 3) years of education. We also sought to exclude all subjects who exhibited signs of psychopathology…

Forty of these participated in the fMRI portion of our study, but ten were later dropped, and their data excluded from subsequent analysis, due to technical difficulties with their scans (2 subjects), or to achieve a gender balance between the two groups (1 subject), or because their responses to our experimental stimuli indicated that they did not actually meet the criteria for inclusion in our study as either nonbelievers or committed Christians (7 subjects).

Another way to state this is that they hand-selected the people so that they fit the team’s preconceptions about what Christians should be—but did not pay so much attention to what non-Christians were (more on this later). The hope is that all this early manipulation would not impinge upon the results or the general applicability of the findings. Hope is a wonderful thing and accomplishes much, so we should not disparage Harris and his brothers for relying on it. Scientists are people too.
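The worry about culling can be made concrete with a toy simulation (entirely my own construction and hypothetical numbers, not the paper's procedure): draw everyone from a single population, let a "brain measure" be weakly correlated with the screening questionnaire, then keep only subjects who clearly "meet criteria". The curated groups will show a brain difference that does not exist in the population.

```python
# Illustration only: hand-selecting subjects to fit a label manufactures
# a group difference downstream. All numbers here are hypothetical.
import random

random.seed(2)

# One homogeneous population: a screening score (questionnaire answers)
# and a "brain measure" weakly related to it, with no group structure.
people = []
for _ in range(1000):
    screen = random.gauss(0, 1)
    brain = 0.3 * screen + random.gauss(0, 1)   # weak, uniform link
    people.append((screen, brain))

# Hand-selection: keep only subjects who clearly "meet criteria".
believers = [b for s, b in people if s > 1.0]
nonbelievers = [b for s, b in people if s < -1.0]

gap = (sum(believers) / len(believers)
       - sum(nonbelievers) / len(nonbelievers))
print(round(gap, 2))  # a clear "brain" difference between curated groups
```

The point is not that Harris et al. did exactly this, only that exclusions correlated with the screening variable can manufacture group differences in whatever is measured afterward, so the seven subjects dropped for not "meeting criteria" deserve scrutiny.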


  1. Harris, et al., are (were?) effectively attempting to photograph a concept — a daunting task at the outset and unlikely to happen in a simple experiment. In reality though, they attempted to photograph the belief in the validity of the concept. Something far more difficult. It’s no wonder they stacked the deck. To date, no one has been able to photograph simple concepts like “It’s too cold in here”, let alone the belief in a higher one. And yet: “However, none of these [previous] studies were designed to isolate the variable of belief itself, or to determine whether religious belief differs from ordinary belief at the level of the brain.” As if this one was specifically designed to do so.

    The number of variables to be controlled is enormous. One of the “rules” of psych experiments is to avoid direct questions. This experiment seems to have broken that rule. The Results section shows images purportedly taken of both “belief” and “non-belief” states. They established the state through direct questions which makes knowing the true state suspect even if they didn’t screw up the statistics. If they’ve accomplished anything, they’ve attained images of people answering TRUE and FALSE but not necessarily belief in the validity of the answer. IOW, they can’t distinguish truth from a lie.

    both groups were quicker to respond “true” than “false”

    Interesting.

    There’s way too much gobbledygook and wishful thinking in psychology. (I’m sure there’s a psychological reason 🙂 ) The Cognitive Psychology branch has the best chance in my opinion. It was however rather disappointing to find, in the intro course, that WHO did an experiment was far more important than WHAT the experiment meant. Sadly, the impression given is that even CogPsych is largely a back-patting society more interested in social interaction than discovering cognitive mechanisms.

  2. Wouldn’t it have been a better experiment to test each subject for two similar things — one of which they believe in, say Christian theology, and the other in which they don’t believe, say astrology?

  3. They did that. “A direct comparison of belief minus disbelief in Christians and nonbelievers did not show any significant group differences for nonreligious stimuli.” And: “While the contrast of belief minus disbelief yielded similar activation patterns for both stimulus categories, a comparison of all religious trials to all nonreligious trials produced a wide range of signal differences throughout the brain.”

    But their determination of belief vs. non-belief leaves much to be desired.

    Gotta like that “BOLD” acronym.

  4. There’s an interesting statement in the Discussion section: “One cannot reliably infer the presence of a mental state on the basis of brain data alone, unless the brain regions in question are known to be truly selective for a single state of mind.” This is followed closely by: “Nevertheless, our results appear to make at least provisional sense of the emotional tone of belief.”

    So what were they measuring? Belief? Emotion attached to Belief? Emotion attached to answering questions about Belief? How much of the brain is used in answering TRUE/FALSE? Can you say “equivocation”?

    Then there’s this: “These results may have many areas of application—ranging from the neuropsychology of religion, to the use of ‘belief-detection’ as a surrogate for ‘lie-detection,’ to understanding how the practice of science itself, and truth-claims generally, emerge from the biology of the human brain.”

    The paydirt sentence — more study (and more money of course) will help. Some dolt beat them to the “lie-detection” part a few years ago. I lost the reference but IIRC, he used a sample size of 6 then tried to get it validated by using it in expert witness testimony. Fortunately, the judge tossed it. But one of these days …

    Orwell was sooo nearsighted.

    Alfred Bester wrote a story in the early 50’s about a murderer evading the probing of mind-reading telepaths. Talk about thought police.

  5. This is not the first, and I am sure not the last, nonsense publication where the primary tool is fMRI. Remember the (British) study comparing brains of liberals to the brains of conservatives? It made it all the way to the international media. Just in case you forgot: one category has a larger amygdala (“the structure responsible for fear and primitive emotions”) and a smaller cingulate (“the structure responsible for courage and optimism”). Based on background knowledge we can derive the probability that the group characterized in that way is…

    On the other hand without further efforts one can characterize the authors of the current study:
    P(authors knowingly cheat | K) = 0.99

    Note: There must be a small chance that they are just plain stupid.

  6. “…or to achieve a gender balance between the two groups (1 subject), or because their responses to our experimental stimuli indicated that they did not actually meet the criteria for inclusion in our study as either nonbelievers or committed Christians (7 subjects).”

    Drop one to achieve gender balance? You gotta be kiddin’ me. The rest of this smacks of anticipation that the seven would not conform to their desired outcome.

    The thing that galls me most is that I and other taxpayers have to support this garbage.

  7. 54 started

    40 progressed

    30 completed

    So 24 out of the original sample fell out.

    And he calls himself a scientist.

  8. As far as I can tell, neurological studies of psychological phenomena are marked by pathetically-small sample sizes.

    For example, after looking at nine studies of Transcranial Magnetic Stimulation (more than the sample size of one of the studies), I found the largest sample size was a mere 30. Are there any large studies of Transcranial Magnetic Stimulation?

    For another example, Benjamin Libet’s frequently cited study (supposedly discrediting free will) has a sample size of five.

  9. Mr Briggs,
    The kind of brilliant work we are seeing is the culmination of fMRI research and that new branch of science called cognitive neuroscience, now touching its pinnacle. The American Geophysical Union Communicator Chris Mooney has embarked on a new career path based on such precepts in climate change communication.

    Edward Vul and Nancy Kanwisher have papers on the topic.
