Someday some enterprising young academic, not realizing he should keep his mouth shut before attaining tenure, will publish a study which examines the bizarreries found in studies which begin with the words, “We recruited participants online.” (See this one, for example.)
The peer-reviewed study by Shenhav, Rand, and Greene begins with the words “We recruited participants online.” They call it “Divine Intuition: Cognitive Style Influences Belief in God”, and it is found in the Journal of Experimental Psychology: General (sorry, no link).
Before revealing the full purpose of our trio, let me explain their three experiments. Bear with me through these: not everything is easy.
Study 1.
Via the internet, they recruited 882 folks, two-thirds female, and asked them if they believed in God or not-God. They also asked about “belief in an immortal soul, familial religiosity during childhood, and change in belief in God since childhood,” etc. Finally, they posed three “math problems with intuitively attractive but incorrect answers.” They only tell us one, which went something like this (try to answer before reading the solution):
A bat and a ball cost $137.50 in total. The bat costs $112.20 more than the ball. How much does the ball cost?
This is solved (this is from me, not them) by a “system of equations”, the first of which is “Bat + Ball = $137.50”, the second of which is “Bat – Ball = $112.20.” From the first, “Bat = $137.50 – Ball”; substituting into the second gives “$137.50 – Ball – Ball = $112.20”, i.e. “2 × Ball = $25.30.” Thus, “Ball = $12.65”, which makes “Bat = $12.65 + $112.20 = $124.85.” Checking shows $124.85 + $12.65 = $137.50.
Simple, once you get the hang of it. That is, once the method is taught to you. By a teacher. Who most likely resides in a school or university. Which is the place you’d be sitting when you learned these things. And what kind of students and teachers are more likely to be better at questions like this? Well, math and science students, naturally.
What was your answer before you read the solution? Did you find an intuitively attractive answer? No? Let’s return to where I said the problem “went something like.” The problem actually went:
A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
Our trio says, “The response $0.10 springs immediately to mind, but the correct answer is $0.05. Choosing the attractive but incorrect answer signals greater reliance on intuition and less reliance on reflection.”
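For readers who prefer code to algebra, here is a minimal sketch (mine, of course, not the trio's) of the same substitution method, parameterized so it handles both versions of the problem:

```python
# A sketch of the substitution method above; my construction, not the paper's.
def ball_price(total, difference):
    """Solve Bat + Ball = total and Bat - Ball = difference for Ball."""
    # Substituting Bat = total - Ball into the second equation gives
    # total - 2*Ball = difference, hence Ball = (total - difference) / 2.
    return (total - difference) / 2

print(ball_price(137.50, 112.20))  # 12.65: my disguised version
print(ball_price(1.10, 1.00))      # 0.05: the trio's "gotcha" version
```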
We math teachers call these kinds of questions “gotchas.” Some less scrupulous pedants use them to show students that the students know less than the teacher. The question is a set-up, designed, as our trio said, to offer an answer which does not require much thinking. Student thoughts usually run along the line, “Who wants to think about a meaningless question about absurdly priced bats and balls? The bat’s a buck more? Gotta be ten cents.”
But some folks, science and math denizens, will recognize the “gotcha” and work through the math. And so will other people if the question is not presented in “gotcha” form, like I displayed it originally.
So what we have in this question is a reasonable filter to separate the math and science habitués from those folks who were once humanities majors. Used in this way, as a filter, the math problem would be unproblematic, but our trio did not use it as a filter.
Before telling you how they used it, let me ask you this question. Answer honestly. Among college graduates, who are more likely to be atheists, math/science or humanities majors? The former, of course. And at least some of this difference in attitude is due to acculturation and not deep and lasting philosophical inquiry. Christians (and I don’t mean “creationists”), for example, aren’t particularly welcome in biological circles, especially on the internet. On average. Not always. I mean, there is a tendency for acculturation to explain some but not all of the difference between theism and atheism. Nothing can be plainer than this.
Our trio took, for each participant, a total of the wrong answers from the three “gotcha” questions. So everybody would have a score of 0, 1, 2, or 3, with higher being “worse.” They then “correlated” this score with the answer the participants gave about belief in God. These two measures were linearly correlated (the wrong measure because of the discreteness of the score) to the tune of 0.18 (a bare whistle). The use of linear correlation can exaggerate this number, but at least this wrong one was accompanied by a wee p-value, which is a measure of success in academic studies.
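For the curious, here is a quick simulation sketch, with made-up numbers (mine, not the trio's data), showing how a “correlation” of that size between a binary belief answer and a discrete 0-3 score can arise:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 882  # same sample size as Study 1

# Hypothetical data, not the study's: 1 = believes in God, plus a 0-3
# count of wrong "gotcha" answers whose distribution differs slightly
# between believers and non-believers.
belief = rng.integers(0, 2, size=n)
score = np.where(
    belief == 1,
    rng.choice(4, size=n, p=[0.25, 0.25, 0.25, 0.25]),
    rng.choice(4, size=n, p=[0.35, 0.30, 0.20, 0.15]),
)

# The Pearson (here, point-biserial) correlation the trio reports.
r = np.corrcoef(belief, score)[0, 1]
print(round(r, 2))  # on the order of 0.15: a bare whistle
```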
The interpretation is that among those participants who believed in God, slightly, fractionally more of them scored poorly on the “gotchas” than those who did not believe in God. This was strange because the correlation between “Belief in God” and the separate question, “Convinced of God’s existence” was only 0.62, where one would have guessed it would have been unity.
Could it be an internet-recruited study pool just didn’t take the subject seriously? Well, our trio said they used a series of seriousness checks and tossed data that did not conform. So we are left with a puzzle. Or bad data.
Our trio’s explanation does not include the possibility that the slightly lower scores of the theists are because more atheists are found among those with backgrounds in math and science and of those more highly educated in general. Again, acculturation does explain some (not all) of the differences in belief. The authors summarily failed to recognize this. The effect I have in mind is not large, but then again neither was the effect found by the authors.
Study 2 was so similar to the first study that I refer interested readers to the paper. The relevant correlation here was 0.14, even smaller, and well within the realm of the explanation I proffer.
Study 3.
They had on-line volunteers write about themselves, with instructions such as:
“Please write a paragraph (approximately 8–10 sentences) describing a time your intuition/first instinct led you in the right direction and resulted in a good outcome.” Participants were excluded if they failed to write at least eight sentences.
Half the time, the condition “your intuition/first instinct” (intuition) was switched with “reasoning through a situation” (reflective), and “right…good” was switched with “wrong…bad,” in a 2×2 design.
Sadly for the authors, a low p-value was not to be found in correlating these four conditions (intuition/good, intuition/bad, reflective/good, reflective/bad) with belief in God. But they were able to do some data churning and find a publishable p-value in the “crossover interaction between the recollected cognitive approach and the valence of the recollected outcome.”
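For readers unfamiliar with the jargon: in a balanced 2×2 design, a “crossover interaction” amounts to testing whether the cells where intuition “looked good” (intuition/good, reflective/bad) differ from the cells where it “looked bad” (intuition/bad, reflective/good), with no main effect of either factor required. A sketch with invented cell data (mine, not theirs):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented belief-in-God ratings for the four recollection conditions;
# the crossing pattern is assumed for illustration, not taken from the paper.
ig = rng.normal(5.4, 2.0, 50)  # intuition / good outcome
ib = rng.normal(4.9, 2.0, 50)  # intuition / bad outcome
rg = rng.normal(4.8, 2.0, 50)  # reflective / good outcome
rb = rng.normal(5.3, 2.0, 50)  # reflective / bad outcome

# The crossover contrast: "intuition looked good" cells vs. the rest.
t, p = stats.ttest_ind(np.concatenate([ig, rb]), np.concatenate([ib, rg]))
print(t, p)  # a low p here is what the trio's interaction amounts to
```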
Still, this experiment, since it did not find the main effect, must be considered a bust.
Conclusion
The conclusion section of a paper is where all the fun is had. It is where speculation is wild and typically free from the burdens of evidence. The same is true in press reports of papers.
Science magazine summarized the paper thusly, “People who chose more intuitive answers on these questions were more likely to report stronger religious beliefs, even when the researchers controlled for IQ, education, political leanings, and other factors.” Which sounds a lot stronger than what we have seen is true. But you can’t blame Science (this time) because our trio opens the conclusion with the words, “[we] showed that intuitive thinking predicts belief in God.”
Enter the speculation (references removed):
The observed relationship between reliance on intuition and belief in God may stem from multiple sources. First, as noted earlier, belief in God may be intuitive for reasons related to more general features of human cognition that give rise to tendencies toward dualism, anthropomorphism, and promiscuous teleology.
Good grief! Promiscuous teleology? Who wants to suffer from that? The good news is that there is a cure and belief “can be overridden through the engagement of controlled or reflective processes, with reflective processes enabling or supporting judgments based on less intuitive explanations.”
Our trio then explains that people who believe in God are more likely to find evidence which supports this belief. No word on whether those who believe in not-God are more likely to find evidence which supports their belief. But we are warned that “the belief in God may give rise to a feedback cycle whereby satisfying explanatory appeals to God reinforce the intuitive cognitive style that originally favored the belief in God.” No feedback cycle is mentioned for atheists.
The paper ends with what you can see the authors hope is a literary zinger:
How people think—or fail to think—about the prices of bats and balls is reflected in their thinking, and ultimately their convictions, about the metaphysical order of the universe.
Now since there is not one word about the limitations inherent in recruiting people via the internet; since there is nothing but mute silence on the data discrepancies (noted above); since there is not even a hint, or even the shadow of one, about plausible alternate explanations (such as I gave); since the third experiment, which should have shown the strongest effect if the authors’ theory were true, was not a success; and because the authors offer a one-sided conclusion, let me end with this.
How researchers think—or fail to think—about the role of experimental design is reflected in their writing, and ultimately their convictions, about how to draw unjustified conclusions from weak data.
Update: From Stephen Dawson come pointers to the two other “gotcha” questions; and from that same blog (Joseph Hertzlinger’s) come more revelations (here and here) about an experiment that your adjunct did not have time to cover (yet).
————————————————————————–
See also this report.
A lot like recruiting posters at El Tamino’s to submit to a survey on attitudes regarding AGW and then claiming universality. Ya’d think their definition of homogeneous is gay genius.
So knowledge of Algebra is evidence of atheism? I worked out the answer in my head in microseconds, (137.5 – 112.2)/2, and yet I am not an atheist! It seems to me that stupid people are atheists.
Any of you atheists out there want to challenge me to an Algebra contest? Ha ha. I’ll clean your stupid clock. I was born an Algebra whiz. God-given talent. Didn’t do a thing to earn it. You, on the other hand, were robbed by God. No wonder you’re pouting about it.
I love these articles, you should do more of them. Many thanks.
I will not mention if she is a looker or not.
Studies relating physical or mental characteristics to political or religious preferences are interesting, given that screams of outrage occur when it’s racial characteristics that are being parametrized. Why, it’s almost as if the research is being politically directed.
Another example of preconceived belief leading the “researchers” astray. They could probably divide their sample in any different way, based on gluten in the diet, or frequency of transcontinental flights during the past half year, or Kell blood group, and find some difference. The described experiment was conducted many years ago. It shows that the majority of participants will snap up the intuitive, and wrong, answer.
I would not go as far as formulating alternative explanations of the data for the following reason.
I suspect that the current authors are trying to use that old experiment as a tool to show the difference between two groups arbitrarily divided along a certain variable, in this case religious belief. It seems that they primarily showed that the tool is inadequate for the task.
The peers who reviewed that work clearly flunked a test a bit more complicated than (1.10-1)/2. I fear to ask if they believe in God. But I am willing to bet that they would call the Middle Ages the Dark Era.
More tendentious rubbish…from Briggs.
Let’s take a closer look.
His primary complaint with Study 1 is this alternative account:
“Our trio’s explanation does not include the possibility that the slightly lower scores of the theists are because more atheists are found among those with backgrounds in math and science and of those more highly educated in general.”
Our adjunct’s pet account is silly. For one, consider the base rates of math/science “denizens” among those with a college degree. Then remember that this online sample includes people with little or no education. Then be familiar with research indicating that even the stats-savvy fall prey to judgmental heuristics similar to the intuition problem used in this research (e.g., the representativeness heuristic; see Tversky & Kahneman’s early research conducted among statisticians). Then note that the authors statistically control for education in this study. Then see Studies 2 and 3.
Our adjunct is quick to skip over Study 2. Maybe that’s because the authors find that their predicted relationship holds when cognitive ability is controlled for statistically, further undermining Briggs’ account.
And then there’s Study 3, where participants are randomly assigned to favor intuition vs. reflection. Our adjunct completely misunderstands this study. The design is set up so that a given participant considers a time when intuition or reflection produced a good or bad outcome. Our adjunct is perplexed that the authors didn’t find a main effect for thinking style; anyone who reads the article should be shocked and amazed that an adjunct stats professor can’t (or won’t) understand why the predicted pattern of data is clearly a cross-over interaction (belief in God should be higher when participants think intuition is good and reflection is bad, and lower when reflection is good and intuition is bad). This study was not a bust; the data were as predicted, and they blow Briggs’ alternative account out of the water.
Our adjunct ends his disingenuous assessment with a summary of concerns:
“Now since there is not one word about the limitations inherent in recruiting people via the internet…”
Actually, the authors include two cites that speak directly to what are (and are incorrectly assumed to be) the limitations of internet research. See Buhrmester et al., 2011 and Horton et al., 2011. The bottom line is that internet samples are noisier (more heterogeneous, more error), but produce results similar to those from non-internet samples.
“…there is nothing but mute silence on the data discrepancies (noted above)…”
That’s because there weren’t any.
“…not even a hint, or even the shadow of one, about plausible alternate explanations (such as I gave)…”
The one Briggs gave doesn’t hold any water. Does he have *any* *plausible* alternative accounts?
“…the third experiment, which should have shown the strongest effect if the authors’ theory were true, was not a success…”
But in fact it was a success; see for yourself.
So on the one side we have three studies that provide converging evidence consistent with the authors’ claims (and then three additional studies with different samples, operationalizations, and measures, conducted by different researchers; see the link at the end of Briggs’ rant). And on the other side is the adjunct’s thin critique, sprinkled with the usual disparagement of peer review, p values, and small effect sizes (for an early, non-statistical take on the latter, see Prentice & Miller, 1992, Psychological Bulletin), plus the usual innuendo regarding what he believes is the real agenda of the researchers. As typical, short on substance but heavy on arrogance and cynicism. If only the rest of us could be as bright and pure as adjunct professor Briggs.
CDE, considering the tone and tenor of your rant, you’re new to social interaction.
No “p” values necessary, just a hunch.
One does not expect statistical acumen from psychology majors, who generally take “stats for dummies” courses in undergrad. But if any of my quality engineers had come to me braying of running a linear correlation between a Y with two discrete values and an X with four discrete values and finding a correlation of only 0.18, I would have sent them home to lie down with a wet washrag over their face until the fever passed. Why do people outside the hard sciences get so excited about such tiny correlations? Just because they support their preconceived conclusions? That’s way too much reliance on intuitive thinking. At the very least, they should read Brian Joiner’s paper on lurking variables.
http://www.claremontmckenna.edu/math/moneill/Math152/Handouts/Joiner.pdf
I’m sorry, Scott, are you implying that it’s not OK to attack others’ integrity or motives in this forum? If so, I don’t think Briggs got the memo. But let me compute a confidence interval…
“Before telling you how they used it, let me ask you this question. Answer honestly. Among college graduates, who are more likely to be atheists, math/science or humanities majors? The former, of course. ”
I doubt that. As far as the professors are concerned, the tendency of science professors to be overwhelmingly atheist appears to be limited to the psychology and biology departments, neither of which is noted for quantitative rigor. In the other direction, professors of accounting or finance are unlikely to be atheists.
Of course? I also doubt that math/science majors are more likely to be atheists. I don’t have any solid evidence. The Humanities colleagues I know all are liberal, though I don’t know if they believe in God; and I know quite a few religious colleagues in math/science.
Why would the correlation between “Belief in God” and “Convinced of God’s existence” be unity? Or does this demonstrate that “the use of linear correlation does NOT exaggerate the number”? If one believes in God, does it imply that he is convinced of God’s existence? The data evidence in the paper shows the answer is no. I know “Belief in God” and “Convinced of God’s existence” mean different things to my daughters.
My guess would be that the social sciences have more atheists.
CDE: In terms of alternative conclusions for this study– people who tend to follow instructions are more likely to be atheists.
I don’t like these sorts of tests because they tend to ignore the “easy” factor. Anyone taking the survey would know that the consequences for a wrong answer are going to be virtually non-existent. Rather than waste their time trying to figure it out, they might have done the easiest thing they could to get them to the “next” button.
Anybody who designs computer software user interfaces is constantly aware of the “easy” factor. People tend to click the first “affirmative” button they see on the screen. This doesn’t make them atheists. I don’t really see how the “trick” question test is all that different.
CDE: if you’re wondering why this may have irked some then you could use some sensitivity training. Saying that atheists are smarter/more logical/more rational is insulting. Saying that answering a trick question correctly makes someone smart is just plain ignorant (Google game theory). With that in mind it would seem that the authors of this paper (the conclusion was theirs after all) are either jerks, or idiots. You can decide for yourself which it is.
“Moral certainty is always a sign of cultural inferiority. The more uncivilized the man, the surer he is that he knows precisely what is right and what is wrong. All human progress, even in morals, has been the work of men who have doubted the current moral values, not of men who have whooped them up and tried to enforce them. The truly civilized man is always skeptical and tolerant, in this field as in all others. His culture is based on ‘I am not too sure.’” -H.L. Mencken, writer, editor, and critic (1880-1956)
A timely “Quote of the Week” from WUWT.
“What’s in a name?” In one of these priceless ironies, the last name of the lead author, Shenhav, means ‘Ivory’ in Hebrew. If only one of the other authors were named Tower (or ‘Migdal’, for that matter)…
But seriously, this crap passes for research in Psych and then they wonder why us ‘hard quant’ types don’t take their ‘work’ seriously?!?
Will: I don’t think the researchers are jerks or idiots, but I do think many people misread basic research to fit their preconceived notions. The authors claim a link between the way people think and belief in God. Thinking intuitively does not mean stupid; indeed, there’s a debate in the JDM field about whether heuristics make us smart (Gigerenzer) or sometimes lead us astray (note: neither camp says stupid). Things that might mean smart – education or cognitive *ability* (as opposed to style) – the authors control for in the first two studies, so obviously they don’t think that’s the cause of their effect. Is it Briggs’ fault for misunderstanding? He does the same thing with the research on (low-effort) thinking and ideology, but more egregiously, either misunderstanding the research or intentionally misleading his readers to grind an axe. Either way, not cool, especially with his level of haughtiness.
Scott: love the Mencken quote, and love Mencken (less his sexism). Bertrand Russell said something similar, something like: “the chief problem with the world is that fools are so cocksure, but the wise so full of doubt.” Studies, including those Briggs mocks on his blog, are not above criticism (regarding the intuition/God research, it appears as though a large number of participants were thrown out. We don’t know the numbers, or how those folks might be different from those whose data weren’t removed). No study (or series of studies) is perfect, but we can do better than stretching the truth, out-of-context quotes, ad hominem attacks, and the like. Researchers (and their peers who review the research) should be given a little more credit.
In my experience, it seems like people who fervently believe in God derive great comfort from that belief. On the other hand, it seems like people who fervently believe in not-God derive great comfort from that belief.
So . . . why did God create the latter group?
WILL: Man, you really whacked one of my big pet peeves with your line “consequences for a wrong answer are going to be virtually non-existent”. Ain’t it the truth! I try to convince the marketing folks that asking customers what they want is among the worst ways of finding out what they want (and will buy). I’ve suffered the consequences (massive layoff) of this type of so-called market research.
As they say, you get what you pay for, and it costs nothing to say you’ll buy something. I even learned this simple concept (to question the question) way back in high school. The adult population in the area where I grew up was pretty traumatized by the changing attitudes towards sex and drugs among the youth. Their answer was to survey all the high school students in the area, probably hoping that the survey results would calm all the fears. Their big mistake was announcing the survey ahead of time. Word spread quickly through the student population of the upcoming survey and the suggested answers we were to give. It was sold as a ‘school spirit’ sort of thing – the worse our answers, the cooler our school would be perceived, and clearly this survey was a competition with the other schools. Our student population was particularly successful with this effort, prompting more surveys to confirm the results, and we ended up with a reputation that horrified the parents and earned us the respect of kids from other schools. On the down-side, they closed one popular hangout area where all this sex and drug use was supposedly taking place. All these years later when I meet someone from the same town and tell them where I went to high school, they still say “wow, you guys really had a reputation”.
As a consequence of my profession, I used to get a lot of surveys in the mail, several a week. There was invariably a dollar inside, with the request “we know you are busy, but we hope you will use the enclosed dollar to buy yourself a cup of coffee on us and take a few minutes to answer our survey”. I would take the surveys home and let the kids take turns filling them out, and they would get to keep the dollar for their efforts. I found it interesting how seriously the kids addressed the surveys and how diligently they answered the questions to the best of their ability. They never once asked the obvious question (“Isn’t this kinda silly for me to be answering questions I don’t understand?”). It made me wonder if they viewed school the same way, as some sort of arbitrary but structured silliness. Anyway, I loved telling this story whenever the marketing folks presented their latest customer survey proposal. They never seemed to see the humor in it, though.
RE: “Our trio then explain that people who believe in God are more likely to find evidence which supports this belief.”
COMMENT: Every summer, or maybe just every other summer or so, a pair of LDS missionaries and a gaggle of J. Witnesses will stop by, separately, to try & save my soul. Both groups, and the occasional other groups I’ve encountered, ultimately resort to exactly the same fact-checking technique to ensure their version of things is “true”: they pray, and pray, and pray some more until they’re sure they’ve gotten inspiration. Oddly, the prayers & inspiration so far ALWAYS match up with the precise same beliefs they were inculcated with since childhood. I have yet to encounter a missionary Witness who’s claimed they were formerly of LDS until they prayed for guidance, or vice versa, for example.
RE: “No word on whether those who believe in not-God are more likely to find evidence which support their belief.”
THAT question has been addressed & answered repeatedly with the same findings recurring. That question also frames the issue very oversimplistically; a better question, or supporting question, would be why are so many atheists better informed about religion than believers, and why are so many atheists/agnostics ex-believers?
PEW’s relatively recent survey, which brought that recurring finding about atheists/agnostics into the public’s broad awareness, reached the following speculation, as reported in the press:
“Why are Atheists and Agnostics better informed? The Los Angeles Times quotes one of the researchers who has a theory:
“”American atheists and agnostics tend to be people who grew up in a religious tradition and consciously gave it up, often after a great deal of reflection and study, said Alan Cooperman, associate director for research at the Pew Forum”
“”These are people who thought a lot about religion,” he said. “They’re not indifferent. They care about it.””
In other words, many non-believers started as believers, typically from childhood where this was presented as prima facie fact. But, over time, nagging doubts from various direct and/or logical contradictions prompted them to study things in depth, and with education came skepticism to often outright rejection of the faith they grew up believing as fact. The search for clarity led to rejection.
PEW’s Research at: http://pewresearch.org/pubs/1745/religious-knowledge-in-america-survey-atheists-agnostics-score-highest
One representative news report of it at: http://www.csmonitor.com/USA/Society/2010/0928/In-US-atheists-know-religion-better-than-believers.-Is-that-bad
Penn Jillette sums up the matter in his particular style: http://www.youtube.com/watch?v=E3rGev6OZ3w&feature=related
A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?
10 cents isn’t the “intuitively attractive” answer. It is the answer for people who fail to read the question — who skip over the words “more than the ball.”
All,
CDE, through his appeal to authority fallacy, reminds me that your adjunct did not emphasize the “deleting data” limitation of this study strongly enough. The authors should have included the results from a simultaneous analysis of the data that the authors tossed. Your adjunct means: the same analysis using all the data at once, including that deemed unacceptable. The effect claimed is minor already, and is probably nonexistent, possibly even reversed, with that data.
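In code, the check your adjunct has in mind is trivial; a sketch with hypothetical arrays (the tossed data were never published, so every name here is a placeholder):

```python
import numpy as np

def correlation(belief, score):
    """The same linear correlation used in the paper."""
    return np.corrcoef(belief, score)[0, 1]

def sensitivity_check(kept_belief, kept_score, tossed_belief, tossed_score):
    # "kept" and "tossed" stand for the retained and excluded
    # (belief, score) pairs; neither was published.
    return {
        "kept only": correlation(kept_belief, kept_score),
        "tossed only": correlation(tossed_belief, tossed_score),
        "all data": correlation(
            np.concatenate([kept_belief, tossed_belief]),
            np.concatenate([kept_score, tossed_score]),
        ),
    }
```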
JH is right to emphasize that it is not only math/science graduates but those with more education in general who will tend to score higher on these “gotchas.” But it is those with more education in general who are more likely to be atheists through acculturation.
Now your adjunct on this blog has repeatedly emphasized that a good scientist tries to explicate all the plausible explanations for this data. Presenting a one-sided account is not good science, and so as this paper presents only one avenue which might have generated the data it is not good science. It is, of course, logically possible that the effect noted by our trio is real, but it is also possible it is not. It has anyway not been demonstrated.
CDE is wrong, however, to suggest your adjunct believes our trio has ulterior motives. Your adjunct believes they probably have none, except laziness in being satisfied with the explanation that met their preconceptions.
It is another appeal to authority to point to “other studies” which show that internet-gathered data may be okay. Okay that other data might be, but that does not prove that the data from this study is okay.
Your adjunct offers as evidence that this data is not okay the trio’s own Table 1 which shows only a modest correlation between two seemingly identical belief questions (see the main post). Not a word from our trio on this most perplexing result.
Also, your adjunct has written extensively showing just why and how p-values mislead and should be consigned to the garbage bin of intellectual ideas. Although your adjunct is only an adjunct, he is still right about this.
It is a pity that some folk don’t seem to realise that if it’s right it’s right irrespective of who says it.
Those folks are fools.
CDE:
You’re avoiding the thrust of my point. It sounds as if you are trying to dilute the original message by saying that it’s as innocent as ‘theists think differently than atheists’.
The authors could have framed their conclusion in any number of ways. Here are two alternatives that you might find offensive:
– Atheists are more likely to do exactly as they are told.
or
– Theists are more likely to spend less time in a survey.
The study clearly sets out to show a link between analytical thinking (i.e. ‘smarts’) and atheism (‘dumbs’). In doing so the authors exposed their prejudices, which is a bad thing in most scientific fields. It’s also a pretty offensive presupposition…
My own biased, ignorant, and prejudiced conclusion is that the authors aren’t ready for prime-time. Their conclusion is unfounded due to a myriad of equally likely (and obvious) alternatives. The fact that they didn’t explore or test for these alternatives leads me to believe that they need to spend more time reading about a) Ergonomics and b) Game Theory before professing to be experts in human behavior.
Having something to publish is better than not having something to publish. Since negative results are only meaningful in the publishing world if positive results have been found somewhere prior, getting excited about negative results is really hard to do. Now, if we can make negative results look like positive results and get everyone excited that positive results have been found even if the results are really negative, we might just get our research published and someone might come along and say “Hey, they got published, maybe giving them more money is a good thing.”
Making sure there is bread on the table is not a bad virtue to have. It doesn’t always coincide with perfect honesty…
Hi Will,
Sorry to miss the thrust of your point. Though it’s worth pointing out that what you suggest may be a dilution on my part is pretty damn close to the authors’ title (“Divine Intuition: Cognitive style influences belief in God”). And pretty damn close to what the article claims. I know because I read it.
The authors could have framed their paper in one of the ways you suggest, but that wouldn’t make a lot of sense. For example, Study 3 is a true experiment; participants were randomly assigned to condition. In order for your account to explain the data, you’d need substantial attrition among theists – but only in 2 of the 4 cells, and/or substantial attrition among non-theists, but only in the opposite two cells. The authors do not confirm that attrition was consistent across conditions (and they should have), but I *seriously* doubt that this is what happened.
I conduct research on Amazon Turk. Folks participate to earn money, and if they don’t do the work – in this case, complete questionnaires appropriately (pages are time-stamped so researchers know) – they don’t get paid, and they may receive a negative rating which jeopardizes future money-earning opportunities. All of this, and evidence for why it does not skew results, can be found in the works previously cited and quickly disparaged as an appeal to authority by Briggs.
You’re linking analytic thinking with “smarts,” not the researchers. Again, this term and others (intuition, low-effort thinking, etc.) have different connotations among cognitive scientists. For better or worse, researchers don’t frame their results for the blogosphere. And so I very much disagree with you about any prejudice displayed by the authors. And according to Briggs’ reply, he does too.
It’s possible these researchers from Harvard are foolish, as are the editors and reviewers at JEP: General (one of the top journals in Psychology) and Science (the top journal, period?). But so far I haven’t seen good reason to think so. And no, Briggs’ insistence that education and/or ability is an alternative explanation (despite mounds of evidence to the contrary) doesn’t count. Nor does his repeated claim that null hypothesis testing with p values is a sucker’s game. His own bias is clear enough to see and then discount.
Cheers.
Maybe the mental gymnastics required to believe in God stand a person in good stead to study maths or particle physics. A bit like the Welsh. If they grow up being able to pronounce the names of their towns and villages they are likely to become good singers.
CDE,
Tell you what. You write a guest post defending p-values or hypothesis tests and I’ll rebut showing why you’re wrong. Submit one to my email anytime and I’ll post.
“A bat and a ball costs $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”
The ball costs $1.10. The offer was for a bat AND a ball.
Hi Briggs (and the rest),
As a (part-time) academic, I’m sure you know that time is one of two things in short supply (the other of course is money). Perhaps I’ll take you up on your offer when the semester ends. In the interim, something quick and easy for all would be a short, readable summary where bias is quickly offset by those who disagree. You know, Wikipedia:
http://en.wikipedia.org/wiki/Statistical_hypothesis_testing#Controversy
As any who care to read will see, and as nearly all researchers know, null hypothesis testing has major limitations. A quick summary is listed, some (e.g., you don’t test what the researcher wants to know) more serious than others (e.g., statistical significance is not the same as practical significance).
But also note two other things: 1) a Bayesian approach offers up its own set of problems, just as difficult to resolve, and 2) one of the best solutions to the limitations of null hypothesis testing is replication.
#2 is really important. Researchers don’t take any one of the studies mocked by Briggs seriously because they find a low p value; they take it seriously because it was replicated. The low-effort/conservatism studies offer 4 conceptual replications (and others are noted within the paper); the cognitive style/belief in God paper offers 3, and another paper (graciously noted by Briggs) has 3 more. A p value is a p value, not more, maybe less. But when you see the same pattern of data (and low p values) across samples, measures, and researchers, and when the patterns change in meaningful ways under specified conditions, then it’s time to pay attention.
And that includes appropriate criticism (e.g., Briggs and I agree that the belief in God study didn’t disclose enough about who was kept out of the final sample and why). Not: assuming the researchers have an agenda, fussing over problems (e.g., a measure’s restriction of range) that in fact strengthen the authors’ conclusions, pointing to what feels like a limitation without being able to specify why (“it’s a college/internet sample, and so…”), thinking researchers claim more than they do (most psychology research is designed to test a theory within a sample, not apply the theory to all people at all times; see Mook, 1983, In Defense of External Invalidity), and of course ad hominem attacks. It’d also be good to accurately report what the researchers say and do. For example, in both “thought/thinking style” papers, Briggs conveniently misrepresents the strongest evidence for the researchers’ claims (a stats professor pretending that the authors should find a main effect when they clearly predict an interaction…really?!?!)
Briggs is right. There are significant limitations to null hypothesis testing. And yet he’s still wrong when he implies that a paper with multiple studies is necessarily flawed because the researchers rely on it.
Those kinds of masturbatory exercises always tell you a lot more about the “researchers” than about the subjects or the topic.
Pingback: William M. Briggs » Academic Psychologists Over-Rely On WEIRD People: Overconfidence Results
CDE: Thank you for taking the time to respond.
You’re appealing to your own authority. 🙂 I too have designed web-delivered surveys similar to the type used in the study.
Logical fallacies aside:
You said “The authors claim a link between the way people think and belief in God.” Now you’re claiming that the authors assert a link between analytical ability and belief in God. The two statements are entirely different. What the authors really said: “Three studies—two correlational, one experimental—showed that intuitive thinking predicts belief in God.”
While you may say ‘analytical thinking != smarts’ I doubt you really believe this – Da Vinci, Newton, and Einstein always make the short list of ‘smarts’ and were undoubtedly analytical in their thinking.
Regardless, the paper claims that a respondent’s answer to an analytical question is a reliable proxy for analytical ability. They then correlate this ‘proxy’ with the respondent’s ‘theist’ position with an F-test.
My counter is that the response to the analytical question is _not_ a good proxy for the respondent’s analytical ability, for the reasons I have already given; primarily the low (i.e. 0) cost for an incorrect answer.
Additionally, the test the authors applied to determine ‘theism’ appears more to be a test of religious conviction – Religion != Theism.
Not only is the test of analytical ability questionable, the classification is also questionable.
The same study, with the same set of tests, using the same methods, could have reported a number of conclusions, not one of which would have anything to do with correlating analytical thinking to theism. If you disagree with this then please, by all means, demonstrate how (using the published study) you came to this conclusion.
I have already admitted my bias– I think that the goal of the research was malicious right from the outset.
Regarding your claim about p-values and reproducibility:
http://news.yahoo.com/cancer-science-many-discoveries-dont-hold-174216262.html
It is not uncommon for government regulations to require p-values for certain projects. Sometimes it’s part of the funding requirement, other times it’s for regulatory reasons. This is not restricted to medicine. It _does_ produce a bias.
Briggs: I know this article by Heine and his colleagues, and I am in almost total agreement. You might note the many response articles, some of which disagree (Gartner et al) and others that think the target article doesn’t go far enough (Baumard & Sperber). You might also note that this article doesn’t change (speak against) anything I wrote. (and you might further note the article by Gosling and colleagues, who argue that internet research is one solution to the problem.)
Will: Certainly didn’t mean to suggest that I’m an authority or to appeal to it (though I question whether that’s always fallacious; there’s a reason why people pay auto mechanics, lawyers, and cardiologists). No matter, because I gave you the reasons why I doubt your web-based concerns. This isn’t about administering web surveys, but how MTurk works and if it creates some bias and why. There are incentives (time, effort, money) for researchers to get respondents to do what they’re asked, and incentives (money) for respondents to do what they are asked to do. In my experience, when appropriate steps are taken (good survey design, attentiveness checks) nearly all do.
I don’t think the statements you refer to are different. I think one is a subset of the other.
Personally, I think analytic thinking is “smart.” But I also think intuition can be smart. You might see Tim Wilson’s book on the smart unconscious (Strangers to Ourselves) or David Myers’ book on intuition. But I don’t care one way or another, because I don’t believe in sacred cows on either side of the fence.
I still think the authors’ interpretation is appropriate, and far superior to any alternative that I’ve heard. Regarding your alternative explanation, thoughtless responding on MTurk does have (financial) consequences (and you really think atheists are less likely to follow directions?) and the researchers have built-in checks to ensure attention and effort (see this appeal to authority, read, cite: Oppenheimer et al., 2009, in the paper). And then there’s the experimental study where none of this is a problem (assuming equal rates of attrition across conditions). As typical, the devil is in the details. Biased blog summaries tend to cut those important corners.
To be clear, I think your concerns are completely legitimate. I happen to think they can all be addressed and dismissed, given the details.
And finally, I’m not sure how this story about cancer research fits in. As I noted previously, replication is what is important; I think your link makes this point. P values alone don’t cut it. It’s wrong to think otherwise, and to imply that the researchers think otherwise.
Apologies for being brief, but work and travel call…
Pingback: William M. Briggs » Why Republicans Deny Science—And Reality: Request For Help
It’s not a question of intuition, it’s a question of not reading the question properly.
It should be drummed (I would have said beaten but that’s not allowed anymore!) into every student:
If you don’t answer the question set you get no marks.
The questioner is not interested in what you think the question should be about (unless we’re talking psychology or other touchy-feely namby-pamby subjects!).
So RTFQ people!