Watching Porn Makes People More Religious? The Severe Limitations of Quantifying Behavior

One paper reports, “People who watch porn more than once a week tend to become more religious, a researcher claims, while those who watch racy videos occasionally tend to drift away from religion.”

Seems if we want godlier people, we should serve up more hardcore, right? Well, it’s science, and science specializes in discovering counterintuitive things. It’s dangerous to question science.

The research referred to is “Does Viewing Pornography Diminish Religiosity Over Time? Evidence From Two-Wave Panel Data” by Samuel Perry, published in The Journal of Sex Research.

Now this work has its own difficulties, which I’ll outline. But it is also a prime example of what has gone wrong in much research. For instance, Perry begins his work with this comment: “persons who score higher in religiosity tend to report viewing pornography less frequently”.

Maybe you didn’t notice the fundamental error, accustomed as we are to numbers. Numbers, numbers, everywhere numbers. We are under sway to the same idea that gripped Lord Kelvin, who said, “I often say that when you can measure you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind.” So used to numbers are we that we rarely stop to ask: can we really quantify a man’s religiosity, or any emotion or belief?

If you think we can, ask yourself how happy you are right now. On a scale from -17.2 to π². We need this numerical scale; it after all is the numbers which turn your happiness into science. What value do you give? Well, are you at ease? Or feeling benign? Or ecstatic, pleased, of good cheer, crapulent, irrepressible, smirking, sanguine, ebullient, exultant, tickled pink, winsome, content? Or maybe you aren’t so happy and are on the sad side of things. Maybe you’re merely melancholic. Or sadhearted. Or forlorn, horrible, or aggrieved or grieved, or woebegone, edgy, shattered, black, downbeat, grim, joyless, distressed, iridescent, glum, or just plain awful.

We have so many words for states or absences of happiness, or for any other emotion, because moods are infinitely shaded and impossible to capture perfectly. Literature is a better guide to human nature than slide rules.

Anyway, suppose your neighbor scores himself a 2.017 on our happiness scale, and you say 2.016. Is your neighbor happier than you? Always? In every aspect?

So how religious are you? Just what does being religious mean? Exactly, now. Are you always as religious as you are now? Or do you vary? How do you capture this variability? And how much porn do you watch? Are you into gay sex? You can tell me. It’ll be our secret: I won’t tell your wife. Are people who are more religious more likely than the non-religious to rate material as pornographic?

The conceit of science is that emotions or states of belief can be captured and graded numerically. Though no scientist claims numbers assess emotions perfectly, scientists do act as if the unmeasurable aspects of emotions and beliefs can be safely ignored. This always leads to the Deadly Sin of Reification, where the measurement becomes the thing; the numbers become all that is seen. Yet the reality must surely be that that which can’t be quantified is essential.

All this introduction is necessary to understand the central criticism of Perry’s work (and other works like his). He says:

While the general assumption is that religiosity leads to lower levels of porn use, recent research suggests that more frequent porn consumption, especially for religious persons, is associated with guilt and embarrassment, potentially diminishing interest in religious or spiritual activities while also potentially creating feelings of scrupulosity that may draw individuals away from religious community.

Academics specialize in making the simple sound important with highfalutin language. They also claim for themselves the discovery of things already well known by common people. But skip that.

Perry’s idea was to look at surveys of some folks over a six-year period, asking them about and then quantifying their “religiosity” and porn habits. Perry admits “social desirability could discourage honest answers, given that porn consumption in larger amounts is still viewed as morally objectionable.” Still.

His numbers were input into a statistical model (a regression with quadratic effects on porn viewing) with whose limitations Perry is apparently unfamiliar. There isn’t space here to criticize his technique (I have done so here and here), but suffice to say his technique cannot prove cause, and it’s far, far too easy for the method he used to declare “significance.” Worse, Perry does not show the numbers from the survey; instead, he gives only the output of his statistical model.
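To see how easily a quadratic regression can manufacture a “finding,” here is a sketch with simulated data (not Perry’s, and not his exact model): the true relationship is strictly decreasing, yet the quadratic term comes out “significant,” and the fitted parabola turns upward precisely where respondents are scarcest.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
# Viewing frequency: right-skewed, as survey counts usually are, so the
# extreme end of the scale is sparsely populated.
x = rng.exponential(scale=1.0, size=n)
# True relationship: "religiosity" declines monotonically with viewing.
y = 5.0 - 1.5 * np.log1p(x) + rng.normal(0.0, 1.0, size=n)

# Ordinary least squares fit of y ~ 1 + x + x^2.
X = np.column_stack([np.ones(n), x, x**2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 3)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_quad = beta[2] / np.sqrt(cov[2, 2])
vertex = -beta[1] / (2 * beta[2])  # where the fitted curve starts rising

print(f"quadratic coef = {beta[2]:.3f}, t = {t_quad:.2f}")
print(f"fitted upturn begins at x = {vertex:.1f}")
```

Nothing in the generating process rises at high viewing levels; the “upturn” is merely the parabola’s obligatory other arm, fitted through a handful of extreme respondents.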

Now Perry admits his model agrees that, as all non-scientists already knew, “viewing pornography can reduce religiosity over time” and that porn is a “secularizing agent, helping to weaken religious vitality among those who consistently view it.”

But then Perry’s model says porn viewing “at more extreme levels may actually stimulate, or at least be conducive to, greater religiosity”. Which sounds absurd—but only to those who have, say, a Christian notion of “religiosity”. Echoing this, Perry says “greater levels of religious practice do not necessarily amount to traditionalist sexual views” and that some porn viewers “see no severe moral conflict between viewing sexually explicit materials and their religious beliefs”. If this is true, it also means his quantification of “religiosity” has no comparative meaning, but so hungry for numbers is he that he didn’t notice this.

That his “finding” is likely a statistical artifact did not occur to Perry. And he, like nearly all researchers, shows no comprehension of the severe limitations of quantifying human behavior.

Categories: Statistics

18 replies »

  1. I repeat my objection to Matt’s oft-repeated statement that we cannot “really quantify a man’s religiosity, or any emotion or belief.” The proof that this statement goes too far lies in Matt’s own words: “We have so many words for states or absences of happiness.” Yes, yes, yes, we do. We natively and characteristically assign ‘more’ or ‘less’ to ‘religiosity’ and to “any emotion or belief.”

    That “moods are infinitely shaded and impossible to capture perfectly” is not at all the same thing as saying that the very act of assigning a number to ‘more or less’ is automatically incorrect. We assess and compare our own and others’ emotional states all the time. Matt’s objection to assigning numbers to religiosity, moods, and emotional states, writ large, would assert that ‘literary’ words like ‘sad’ or ‘happy’ cannot possibly be put on any numerical scale, such that we could express in numbers that ‘sad’ is less happy than ‘happy’.

    Sure, we can quibble over whether ‘sad’ is exactly 4.356 ‘happiness units’ from ‘happy’. But that’s not Matt’s thesis. His thesis is that any numerical comparison of ‘sad’ and ‘happy’ is false and inherently inaccurate.

    But we natively and characteristically assign ‘more’ or ‘less’ with some amount of agreement. For instance, we generally agree that some things are more attractive than other things, though we may quibble about the exact numbers.

    But Matt is stating that somehow, saying that ‘Wow’ is a 10, and ‘ick’ is a 1, so distorts the meaning of ‘Wow’ and ‘ick’ that we can’t possibly derive any meaning from that.

    I submit that this is obviously false, woolly-headed, tendentious, and unhelpful to Matt’s analysis. Reifying a number is bad; but it is no worse than sacralizing words in common human use, such as ‘sad’ and ‘happy’, ‘Wow’ and ‘ick’, such that they become ineffable, too ‘pure’ for us, too delicate for us to examine, lest they dissolve in our hands.

  2. “Well, it’s science, and science specializes in discovering anti-intuitive things. It’s dangerous to question science.”

    EEeeegggaaaaaddssssss…here we go again, picking a data point (one researcher’s “study”) and extrapolating to an entire discipline as if one practitioner [or a small group of’m] is representative. Garbled press summaries of whatever study are not science/part of the scientific process — and have no business being interleaved with a critique of the underlying study.

    While the study addresses porn & religiosity, those of us working in the real world have long observed a similar pattern (covering every territory): Hypocrites of all stripes/areas of practice tend to include a sizable proportion who will expend considerable energy railing against the very thing they’re very guilty of or outwardly behaving in the opposite manner. Dirty cops, for example, commonly present a persona of exemplary competence & professionalism; Ted Haggard [preacher] spent a lot of time bemoaning homosexuality while he had an ongoing affair with a male escort; consultants knowing little of the subject will project decisive demeanor & make firm assertions…and so on it goes with the same basic pattern… Cher, ages ago, came out with a hit song describing this (Gypsies, Tramps and Thieves). But I digress a bit…but only to note how blatant and common hypocrisy is in our society by brief example.

    Science is a process of theories tested by experiment and the results challenged by independent testing to reproduce, or refute, or refine early results. Science is self-questioning. That self-questioning aspect of science is fundamental to the scientific process. Science is not a single study–it is the result of a process of evaluation, challenge, confirmation.

    But Briggs, again, presents science as something it is not (“It’s dangerous to question science.”), and hyper-generalizes a single point reference (what may be bad or pseudoscience) as broadly representative of science as a discipline. If we could put numbers on such sweeping generalizations (a logical fallacy) those numbers would meet all of Briggs’ criteria for being an[other] example of the “sin of reification.”

  3. “Anyway, suppose your neighbor scores himself a 2.017 on our happiness scale, and you say 2.016. Is your neighbor happier than you? Always? In every aspect?”

    Yes! Furthermore, ‘significantly’ happier than you (because N=10000).

    Latest example of this here
    http://www.culturalcognition.net/blog/2016/5/19/serious-problems-with-the-strongest-evidence-to-date-on-cons.html
    People are ‘significantly’ (1%) more likely to support climate action if you tell them there’s a 97% consensus of climate scientists…
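The N=10000 quip can be made precise with a back-of-the-envelope sketch (illustrative numbers, not from the linked study): for a two-sample z-test, any nonzero gap, however trivial, becomes “significant” once the per-group sample size is large enough.

```python
import math

# Two-sample z-test: a mean gap is "significant" at the 5% level once
# |diff| / (sd * sqrt(2 / n)) exceeds 1.96. Solve for n per group.
diff = 0.001   # your neighbor's 2.017 vs your 2.016
sd = 1.0       # assumed spread of happiness scores
z_crit = 1.96

n = math.ceil(2 * (z_crit * sd / diff) ** 2)
print(n)  # prints 7683200 (~7.7 million respondents per group)
```

Survey a few million people and a 0.001-point gap on the happiness scale is statistically “significant”; it is still meaningless.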

  4. Remember, just because we can measure it doesn’t mean we can control it.

    Of course religious people who believe pornography is wrong but watch it anyway gravitate away from religion. It “solves” the cognitive dissonance created by the behaviour. It seems evident that continually engaging in a behaviour will lead to the person accepting the behaviour as right—happens every second. Not a new discovery—seems a better topic could have been studied.

    JohnK: Many people argue in the opposite direction you seem to be—they argue that a generalization is wrong and they throw out exceptions to prove it. You seem to be bundling the exceptions and all as one “number”. Your comment is quite different from what I usually see.
    One very annoying example of the demands of quantification of non-quantifiable things is the “Pain Scale” virtually every medical practitioner uses. It is completely useless. However, you can’t opt out—medicine demands a number be provided. I often just make one up to make the asker happy. It means nothing but they don’t care—they just need a number.

    Ken: Have you closely checked out social science, nutritional science, etc. lately? Have you tried saying global warming is not real? That salt and sugar won’t kill you? That gluten-free is only important if you have celiac disease? You won’t get very far because the “science” (in quotes because much of it may not be actual science) is against you. These are taken to be absolute truths by many, many people.
    “… those of us working in the real world have long observed a similar pattern (covering every territory)…” leads me to ask why there was a study necessary then. We know these things, why must they be quantified by science? Just in case we’re wrong? Couldn’t further real-world observation tell us that?

  5. JohnK,

    Excellent points. Comparisons are surely possible, hence not only more and less but also happy and sad. I do not say, and do not imply, the invalidity of comparisons, nor do I claim absolute irredeemable error in quantification. I do say that, for instance, we do better with words than with numbers at making these comparisons and at drawing distinctions, etc. And I absolutely insist that over-certainty is guaranteed when the quantification of emotions and beliefs is used.

  6. In the business world, marketers & customer service folks love surveys with lots of choices (1-10, 1-5, Strongly Disagree to Strongly Agree, etc). I think it gives them the ability to act like they’re doing “science”!

    I much prefer the forced Yes/No choice constrained as much as possible to the metric that we are trying to find – “Was your problem completely resolved?”, “Did we have the part or a suitable similar part that you were looking for?”

    Surveys that ask nebulous questions like “How likely are you to recommend our product? 1 ‘not likely’ – 10 ‘very likely'” are pretty much useless. A better one might be “Have you recommended our product to someone you know?”

    Fundamentally though I think numerical scales in human behavior need to just go out the window. We simply cannot measure human feelings. But we can measure specific outcomes, so focus should be on those.

  7. I have noticed that many studies claim to have measured something when in fact nothing was measured. People were just asked questions by the researcher and then numbers were assigned to the answers. It’s like the EPA study that claimed exposure to secondhand smoke caused lung cancer but no exposure was measured. People were asked about their exposure and a numerical value was assigned to their answer. Dr. Melvin First actually measured exposure, but the EPA ignored his study.
    http://dengulenegl.dk/blog/wp-content/uploads/2008/12/first2.jpg

  8. “Surveys that ask nebulous questions like “How likely are you to recommend our product? 1 ‘not likely’ – 10 ‘very likely’” are pretty much useless.”

    Nate, I would agree with your take here. However, these questions do have some use when used in conjunction with yes/no choice constrained questions to calibrate the range based questions. They lead to some interesting feedback on customer “stated” behaviour. Of course, it’s important to compare this with “actual” customer behaviour i.e. purchases.

    For example, it is a fairly common result in customer sat surveys to get higher “will recommend” results if the customer experiences a problem that is handled quickly and to the customer’s satisfaction than if the customer doesn’t ever experience a problem at all. People like to tell their friends when something goes wrong but have nothing to talk about when things go right.

    One shouldn’t conclude from this that it’s a good idea to screw up with the customer, on purpose, and fix it fast. 😉 But when you do screw up, fixing it quickly and to the customer’s satisfaction will help you keep a customer.

  9. Matt,

    Those of us who have to make real forecasts know about
    1. Marketing studies (real changes in volume)
    2. Discrete choice studies
    3. Sequential monadic (RCB, PBIBs, etc.)
    4. Regular monadic, and,

    Calibration.

    So we can quantify volume changes reasonably well thank you.

    If you are interested, there is an actual literature in this. Consider
    * Abelson & Tukey http://projecteuclid.org/euclid.aoms/1177703869
    * most of Thurstone’s and Luce’s stuff
    * item response theory and psychometrics in general (skip the intro level)

    Or failing that try running an experiment with balanced presentation orders. Linear scales fall out. The trick is the balanced presentations.

  10. I should mention that the experiment works with purely ordinal inputs (e.g., total orders) too. No scale necessary (as with “liking”)
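The kernel of the Thurstone-style scaling mentioned above can be sketched in a few lines: purely ordinal pairwise preferences are converted to interval-scale values via the inverse normal CDF (Case V, which assumes equal discriminal dispersions). The items and proportions below are made up for illustration.

```python
from statistics import NormalDist

items = ["A", "B", "C"]
# Proportion of judges preferring the first item over the second (made up).
p = {("A", "B"): 0.76, ("A", "C"): 0.91, ("B", "C"): 0.72}

inv = NormalDist().inv_cdf
# Thurstone Case V: scale(i) - scale(j) = inverse-Phi of p_ij.
z = {}
for (i, j), pij in p.items():
    z[(i, j)] = inv(pij)
    z[(j, i)] = -z[(i, j)]

# Each item's scale value is its mean separation from all items
# (the self-comparison contributes zero).
scale = {i: sum(z[(i, j)] for j in items if j != i) / len(items)
         for i in items}
print({i: round(s, 3) for i, s in scale.items()})
```

A comes out highest, C lowest, and the values sit on an interval scale (up to an arbitrary zero point); note the numbers quantify relative preference only, not any absolute amount of “liking.”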

  11. Briggs is right on here. There is no need for a number when a description will do fine and give more valuable information.

    ‘Religiosity’ is a quality without a commonly agreed meaning, so this renders the given quantity suspect. How vague, useless, clumsy, and shallow. It’s no use declaring a scientific victory because a number has been obtained. It’s repetitive, but why is a number better than words in the case of human experience?

    This is subjectivity treated as pure objectivity. Objective things have actual values and measures. Subjective things have description and explanation; they don’t lend themselves to numbers because they are complex and unquantifiable. Numbers can’t do everything. It’s wrong to fool the public into thinking that they can. It’s numerical sophistry!

    As explained before something is only possessing of an amount of a given quality by comparison with something else from the same identical group. Just like with the intellect argument and IQ scores.
    Pain scales:
    There’s no accepted number or amount of pain for a given problem (though a relative description, or a rough end of the scale, is expected). There are responses to pain which are considered by the examiner. Only one has to do with sheer severity.
    The patient’s pain description is quoted in the notes where special note is required. It can be understood by others and gives more information. The pain scale is only one question in a host of questions. It is useful for showing the attitude towards the pain, and whether the patient finds it manageable, rather than for the number itself, which is of little interest. Listening to how the patient answers, their tone, how long they take to answer: nothing should go unnoticed! It’s all guesswork, but the patient is always telling the answer. It doesn’t have to be like an interrogation. If the same person is asked the same question about the same problem causing the same pain, then there is some value in recognising that the pain is the same, better, or worse (also a useful scale).

  12. ‘crapulent’ is a fabulous word, never heard it before.
    Today I have been mostly peaceful in a John Freidaish kind of way.
    Sounds like a weather forecast. “today will be mostly crapulent with a slight chance of tickled pink later in the afternoon.”

  13. I would think that a scientist would define the term ‘religiosity.’ Without definition of terms none of us knows what we are talking about, as it means something different to all the contributors. To me as an English speaker it means a fake, over-scrupulous practice of religious duties without regard for the meaning of the religion at all. Like Pharisees.
    Jersey Mc Jones will need to look on a web site of gospel translations to find out what a Pharisee was. It will be quite easy to find for those who are open minded enough to look.
    I don’t know what social science is about, but I am of the opinion that you can’t quantify feelings, and I take it that our learned blog master is trying to put that across in his summary.

  14. JMJ: You can be matched paper for paper with “atheism is bad for society”. It’s the beauty of statistics and social research. You can prove a Jesuit priest is the most likely serial killer in a group or an atheist is a most likely a well-paid televangelist. It’s meaningless.
    (Interesting that the paper is from Creighton, a Jesuit school. Seems they are more open-minded than the progressive schools).

    M E: In part the problem is scientist and social scientist are not the same creatures. Social scientists are trying to quantify that which is not scientific in the first place—morals, behaviours, etc. It seems likely this is done in order to be able to label the practice important and to garner research money. (Scientism runs rampant.)
    I agree that the term “religiosity” was often associated with falsely pious people, so labeling some with the term might be thought of as a negative, having nothing to do with actual religion. Good point.

    “we do better with words than with numbers at making these comparisons and at drawing distinctions, etc.”

    How about an example of such a case? The above statement may be true in qualitative studies in the area of education that oftentimes involve only a few students. How do you compare the words “sad” and “miserable”? Isn’t the use of words for feelings also subjective?

    “If you think we can, ask yourself how happy you are right now. On a scale from -17.2 to π². We need this numerical scale; it after all is the numbers which turn your happiness into science. What value do you give?”

    No one is going to deny that there are issues that place limits on the validity of subjective measures. Question wording and response formats are important. The question of “how happy are you” is hopelessly vague.

    In the economics of happiness, the drivers of subjective well-being are of interest. Subjective well-being is evaluated with respect to various clearly defined aspects of life, e.g., financial situation and health, on a scale of 1 to 10, with 1 being very unsatisfied and 10 being very satisfied. The numerical evaluations are indeed subjective, though perhaps less so than words.

    I have not read the paper, but I doubt the authors of the paper under discussion employed a question as vague as “how religious are you.” If they did, such a problem could have been avoided by taking a basic social research course.

    People can only fool us with statistics when we are ignorant.
