# William M. Briggs

### Statistician to the Stars!

Today, just a pointer to a hilarious story by Robert McHenry in The American of how a workaday paper in a chemical journal with the title

Evidence for the Likely Origin of Homochirality in Amino Acids, Sugars, and Nucleosides on Prebiotic Earth

became, after a severe manhandling by journalists,

Do Intelligent Dinosaurs Really Rule Alien Worlds?

Since we’ve so often seen how some obscure paper with dicey conclusions is tarted up in the press to confirm this or that bias, it’s good to read the steps McHenry identifies in the common process:

1. Some scientists publish a report of their work.

2. An alert PR guy who works for the university or institute notices some potentially hype-able words in the report.

3. He writes up a release, under the impression that he is Arthur C. Clarke.

4. J-school grads at a number of media outlets, whose science education ended in 8th grade, pick up the release, change three words to make it their own, and it is published to an unsuspecting public.

5. The unsuspecting public, which is not as dumb as the PR guy believes, dismisses the story as bushwah and blames the scientists.

First the good news. It’s completely unrelated to what follows, but it is good news. Scientists say “Aliens ‘wouldn’t want to eat or enslave us’ says ET-hunting expert – the first ones we meet will be FAR too civilised.”

So we at least have that going for us. And then we have this Slate article, which asks, “Do Men Find Dumb-Looking Women More Attractive?” The scientific answer is yes.

I was going to critique the study on which this article is based in the usual manner, but after reading the article I gave up hope and went instead to search for good news, any good news, about our race. Hence the headline that aliens won’t, thank the Lord, want to Serve Man.

Anyway, here’s the study which plunged me into the gloomy depths (the “they” are the researchers).

To figure out which sorts of women might be deemed most receptive to a sexual advance or most vulnerable to male pressure or coercion, they asked a large group of students (103 men and 91 women) to nominate some “specific actions, cues, body postures, attitudes, and personality characteristics” that might indicate receptivity or vulnerability [in women].

This pool of WEIRD people came “up with a list of 88 signs that…a woman might be an especially good target for a man who wanted to score.”

The researchers then searched—wait for it—the internet for images of women who might be amenable to be scored upon. “Once they had pictures of women licking their lips, partying, circling their areolas, and all the rest, they cross-checked them with a separate group of students who surmised” that, yes indeedy, these are women who wanted it. Other items: tight clothing, open body posture, and lying back.

The researchers also compiled a list of images that tended to be mood dousers. Such things as: skinny, old, passed out, sad, distressed, and crying.

This is science, folks.

A fresh group of 76 male participants [college students] was presented with [the positive] images in a randomized sequence and asked what they thought of each woman’s overall attractiveness, how easy it would be to “exploit” her using a variety of tactics (everything from seduction to physical force), and her appeal to them as either a short-term or a long-term partner.

There is no word whether beer was served during this “Hot or Not” rating party (the paper unfortunately doesn’t show us the pictures). Good news for the ditzy, though: “pictures of dimwitted- or immature-seeming women, for example, or of women who looked sleepy or intoxicated, did seem to have an effect: Not surprisingly, men rated them as being easy to bed.” And here’s the big “finding”, these easy scores “were also perceived as being more physically attractive than female peers who seemed more lucid or quick-witted.”

“These findings suggest that men are sensitive to cues in a variety of domains when assessing the sexual exploitability of women.”

Golly.

The authors tied their stunning results (all confirmed with wee p-values) to deep theory in evolutionary psychology. But even the Slate author was able to ask “Do photos of boozed-up young women posted on the Internet simply happen to depict more physically attractive females—ones who’ve dolled themselves up for parties, say—than the sober head shots of those who party less?” She also quipped, “It also seems to me that although men may lower their standards when it comes to judging women for casual sex, even the creepiest, horniest, coldest man has his aesthetic limits.” She forgot to mention that we have other “research” that demonstrates that these limits are an inverse function of blood alcohol content.

Perhaps the best news is that new research is called for: “investigating men’s approach likelihood or arousal level when exposed to women displaying cues to exploitability will shed light on the behavioral output that results from this attraction.” Right on.

Even better, “Future work also could profitably examine men’s conscious awareness of the relationship between perception of cues to exploitability and the sexual attraction they experience, as well as the potentially conflicting emotions they experience when presented with the opportunity to engage in a sexually exploitative strategy.”

I think that means, stated in plain English, that men probably know what they’re looking for and that they might sometimes feel bad about it.

Vengeance Is Mine

The actual title of the Live Science press release was “Believers Leave Punishment to Powerful God,” a story which opens with the memorable words:

Believing in an involved, morally active God makes people less likely to punish others for rule-breaking, new research finds.

Which I hope you agree is equivalent to saying that atheists are less forgiving, less compassionate, less merciful, and—oh, let’s just say it: they are worse people. Now don’t get mad at me. This is research, complete with wee p-values.

But then maybe this summary is too telegraphic. Because the very same research that proves that atheists are more bloodthirsty than theists also proves “that religious belief in general makes people more likely to punish wrongdoers — probably because such punishment is a way to strengthen the community as a whole” (emphasis mine).

In other words, theists are less forgiving, less compassionate, less merciful, and just plain worse people than more enlightened atheists. Except when they aren’t and when their roles are reversed. The press release explains the conundrum thusly: “In other words, religion may introduce two conflicting impulses: Punish others for their transgressions, or leave it to the Lord.”

This, friends, is the power of statistics, a field of science which, given the routine ease with which two opposite conclusions are simultaneously proved, we may now officially dub Orwellian Analytics.

Research Shows…

The paper is “Outsourcing punishment to God: beliefs in divine control reduce earthly punishment” by Kristi Laurin and three others, published on-line in the Proceedings of the Royal Society B.

After a lengthy introduction arguing that all morality (except presumably the morals of the authors) can be reduced to urges induced by evolutionary “pressures,” and defining something called “altruistic punishment”, the authors describe how they gathered small pools of WEIRD people (i.e. undergraduates) and had them play games. The results from these games told the authors all they needed to know about who enjoys punishment more. Incidentally, about the punishment, they said this:

Prior to effective and reliable secular institutions for punishment, large-scale societies depended on individuals engaging in ‘altruistic punishment’—bearing the costs of punishment individually, for the benefit of society.

And did you know that “According to theory”—are you ready?—”Though administering punishment benefits society as a whole, it has immediate costs for punishers themselves.” Who knew?

Experiment one corralled “Twenty undergraduates” who “participated in exchange for course credit.” That’s one more than nineteen, friends. The supplementary data (which is mysteriously left out of the main article, but which is linked there) shows that these participants contained 8 whites and 9 Asians, with 1 black and 1 Arabic left over; 10 Christians, 1 Buddhist, 1 Hindu, 1 Muslim, 1 “Other”, and 6 Atheists. The authors claim to have “measured participants’ belief in powerful, intervening Gods, and their general religiosity.” Which makes you wonder how they classed the Buddhist and “Other.” No word on the breakdown of how participants answered the “religiosity” question.

Ah, skip it; because the next is more fascinating. “We then employed the 3PPG–an economic game commonly used to measure altruistic punishment.” The words which struck yours truly were “commonly used.” It must be common, because there isn’t word one in the paper or supplementary material of what this creation is. But I can reveal to you it is the “Third-Party Punishment Game,” a frivolity invented by academics designed to flummox undergraduate participants in studies like this. About that, more another day.

The “game” runs so (sorry for the length, but do read it):

player A receives 20 dollars, and must share that money between herself and player B in two-dollar increments, without input from player B. In the second stage, player C [who presumably knows what A did], who has received 10 dollars, can spend some or all of that money to reduce player A’s final payout: For every dollar that player C spends, player A loses three dollars. Player A’s behaviour does not affect player C, all players are anonymous and expect no further interactions, and punishing player A costs player C money. People treat sharing money evenly between players A and B as the (cooperative) norm; thus, player C’s willingness to punish player A for selfishly violating this norm can be taken as an index of altruistic punishment of non-cooperators.
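The payoff structure described in that passage can be sketched in a few lines of code (a minimal sketch of the 3PPG as the quote describes it; the function name and the sample splits below are my own illustration, not from the paper):

```python
# A sketch of the Third-Party Punishment Game (3PPG) payoffs as quoted above.
# Player A splits $20 with B; player C, given $10, may spend money to punish
# A at a 3-to-1 rate. The example values are invented for illustration.

def tppg_payoffs(a_keeps, c_spends):
    """Final payouts (A, B, C).

    a_keeps:  dollars (in $2 increments, 0-20) that A keeps from the $20 pot.
    c_spends: dollars (0-10) that C spends on punishment; each dollar spent
              removes three dollars from A's payout.
    """
    b_gets = 20 - a_keeps                     # B receives whatever A gives
    a_final = max(a_keeps - 3 * c_spends, 0)  # punishment at 3-to-1
    c_final = 10 - c_spends                   # punishing is costly to C
    return a_final, b_gets, c_final

# A selfish split ($16/$4), punished by C spending $2:
print(tppg_payoffs(16, 2))  # -> (10, 4, 8)
# An even split, no punishment:
print(tppg_payoffs(10, 0))  # -> (10, 10, 10)
```

The key feature, as the authors use it, is that the last line of the function makes punishment costly to C, so any nonzero `c_spends` is read as "altruistic punishment."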

In other words, Player C looks at how much A gave B. If C thinks this too low, C sacrifices some of his own money to reduce the amount A kept. But A and C got the money for free, and since these are students we do not know if A actually knew B in real life, or if C knew either. For example, if I (as A), Uncle Mike (as B), and Ye Olde Statistician (as C) were to play this game, I would split the money with Uncle Mike and Ye Olde Statistician would go along. This is because we were pals before the game commenced. But if we were enemies, something entirely different would occur. The authors never mention whether they look for these kinds of effects in this or in any experiment. Leave finding flaws and contrary evidence for others.

But never mind, because C giving up some of his play money is scarcely the same thing as C desiring that a child rapist be tossed in jail to rot, even though C knows that the cost of the rapist’s cell will be taken from his wallet. But C in real life hardly knows even that. C knows that he pays taxes and that some of his taxes go to prison upkeep, but those taxes also go to pay for the fuel to ferry the president around on Air Force One from fund raiser to fund raiser. That is, most of us Cs don’t think that ponying up taxes is altogether altruistic.

The authors are mute on this objection, too.

Enter The P-value

We regressed participants’ levels of altruistic punishment [amounts of money] on their God beliefs and their religiosity (both centred around 0) simultaneously…participants who believed more strongly in a powerful, intervening God reported less punishment of non-cooperators, β = -0.58, t(17) = 2.22, p = 0.04; whereas more religious participants showed a trend towards reporting greater punishment, β = 0.33, t(17) = 1.67, p = 0.11.

And there it is. Theists reported less punishment and more punishment. Except that the p-value for the “more punishment” isn’t small enough to excite. (And a linear regression is at best an approximation here.) The authors also discovered “more religious people tended to believe in powerful, controlling Gods.” The correlation wasn’t perfect, but neither should it be when you mix Buddhists and Christians. Let’s don’t forget that this regression model only included 6 atheists for its contrast.
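For readers curious about the mechanics behind those reported numbers, here is a minimal sketch of that kind of analysis: an outcome regressed on two centred predictors simultaneously, done by hand with ordinary least squares. Every number below is simulated for illustration; nothing comes from the paper. Note that with 20 subjects and an intercept plus two predictors, the 17 degrees of freedom in the reported t(17) fall out automatically:

```python
# Simulated illustration of regressing punishment on centred "God beliefs"
# and "religiosity" simultaneously; all data here are invented.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                    # the first study had 20 subjects
god_belief = rng.normal(size=n)
religiosity = 0.5 * god_belief + rng.normal(size=n)   # correlated predictors
punishment = -0.5 * god_belief + 0.3 * religiosity + rng.normal(size=n)

# Centre predictors around 0, as the authors say they did
X = np.column_stack([
    np.ones(n),
    god_belief - god_belief.mean(),
    religiosity - religiosity.mean(),
])

# Ordinary least squares: beta = (X'X)^{-1} X'y
beta, *_ = np.linalg.lstsq(X, punishment, rcond=None)
resid = punishment - X @ beta
df = n - X.shape[1]                       # 20 - 3 = 17 degrees of freedom
s2 = resid @ resid / df
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
t_stats = beta / se                       # compare to a t distribution, df = 17
print(df, np.round(t_stats[1:], 2))
```

The p-values in the quote are then tail probabilities of those t statistics under a t distribution with 17 degrees of freedom, which is why a coefficient can flip from "finding" to "trend" on the strength of a hair's difference in t.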

The really good news is that “Given the strong correlation between religiosity and conservatism (r = 0.52), we conducted an additional analysis including conservatism in the regression. Results are reported in table 1; we found no evidence that conservatism explains the religion–punishment association.”

Sorry, Chris Mooney.

The authors did four more studies, all similar to this one, but with increasingly complicated regression models (lots of interactions, strong hints of data snooping, etc.). The findings don’t change much. In their conclusion, however, they include these strange words: “In our research, we found it necessary to remind participants of their beliefs for these beliefs to influence their decisions.” This sounds like coaching, a way to induce results the authors expected.

The real lesson to us is how this complex mass of data is squeezed into the terse, and misleading, headline.

HT HotAir.

Regular readers will recognize frequent commenter and foil Luis Dias, who today offers us his defense of relativism. As the Stanford Encyclopedia of Philosophy notes, “Although relativistic lines of thought often lead to very implausible conclusions, there is something seductive about them…”

I know, dear reader, that just by reading the title you will raise your eyebrows in irritated frustration. How could one possibly defend a position like this? How could one defend nihilism, moral relativism, and the other most vile depravities that mankind’s ever produced? I can already smell your growing ennui over the stubborn usual liberal idiocies…Well, I’ll be glad to try, not to dissuade you from your absolutist beliefs, but at least to give you the tools to judge Relativism and its merits properly, apart from the usual caricatures. So please indulge me; I’ll try to be as brief and as clear as I can be.

I propose to go directly to the juice here and try not to derail too much. Relativism is so false, you will shout, due to the following obvious undeniable truths:

• A philosophy that states that the Truth is there is no Truth is nonsensical, and self-contradictory;
• A philosophy that states morality is not objective is equivalent to having morality as a “fad”, neither right nor wrong, but just whatever people in a given time feel is “right”, and in such a situation “rape” could even be considered a “good thing”;

These criticisms may sound robust enough to end the conversation, until you actually pause for a moment and judge the alternative with the same scalpel, the same rigor. Once you start doing this, you see the cracks opening, an entire edifice shattering before your eyes. The alternative I am talking about is Absolute or Objective truth: a Truth that is “independent” of the human mind, absolutely true irrespective of anything else. Intuitively you’d guess this is a much more robust philosophy. Things are either Right or Wrong with a capital letter, and it’s merely our fault for not getting it; after all, Errare Humanum Est and all that.

That’s where the problem lies, that’s where the cracks open. We have to ask ourselves this simple question: is there any Truth about the World that we can be 100% sure of? Aren’t all truths either conjured by ourselves or plain hearsay? Isn’t everything we can utter just a conjecture hinged upon other conjectures? Is an absolutist philosopher capable of producing the Absolute Truth about anything at all (other than tautologies, that is)? You may think it rather easy to come up with at least one proposition of the sort (you may even try the Cartesian one, for the sake of the Tradition you may be so enamored with), but even into those we can easily inject the poison of doubt and ambiguity. Cogito Ergo Sum is filled with assumptions about how the world works which are accepted without questioning. What if we question them? What is left of this absolute truth but ashes?

My position is that this Truth hasn’t been established at all. Mr. Briggs will tell you that there are some Objective Truths that we “just know” intuitively with our “gut,” and that the “null hypothesis” is that these truths exist; those who are sceptical of these are welcome to try to prove they do not. To me this is not only a terrible cop-out, but it brings huge problems. To the Socratic question “how do you know this to be true, then?” such people will just irritatedly answer “I just KNOW, ok? Get off my lawn!” Sorry, not good enough. What if my gut tells me a different story than your gut? What then: will you simply deny my gut’s “authority” over yours? You can see, from these simple questions, the deluge of silliness that comes from assuming we have such a direct connection with the Truth. Why is this important, you’ll probably ask. Men and women may not know when they actually stumble upon absolute truths, but they exist nevertheless, don’t they?

But if you have no tools to tell when you have stumbled upon those truths, how do you know they even exist? More importantly, if you cannot know an absolute truth, what makes you in any way different from a relativist? From the omniscient point of view, you are just as cluelessly wandering around silly pseudo-truths as the perverted are. The only difference is that the latter aren’t blinded by some righteous posturing on the issue.

But that’s not…“What about morality!” you will cry. If Relativism were “true”, wouldn’t we be rapists and criminals? Wouldn’t it be possible to create a moral rule where you could just do whatever you wanted to anyone else? Well, dear reader, if that is an empirical test of the hypothesis, then clearly Relativism wins. Even an absolutist people like the Hebrews raped, tortured, killed, and committed genocide against many others, not against but in the name of their god, as the most just thing to do. In other words, what you consider Relativism’s worst nightmare, were it “true”, has already happened many times in History. Absolutism didn’t stop it from happening; it actually condoned it all. Slavery was deemed ok. Beating children was deemed moral. Eating beef is still deemed awesome and juicy instead of barbaric, like it will be in a hundred years (my prediction!).

Absolutism is, ironically, more relativist than Relativism itself, for it does not even recognize its temporality, and like in an Orwellian nightmare, is always insisting that Eurasia was always and will always be a good thing to bomb (we just didn’t know it before, and will probably forget it in the future, my bad).

“Why now, you are just being beyond silly.”

No, I am not. Ponder: which is worse? A philosophy that accepts the fragility and limited point of view of its wisdom and knowledge, or a philosophy that really thinks there is an ultimate point of view of absolute wisdom worth possessing? A philosophy that is humble enough to understand the temporality of its judgements, or a philosophy that arrogantly judges everything around it with a scent of the eternal, always forgetting how in the past other absolutists judged with the same arrogance but with wildly different moralities?

Last but not least, the so-called inconsistency. It’s a myth friends. No Relativist will claim to absolutely know there are no absolutes. Please give us more credit than that. It’s very easy to understand: we are claiming we cannot see how one can possibly assert anything absolutely. We are not saying that no one could ever do such a thing. Such a statement is a strawman. Bury it, leave it alone. We do not absolutely know there are no absolutes. We, like Poincaré, simply do not care about that hypothesis. We simply require none of it, we can live without all of those absolutist requirements. We live in the Sea of Limitations, the Valley of Finitude, the Mountains of Humility. We have the gall to say we ought to be humbler. And so can you.

I’ll be ready to further these thoughts and more in the comments below. Fire away, friends.

You might have heard what happened when Stanford professor Andrew Ng put his machine learning course (a practical kind of statistical modeling) on-line. He expected mild interest. One hundred thousand students signed up.

This was enough to excite even the New York Times, which dispatched Tom Friedman to investigate. He called Ng’s success a “breakthrough”, which it certainly is, in its way.

Friedman also puts us on to a new company called Coursera, which offers courses by professors from well known universities, in much the same spirit as Ng offered his. “The universities produce and own the content, and [Coursera is] the platform that hosts and streams it.” Too, many universities already have on-line courses housed on campus.

Several things. Stanford pays Ng’s salary. This money is only partly for teaching, and more so for Ng to publish papers—which contain the material which will eventually be taught. If Stanford didn’t pay him, he would have little time to think of what new to say. Also, Stanford rightly owns the content of Ng’s lectures. Pay for work, etc. Same thing at Coursera, which hires out the professors for a cut of the pie. All well and good.

Some courses are ideal for the web. The closer the course content is to cookbook recipes, the apter the fit. I mean no disrespect. A course which shows you how to install and run a certain program is nothing but a series of recipes: a well-marked path with milestones and a known destination. Basic machine learning fits this scheme. As do the courses offered at Coursera: Algorithms, Calculus, Introduction to Logic, Vaccines, and Securing Digital Democracy (electronic voting schemes). Recipe does not mean easy.

But maybe not statistics. Unless you want a course in classical frequentist thinking, which is cookbook all the way. Coursera has one just like this. Taught, like it is in many places, by a psychologist (who I’m sure is a nice guy).

Don’t see any courses in history, poetry, literature, high-level philosophy, and the like: all classes which are not amenable to multiple-choice testing. These are courses which don’t necessarily have an end, or have different possible ends depending on the mixture of students, or which require students do a lot of writing and talking.

On-line, it’s just as easy for a professor to grade one multiple choice or only-one-correct-answer test as it is to grade 100,000 such tests. But only if the class is recipe-based. One professor could not read through 100,000 essays, or listen to as many presentations. In fact, for these kinds of “free form” courses, he may not be able to handle as many students on-line as he could in person, since the material delivered remotely introduces some level of ambiguity and slows down interaction.

Student interaction for recipe courses, which are (and should be) more popular, is greater, because if the recipe says that “at time T, add X cups,” then it’s likely many students will have this information at hand when they are queried at some forum. This is not as likely in the free form class, where the answers are rarely as firm.

The way I run my introductory class is free form. It is also not a regular course in statistics, but Bayesian from the get-go, and in the predictive sense (“all parameters are a nuisance”) regular readers will understand. This marks it as an oddity. I get away with it because of a certain rare confluence of events. But it would never fly at most universities, where professors, about their profession, are more conservative than Rick Santorum (“What! Teach Bayesian probability before frequentist? Never!”). Just ask any professor how easy it is to introduce a new course into the system.

I lecture, but not in contiguous blocks of time. I ask the students lots of questions, and frequently. The answers they give provide direction for the course. Students come to the board and work out various matters themselves, guided by me. I have students collect data (which fits a given paradigm) on any subject which interests them. This is a wonderful way to maintain interest, but it limits the use of canned examples, the use of which would free up time.

I also have to spend a lot of time walking people through basic computer tasks, especially R. As a final exam, each student presents a talk on their subject as to an audience presumably unaware of statistics (many use data from their workplace, which they later show their bosses, reportedly to good effect). They must describe their interest, the data, the pertinent questions, show pictures, explain how they quantified their uncertainty, and finally detail how they would check all when new data arises.

Could this work on line? I’m skeptical, but intrigued. Places like Phoenix University push thousands of students through their pipes, and not all the classes are recipe-like, so maybe it can be done. All ideas welcomed.

The last difficulty is “credit.” Some courses earn credit as normal classes. But the free courses universities and Coursera offer come with nothing except a pat on the back or perhaps a letter stating that the student made it through. Of course, I love this trend away from formal “certification” and towards actual love of learning. Seems to work best for recipe courses, though, whose students actually want to bake a cake.

The “free” part doesn’t hurt when attracting students: a fine plan for behemoth institutions; wouldn’t work well for little guys like me.

Ignore below

Test of latex. Should be pretty: $\sin(x) \ne \int e^x \, dx$, $\Pr(x|e) = 0.5$