William M. Briggs

Statistician to the Stars!


Celeste Greig Fired Over Rape Comments

Celeste Greig

A horde of angry abortion supporters—one wonders if there are other kinds—succeeded in removing Celeste Greig from her post as president of the California Republican Assembly. Greig’s thought crime? In March she said:

The percentage of pregnancies due to rape is small because it’s an act of violence, because the body is traumatized.

I wrote about this earlier in A New Row Over Pregnancy Caused by Rape at Crisis Magazine.

The spectacle of folk who ran screaming in horror from Greig resembled residents of Tokyo fleeing Godzilla. No, strike that. Godzilla is scary and should be fled (fleed?). It’s more accurate to say that the politicians who shunned Greig were like kindergartners shrieking over the belief one of their classmates had cooties.

The San Jose Mercury News reports that Aaron Park, a prominent California Republican, called Greig’s comments “embarrassing.” This flack worried that the party—The party is mother, The party is father—would suffer were it known to consort with Greig: “You cannot put faith in someone who’s talking about the virtue of saving babies but looks like they don’t care about women who are sexually assaulted.”

That this apparatchik thought his non sequitur applicable reveals what everybody already knows: that (most) politicians care more about attaining and maintaining power than about speaking truth.

What comes of examining Greig’s comment dispassionately? Is it true or false that “The percentage of pregnancies due to rape is small because it’s an act of violence, because the body is traumatized”?

The best answer is that nobody knows, not for certain or with anything approaching certainty, whether pregnancy rates are higher, lower, or identical in women who are raped and in those who were not. There are plenty of theories, conjectures, and surmises about the subject, but little concrete knowledge. There has been no systematic or convincing collection of data and therefore no definitive study (see link above for more detail).

Then, too, it doesn’t sound “outrageous” to suggest that a body undergoing trauma will not operate as efficiently as a body swimming in more placid waters. Surely it isn’t beyond the realm of reasonable possibility to suggest that, ceteris paribus, a woman purposely aiming for motherhood has a greater chance to conceive than a woman who was brutalized. Greig’s only real error lay in asserting that this most plausible supposition was a certainty.

“Insensitive!” said the activists, a group which meets any trace of any whisper of any glimmer of any hint that abortion is morally wrong with squalls, squeals, spit, and specious squabbling. Unless the subject is the emotional state of a person, the charge of insensitivity is always a fallacy. I ask you to draw the obvious inference about the class of people who so eagerly and so often embrace it.

Suppose, arguendo, that the rate of conception for raped women was higher than for similar women who were purposely trying to conceive (or, if not “purposely”, then for those who took no steps to prevent conception; see the link). That is, assume Greig’s comment is false. Now what?

Does it follow that rape is therefore morally acceptable? Surely not, and only a mind deranged by passion would claim anybody would make such an inference (see the commenters at this page for examples).

Maybe it follows that abortion should be more accessible if rape-conception rates were higher? Well, no. If a woman conceived other than by rape, whether some other woman was raped cannot matter to the morality of her own abortion. She would still have to decide whether her abortion was morally acceptable or not.

But what about a woman who was raped contemplating abortion? Again, it does not follow that because other women were raped and conceived that therefore her abortion is morally acceptable. If abortion is allowable in cases of rape-conception, it does not matter how many rape-influenced abortions there are. If abortion is morally wrong in cases of rape-conception, then hers is morally wrong too, even if the rape-conception rate is high.

Logically speaking, then, and granting the wholesale slaughter of reason so common on moral questions, why the flap? Because many people find abortion agreeable in cases of rape-conception, but far fewer folk find abortion acceptable for the sake of convenience (which accounts for the vast majority of abortions). Abortion cheerleaders worry that if rape-conception rates were low, the populace might ban abortions altogether. And that is anathema to them.




Over-Optimism In Physician Prognoses

Poor Michael Boren lay dead at one of the ugliest places in New York City, the on-ramp to the Queensboro Bridge. Heart attack. He was only 51. He made it through just two boroughs of Sunday’s Five Boro Bike Tour before ceasing to be (materially).

Thing is, Boren’s doctor “gave him the OK to participate in the race.”

Another busted forecast.

Or rather, another plastered prognosis. Turns out doctors, like most of us, aren’t such great predictors. Human behavior is too complex for anybody to nail with anything approaching consistent accuracy, and this includes experts prognosticating in their own fields of expertise. That was the lesson of, among many others, Phil Tetlock’s Expert Political Judgment.

I was reminded of this when reading a physician’s lamentation of over-confidence, in which he pointed to a British Medical Journal paper, “Extent and determinants of error in doctors’ prognoses in terminally ill patients: prospective cohort study.”

This is just one example of an endless supply. The study: “343 doctors provided survival estimates for 468 terminally ill patients at the time of hospice referral.” There are a lot of words in the paper, but it all comes down to this picture, which isn’t as good as it could be:

Don’t wait to fill out that will

The chart is backwards to custom, which would place the predicted survival days on the “x” or horizontal axis, and the observed data on the “y” or vertical axis. The chart is also on a log-log scale, which makes it difficult to appreciate the magnitude of the errors.

But, forgiving all that, let’s take a look. If the doctors gave perfect forecasts, all the dots would line up on the solid black diagonal line: the distance from this line is the error. Not too many dots on or near the line.

Put your finger on the leftmost dot at 30 days, which is one doctor’s prediction of how long his patient would survive. Drop down from that 30 to the x-axis to learn that the patient actually lived just 2 days. That’s a huge error, especially considering this is the End Of The Road, a time when families are making hard decisions.

Because of the log-log scale the errors are larger than you would think in some places. For example, look at the topmost dot at just over 1000 days, which is about three years; the patient only lived a month (30 days). That’s a bigger error than the one at the far left, where the doctor said the patient would live around 400 days but the patient made it only to the next day (oops). The far-left error is smaller in absolute days even though its distance to the black line is greater, because the scale is not linear.

Notice that most of the dots are on the north side of the line, which means, for this group of patients and doctors, the forecasts were too much on the optimistic side; that is, the doctors said patients would live a lot longer than they actually did. You can also see a bit of cultural bias in the data: e.g., the cluster of points predicting 90 days (3 months) to live.
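For readers who like to poke at numbers themselves, here is a minimal sketch of how one might tally that optimism, assuming hypothetical (predicted, observed) pairs in days; the figures are invented, not taken from the paper:

```python
# A toy tally of forecast optimism, with invented (predicted, observed)
# survival pairs in days; these are NOT the study's data.
predicted = [30, 90, 90, 400, 1095, 14, 60]
observed = [2, 40, 95, 1, 30, 10, 75]

errors = [p - o for p, o in zip(predicted, observed)]  # positive = optimistic
optimistic = sum(e > 0 for e in errors)

print("mean error (days):", round(sum(errors) / len(errors), 1))
print("fraction optimistic:", round(optimistic / len(errors), 2))
```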

One problem in this study is the discrete nature of the prognoses. No doctor and no patient believes he will live precisely 90 days when given that forecast. There is some plus-or-minus which is understood, but maybe not in the same way by both parties. The doc’s window may be narrower than the patient’s, or vice versa.

Every good forecast provides an indication of its uncertainty. A prediction of “90 days plus or minus two months” is different from one which says “90 days to a year.” And of course, doctors more often give predictions in this form. The uncertainty is needed because the decisions a patient and his family make given a forecast are vastly different from the decisions the doctor makes.

Incidentally, assessing the quality of predictions which come with uncertainty is more difficult than making simple plots like this, but the methods to do so are well understood.
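For the curious, one such well-understood method is the interval score, a proper scoring rule for prediction intervals (Gneiting & Raftery, 2007). A minimal sketch, with invented numbers standing in for a doctor’s interval and a patient’s actual survival:

```python
# A minimal sketch of one well-understood method: the interval score of
# Gneiting & Raftery (2007) for a central (1 - alpha) prediction interval.
# Lower is better; narrow intervals that still cover the outcome win.
# The numbers below are invented for illustration.

def interval_score(lower, upper, outcome, alpha=0.1):
    """Score a central (1 - alpha) prediction interval against an outcome."""
    score = upper - lower                          # penalty for width
    if outcome < lower:
        score += (2 / alpha) * (lower - outcome)   # penalty for missing low
    if outcome > upper:
        score += (2 / alpha) * (outcome - upper)   # penalty for missing high
    return score

# "90 days plus or minus two months" vs "90 days to a year"; patient lives 45 days.
print(interval_score(30, 150, 45))   # covers the outcome: 120
print(interval_score(90, 365, 45))   # wide AND misses: 1175
```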

And there’s more to think of. Should a physician give his patients hope by telling them they’ll live longer than he really thinks? “Buck up, Mr Jones! I’ve known patients in your condition who lived for years.” Optimism is a sort of placebo, is it not? But can you tell a patient he will live “years” when you believe that patient is circling the drain? Optimism has limits, and the power of the mind (the placebo effect) is not omnipotent. Bad forecasts aren’t helpful to families, either.



Subjective Versus Objective Bayes (Versus Frequentism): Part II

John Maynard Keynes, Chance Master

Read Part I.

What is the probability that “The Detroit Tigers win today’s game” (which has not yet been played)? The truth of the proposition (in quotes) is not known and is therefore uncertain. Enter probability.

Some will use words to express their answer (“Pretty likely”, “They don’t have a chance”, “No way they can lose”), others will provide rough quantification (“90%”, “3 to 1 against”), while still others will provide serious quantification (“$50 says they win”). Finally, some will not answer at all (“I have no idea”, “I hate baseball”).

Which of these is the right answer? Assuming nobody is fibbing, they all are. (The frequentist response is given below.) Each reply is subjective because each is conditional on a set of premises supplied by each individual, premises which may or may not be articulated.

For example, “Pretty likely, given that they won their last three and the Astros (their opponent) are dead last.” Another thinks, “They don’t have a chance because I suspect Justin Verlander (Tigers’ starting pitcher) is injured.” But when you ask the man who was willing to bet $50 why, he might say, “I don’t know. It feels like the right amount.” Or he might say, “I always bet $50 on the team I think has the best chance”, which again fails to provide the premises explaining why he thinks the Tigers have the best chance.

This kind of situation is what people have in mind when they think of subjective probability. Answers can range from no probability at all (“I hate baseball”), to vague but real probabilities (“Pretty likely”), to actual quantifications (“3 to 1 against”, “$50 to win”). All depend on individual premises which we may or may not be able to elicit. This includes those situations where the person doesn’t want to or has no answer. For example, you might be asked, “底特律老虎隊奪冠的概率今天的比賽是什麼”? If you don’t speak Mandarin and haven’t any idea of the context the best answer is, “I have no idea what you’re talking about.” (Real speakers of Mandarin will say the same thing of this translation.) That is, there is no probability for you.

It is never an answer to say, “‘The event will either happen or it won’t’, therefore the probability is 50%”. That number can never be deduced from a tautology. That is, “The event will either happen or it won’t” is always (as in always) true for any event, which is what makes it a tautology, and adding a tautology to a list of premises cannot change the truth or probability of a proposition. Any number of tautologies may be added, not just one. For example, “At today’s game it will either rain or it’ll stay dry, Verlander will either pitch well or he won’t, so the Tigers will lose or they will win.” There is no content in that phrase except that the Tigers will play (and Verlander will be the pitcher).
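In symbols (a standard identity, nothing peculiar to baseball): if T is a tautology, then E-and-T is logically equivalent to E, so conditioning on T adds nothing:

```latex
\Pr(A \mid E \wedge T) = \Pr(A \mid E),
\qquad \text{where } T \equiv (B \vee \neg B).
```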

From these simple examples, we may conclude several things. (1) Probability is not always quantifiable; that is, not every probability is a precise number; (2) Probability is sometimes a range (another example: “They’ll either lose or come close to losing”); (3) Probability can be a fixed number; (4) Not all probabilities can be known; (5) The weight of evidence, how stable the probability of the proposition appears, depends on the list and strength of the premises.

What a person says may be at odds with what he believes, not only because of deception, but because sometimes words take slightly different meanings for different people or because not everybody is attentive to grammar. Our man might list as one of his premises, “Verlander will either pitch well or he won’t”, which is formally a tautology and therefore of no probative value, but subjectively he gives more weight to “pitch well” than to “not pitch well”, and so this tautology-in-form is actually informative. This is why there is confusion on the subject.

Consider two men. One gives the premise, “I don’t know much about the Tigers, but they won their last three.” Another says, “The Tigers’ batting is on fire; here are their stats. And Verlander is the best pitcher in baseball, and here is why” plus many more (a real fan). But suppose both men say the chance the Tigers will win is 80%. Adding or subtracting a premise from the second man will not change his stated probability by a great degree. But adding or subtracting a premise (particularly subtracting!) from the first man changes his by a lot. We would say the weight of evidence of the first is less than that of the second, even though both have the same probability. And this is because of the differences in the premises.
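To make the weight-of-evidence point concrete, here is a minimal sketch using a Beta-Binomial model; this is an illustration of the idea only, not a formalism the argument depends on. The invented records give both men about the same 80%, but with very different stability:

```python
# A minimal sketch of "weight of evidence" using a Beta-Binomial model
# (an illustration of the idea only, not a formalism the post commits to).
# Both men report roughly 80%, but the estimate resting on three games
# lurches when a premise (one observed game) is removed; the estimate
# resting on a full season barely moves.

def posterior_mean(wins, games, a=1, b=1):
    """Posterior mean of the win chance under a Beta(a, b) prior."""
    return (a + wins) / (a + b + games)

print(posterior_mean(3, 3))      # casual fan, 3-0:  0.80
print(posterior_mean(96, 120))   # real fan, 96-24:  ~0.795

# Now subtract one winning game from each man's premises:
print(posterior_mean(2, 2))      # casual fan: jumps to 0.75
print(posterior_mean(95, 119))   # real fan: ~0.793, barely moves
```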

What is the objective probability the Tigers win? There isn’t one, at least, not yet. And the frequentist probability? Same answer: there isn’t one yet.

Now the difference between subjective and objective probability is this: when presented with a list of premises (of unambiguous words) a subjectivist can state any probability for the conclusion (proposition) he wishes, but the objectivist must take the premises “As Is” and from these deduce the probability. The subjectivist is free, while the objectivist is bound. This is why there is no objective probability the Tigers win, because there is no “official” list of premises for the proposition.

The lack of an official list of premises is also why the frequentist must remain mute, because in order to calculate any probability the frequentist must embed the proposition of interest in an infinite (as in infinite) sequence of events which are just like the event on hand, except that the other events are “randomly” different. This constrains the type and kind of premises which are allowable. (I discuss “random” in another post.)

For example, if the “official” list—which merely means those premises we accept for the sake of argument—consists of just one premise, “The Tigers always win 80% of the time against the Astros”, the objectivist must say (given the plain English definitions of the words) the probability of a win is 80%. The subjectivist may say, if he likes, 4%. He won’t usually, but he is free to do so. The frequentist may be tempted to say 80%, but he has to first add the premise that “Tigers vs. Astros” events are unchanging (except “randomly” different) and will exist in perpetuity. Perpetuity means “in the long run.” But as Keynes reminded us, “In the long run we are all dead.” In other words, unless the frequentist “cheats” and adds to the official list of premises suppositions about infinite “trials”, he is stuck. Incidentally, the subjectivist who does say other than 80% is also usually cheating by adding to or subtracting from the official premise list, or by (subjectively) changing the meaning of the words.
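As a toy illustration of why “in the long run” is doing all the work for the frequentist, here is a minimal simulation in which the chance of a win is stipulated to be 0.8; only as the imagined trials grow without bound does the relative frequency settle:

```python
# A toy illustration of the frequentist's "long run": with the chance of a
# win stipulated at 0.8, the relative frequency only settles near 80% as
# the imagined trials pile up without bound. Pure simulation, nothing more.
import random

random.seed(1)
wins = 0
for n in range(1, 10_001):
    wins += random.random() < 0.8
    if n in (10, 100, 1_000, 10_000):
        print(f"after {n:>6} trials: relative frequency = {wins / n:.3f}")
```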

Now this is not nearly yet a complete proof that frequentism or subjectivism is doomed; merely a taste of things to come. What is clear is that probability can seem subjective, but only because, as was shown in Part I, the list of agreed-upon premises for a proposition can be difficult or impossible to discover. Next time: simpler examples. Maybe where “priors” come from.

Read Part III.




Mexican Hat Fallacy

Hola!

Reader Kip Hansen asks, “Can you please run a brief explanation of what the Mexican Hat fallacy is statistically?”

I can. The Mexican Hat Fallacy, or Falacia del Sombrero, is when a man moves from sunny to cloudy climes, such as when an hombre shifts from Veracruz to Seattle, and thus believes he no longer need wear a hat. This is false. A gentleman always wears a hat—not a baseball cap—not just because it regulates heat and keeps you dry, but because it completes any outfit.

Well, that’s the best joke I could come up with.

The term was coined by Herren von Storch and Zwiers in the book Statistical Analysis in Climate Research and it came about like this: Some fellows were wandering in the Arizona desert sin sombreros and came upon the curious rock formation pictured above (image source).

One fellow said to the other, “Something caused those rocks to resemble a sombrero.” The other fellow, more sun-stroked than the first, disagreed, “No, no thing was its cause. That’s my null.” Quoting from a paper by Herr Gerd Bürger (because I had never heard of this fallacy before):

By collecting enough natural stones from the area and comparing them to the Mexican Hat [formation], one would surely find that the null hypothesis ‘the stone is natural’ is quite unlikely, and it must be rejected in favor of human influence. In view of this obvious absurdity [von Storch and Zwiers] conclude: ‘The problem with these null hypotheses is that they were derived from the same data used to conduct the test. We already know from previous exploration that the Mexican Hat is unique, and its rarity leads us to conjecture that it is unnatural.’ A statistical test of this kind ‘can not [sic] be viewed as an objective and unbiased judge of the null hypothesis.’

Which leads me to (hilarious) joke number two. There are two kinds of people: those who find null hypotheses a useful philosophical concept, and those who don’t. This description is confusing—but then so ultimately are most stories about “null” hypotheses.

If the “fallacy” merely means that the closeness of model fit to observed data is not necessarily demonstrative of model truth, then I am with them (this is why p-values stink). You can always (as in always) find a model which fits data arbitrarily well—as psychoanalytic theory does human behavior, for example—but that does not mean the model/theory is true. Good fit is a necessary but not sufficient condition for model/theory truth. A (nearly) sufficient condition is if the model/theory predicts data not yet known, or not yet used (never used in any way) to fit or construct or posit the model/theory—as psychoanalytic theory does not well predict new behavior.

The parenthetical “nearly” is there to acknowledge that, in most cases, we are never (as in never) 100% certain an empirical model/theory is true. But we can be pretty sure. Thus we do not say “It is 100% certain evolutionary theory is true,” but we can say, “It is nearly certain evolutionary theory is true.”
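A minimal sketch of that fit-versus-prediction point, with invented data and a stock polynomial-fitting routine standing in for a fancy model:

```python
# A toy demonstration that good fit is not model truth: a degree-9
# polynomial threads ten noisy points (near) exactly, then embarrasses
# itself on a point it never saw, while a plain line does fine.
# Invented data; numpy may warn the high-degree fit is poorly
# conditioned, which is rather the point.
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(10.0)
y = 2 * x + rng.normal(0, 1, 10)    # truth: a noisy straight line

wiggly = np.polyfit(x, y, 9)        # interpolates the training points
line = np.polyfit(x, y, 1)          # the humble alternative

x_new = 12.0                        # a point never used in fitting
print("degree-9 prediction:", np.polyval(wiggly, x_new))
print("line prediction:", np.polyval(line, x_new))
print("truth (no noise):", 2 * x_new)
```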

So much is Stats 101. Yet I’m still perplexed by Bürger-von Storch-Zwiers’s example. If we “already know from previous exploration” that the Mexican Hat formation was caused by (say) weathering, then collecting rocks from nearby isn’t useful—unless one wants to play King of the Hill. And what does “comparing” these rocks to the formation mean? Should the individual stones resemble the formation in some way for the formation to be “natural”? The rocks nearest will be made of the same material as the formation, so this is no help.

Regarding the possible causes of, or hypotheses about, the formation: they are infinite in number. It is we who pick which to consider. It could be, for example, that we’ll soon see a History Channel “documentary” which claims ancient Egyptians were flown to Arizona by aliens under the guidance of angels to build the Sombrero so that the Hopi could use it in a religious ceremony that was eventually secretly used by Hitler in his bid to conquer the USSR.

Let’s call this the “null” hypothesis. Why not? The “null” is ours to choose, so it might as well be something juicy. I bet if we link this around, given the ingenuity of internet denizens, within a week we would have enough corroborative evidence for it to satisfy any NPR listener.

Speaking of hats, if you’re looking for a genuine Panama to cool your pate in the summer months, may I recommend Panama Hats Direct? I get nothing for this endorsement, except the satisfaction of helping this fine company stay in business. (If this is your first, go for the $95 sub fino. It is a fantastic deal.)


