June 17, 2008 | 5 Comments

Please don’t let them do it

You will have by now heard that some are advocating the use of “instant replay” in baseball. The, for lack of a better word, entities pushing for this realize its nefarious implications, and so suggest the video tape be referenced only for disputed home run calls.

Please, God, do not let them do it.

I used to enjoy watching American football when I was a boy. Two things destroyed my pleasure in this sport. The first, and most obvious, is the increasing non-stop blather from the sportscasters, now crammed three or four to a booth. These guys never know when to shut up. Worse, broadcast colleagues in baseball thought they should get in on the act and not just call the game, but analyze every triviality. Now, instead of great announcers like Ernie Harwell and Phil Rizzuto—gentlemen who knew when to shut up and let us hear the relaxing sounds of the ballpark—we have corporate types with “communications degrees” endlessly uttering profundities like “This game isn’t over, Jim.”

This would have been tolerable in football if it weren’t for the second degrading change: The instant replay. Games now drag by as referees, doubtless worried their calls might be challenged, gather at the end of nearly every play to have a little chat about what just happened. And then there is the ridiculous spectacle of a coach prancing up to the sidelines to delicately toss a little red flag on the field when he feels piqued. It is a pathetic thing to see.

I predict that not too many years from now, the game of football will have evolved so that each team’s roster is supplemented by attorneys (both offensive and defensive ones, naturally). At the conclusion of each play, the lawyers will charge the field to dispute the play—challenging the outcome on the grounds of insanity, income disparity, etc.—to be settled by a jury of tennis fans (who presumably will not prejudice the outcome). Some plays will be so contentious that they will end up in court. It will eventually take years to finish a “season” as the courts become backlogged with cases.

Please do not let this happen to baseball. Umpires, like the MBA business executives who think of things like instant replay, make mistakes, but so what? You will get over a bad call. Some say instant replay makes good “business sense” because “so much is at stake.” Nonsense. It is only a game, and it is meant to be entertaining.

It will suck the life out of baseball, interrupt the natural flow, and make watching the games more of a chore than a pleasure.

June 12, 2008 | 4 Comments

Peer-reviewed research: Men find looking at nearly naked women distracting

Nothing is true unless it has been demonstrated and published in a peer-reviewed journal. For example, until last week, many people suspected that when men look at nearly or completely naked women, they tend to be distracted. Anybody who believed that was foolish to do so because it had never been “scientifically” proven.

If they did believe it, they probably did so based on their academically-discredited intuitions. Amateurs.

But scientific researchers Bram Van den Bergh, Siegfried Dewitte, and Luk Warlop have finally lent scientific credibility to the popular belief, which we are now free to label as “scientific.” These researchers published their stunning findings in the June 2008 issue of the Journal of Consumer Research. The journal article was summarized in a newspaper report here.

The title of their article is “Bikinis Instigate Generalized Impatience in Intertemporal Choice.” Their abstract follows:

Neuroscientific studies demonstrate that erotic stimuli activate the reward circuitry processing monetary and drug rewards. Theoretically, a general reward system may give rise to nonspecific effects: exposure to “hot stimuli” from one domain may thus affect decisions in a different domain. We show that exposure to sexy cues leads to more impatience in intertemporal choice between monetary rewards. Highlighting the role of a general reward circuitry, we demonstrate that individuals with a sensitive reward system are more susceptible to the effect of sex cues, that the effect generalizes to nonmonetary rewards, and that satiation attenuates the effect.

If you cannot read this, do not worry, for it is not written in English, but in academese, a language which frequently borrows English words but changes their meanings, and which otherwise has no similarity to plain English. Luckily for you, dear reader, I have been trained in academese and can translate the abstract for you:

When men look at naked women, their brains get excited and they have thoughts of getting lucky. When men see naked women, they get distracted and cannot concentrate on the tasks at hand. When we showed a group of men pictures of nearly naked women, they lost patience with a betting game we tried playing with them. The hornier the men were, the less they were interested in our game, and in anything else we had to say. After a while, the men got bored of looking at the same women and wanted to move on.

As I said, this is ground-breaking research as it brings to light relationships of men to naked women never before suspected.

Rumor has it the three researchers, who are from Belgium, plan on studying the effects of increasing dosages of the C2H5OH molecule on men’s perception of female attractiveness. I, for one, cannot wait to find out.

June 10, 2008 | 4 Comments

Bill Clinton’s “Pump Head”

I have never, and will never, read Vanity Fair. Given that our culture is already saturated, more mindless celebrity tittle-tattle written by besotted suck-ups I do not need. So I missed the piece on Bill Clinton that suggested he might have suffered from a malady called “pump head”, brought on by his heart surgery.

Melinda Back, at the Wall Street Journal, wrote an article on this subject today (I have no idea how long that link will be good) which alerted me to the topic.

When surgeons cut a guy open to chop away at his heart, they usually stop it from beating (presumably, this makes it less slippery). They then hook up a machine, a pump, to oxygenate and circulate the patient’s blood. Some people are concerned that the machine, which is certainly necessary, causes harm, usually mental degradation, to those patients who live through the surgery. Lots of mechanisms have been proposed which might cause this harm, but there is no agreement or even direct evidence that any of them actually do cause harm.

“Pump head”, not to put too fine a point on it, is bunk.

The first “diagnosing” of this strange malady came from a series of experiments that gave people before- and after-surgery mental exams. The researchers found that a certain proportion of people scored worse on the after-surgery tests, which confirmed the idea that people get dumber after having been on the pump.

To show this, they created a conglomeration of the tests that were given using a dicey statistical technique called “factor analysis,” a method with which it is far too easy to generate spurious results. But even granting that this method was applied properly and conservatively, there is still a large, glaring error in these analyses.

It is true that some people scored worse on the conglomeration-test after surgery. This is the sole evidence for “pump head.” But it is also true that some people scored better! In fact, the same exact proportion of people who scored worse, scored better. This means you could just as easily write a paper suggesting open-heart surgery as a method to boost IQ!

The problem was that the original researchers never bothered to look for people who scored better, only those who scored worse; they only examined those patients who looked like what they hoped they would look like, that is, those who seemed to get dumber.

What’s really going on is nothing more than the banal phenomenon of “regression to the mean.” If you take a test, some days you will do better, other days worse. Everybody has a natural background variability. Now, if you do score high one day, chances are that the next time you take the test, you will achieve only your average performance. Same thing if you first tested low: next time, you’re likely to improve.

If you look at a bunch of people who take the test, create two groups, one with those who scored high and another with those who scored low, and then later re-test both groups, the high group will show lower scores on the re-test, and the low group will show higher scores. It is impossible for the situation to be other than this.

This phenomenon is a boon to researchers who want to prove spurious effects, because, as I said, it is impossible for it not to manifest itself. You can prove the efficacy of, or show the potential harm of, absolutely any therapy this way.
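You can watch this happen with a minimal simulation of my own (the numbers are made up for illustration, not drawn from any pump-head study). Every “patient” has a fixed ability plus day-to-day noise; nobody gets smarter or dumber between tests, yet the low scorers “improve” and the high scorers “decline”:

```python
import random
import statistics

random.seed(42)

# Each "patient" has a fixed true ability plus day-to-day noise.
# No one's ability changes between the two tests.
N = 10_000
true_ability = [random.gauss(100, 10) for _ in range(N)]
test1 = [a + random.gauss(0, 10) for a in true_ability]
test2 = [a + random.gauss(0, 10) for a in true_ability]

# Group patients by their FIRST score, then look at the re-test.
low_first  = [(t1, t2) for t1, t2 in zip(test1, test2) if t1 < 90]
high_first = [(t1, t2) for t1, t2 in zip(test1, test2) if t1 > 110]

low_change  = statistics.mean(t2 - t1 for t1, t2 in low_first)
high_change = statistics.mean(t2 - t1 for t1, t2 in high_first)

print(f"low scorers changed by  {low_change:+.1f} on re-test")
print(f"high scorers changed by {high_change:+.1f} on re-test")
```

The low group’s average change comes out positive and the high group’s negative, every time, with no real effect present. A researcher who only tabulates the decliners will always “find” harm.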

So pump head, so far as it has been demonstrated in tests like these, is nonsense.

This means that Bill Clinton is probably no dumber now than he was before.

June 6, 2008 | 10 Comments

Publisher needed: Stats 101

I’ve been looking around on various publishers’ websites over the past few weeks to see which of them might take Stats 101 off my hands. I have also been considering bringing it out myself, like my other bestseller, but would rather avoid that.

Here is an overview of (tentative title) Stats 101: Real Life Probability and Statistics in Plain English in case anybody knows a publisher.

I have successfully used this (draft) text in several introductory, typically one-semester, courses, and will do so again this summer at Cornell in the ILR school. It is meant for the average student who will only take one or two courses in statistics and who must, above all, understand the results from statistical models yet will not do much calculating on their own. Examples come from various fields such as business, medicine, and the environment. No jargon is used anywhere except when absolutely necessary. The book has also been used for self-study.

Many books claim to be a “different” way of teaching introductory statistics, yet when you survey the texts the only things that change are the names of the examples, or whether boxplots are plotted vertically or horizontally.

Not this book. This is the only volume that emphasizes objective Bayesian probability from the start to the finish. It is the only one that stresses what is called “predictive” statistics. I do not mean forecasting. Predictive statistics focuses on the quantification of actual observable, physical data. This book teaches intuitive statistics.

Nearly all of classical statistics and much of Bayesian statistics concentrate their energies on making statements about the parameters of probability models. The student will learn these methods in “Stats 101”, too. But what the other books will not do is put the knowledge of parameters in perspective. Concentrating solely on parameters makes students too confident and gives them a misleading picture of the true uncertainty in any problem.
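A quick sketch of my own (illustrative numbers, and a simple normal approximation rather than anything from the book) shows how much confidence the parameter focus buys you: the interval for the parameter (the mean) is far narrower than the interval for the next actual observation, which is what you will see in real life.

```python
import random
import statistics

random.seed(7)

# Pretend data: 25 observations from some unknown process.
data = [random.gauss(50, 10) for _ in range(25)]
n = len(data)
m = statistics.mean(data)
s = statistics.stdev(data)

# Approximate 95% interval for the PARAMETER (the mean):
param_half = 1.96 * s / n ** 0.5
# Approximate 95% interval for a NEW OBSERVATION (predictive):
pred_half = 1.96 * s * (1 + 1 / n) ** 0.5

print(f"interval for the mean:       {m:.1f} +/- {param_half:.1f}")
print(f"interval for the next datum: {m:.1f} +/- {pred_half:.1f}")
```

With 25 observations the predictive interval is about five times wider (the ratio is roughly the square root of n + 1). Reporting only the parameter interval understates the uncertainty in anything you can actually observe.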

Hardly any equations appear in the book. Only those that are strictly necessary are given. The soul of this book is understanding, which is crucial for students who will not become statisticians (it’s crucial for the latter group, too, but they will seek out more math). Pictures, instead of confusing formulae, are used whenever possible.

All computations are done in R, and are presented in easy-to-follow recipes. An appendix of R commands leads the students through several common examples. No calculations are done by hand and the student is never asked to look up information in some obscure table. I have also set up a book website where the data used can be downloaded.

There are 15 chapters plus the aforementioned appendix. The book starts, unlike any other statistics book except Jaynes’s advanced Probability Theory, with logic. This easy introduction intuitively leads to (discrete) probability. After that, three chapters lead up to the binomial and normal distributions, emphasizing their duty in quantifying uncertainty in observable data. Building intuition is stressed. These chapters are followed by two others on R and on real-life data manipulation (all at a very basic level, presented in a very realistic, plain-spoken manner).

Chapter 8 introduces classical and Bayesian (unobservable) parameter estimation. Chapter 9 brings us back from parameter estimation to observables. The true purpose of building a probability model is to quantify the uncertainty of data not yet seen, yet no book (except very advanced monographs like Geisser’s Predictive Statistics) ever mentions this.

Chapters 10 and 11 go over classical and Bayesian testing, and again bring everything back to practicality and observables.

Chapters 12 and 13 introduce linear and logistic models; again classical, Bayesian, and observables methods are given.

The most popular chapter by far is 14, which is “How to Cheat” (at statistics). It is in the nature of Huff’s well-known How to Lie with Statistics, but brought up to date, with many examples of how easy it is to manufacture “significant” results, particularly if you use classical methods.
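One flavor of cheating is simple to demonstrate (my own toy simulation, not an example taken from the chapter): test enough pure noise against the classical 5% cutoff and “significant” results appear on schedule.

```python
import random
import statistics

random.seed(1)

def z_stat(a, b):
    """Crude two-sample z statistic; adequate for groups of 50."""
    na, nb = len(a), len(b)
    se = (statistics.variance(a) / na + statistics.variance(b) / nb) ** 0.5
    return (statistics.mean(a) - statistics.mean(b)) / se

# 100 "studies", each comparing two groups drawn from the SAME distribution:
# there is nothing real to find in any of them.
hits = 0
for trial in range(100):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    if abs(z_stat(a, b)) > 1.96:   # the classical p < 0.05 cutoff
        hits += 1

print(hits, "of 100 null comparisons came out 'significant'")
```

On average about 5 of the 100 comparisons clear the bar. Run enough comparisons, report only the winners, and you can publish “findings” from noise indefinitely.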

Finally, the last chapter gives a philosophical overview of modern, observable statistics, and ties everything together.

Each chapter has homework questions, and I am working on an answer guide now, which I imagine can be published separately. Most of the homework, especially in the chapters on statistics, has the students gather, prepare, and analyze their own data, which works wonders for their understanding.

There is a division, and sometimes animosity, that splits our field along classical and Bayesian lines. This book adds a third division by taking the minority position in the Bayesian field. The objective, logical probability camp is small but growing, and is, as I obviously feel, the correct position. Most of us are not in statistics departments, but are in physics, astronomy, meteorology, etc.—fields in which it is not just desired, but necessary to properly quantify uncertainty in real-life data. Naturally, we argue that everybody should be interested in observable data, because it is the only data that can be, well, observed.

Because of these ideas, the book is not likely to be adopted as a primary text in many statistics classes; at least, not right away. However, I have had interest from professors, especially Bayesians, who would like to use it as a supplementary text. Other professors in computer science, physics, astronomy etc. would use it directly. It’s about 200 pages in a standard trade paperback format, ideal for an optional or secondary text.

Lastly, statistics professors themselves will form a likely audience. They will not necessarily teach from the book (not all professors, obviously, teach introductory classes), but will use it as a source for a clear introduction to logical probability and non-parameter statistics. This is a new and growing area and there is a clear benefit to being first.