March 1, 2009 | 2 Comments

We’ll miss you, Paul Harvey

Those of us who love radio love the human voice. Few, if any, voices were as beautiful as Paul Harvey’s.

I can remember listening to him when I was a kid in Detroit. His voice came out of a small radio my grandparents had on a chair in the corner of the dining room. Before cable TV, that radio was always on, either playing news or this kind of piano music that mimicked, badly, popular tunes in the lowest register. I didn’t know a piano had that many keys below C0.

A little older and a lot farther north, we still had Paul Harvey, though only when the skies were clear because the station that carried him was back in civilization.

Harvey was even in Okinawa, broadcast over the a-farts—that is, Armed Forces Radio & Television Service—network.

He was on WABC here in New York, on during the old Curtis & Kuby show.

Every five- and fifteen-minute show was tight, and flowed beautifully. Never mind his editorializing. It was a small price to be able to hear the rest of the show.

The way he was going, you’d think he would have lasted forever—especially if his advertisers got their way. Nobody could sell like Harvey.

We’ll miss him because he was one of the last distinctive voices left in radio.

Chicago Tribune obit here.

Most shows ended with a groaner, like this one:

February 28, 2009 | No comments

RSS content feed still busted; comments ok

The RSS content feed still has an extra space in it somewhere. I have checked many, many files, used a plug-in that automatically cycles through files to remove leading and trailing spaces, re-written some PHP, all to no avail.

I cannot find that damn space.

I’ve managed to fix the comments feed (a space in feed-rss2-comments.php).

I’m still on it, but I’ll be unable to work on it today.

I also owe my wishcasting pals a better explanation. But that will have to wait until the weekend’s over.

February 27, 2009 | 36 Comments

Wishcasting the McCain-Obama presidential election

Update: 2 March 2009. I am withdrawing this post, in the sense that I no longer think it is strictly accurate. See this post to see why. But I want to leave this post up for others to see how not to do statistics.


Right after the close of the Republican and Democrat conventions (in 2008), I asked readers to participate in a study where they could guess who would win the presidential election. They could also indicate who they wanted to win, and if they ordinarily skewed Conservative or Liberal.

The idea was to search for traces of wishcasting, which is what happens when people let their desires influence their judgment about what will happen.

A wishcasting sales manager might suppose he will make a higher sales number next month than he probably will, because he wants to save his job. A climate activist might forecast a higher probability of doom because he wants mankind to be responsible for various real and imagined ills. A sports fan will similarly put too much weight on a victory for his favorite team.

Or a voter might guess too high a probability for his candidate’s victory given that he desires him to win.

It is important to recognize wishcasting traits in yourself—or in your company—so that you can reduce or eliminate them and produce more accurate, and so more valuable, forecasts. Recognizing wishcasting in your own forecasts is the first step to making better ones.


Please remember that these guesses were made right after the conventions, well before the end-game of the campaign. At that point, the information on the candidates was roughly equal. Both media reports and personal conversations indicated that both candidates had a chance to win. That would obviously change later in the campaign, but a crude parity was in place after the conventions.

We received 624 legitimate responses (see the Data Caveats section for more details). 79% thought McCain would win, 21% Obama. Obama won 53% of the popular vote, McCain 46%. Given the information they had at the time, voters guessed too high for McCain and too low for Obama.

Not surprisingly, given the nature of this blog, 83% wanted McCain to win, 15% wanted Obama, and 2% chose not to say. This was the same breakdown for philosophy: 83% reported Conservative, 15% Liberal, and 2% unknown.

80% of the participants were men. The median age was 51 (same for men and women), with a range between 16 and 89.

The tricky part of this analysis is that it is impossible to say exactly how much wishcasting was done. It might be true that each of the respondents did not let their desires influence their guess at all. Given what we know of human nature, this is probably false, but we cannot say with certainty. Everything below, therefore, is just an estimate.

We also cannot say anything about the level of wishcasting in any one person: all estimates can only be for the “average” voter, that is, the type of voter who would take part in an internet study like this. If we had repeated guesses from one person, then we could home in on his wishcasting component, but we only have one guess per person, so we can only say something about the group.

This table gives a first indication. It estimates the probability of guessing each candidate would win, given whom you wanted to win:

                        Guessed winner
                        McCain   Obama
  Wanted McCain           89%     11%
  Wanted Obama            25%     75%
  No stated preference    67%     33%
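
As a sketch of how such a conditional table is tallied: the raw responses are not reproduced in the post, so the counts below are invented, chosen only so that they reproduce the rounded percentages reported here (624 total, 83%/15%/2% by desire, 79%/21% overall guesses).

```python
from collections import Counter

# Hypothetical raw responses: (whom the respondent wanted, whom they guessed).
# These counts are illustrative, not the actual survey data.
responses = (
    [("McCain", "McCain")] * 461 + [("McCain", "Obama")] * 57 +
    [("Obama", "McCain")] * 24 + [("Obama", "Obama")] * 73 +
    [("None", "McCain")] * 6 + [("None", "Obama")] * 3
)

counts = Counter(responses)
for want in ("McCain", "Obama", "None"):
    # Condition on who the respondent wanted, then take the guess proportions.
    total = sum(n for (w, g), n in counts.items() if w == want)
    row = {g: counts[(want, g)] / total for g in ("McCain", "Obama")}
    print(f"Want {want}: guessed McCain {row['McCain']:.0%}, Obama {row['Obama']:.0%}")
```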

Of those who wanted McCain to win, 89% thought he would win, and only 11% thought Obama would. Likewise, of those who wanted Obama to win, 75% thought he would. Of the small number who didn’t express a desire, two-thirds thought McCain would win. Those figures of 89% and 75% are pretty big and indicate that some kind of wishcasting has taken place. Why?

If there were no wishcasting, and assuming the 600-some people’s actual votes did not influence the election unduly (presumably everybody who wanted a candidate to win voted for him), we would expect that who you wanted to win would have no bearing on who you thought would win. People should be able to separate their desire from their judgment.

Since overall 79% thought McCain would win, if no wishcasting took place, we would expect that 79% of those who wanted McCain to win to say he would. But 89% did. That 10-percentage point difference is the amount of wishcasting that took place among McCain supporters.

Since overall 21% thought Obama would win, then if no wishcasting took place, we would expect that 21% of those who wanted Obama to win to say he would. But 75% did. Thus, the 54-percentage point difference is the amount of wishcasting among his supporters.
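
The two bias figures are simple arithmetic. A minimal sketch, with the survey percentages hard-coded:

```python
# Wishcasting bias, as the text defines it: the share of a candidate's
# supporters who guessed he would win, minus the share of ALL respondents
# who guessed he would win (both in percentage points).
def wishcasting_bias(pct_guess_given_want: int, pct_guess_overall: int) -> int:
    return pct_guess_given_want - pct_guess_overall

print(wishcasting_bias(89, 79))  # McCain supporters: 10 points
print(wishcasting_bias(75, 21))  # Obama supporters: 54 points
```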

For the people who didn’t say whom they wanted, we would have expected a 50/50 split, but we saw a slight leaning towards McCain, 67/33. We shouldn’t read too much into this, as there were only 9 such people.

First finding

In other words, Obama supporters, as a group, wishcasted at a rate more than five times higher than McCain supporters (54 points versus 10). This result is not surprising now, given how the election played out, particularly in the media.

We cannot dismiss the element of doomcasting, either. This is when people guess what they don’t want to happen. Doomcasters have a counter-balancing effect on wishcasters, pulling the data back to where we would expect it had people not let their desires influence their guesses in a positive direction. Since we see a wishcasting effect, we cannot reliably estimate any doomcasting effect from the above table.

But we can say more. The next tables are just like the first, but broken up by those who identified themselves as Conservative or Liberal.

Estimated probability of guessing each candidate would win, given whom you wanted to win, by philosophy:

  Conservative         Guessed winner
                       McCain   Obama
  Wanted McCain          89%     11%
  Wanted Obama           35%     65%

  Liberal              Guessed winner
                       McCain   Obama
  Wanted McCain          89%     11%
  Wanted Obama           23%     77%

For Conservatives, the amount of wishcasting for McCain was the same: a 10-percentage point bias. But for Conservative Obama supporters, the wishcasting fell by 10 percentage points, to a 44-percentage point bias.

For Liberals, the amount of wishcasting for McCain was the same. But for Obama supporters, it increased a slight amount, to a 56-percentage point bias.

There were not enough people who did not specify a desired candidate to break down the numbers into philosophical groups.

Second finding

In other words, Liberal Obama supporters wishcasted at a rate higher than Conservative Obama supporters. The amount of McCain wishcasting remained the same regardless of philosophy, indicating more consistency. These results are also not surprising.

Third finding

If we break the data down by age into two groups (tables not shown), those older and those younger than the median 51 years, we find the McCain wishcasting bias remains the same, but for the older Obama supporters it goes up to a 65-percentage point bias. This was somewhat surprising, given that we heard during the campaign that younger people who preferred Obama were more zealous. The younger group had just about the same bias as before (49 points).

Fourth finding

Among males, the McCain wishcasting bias stayed the same. But among females it increased just slightly to 13-percentage points. The amount of Obama wishcasting bias was the same for both males and females.

There was not enough data to break things down reliably any further: for example, age by philosophy, sex by age, and so on.


Wishcasting almost surely took place in the McCain-Obama presidential election. This conclusion is conditional on the poll giving usable data (as to that, see the next section).

We must remember that these results are relevant for people who would come to a blog like this, during the specified time, and for elections like the one we had. At the least, we have said something about our small community; at the most, we have said something about average web browsers. We have probably not said much about the general population.

We should understand that any given Obama voter should not necessarily have subtracted 54 percentage points from his guess that Obama would win. That 54-percentage point bias was for the group and not necessarily for any individual. This implies that some people should have subtracted more, some less. The only way to say something about an individual is to collect more data on that individual so that we can estimate his typical bias.

Who wishcasted? Well, your author took part in the study: I thought McCain would win and also wanted him to. Wishcasting was probably there to some extent.

Other McCain supporters wishcasted, too, but not by very much on average. It didn’t seem to matter if they were young or old, male or female, or whether they identified themselves as Conservative or Liberal, the amount of McCain bias was about the same. Again, this was characteristic of readers of this blog, so we should be careful to say the same is true for all McCain supporters.

Those who wanted Obama to win really let that desire influence their guesses. Liberal Obama supporters wishcasted the most on average, a 56-percentage point bias; those who listed themselves as Conservative were more temperate, but not as temperate as McCain supporters. Sex didn’t make any difference to the results, but age did: older Obama supporters wishcasted more than younger ones did, a finding that goes against conventional wisdom. Once more, this result is for readers of this blog, or people like them.

Suppose you are, or were, an Obama supporter and you say, “So what if I say I wanted him to win. He did win, didn’t he? What does wishcasting have to do with anything? I made the right guess.” Yes, you did. This time.

The result—at the time of the conventions—was by no means a foregone conclusion. McCain might have won. You were making a guess about an uncertain future. You got it right this time, but you might not get it right next time. In making predictions of this type, if you are like the typical Obama supporter in this study, you are letting your desires influence your judgment too much, and over the course of many guesses you will make more mistakes than the folks who do not wishcast. You’ll be losing either money, or prestige, or something.
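
Why the wishcaster loses in the long run can be shown with a toy simulation—all the numbers here are invented for illustration, not estimated from the survey. A calibrated forecaster who states the event's true 50% chance is scored against a wishcaster who inflates his stated probability to 90% because he wants the event to happen, using the Brier score (squared error; lower is better):

```python
import random

random.seed(1)

def brier(forecast: float, outcome: int) -> float:
    """Squared error of a probability forecast against a 0/1 outcome."""
    return (forecast - outcome) ** 2

TRUE_P = 0.5      # invented: the desired event really has a 50% chance each time
WISHCAST_P = 0.9  # invented: desire inflates the stated forecast to 90%
N = 100_000

calibrated_total = 0.0
wishcast_total = 0.0
for _ in range(N):
    outcome = 1 if random.random() < TRUE_P else 0
    calibrated_total += brier(TRUE_P, outcome)
    wishcast_total += brier(WISHCAST_P, outcome)

print(f"calibrated Brier score: {calibrated_total / N:.3f}")  # 0.250 exactly
print(f"wishcaster Brier score: {wishcast_total / N:.3f}")    # ~0.41: worse
```

The wishcaster wins the occasional round when the event happens, but averaged over many guesses his score is markedly worse.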

There are probably plenty of formal studies, of which I am not aware, that show that the more controversial the matter the more people let their desires influence their predictions of the future. These findings do support the anecdotal perception that Obama supporters were more emotional than were McCain voters. It’s not that McCain supporters did not let their feelings influence them—again this data can say nothing about any individual, just the average behavior about a group of persons—but that influence was not sharp.

Obama supporters were, as everybody knows, more passionate (see the next section in particular). They let this passion sway them more. It worked out for them this time, but that passion can easily work against them the next time they make a prediction.

Data Caveats
Continue reading “Wishcasting the McCain-Obama presidential election”

February 25, 2009 | 30 Comments

Peer review

Here is how peer review roughly works.

An author sends a paper to a journal. An editor nearly always sends the paper on to two or more referees. The referees read the paper with varying degrees of closeness, and then send a written recommendation to the editor saying “publish” or “do not publish.” The editor can either accept or ignore the referees’ recommendations.

The paper is then either published, sent back to the author for revisions, or rejected.

If the paper is rejected, the author will usually submit it to another journal, where the peer review process begins anew. This cycle continues until either the paper is published somewhere (the most typical outcome) or the author tires and quits.
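
The cycle above can be sketched as a toy simulation—the acceptance probability is invented, and the editor here is simplified to one who always follows the referees, which real editors need not do—to show why a determined author nearly always gets into print somewhere:

```python
import random

random.seed(0)

def submissions_until_published(p_referee_yes: float = 0.3,
                                n_referees: int = 2,
                                patience: int = 10) -> int:
    """Simulate one paper circulating among journals.

    Each referee independently recommends "publish" with probability
    p_referee_yes (an invented number). The simplified editor publishes
    if any referee recommends it. Returns the submission count at
    acceptance, or -1 if the author quits after `patience` rejections.
    """
    for attempt in range(1, patience + 1):
        if any(random.random() < p_referee_yes for _ in range(n_referees)):
            return attempt
    return -1

results = [submissions_until_published() for _ in range(10_000)]
published = sum(r > 0 for r in results)
print(f"{published / len(results):.1%} of papers eventually published")
```

Even with each referee favorable only 30% of the time, nearly every simulated paper finds a home before the author's patience runs out—consistent with publication being the most typical outcome.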

Here are two false statements:

(A) All peer-reviewed, published papers are correct in their findings.

(B) All papers that have been rejected1 by peer review are incorrect in their findings.

These statements are also false if you add “in/by the most prestigious journals” to them. (A) and (B) are false in every field, too, including, of course, climatology.

A climatology activist might argue, “Given what I know about science, this peer-reviewed paper contains correct findings.” This is not a valid argument because (A) is false: the climatology paper might have findings which are false.

If the activist instead argued, “Given what I know, this peer-reviewed paper probably contains correct findings” he will have come to a rational, inductive conclusion.

But a working climatologist (gastroenterologist, chemist, etc., etc.) will most likely argue, “Given my experience, this peer-reviewed paper has a non-zero chance of containing correct findings.” Which is nothing more than a restatement of the falsity of (A).

The “non-zero chance” will be modified to suit his knowledge of the journal and the authors of the paper. For some papers, the chance of correct findings will be judged high, but for most papers, the chance of correct findings will be judged middling, and for a few it will be judged low as a worm’s belly.

Here is a sampling of evidence for that claim.

(1) Rothwell and Martyn (abstract and paper) examined referees’ reports from a prominent neuroscience journal and found that referee agreement was about 50%. That is, there is no consensus in neurology.

(2) No formal study (that I am aware of) has done the same for climatology, but personal experience suggests it is similar there. That is, there is at least one published paper on which the referees do not agree (at what is considered the best journal, Journal of Climate).

(3) Pharmacologist David Horrobin has written a commentary on peer-review in which he argues that the process has actually slowed down research in some fields. He also agrees with my summary:

Peer review is central to the organization of modern science. The peer-review process for submitted manuscripts is a crucial determinant of what sees the light of day in a particular journal. Fortunately, it is less effective in blocking publication completely; there are so many journals that most even modestly competent studies will be published provided that the authors are determined enough. The publication might not be in a prestigious journal, but at least it will get into print.

(4) I have just received an email “Invitation to a Symposium on Peer Reviewing” which, in part, reads:

Only 8% members of the Scientific Research Society agreed that “peer review works well as it is”. (Chubin and Hackett, 1990; p.192).

“A recent U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research.” (Horrobin, 2001)

Horrobin concludes that peer review “is a non-validated charade whose processes generate results little better than does chance.” (Horrobin, 2001). This has been statistically proven and reported by an increasing number of journal editors.

But, “Peer Review is one of the sacred pillars of the scientific edifice” (Goodstein, 2000), it is a necessary condition in quality assurance for Scientific/Engineering publications, and “Peer Review is central to the organization of modern science…why not apply scientific [and engineering] methods to the peer review process” (Horrobin, 2001).

This is the purpose of the International Symposium on Peer Reviewing (ISPR), being organized in the context of The 3rd International Conference on Knowledge Generation, Communication and Management (KGCM 2009), which will be held on July 10-13, 2009, in Orlando, Florida, USA.

Be sure to visit the first link for more information.

(5) Then there is the Sokal Hoax, where a physicist sent a paper full of gibberish to a preeminent social science journal to see if it would be published. It was. Sokal was careful to play to the preconceptions of the journal’s editors to gain acceptance. The lesson is the oldest: people—even scientists!—easily believe what they want to.

(6) John Ioannidis and colleagues, in their article “Why Current Publication Practices May Distort Science,” liken acceptance of papers in journals to winning bids in auctions: sometimes the winner pays too much and the results aren’t worth as much as everybody thinks. A review of the article here.

(7) UPDATE. Then there is arXiv, the repository of non-peer-reviewed “preprints” (papers not yet printed in a journal). ArXiv is an acknowledgment by physicists, and lately mathematicians and even climatologists, that it’s better to take your findings directly to your audience and bypass the slow and error-prone refereeing process.

(8) It is easy to get a paper into print when the subject is “hot”, or when you are friends with the editor or he owes you a favor, or your findings shame the editor’s enemies, or through a mistake, or by laziness of the referees, or in a journal with a reputation for sloppiness. In most fields, there are at least 100 monthly/quarterly journals. Thus it is exceedingly rare for a paper not to find a home, no matter how appalling or ridiculous its conclusions.

Update: 4 March; Reader Jack Mosevich reminds us to see this article at Climate Audit. Real-life example of politically correct refereeing.

The listing of these facts is solely to prove that (A) and (B) are false, and that peer review is a crude sifter of truth.

Thus, when an activist or inactivist points to a peer-reviewed paper and says, “See!”, he should not be surprised when his audience is unpersuaded. He should never argue that some finding must be true because it came from a peer-reviewed paper.

This web page has also tracked several peer-reviewed, published papers that are crap. Examples here, here, here, and here (more are coming).


1Incidentally, I have only had one methods paper rejected; all others I wrote were accepted to the first journal I sent them to. Nearly every collaborative paper I co-wrote has also been accepted eventually. I am an Associate editor at Monthly Weather Review, and have been a referee more times than I can remember, both for journals and grants.

I mention these things to show that I am familiar with the process and that I am not a disgruntled author seeking to impugn a system that has treated him unfairly. To the contrary, I have been lucky, and have had a better experience than most.