It was Sophocles who first noted the truism, “No one loves the messenger who brings bad news.” But the converse is also true. As it is written (in Romans 10:15), “How beautiful are the feet of those who bring good news!”
To many on the Left, Nate Silver has some pretty toes. It was he whose statistical models proclaimed the coming of The One in 2008, correctly calling 49 of 50 states.
And in this election, as of yesterday, Silver’s models put an Obama victory at a “74.6 percent chance.” Note that this is 74.6 and not 74.5. In any case, this good news makes Silver gorgeous to many progressives, and is why some members of the right compare him to Rosie O’Donnell waking after a spree.
That’s “many,” not “all,” progressives. There is growing suspicion that Silver’s feet may be constituted of clay. Politico’s Dylan Byers yesterday wrote that Silver “often gives the impression of hedging.”
For this reason and others — and this may shock the coffee-drinking NPR types of Seattle, San Francisco and Madison, Wis. — more than a few political pundits and reporters, including some of his own colleagues, believe Silver is highly overrated.
MSNBC’s Joe Scarborough “took a more direct shot, effectively calling Silver an ideologue and ‘a joke.’” David Brooks said “pollsters tell us what’s happening now. When they start projecting, they’re getting into silly land.”
A poll can always be interpreted as a prediction of what the popular vote percentage will be. But since polls are (sliding) snapshots, models can be superior in guessing outcomes. Silver’s models take polls and other secret sauce as input and produce predictions. Whether these predictions are “silly” can only be known after the fact.
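To make that concrete, here is a minimal sketch of how a model might turn a poll snapshot into a win probability. This is not Silver’s model, and every number in it is invented: assume only that the final margin is normally distributed around the current poll average, with a spread standing in for polling error and late movement.

```python
import math

def win_probability(poll_margin, uncertainty):
    """P(final margin > 0) when the margin is Normal(poll_margin, uncertainty)."""
    return 0.5 * (1 + math.erf(poll_margin / (uncertainty * math.sqrt(2))))

# A +1.5 point lead with 2.3 points of total uncertainty gives roughly
# a 74% chance; the same lead with more uncertainty gives less.
print(win_probability(1.5, 2.3))   # ~0.743
print(win_probability(1.5, 4.0))   # ~0.646
```

Note what the sketch makes plain: the poll is the snapshot; the win probability only appears once you bolt assumptions onto it.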
Silver is not the only one in the prediction biz. The Weekly Standard is reporting, “The bipartisan Battleground Poll, in its ‘vote election model,’ is projecting that Mitt Romney will defeat President Obama 52 percent to 47 percent.”
This is a different kind of prediction than Silver’s. The Battleground prediction (as reported) effectively says the probability of Romney winning is 100% because there is no stated uncertainty. Silver gives us his uncertainty by putting Romney’s chance at 25.4%. He explicitly says Romney has a chance, and so does Obama. The Battleground prediction (as reported) does not say this: it says “Romney wins,” though of course many read uncertainty into the prediction. Silver’s prediction is superior at least in the sense that he includes his uncertainty.
To turn either forecast into a bet, you have to take the given probability and weigh it against what happens if the prediction turns out right or wrong. In other words, it’s a decision analysis problem, which we won’t be solving today. For most people, any probability higher than 50% means that candidate is the bet to make. A bet with Silver’s forecast is Obama. A bet with Battleground is Romney.
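The simplest case, though, fits in a few lines: at even odds, the 50% rule is just expected value. The $100 stake and even payout below are made up for illustration; the probabilities are the ones quoted above.

```python
# Expected profit of a bet: win `payout` with probability p_win,
# lose `stake` otherwise. Stakes and odds here are invented.
def expected_value(p_win, stake, payout):
    return p_win * payout - (1 - p_win) * stake

# Betting $100 on Obama at even odds, using Silver's 74.6%:
print(expected_value(0.746, 100, 100))   # +49.2: a good bet, if you trust the model
# Same bet using Battleground's (implied) 0% chance of an Obama win:
print(expected_value(0.0, 100, 100))     # -100.0: a sure loss, if you trust that model
```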
What does Silver’s prediction mean? He effectively says, “Given my model and the data that went into it, there is a 74.6% chance Obama wins.”
This does not mean that Obama wins 746 out of every 1000 presidential elections in which he takes part. That would be the relative frequency interpretation, which obviously fails (and which always fails in the strict sense, though sometimes probabilities do match relative frequencies).
It does not mean that 746 out of every 1000 voters pull the lever for Obama. That’s another version of relative frequency, and is therefore also wrong. It is also wrong on its face. Nobody but nobody expects Obama to win 74.6% of the popular vote, even in California. But if there is anybody out there who would like to bet $1,000 that Obama wins at least 74.6% of the popular vote, even in California, give me a ring.
It does not mean that Silver constructed 1000 scenarios and in 746 of them Obama won and in 254 Romney won. Just think: we can generate scenarios endlessly, and since we are free to pick them, we can nudge them in the direction we want.
So here is what it does mean: just what we started out saying. Given Silver’s model and the data that went into it, there is a 74.6% chance Obama wins. And nothing more. The truth of the statement “Obama wins” has probability 0.746, given our evidence. Given Battleground’s model and the data that went into it, there is a (stated) 100% chance Obama loses.
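In symbols (the notation is mine, not either forecaster’s), with M_S and M_B standing for Silver’s and Battleground’s models and D for the data:

```latex
% The two forecasts written as conditional probabilities.
\Pr(\text{Obama wins} \mid M_S, D) = 0.746,
\qquad
\Pr(\text{Obama wins} \mid M_B, D) = 0.
```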
There are two variables in these two predictions: the data and the model. I have no idea what data either organization used, but suppose it is exactly the same (a doubtful assumption). Then the only difference is the models. Is it a surprise to learn that a forecast is conditioned so strongly on the model specified? After all, two different models give strikingly different probabilities.
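You can see how strongly by reusing the toy normal model from the sketch above: identical poll data, two models differing only in how much error they allow for, two quite different probabilities. Everything here is invented; neither model is Silver’s or Battleground’s.

```python
import math

polls = [1.0, 2.0, 0.5, 3.0, 1.5]     # hypothetical Obama leads, in points
avg = sum(polls) / len(polls)          # the same data feeds both models

def p_win(margin, sigma):
    """P(margin > 0) under a normal model, as in the earlier sketch."""
    return 0.5 * (1 + math.erf(margin / (sigma * math.sqrt(2))))

print(p_win(avg, 2.0))   # model A assumes tight error:  ~0.79
print(p_win(avg, 6.0))   # model B assumes loose error:  ~0.60
```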
Well, it’s true. And it’s not only true for political polls and prognostications, but whenever statistical results are announced. You almost never hear about the dependence on the model, though. People are too anxious to talk about their results.
Incidentally, there are ways to judge a prediction’s goodness after the fact. Ask me about these sometime.
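One standard candidate, offered here as a suggestion since no method is named above, is the Brier score: the squared gap between the stated probability and the outcome coded as 0 or 1. Lower is better; a hedged 0.746 is punished less than a flat call when wrong, and rewarded less when right.

```python
# Brier score: squared error of a probability forecast against a 0/1 outcome.
def brier(forecast_prob, outcome):
    return (forecast_prob - outcome) ** 2

# If Obama wins (outcome = 1):
print(brier(0.746, 1))   # Silver:       ~0.065
print(brier(0.0, 1))     # Battleground:  1.0 (worst possible)
# If Romney wins (outcome = 0):
print(brier(0.746, 0))   # Silver:       ~0.557
print(brier(0.0, 0))     # Battleground:  0.0 (best possible)
```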
Update: Not all probability is quantifiable. I put Romney’s chance at very good, but I can’t give you a number. My bet is also Romney, and has been since last December.