Nate Silver’s Obama Prediction: What Does It Mean?

Nate Silver
It was Sophocles who first noted the truism, “No one loves the messenger who brings bad news.” But the converse is also true. As it is written (in Romans 10:15), “How beautiful are the feet of those who bring good news!”

To many on the Left, Nate Silver has some pretty toes. It was he whose statistical models proclaimed the coming of The One in 2008, correctly calling 49 of 50 states.

And in this election, as of yesterday, Silver’s models put an Obama victory at a “74.6 percent chance.” Note that this is 74.6 and not 74.5. In any case, this good news makes Silver gorgeous to many progressives, and it is what draws, from some members of the right, comparisons with Rosie O’Donnell waking after a spree.

That’s “many”, not “all” progressives. There is growing suspicion that Silver’s feet may be constituted of clay. Politico’s Dylan Byers yesterday wrote that Silver “often gives the impression of hedging.”

For this reason and others — and this may shock the coffee-drinking NPR types of Seattle, San Francisco and Madison, Wis. — more than a few political pundits and reporters, including some of his own colleagues, believe Silver is highly overrated.

MSNBC’s Joe Scarborough “took a more direct shot, effectively calling Silver an ideologue and ‘a joke.'” David Brooks said that “pollsters tell us what’s happening now. When they start projecting, they’re getting into silly land.”

A poll can always be interpreted as a prediction of what the popular vote percentage will be. But since polls are (sliding) snapshots, models can be superior at guessing outcomes. Silver’s models take polls and other secret sauce as input and produce predictions. Whether these predictions are “silly” can only be known after the fact.

Silver is not the only one in the prediction biz. The Weekly Standard is reporting, “The bipartisan Battleground Poll, in its ‘vote election model,’ is projecting that Mitt Romney will defeat President Obama 52 percent to 47 percent.”

This is a different kind of prediction than Silver’s. The Battleground prediction (as reported) effectively says the probability of Romney winning is 100% because there is no stated uncertainty. Silver gives us his uncertainty by putting Romney’s chance at 25.4%. He explicitly says Romney has a chance, and so does Obama. The Battleground prediction (as reported) does not say this: it says “Romney wins,” though of course many read uncertainty into the prediction. Silver’s prediction is superior at least in the sense that he includes his uncertainty.

To turn either forecast into a bet, you have to take the given probability and weigh it against what happens if the prediction turns out right or wrong. In other words, it’s a decision analysis problem, which we won’t be solving today. For most people, any probability higher than 50% means that candidate is the bet to make. A bet with Silver’s forecast is Obama. A bet with Battleground is Romney.
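
To make the arithmetic concrete, here is a minimal sketch, with made-up even-odds stakes that are nobody’s actual wager, of how a forecast probability and hypothetical payoffs combine into an expected value:

    # A minimal sketch (no one's published method): weigh a forecast probability
    # against hypothetical payoffs to see whether a bet looks worth making.
    def expected_value(p_win, payoff_if_right, loss_if_wrong):
        """Expected value of a bet that pays payoff_if_right with probability
        p_win and loses loss_if_wrong otherwise."""
        return p_win * payoff_if_right - (1 - p_win) * loss_if_wrong

    # An even-odds $100 bet on Obama, priced with Silver's stated 74.6%:
    print(expected_value(0.746, 100, 100))  # about +49, so the bet looks favorable

    # The same bet priced with Battleground's (reported) certainty of a Romney win:
    print(expected_value(0.0, 100, 100))    # -100, a sure loss on that reading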

What does Silver’s prediction mean? He effectively says, “Given my model and the data that went into it, there is a 74.6% chance Obama wins.”

This does not mean that Obama wins 746 out of every 1000 presidential elections in which he takes part. That would be the relative frequency interpretation, which obviously fails (and which always fails in the strict sense, though sometimes probabilities do match relative frequencies).

It does not mean that 746 out of every 1000 voters pull the lever for Obama. That’s another version of relative frequency, and is therefore also wrong. It is also wrong on its face. Nobody but nobody expects Obama to win 74.6% of the popular vote, even in California. But if there is anybody out there who would like to bet $1,000 that Obama wins at least 74.6% of the popular vote, even in California, give me a ring.

It does not mean that Silver constructed 1000 scenarios, in 746 of which Obama won and in 254 of which Romney won. Just think: we can generate scenarios endlessly, and since we are free to pick them, we can nudge them in the direction we want.

So here is what it does mean: just what we started out saying. Given Silver’s model and the data that went into it, there is a 74.6% chance Obama wins. And nothing more. The truth of the statement “Obama wins” has probability 0.746, given our evidence. Given Battleground’s model and the data that went into it, there is a (stated) 100% chance Obama loses.

There are two variables in these two predictions: the data and the model. I have no idea what data either organization used, but suppose it is exactly the same (a doubtful assumption). The only difference, then, is the models. Is it a surprise to learn that a forecast is conditioned so strongly on the model specified? After all, these two models give strikingly different probabilities from (by supposition) the same data.

Well, it’s true. And it’s not only true for political polls and prognostications, but whenever statistical results are announced. You almost never hear about the dependence on the model, though. People are too eager to talk about their results.

Incidentally, there are ways to judge a prediction’s goodness after the fact. Ask me about these sometime.
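
One standard after-the-fact measure, offered only as an illustration and not necessarily the one I have in mind, is the Brier score: the mean squared difference between the forecast probabilities and what actually happened, where lower is better. A sketch with made-up numbers:

    # Brier score: mean squared error of probability forecasts against 0/1 outcomes.
    # Illustration only, with invented numbers; lower scores indicate better forecasts.
    def brier_score(forecasts, outcomes):
        """forecasts: probabilities given to the event; outcomes: 1 if it happened, else 0."""
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical record of past forecasts and what actually occurred:
    forecasts = [0.746, 0.90, 0.30, 0.55]
    outcomes = [1, 1, 0, 0]
    print(brier_score(forecasts, outcomes))  # roughly 0.12 for these made-up numbers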

Update: Not all probability is quantifiable. I put Romney’s chance at very good, but I can’t give you a number. My bet is also on Romney, and has been since last December.

21 Comments

  1. Luis Dias

    Well, at least Silver’s model can be evaluated on its merits after the election is over, regardless of who wins, because of his focus on the state-by-state electoral process. He predicts which states Obama or Romney win.

    Thus even if Romney wins the election, we can see why Silver’s model failed, and by how much in each state. And then we are allowed to call BS on Silver.

    49 out of 50 in the last election is pretty good. I am really not qualified to make that judgement, but it seems pretty good. Much better than Scarborough’s ever been, if that can be a measure of anything reasonable at all.

  2. Tom M

    Silver’s new book “The Signal and the Noise” spends quite a bit of time (about the entire latter half of the book) discussing the merits of Bayesian analysis over frequentism. This is not a comment on the validity of his political prediction model or its relative value compared to others; I prefer to stand on the sidelines when it’s not my expertise.

    The book discusses all sorts of probabilistic uncertainties in life, including earthquakes, meteorology, climatology, sports, poker, politics, and others. Silver argues that Bayes’ theorem is superior to other approaches. If you ever read the book, I’d like to hear your take on his analysis.

  3. Big Mike

    I always feel the tiny gears in my head gnashing when someone talks about the probability of a unique event either occurring or not.

    It seems to me the only way you can model an event is by considering it a member of a set of events that you have made conformable by selecting only a small amount of information about each to represent the events themselves. The hope is, of course, not to eliminate important information that may render the stylized events non-equivalent. A subset of these events will have to be past events so that the model can be tested.

    So what Mr. Silver must be saying is that his model predicts Mr. Obama’s reelection, but he’s only around 75% sure that his model has it right, a figure that must be based on testing his model against previous elections.

    This doesn’t sound to me to be quite the same as saying “Mr. Obama has a 75% chance of winning”, which is a meaningless proposition, given that the probability can never be observed, only asserted.

    Do I have this wrong?

    As an aside, I would be more than happy to place a bet with Mr. Silver on Mr. Romney’s winning, using Mr. Silver’s model to price it. A very large bet, for I have a 100% conviction that Mr. Romney will win (and whatever it takes to posit a testable hypothesis — no wiggle room for me!).

  4. andyd

    Hmm…
    WM Briggs … WM Romney.
    Commonly goes by middle name Matt … Goes by middle name Mitt.
    Co-incidence? You be the judge.

  5. rank sophist

    Nate Silver did a solid job predicting the House and Senate seats taken by the Dems and GOP in 2010, too. After reading this blog, I’ve learned to be skeptical about statisticians (other than Prof. Briggs, of course), but Silver seems to be a lot better than most. I can only assume that he doesn’t rely on p-values, although perhaps they aren’t designed for statistics in his field.

  6. DAV

    Isn’t judging a prediction’s goodness after the fact a bit like testing firecrackers by exploding them? (* Yep! That was a good one! How about the next one? *)

  7. Briggs

    DAV,

    Best question. Answer: no, not if you expect the source of the forecast to make new ones and you want to ascertain in advance their likely goodness or badness.

    rank,

    Silver is a Bayesian, so he at least has that going for him.

  8. David

    Also read Silver’s book, which made me think of this site, since he clearly favors a Bayesian approach. He is far from being a joke, and has great insights to share (his climate chapter is the weakest one though, seems he only spoke to one side…). It would be nice to have the Statistician to the Stars have a look at this book, and get his comments!

  9. Ken

    Wherever any subjectivity enters a model/analysis, the analyst’s desired outcome invariably influences how the analyst interprets, weighs & applies the various pieces of data. Commonly the desired outcome overcomes all other factors.

    This is particularly apparent with parents gambling on the outcome of their children’s high school football games. Invariably someone will bet against the home team, thinking, logically, that regardless of how much they’d like their kid’s team (their kid) to win, the reality is that most likely they’ll lose (i.e. they make a prudent financial investment). Pretty much without exception the majority of other parents (& any others aware) treat this as treasonous, as if betting against the home team somehow confers an advantage for the opposing team (and this can get surprisingly heated).

    Outside of such overt situations, this kind of irrationality (bias) tends to be pervasive, and usually is subconscious & undetected by the participants.

    The question for the moment is: Is N. Silver’s ‘desired outcome bias’ leading to a predicted Obama win stronger or weaker than the same bias operating on those involved in predicting a Romney come-from-behind win? The question itself contains a strong clue about where that particular bias is strongest…

  10. DAV

    Briggs,

    OK. So how does one test a 75/25 prediction? It allows both possibilities so wouldn’t it always be right?

    Actually, I can see a way but it would take a lot of predictions. I doubt I’ll see even 20 more presidential elections in my lifetime. Maybe you’ve something quicker?

  11. Briggs

    DAV,

    Like any other model, really. You have past predictions and outcomes which you model, and along comes a new prediction which you can use to make a guess at the future outcome.

  12. DAV

    Maybe a good topic soon?

  13. JH

    DAV,

    In a binary response (win or lose) model, the probability of a win can be estimated by collecting historical data on election results and explanatory variables such as economic indicators and poll results. (I think this is done for each state first, followed by intensive calculations over all possible combinations of the 50 states.)

    A cutoff of 50% is usually used to make a prediction of win or lose, which allows the evaluation of model adequacy or comparisons between models.
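
    Something like this toy sketch, with made-up numbers and nothing like Silver’s actual inputs, illustrates the idea:

        # Toy binary-response (logistic) model: win/lose outcomes regressed on
        # invented explanatory variables, then a 50% cutoff turns the predicted
        # probability into a win/lose call. Not Silver's actual model or data.
        from sklearn.linear_model import LogisticRegression

        # Hypothetical past races: [economic_index, poll_lead_in_points], 1 = win, 0 = lose
        X = [[1.2, 3.0], [-0.5, -2.0], [0.3, 1.0], [-1.0, -4.0], [0.8, 0.5], [-0.2, -1.5]]
        y = [1, 0, 1, 0, 1, 0]

        model = LogisticRegression().fit(X, y)
        p_win = model.predict_proba([[0.4, 1.5]])[0, 1]  # probability of a win in a new race
        print(p_win, "win" if p_win > 0.5 else "lose")   # the 50% cutoff turns it into a call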

    You are right that we cannot call Silver a fraud based on his 75% chance of an Obama win because it doesn’t exclude a Romney win. We can’t prove him wrong with one election, either. A hand-waving analogy: a quiz consists of one multiple-choice question with four choices. There is only one correct answer. The chance of guessing the correct answer is ¼. Now suppose you guess wrong; does that mean the probability of ¼ was wrong? No.

  14. DAV

    JH,

    But that would be converting a fuzzy prediction into a crisp one — essentially calling Silver’s 74.6% noise. He stated his confidence to within plus or minus 0.1%. That sounds like a WAG or thoughtlessly taking the result of a calculation at face value. It would be hard to test his prediction to that level. I would be more interested in knowing how to apply his confidence level. If he were to make a future prediction with a 60/40 split, how would I know how much less weight to give it?

  15. DAV

    Yes, I know I could just use 0.6 but how meaningful would it be?

  16. Ye Olde Statistician

    Sounds like a Crystal Ball Monte Carlo simulation. Run the model 1000 times using various assumed priors and see how often the Fates smile.
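
    For instance, a toy version with fixed, made-up state probabilities (rather than varying priors, and nothing to do with Silver’s actual numbers):

        # Toy Monte Carlo: draw each toss-up state from an assumed win probability,
        # tally electoral votes, and count how often one side reaches 270.
        import random

        # Hypothetical toss-ups as (probability Obama wins, electoral votes);
        # assume Obama holds 237 safe electoral votes (all numbers invented).
        tossups = [(0.8, 18), (0.65, 29), (0.5, 13), (0.3, 15)]
        obama_base = 237

        n_sims, wins = 1000, 0
        for _ in range(n_sims):
            obama_ev = obama_base + sum(ev for p, ev in tossups if random.random() < p)
            if obama_ev >= 270:
                wins += 1

        print(wins / n_sims)  # fraction of simulated elections in which Obama wins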

  17. JH

    DAV,

    I think that oftentimes probability is only meaningful when you have to make decisions. If I am to bet based on a prediction of a 60% chance of an Obama win, I’d bet on Obama, but the number 0.6 will affect how much money I am willing to wager.

    I don’t know exactly how Silver derives his number. If Bayesian analysis is employed, we can question the prior distribution he has chosen. He seems to know election politics well.

    The way things are going, it looks like I’ll have a good (which is not the same as “very good”) chance of getting another opportunity to tease Briggs about his prediction that “I put Romney’s chance at very good.” ^_^

  18. BobN

    As I understand it, Silver upped the probability of an Obama win to 90% just before the election. And as was shown on the electoral map, whatever his algorithms are, he nailed it pretty much dead on.
