
# On The Cubs’ & Trump’s Victories, Nate Silver’s Predictions & All That

Some crow is presumably being eaten by Nate Silver and the folks at FiveThirtyEight.com this fine morning. I recommend they bake it into a pie (which I learnt from an episode of Rumpole).

This isn’t the first time Silver & Crew have got it wrong about Trump. Indeed, his group is on a failure streak. To his credit, Silver admitted his mistakes, writing earlier “How I Acted Like A Pundit And Screwed Up On Donald Trump”. I did the same thing with Obama—twice (which makes me dumber). I, like Silver, let wishcasting get the best of me.

And, like Silver, though he doesn’t know it yet, I relied on a bad model—not once, but twice. Mea maxima culpa. Silver is still relying on his model, which performed well before, but which I believe is now misleading him.

The first time Obama ran, Silver created a quantitative model which took account of information others had not, or had treated less than optimally. Silver’s New & Improved! quantitative model made better predictions than other models, and thus he very rightly was rewarded (that he told our elites what they wanted to hear is beside the point).

I have predicted Trump’s victory since January and, as regular readers know, have believed it months before that. Silver has long been predicting Trump’s defeat (just as he jokingly predicted the Cubs’ loss back in May). But Silver and I are using different models, which therefore give us different probabilities. So first a lesson in probability (which you can learn all about in this must-have book).

Probability does not exist. Therefore nothing has a probability. It is always an error to say things like “The probability the Cubs win is X”, “Trump’s chances are Y.” By “always” I mean always.

Since probability does not exist, and nothing has a probability, the only way to speak of probability is in reference to exterior information. You can say, “If you believe the following things, the probability the Cubs win is X” and “According to my model, Trump’s chances are Y.”

What’s a model? A model is a set of propositions that are, or are assumed, true. That’s it and nothing more. The propositions don’t even have to be related to one another: they can be gibberish. But gibberish is not going to make for a good model.

Models always speak of a proposition of interest, such as “Cubs win”, “Trump wins”. The goal of modeling is to find a set of premises (propositions) that are probative of the proposition of interest such that the probability deduced from the model is high or one (or low or zero). Actually, it’s more than that. Premises which make the probability of the POI high are easily found. Take “The Cubs will win” as the model. Then the probability of “Cubs win” given that model is 1. But this would obviously be a lousy model at the start of the season (it is the true model now).
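The deduction of a probability from a model’s premises can be sketched in a few lines of code. Everything here, the weights especially, is invented purely for illustration: a real model’s premises would encode polls, standings, and so forth.

```python
# A "model" is just a set of premises assumed true. Here each premise
# assigns a weight to a possible outcome; the probability of the
# proposition of interest (POI) is deduced from those premises alone.

def probability(poi, model):
    """Deduce P(poi | model) from the weights the premises assign."""
    total = sum(model.values())
    return model.get(poi, 0) / total

# Invented weights standing in for premises about rosters, standings, etc.
# Change the premises and the deduced probability changes with them.
model_may = {"Cubs win": 15, "Cubs lose": 85}
model_now = {"Cubs win": 1}   # the lone premise "The Cubs will win"

print(probability("Cubs win", model_may))  # 0.15, given *these* premises
print(probability("Cubs win", model_now))  # 1.0
```

The point of the sketch is only that the number on the left is always conditional on the dictionary on the right; there is no `probability("Cubs win")` without a model.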

What we’re after in models are not only probative premises, but premises which are true, or are believed true, at the time the model is created. What were the best premises back in May which gave “Cubs win” high probability? Well, we can argue about that. What were the best premises back in January which gave “Trump wins” a high probability? We can argue about that, too: followers of politics will have lots of examples spring to mind.

Silver had and has (I’m guessing) a fixed set of premises, many of which are strictly mathematical, and which tacitly assume that this election would be “much like” past elections. Those premises include the data from various polls (data are always premises); those premises also include information on how to manipulate that data. It’s all very rigorous and scientific.

It’s the rigor and sciency nature of the mathematics which misleads many. They look upon the equations, shiny and beautiful—and they are—and the equations come alive. The Deadly Sin of Reification strikes! The model becomes—in the mind of the user—reality itself. Before he sins, the modeler says “Given my model, the probability Trump wins is X”, but after Reification hits, the modeler says, “The probability Trump wins is X”. Probability has been conjured into existence!

But it’s all a fantasy.

My guess (and it’s only a guess) is that the rigor and past good performance of Silver’s model has dissuaded him from realizing many of the premises in it are now false. This election is not like others. The polls mean different things than they have before. The labels (and that is all they are) “Democrat” and “Republican” are amorphous. If these changes are not made in his model—and, truthfully, it’s a mystery how to best incorporate them—his model will make rotten predictions. And Silver & Crew have been making bad predictions.

Now my model is not quantitative. Probability is usually not a number, anyway. My model does say, without quantifying, “Trump is likely to win”. If he loses, then I’ll have to examine the individual premises which made the model and discover which were in error.

But even if he wins, it’s not clear the premises I used will be as efficacious in the future. It is not even necessarily true that my model had true or good premises! I could have got lucky.

Predicting complex events is hard (anything to do with human behavior is complex). Much, much harder than those who tout “machine” or “deep learning” algorithms, or whatever the going term of the day is, would have you believe.

Imperfection

Incidentally, it is not true that a set of premises (that we can know) exist which allow us to make perfect predictions. Think, for example, of quantum events. There we can prove we cannot know the premises which say what will happen with certainty. There is no equivalent anti-Hari-Seldon proof for human events, but I believe the same limitations hold.

Categories: Statistics

### 23 replies »

1. Senghendrake says:

Most of this is Greek to me. All I know is, at this rate, maybe the Toronto Maple Leafs stand a chance…

2. Briggs —

We need a ruling here.

Senghendrake used the pejorative phrase, “(I)s Greek to me.” As that must offend — and subsequently outrage — millions, including some on this blog, will you allow it to stand? Or will you rightfully force Senghendrake to denounce himself in a public forum?

I await your ruling, which is an indication of your level of correctness.

3. DAV says:

Or should we all join fraternities and sororities and stand in solidarity with Senghendrake?

4. Steve E says:

Senghendrake, for that prediction to be true I believe the premises must include flying pigs and Hell maintaining a constant temperature below 0 Celsius.

Briggs, I posted the same comment but entered an incorrect email address that the moderation filter caught. Please delete the other entry.

5. The premise that Americans are woefully ignorant of the issues and even their own government is a fact.

JMJ

6. Steve H says:

From “Julius Ceasar”:

CASSIUS Did Cicero say any thing?
CASCA Ay, he spoke Greek.
CASSIUS To what effect?
CASCA Nay, an I tell you that, I’ll ne’er look you i’ the
face again: but those that understood him smiled at
one another and shook their heads; but, for mine own
part, it was Greek to me. I could tell you more
news too: Marullus and Flavius, for pulling scarfs
off Caesar’s images, are put to silence. Fare you
well. There was more foolery yet, if I could
remember it.

7. Steve H says:

“Caesar”- sorry!

8. Hrodgar says:

Re: JMJ

Perhaps that is as it should be?

I am absolutely certain I do not understand the inner workings of the government or more than a handful, at most, of “the issues”, and I have an above average (though obviously inadequate) grasp of relevant subjects, having wasted a not insignificant chunk of my relatively abundant free time on them. I have no reason to believe that the majority of voters would have much better luck.

Even if we assume that all, or even just a majority of Americans have the time and abilities necessary to grasp the issues well enough to, for instance, reliably predict election outcomes, most aren’t going to make a difference anyway and have better things to do with their time.

I suppose all I’m really arguing against is the adjective “woefully.” It is natural and right and proper for most people in most places most of the time to be largely ignorant of national politics.

9. Steven Fraser says:

Nice reference to Asimov’s fine work!

10. JH says:

Incidentally, it is not true that a set of premises (that we can know) exist which allow us to make perfect predictions.

Define “perfect prediction”! This just sounds like an oxymoron to me.

But, yeah, a prediction based on a qualitative model built upon my personal political biases is imperfect, in the sense that I cannot be sure if I’ll turn out to be right in my prediction.

11. Spoken like true Social Darwinist, Hrodgar.

Compared to most of our international peers, we are woefully behind. It speaks to our anti-intellectual culture. There is nothing whatsoever good about that, unless you really want to live in a severely stratified third-world state, that is.

JMJ

12. acricketchirps says:

“Woefully behind” must so often be understood with the knowledge that the speaker is facing not so much forward as left.

13. Well, cricket, the truth has a well-known liberal bias, as Colbert says.

JMJ

14. Andrew says:

Is not saying that you got “lucky” itself an instance of the Deadly Sin of Reification? (where I define “lucky” as the occurrence of a beneficial event with a low estimated probability)

15. Hrodgar says:

Re: JMJ

How is acknowledging limitations on human ability and resources (including time) Social Darwinism? I just don’t think it’s woeful that a lot of folks worry more about things where they can actually make a difference.

16. Michael 2 says:

Steven Fraser: Amen, brother! Isaac Asimov Foundation Trilogy

17. Michael 2 says:

There’s an odd behavior on this blog; if I comment using Firefox it says “Comments have been temporarily suspended to prevent spam” but if I use Internet Explorer it works.

18. DAV says:

Firefox test. Works for me.

19. acricketchirps says:

JMJ and Colbert both say. It’s true.

20. Wait, where do Jon Stewart, Samantha Bee and Seth Meyers come down on this?

21. Doug M says:

If Silver’s model says Trump has a 30% chance of victory, and Trump wins, was the model wrong? Not really. We can only conclude that the model was wrong if a very similar model has been used to handicap multiple races, forecast some number of favorites with similar winning probabilities, and been upset more than 30% of the time.

There are not enough events to give these models a thorough evaluation.
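Doug M’s point, that a single 30% forecast can only be judged against many forecasts of similar strength, is the idea of calibration. A minimal sketch, with all forecasts and outcomes invented:

```python
# Calibration check: group forecasts by their stated probability and
# compare that stated probability with the observed frequency of wins.
from collections import defaultdict

def calibration(forecasts):
    """forecasts: list of (stated_probability, won) pairs.
    Returns {stated_probability: observed win frequency}."""
    buckets = defaultdict(list)
    for p, won in forecasts:
        buckets[p].append(won)
    return {p: sum(ws) / len(ws) for p, ws in buckets.items()}

# Invented record: ten races handicapped at 30%, of which 3 were won.
record = [(0.3, True)] * 3 + [(0.3, False)] * 7
print(calibration(record))  # {0.3: 0.3}: well calibrated on this sample
```

With only one or two elections per model, the buckets are nearly empty, which is exactly the “not enough events” problem.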