When I checked FiveThirtyEight.com’s Senate prediction, it said “Republicans have a 72.3% chance of winning a majority.” There were also words to the effect that they computed the “probability that each party will win control of the Senate by running those odds through thousands of simulations.”
What are these “simulations”? Who knows. But probably not what you or even the staff at FiveThirtyEight think they are. “Simulations” have more than a patina of mysticism about them and often mask what’s really happening. But it doesn’t matter. Call whatever evidence Nate Silver’s team assembled, which includes the evidence of their calculation methods, S.
S has whatever polling data Silver thought relevant (at the date I accessed his forecast), details on past races, maybe even information about the voters in each state, anything that was thought probative of the proposition R = “Republicans win the majority.”
Put into equation form, we have Pr( R | S ) = 0.723. Note that that’s 0.723 and not 0.722. Never mind.
Now the outcome was that the Republicans did take the Senate. This observation does not mean the probability was right, nor that it was wrong. Assuming Silver didn’t make any errors—a big assumption, given that they admitted to using “simulations”—the probability was correct. It would have been correct no matter what happened in the elections. That is, the probability did not have to be Pr( R | S ) = 1 or 0.
Here’s why. Suppose our evidence is that C = “We have a two-sided coin which when flipped must show one and only one of two sides, head or tail” and proposition H = “A head shows.” We have Pr(H | C) = 1/2. We apply this model to a real coin and discover, after flipping, that the head shows, i.e. H is true given this observation. The probability of H given C was correct. The probability of H given the observation, which equals 1 (of course), is also correct.
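For concreteness, here is a minimal sketch (my own Python illustration, not anything from the original argument) of reading Pr(H | C) straight off the evidence C, and of why the later observation does not overturn it:

```python
# The evidence C lists exactly two possibilities, says one and only one must
# show, and gives no reason to favor either side.
possibilities = ["head", "tail"]
pr_H_given_C = sum(s == "head" for s in possibilities) / len(possibilities)
print(pr_H_given_C)  # 0.5

# After flipping and seeing a head, Pr(H | observation) = 1; but that does
# not make the 0.5 wrong, because 0.5 was conditional on C alone.
```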
With a coin it is possible, and it has been done, to find the precise physical conditions which will tell us exactly which way the coin will land. That is, given sufficient resources we can find a C’ for which either Pr(H | C’) = 0 or Pr(H | C’) = 1, depending on whether the coin will land tails or heads. This C’ is then a causal model.
The original model C had nothing to do with causality, except in the weak sense that it correctly listed the possibilities. Now we don’t know for sure, but it’s likely that S contained no causal elements either. What might one of these elements have looked like? Suppose one Senate race was fixed (say in Illinois or New York, where such things are de rigueur) and that S had that information. This part of the election was purely causal. But all of S obviously was not.
Other examples of mixed models, which contain both causes and probabilities are, yes, Global Warming Models, thought they lean toward being fully causal. The final forecasts are not fully causal, however, which is important. Here’s why. C’ was a fully causal model. Suppose we asked the same folks who program GCMs to model our coin. Because they’re too busy applying for new multi-year grants, they don’t pay too much attention and say Pr( H | C’) = 0, when in fact H occurs. Then C’ has been falsified.
But suppose a climatologist makes a forecast which says, “Pr( Temperatures soar | 98% Consensus Model) = 1 – epsilon”, where epsilon is any number greater than 0 (and less than 1, naturally). Then even if the temperatures do not soar, which they did not, the model has not been falsified.
No non-causal probability model can be falsified, as long as that model gives non-zero probabilities to possible events. This is a fact of life.
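Put as a sketch (again my own illustration, not part of the original post): a probability forecast is contradicted outright only when it pins an outcome at exactly 0 or 1.

```python
def falsified(pr_event: float, event_occurred: bool) -> bool:
    """A forecast is logically falsified only if it gave probability 0 to
    something that happened, or probability 1 to something that did not."""
    return (pr_event == 0.0 and event_occurred) or \
           (pr_event == 1.0 and not event_occurred)

eps = 0.02
print(falsified(1 - eps, event_occurred=False))  # False: the hedged forecast survives
print(falsified(1.0, event_occurred=False))      # True: a 0/1 (causal) claim can fail
```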
Thus can pollsters and climatologists remain ever hopeful, because, I must emphasize, the probabilities they issued (assuming no error in calculation) are correct, they are right, they are true. And truth is always to be embraced.
Now Silver’s and any modeler’s job is to find those elements and conditions that make their models as close to causal as possible, which means making their predictions as close to 0 or 1 as possible. It often turns out that two or more different people have differing information, which results in different—and correct—probabilities. Thus is gambling born.
From this, it might be clear that we can also find models which are as far from causal as possible, but which still contain all the correct information about possibilities. C was an example. These pure probability models are extremely important, and words like “uniformity”, “maximum entropy”, “information”, and “random” (as computer science theorists use the term) arise. These for another day.
Update: I idiotically forgot to say anything about usefulness. So here’s a teaser: probabilities are not decisions!
Twice you write: Pr( S | R ).
I’m so stupid I think to myself, shouldn’t he have written: Pr( R | S )?
Where do I make my mistake?
Iggy,
That’s jet lag for you. Idiotic error # 2. And the day’s still young. It’s fixed. Thanks.
As you pointed out Nate’s models aren’t really falsifiable. He is right no matter what happens. About the only way to assess them is to see if his individual picks turn out to be more right than wrong (whatever that means). In any case, his predictions weren’t all that exceptional (http://www.nytimes.com/newsgraphics/2014/senate-model/) and even Nate admits it.
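One way to make “more right than wrong” concrete is to score the individual race forecasts against the outcomes. Here is a sketch with invented numbers (not Silver’s actual per-race figures):

```python
forecasts = [0.72, 0.91, 0.55, 0.30, 0.85]  # invented Pr(Republican wins race i)
outcomes  = [1,    1,    1,    0,    0]     # 1 = Republican actually won that race

# "More right than wrong": count a pick as right if the >50% side won.
picks_right = sum((p > 0.5) == bool(o) for p, o in zip(forecasts, outcomes))
print(picks_right, "of", len(outcomes), "picks right")

# Brier score: mean squared error of the probabilities (lower is better).
brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(outcomes)
print("Brier score:", round(brier, 3))
```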
Other examples of mixed models, which contain both causes and probabilities are, yes, Global Warming Models, thought they lean toward being fully causal.
The probabilities think they lean toward being fully causal? Hmmm …
BS ASSERTION: “…about usefulness. … a teaser: probabilities are not decisions!”
IN THE REAL WORLD good statistics depends on clearly communicating them — communication that includes consideration & accommodation of non-technical [“soft”] human factors that influence decision-making (not saying this is good/bad/sensible/etc….but this is how the real world operates, and if one is therein, one must deal with that):
Consider the L’Aquila earthquake & ensuing trial where scientists were convicted for manslaughter – here’s an interesting analysis that explains some facets most people seldom consider: https://medium.com/matter/the-aftershocks-7966d0cdec66
L’Aquila – the prosecutor’s view: http://www.newscientist.com/article/dn22416-italian-earthquake-case-is-no-antiscience-witchhunt.html
Bad Weather, sue the weatherman: Part 1: http://sciencepolicy.colorado.edu/admin/publication_files/2002.21.pdf & part 2: http://sciencepolicy.colorado.edu/admin/publication_files/2002.22.pdf
Stated reasoning might seem to have a certain logic, while the actual truth will very often be much more banal (and, sometimes/oftentimes, below the threshold of conscious thought); for example, were the infamous Salem bewitchery cases of children’s play getting out of hand, or, motivated by simple jealousies prompting a creative revenge: http://www.bpi.edu/ourpages/auto/2012/9/5/59524640/Visible_and_Invisible_Worlds%20of%20Salem.pdf
Logical analysis is only good to a limited point; invariably, in real-world situations, highly personal human factors become dominant. As J. P. Morgan & others have noted, “A man always has two reasons for doing anything: a good reason and the real reason.” To be truly effective, one must be capable of sifting out the asserted “good reason,” identifying the unstated but operative “real reason,” and responding to reality.
Key & general issues associated with the explanation of what a statistic means (converting numerical analysis into “information” via “words”) start on page 15 of the 26 page article at (this includes the observation that some languages, such as Italian, are deficient in words capable of communicating sometimes highly relevant nuances, becoming a factor impeding the ability to explain significant meaning and thus impeding the ability for suitable decision-making — and that’s on top of how different audiences [other scientists vs the lay public] will interpret & ascribe wildly different meanings [often with some consistency within a given subgroup] to the exact same words) : https://medium.com/matter/the-aftershocks-7966d0cdec66
Amazing how they can calculate those probabilities to three decimal places. Reminds me of the AGW zealots calculating the average temperature to a thousandth of a degree when you are lucky to be able to read a thermometer to a tenth of a degree.
How long before Ted Cruz complains about the Democrat filibuster?
In the same speech last year he complained about how anti-democratic the Democrats were in using the filibuster, and later claimed that he was patriotic for doing the same thing.
“Mixed models” refers to specific statistical models!!!
Ha! You do not know what those simulations are. However, you know that “Simulations” have more than a patina of mysticism about them and often mask what’s really happening.
The simulations basically generate all possible scenarios based on the estimated probability of winning for each election in all states, and the percentage (72.3%) of the scenarios resulting in “Republicans winning a majority” is then computed. The estimated probability of winning for each election is supposedly derived by Bayesian analysis and updated as new information becomes available.
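A minimal sketch of that kind of simulation, with invented per-race probabilities and an invented majority threshold (nothing here is FiveThirtyEight’s actual model):

```python
import random

race_probs = [0.72, 0.55, 0.91, 0.48, 0.60, 0.35, 0.80]  # invented Pr(win) per race
seats_needed = 4                                         # invented majority threshold
n_sims = 100_000

majorities = 0
for _ in range(n_sims):
    wins = sum(random.random() < p for p in race_probs)  # simulate each race once
    if wins >= seats_needed:
        majorities += 1

print(majorities / n_sims)  # fraction of simulated elections with a majority
```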
The usefulness of Simulations! No magic. No masking of anything. Just one way of using information as best as one can.
Whether one wants to engage “inductive behavior,” which is what Neyman decided to call it, such as betting based on Silver’s probability estimation or avoiding betting, involves an act of will or decision.
Sylvain: I’d give it a couple of days. Surely you realize that politics is completely about hypocrisy and lacks any intelligence except by accident. My state voted to continue the graft, hypocrisy and status quo as long as my elected representative was willing to keep stealing from the other 49 states and bringing it home. They flatly rejected the candidate that opposed all of the above. And this is allegedly a conservative republican state—which has no meaning any more. Pointing out hypocrisy in politics is like pointing out the horribly dressed Walmart customers—there’s a plethora of both, basically an endless supply, and to expect different is to illustrate how far out of reality one actually is.
IOW, a probability doesn’t become a prediction until you put money on it?
Gary,
The problem is how to crisp it, that is, turn the prediction into a yes/no matter. As long as it’s “anything can happen,” it isn’t really a prediction. Not even sure if it’s useful. If there was a 75% chance of the reverse of last night, would it have made any difference?
That being said, the predictions might be self-fulfilling. Maybe enough of those inclined to vote for retaining the power balance in the Senate said, “Hey! What’s the use of voting for it?” In my state, it wasn’t even on the ballot.
Totally on the side: we did get a Republican governor but the power balance in the State House didn’t change. It’s still Democratic and the State House has a lot of power in this state. The counties have to go to the House for ratification of everything.
DAV, I was half-facetious with my comment. I understand Briggs to be saying that a properly calculated probability from a model is always “correct” because it is completely controlled by the model’s components, which are associated with causality to varying degrees. So what I’m asking is: what creates a prediction if it’s not the model probability estimate? Seems like it’s an external factor; i.e., somebody making a statement (a bet, for example) about an outcome to be proven true or false in the future. Models only inform the statement; they aren’t capable of making it.
JH,
See the article on “simulations” to understand why there’s more than a hint of magical thinking about these creatures.
And quite right. “Mixed modelling”, like “likelihood”, has a technical meaning but is used by civilians and by me in a different manner.
For others, “mixed models” are frequentists’ attempts to prove they are not Bayesians, while still acting almost like Bayesians. These models still get all the stuff about “randomness” etc. wrong.
Gary,
The probabilities are always correct because they can’t be wrong — ever. They cover every possibility unless they are 0 or 1. If the models can only inform predictions and not make them, there would be no way to validate the models, so I disagree.
Gary, DAV,
The probabilities are predictions, yes. But how useful they are we didn’t cover.
Are they really predictions? They seem equivalent to saying some horse will win without being specific about which horse. Saying A has a 75% chance of winning can only be tested by assuming it means “horses similar to A win 75% of the time” then going out and seeing if that holds. Sounds like a frequentist notion of probability.
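The frequency-style check described here could be sketched as follows: gather many forecasts said to be “about 75%” and see how often those horses actually won (all numbers below are invented for illustration):

```python
forecasts = [0.75, 0.70, 0.80, 0.75, 0.20, 0.25, 0.55, 0.60, 0.75, 0.30]
outcomes  = [1,    0,    1,    1,    0,    0,    1,    1,    0,    0]

# Pool the forecasts near 75% and compare the stated chance to the win rate.
similar = [o for p, o in zip(forecasts, outcomes) if 0.70 <= p <= 0.80]
print(len(similar), "forecasts near 75%; observed win rate:",
      round(sum(similar) / len(similar), 2))
```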
DAV,
Yep, they really are. Saying this horse has a 75% chance of winning this race (given some evidence) means just what it sounds like: that this horse, given this evidence, has a 75% chance of winning this race. Not in repeated races. This race.
Briggs,
“mixed models are frequentists attempts to prove they are not Bayesians, while still acting almost like Bayesians. These models still get all the stuff about randomness etc. wrong.”
This sounds like a hot topic. Would be great, if you could do a post on this one. Many people here in europe misunderstand this.
Best regards
Pawel
This blog is getting worse and worse… don’t you have anything interesting to show? This critique of yours is really shallow, you just repeat yourself saying “don’t reify probabilities”, “weeeeeee p-values!!!”. Come on, show some substance too.
Gee, John, I thought you would have at least liked the post on woofies.
Maybe John would like to demonstrate “something interesting to show” us by submitting a guest column. It’s so easy to just complain, isn’t it?
The big error in the model here appears to be in the forcings. In the contested states, polling data was in error by an average of over 5%. Republicans won by >5% more than polling suggested. That’s a pretty big miss. At least he should know where to look to improve the model.
Looks like he already figured that out…. http://fivethirtyeight.com/features/the-polls-were-skewed-toward-democrats/
Maybe he should work on climate models next.
Mr. Briggs, there is more than a hint of magical thinking to you, but not to me. Anyway, it is not a point worth addressing.
Amusing to see that JH still has learned nothing, still forgotten nothing….
Perhaps I’m being insomniacally naive here, but isn’t the determination of the “correctness”** of a statistical model based on 1 trial basically an undecidable problem? Any result, including wins for the Aryan Brotherhood in Harlem and the American Communist Party in Texas, is *possible* and has therefore been predicted if it occurs. OTOH, were we to sneak a camera into a Vegas casino (posit not getting caught and ending up as a Gila Monster snack) and spot that one of the wheels was turning up red to black on a 60:40 ratio, that prediction could be repeat-trialled and could yield useful practical results (subject to the necessary precautions against ending up the wrong side of the sand). Then again, it’s been a long, jet-lagged day and perhaps I’m babbling….
** the ability to suggest a course of action on which you’d put your own money..
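As a toy sketch of that repeat-trial idea (a made-up wheel, not real casino data): spin a wheel that secretly favors red 60:40 and watch the observed frequency pull away from the 50:50 a fair red/black split would give.

```python
import random

random.seed(1)
true_pr_red = 0.60                  # the wheel's hidden bias (invented)
for n in (10, 100, 1_000, 10_000):
    reds = sum(random.random() < true_pr_red for _ in range(n))
    print(n, "spins:", reds / n, "red")
# Unlike a one-off election forecast, the wheel can be spun again and again,
# so the 60:40 claim is checkable against repeated trials.
```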
Briggs,
Saying this horse has a 75% chance of winning this (given some evidence) means just what it sounds like.
Two things are being stated so it depends on what is being predicted here: the win by horse A, the 75%, or both. It’s necessary to know this when testing for success.
If it’s the win, and every horse has been given a probability, what exactly has been predicted except that one of the horses will win, unless we are to assume the most probable horse was the predicted one?
If it is the probability value alone, how could it be tested except to assume the probability value was intended to imply a frequency?
If it wasn’t the probability value itself why then show the probability range and what would you call a statement “A will win with a probability of 40-60% and so will B”? An “I don’t know”? If so, does it count as a prediction?
JohnK,
I am too old for your kind of sh*** comment. Not a valid opinion. No nutrition but foul smelling gas. I usually just ignore them.
So I see you believe everything Briggs says!
With a PhD in Statistics from a school ranked better than Cornell and more than 25 years of studying, researching and teaching, I really do not learn anything new from this blog. Briggs knows about this. What he blogs about is often basic and old stuff and his opinions about statistics. Yes, all very basic, go ask a statistics professor at a nearby university.
I get to learn what kind of person Mr. Briggs is and what he knows about statistics, which I have rarely expressed… for good reasons.
So what have you learned from this blog in addition to what John said in his comment?
I used some simulations (of an idealized decision-maker under uncertainty) in my dissertation, and my approach to them is as follows: First, we need to make sure the program correctly simulates our theory — it’s free of programming errors and internal contradictions. Then, if the observed behavior of the simulation matches our real-life experience, we can say that the simulation supports the validity of the theory. On the contrary, if the observed behavior differs from real-life experience, we have shown that the theory is either wrong or incomplete. (Which may be even better, if you are trying to get a publication by overthrowing someone else’s theory.)
The problem with the global warmists is that they assume their theory is valid from the get-go, so when the world doesn’t behave as the simulation says, they conclude that the world is wrong.