William M. Briggs

Statistician to the Stars!


On Nate Silver’s Republicans-Take-The-Senate Prediction

By law, statisticians must look like this.

Yet the first bringer of unwelcome news
Hath but a losing office, and his tongue
Sounds ever after as a sullen bell,
Remember’d tolling a departing friend.

Henry IV, part II

So there was Nate Silver, statistician par excellence, wearing the oak leaf cluster and crown of laurel, holding a purple slide rule, riding his chariot triumphantly through the blogosphere commemorating his famous victory over Uncertainty. He had predicted a high probability Barack Obama would be re-elected to the presidency.1

The Media loved him for his divination and showered him with much praise, honors, and gold.

But riding on the chariot with Silver was one of Uncertainty’s vanquished generals who whispered into Silver’s ear, “All glory is fleeting.”

Boy, was he right. And in spades.

Because Silver has again ventured forth into battle, facing his old enemy, but this time his augury is unwanted: “GOP Is Slight Favorite in Race for Senate Control.”

The same Media who once loved him is now hot on Silver’s tail, fangs bared, pitchforks and torches waving, for the crime of foreseeing unfavorable events. Business Insider says “Democrats Are Freaking Out About Nate Silver’s Latest Prediction”. The National Journal leads with “Democrats to Nate Silver: You’re Wrong”.

Guy Cecil, executive director of the Democratic Senatorial Campaign Committee, is not happy and insists Silver is wrong. This judgment is itself a prediction, for there is no way for Cecil to know Silver is wrong, but it’s a happy one because it tells Democrat supporters what they wish to hear. But then, even if the GOP does not re-take the Senate, it doesn’t make sense to say that Silver was wrong.

Cecil said, “In fact, in August of 2012 Silver forecast a 61 percent likelihood that Republicans would pick up enough seats to claim the majority,” but the Democrats held. Again Cecil said Silver was wrong.

But Silver wasn’t (and can’t be) wrong because there isn’t any way a (non-extreme) probability forecast can be wrong.

All probability forecasts sound like this: “Given my evidence, the probability of Q is P”. As long as P is less than 1 and greater than 0, there is uncertainty whether Q, the proposition of interest, is true. For Silver, Q = “The GOP retakes the Senate.” His evidence is proprietary and his P isn’t explicitly stated (but note that it can be calculated from the table he gives here; 20 bonus points for the reader who does it).

To be wrong, Silver’s forecast has to say P = 1 and the Democrats must retain control. There is no other way to err. If Silver’s P = 0.99 (and it isn’t), and the Dems keep regulating our lives, then Silver would still not be wrong.

There is a sense, though, that Silver’s prediction, in the light of the cold reality of the Dems holding power, might be seen as less than useful (we are imagining a future in which the GOP loses). This sense highlights the very real difference between a prediction and a decision. We’ve seen what a prediction is. A decision takes a prediction and acts on it. Decisions can be wrong. Non-extreme probability predictions cannot be.

One decision might be to bet that the GOP takes it. If the Democrats win, you lose your bet, and you lose because your decision is wrong. The prediction remains a probability, a true statement of the evidence used to create it.
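The difference between a prediction and a decision can be put in a few lines of code. This is a hypothetical sketch of my own, not anything Silver computes; the 60 percent figure is only a stand-in for “slight favorite”:

```python
def expected_value(p, stake, payout):
    """Expected winnings of a bet: gain `payout` with probability p,
    lose `stake` with probability 1 - p."""
    return p * payout - (1 - p) * stake

# A "slight favorite" call of roughly 60% makes an even-money $100 bet
# a positive-expectation decision...
ev_believer = expected_value(0.60, 100, 100)   # +$20 on average
# ...while someone who puts the probability at 50% sees no edge at all.
ev_skeptic = expected_value(0.50, 100, 100)    # $0 on average
```

The same prediction, two different decisions: only the decisions can turn out right or wrong.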

Not everybody will use Silver’s prediction to bet. Why? Because some people don’t like to bet, others like to but don’t see much payoff, or because a prediction which is just the other side of a coin flip doesn’t instill enough courage to gamble. But others might love to have a go and will plunk down lots, whether in terms of real money or in reputation points (a pundit might say “The GOP is gonna take it all!”).

Thus a prediction which is useful for one person can be of no use to another. Decisions made on predictions are so varied that there’s no way to know who, if anybody, they might be useful for. (Though there are ways to look at collections of predictions and surmise what might happen if these predictions are used for future decisions of a known sort.)

It’s clear, though, that Cecil doesn’t feel Silver’s latest clairvoyance is useful for him. If people act on Silver’s prediction such that they cease donating to Democrat candidates, thinking these candidates will lose, those candidates deprived of money will be more likely to lose. So Cecil must do what he can to plant doubts about Silver’s prediction—and about Silver himself—even though Silver is scarcely making a bold guess.

Luckily for Cecil, the Media is ready to shoot the messenger for him.


1Many statisticians of lesser repute, such as Yours Truly, have done much worse.

Evidence Does Not Support Low Consumption Of Total Saturated Fats

Fat is where it’s at.

The full version of the headline is this:

Current evidence does not clearly support cardiovascular guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats.

So says the team led by Rajiv Chowdhury in their “Association of Dietary, Circulating, and Supplement Fatty Acids With Coronary Risk: A Systematic Review and Meta-analysis” in the journal Annals of Internal Medicine.

If true, this is mighty bad news for those politicians, bureaucrats, and other busybodies who have made careers nagging citizens to avoid cream, cheese, butter, ghee, suet, tallow, lard, and, of course, red meats (Wikipedia has a list of tasty fats). Examples of such folk include the government’s newly formed Dietary Guidelines Advisory Committee. Reason magazine noted that “A look through the transcript of last week’s hearing reveals the word ‘policy’ (or ‘policies’) appears 42 times. The word ‘tax’ appears three times.”

There is nothing the government likes better than telling you what to do1—it’s for your own good. But they do like to sound sciency about their dictates, which is why papers like Chowdhury’s will be disquieting. The paper takes the wind out of the sails of the low-fat and “good”-fat touts. And it is a sober reminder of how delicate and changeable evidence of diet and health really is.

Before we discuss the results, if you don’t already know why you should (roughly) double every confidence interval you see, please first read the notes below.

Chowdhury’s paper is a meta-analysis, which is a way to group studies of a similar nature and say something about them in toto. There are two kinds of meta-analysis. The first groups studies the majority of which individually did not show “statistical significance”, i.e. showed no effect, but which when grouped (somehow) show the hoped-for effect. Because of the misinterpretation of things like confidence intervals, these kinds of meta-analyses should rarely be trusted.

The second kind of meta-analysis, and the kind which Chowdhury did, is to group studies the majority of which did not show significance but when grouped…also show insignificance. Because standard statistical evidence is designed to give positive results so easily, these kinds of meta-analyses can almost always be trusted.
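For readers curious about the mechanics of grouping, the standard approach is inverse-variance pooling of the study results on the log scale. The sketch below is a generic illustration of that technique, not Chowdhury’s exact procedure, and the study numbers in the last line are made up:

```python
import math

def pool_relative_risks(rrs, ses):
    """Fixed-effect inverse-variance pooling of relative risks.
    `rrs` are the per-study relative risks; `ses` are the standard
    errors of log(RR). Returns the pooled RR and its 95% interval."""
    logs = [math.log(rr) for rr in rrs]
    weights = [1.0 / se**2 for se in ses]
    pooled_log = sum(w * x for w, x in zip(weights, logs)) / sum(weights)
    pooled_se = 1.0 / math.sqrt(sum(weights))
    lo = math.exp(pooled_log - 1.96 * pooled_se)
    hi = math.exp(pooled_log + 1.96 * pooled_se)
    return math.exp(pooled_log), (lo, hi)

# Three hypothetical studies, each individually "not significant":
rr, (lo, hi) = pool_relative_risks([0.95, 1.02, 1.08], [0.10, 0.08, 0.12])
```

Pooling narrows the interval, which is exactly why a grouped null result is more believable than any single one.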

Of course, no meta-analysis is ever perfect: there are too many ways of going wrong; but this one seems fairly solid.

Our authors examined studies which paired cardiac outcomes and various kinds of fats. For example, the group of fatty acid supplementation observational studies gave a joint relative risk for coronary disease of 0.98 to 1.07 (this is the 95% confidence interval, which if it contains 1 is “not significant”). For use in real predictions, to first approximation, double this to get 0.93 to 1.12. In other words, fatty acid supplementation does squat for avoiding heart disease.
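The doubling rule of thumb is simple arithmetic about the interval’s midpoint. A small sketch of my own, applied to the supplementation interval, reproduces the 0.93 to 1.12 figure up to rounding:

```python
def double_interval(lo, hi):
    """Rough rule of thumb: double the width of a stated 95% interval
    about its midpoint to approximate a real predictive interval."""
    mid = (lo + hi) / 2.0
    half_width = hi - mid
    return mid - 2.0 * half_width, mid + 2.0 * half_width

lo, hi = double_interval(0.98, 1.07)   # roughly (0.935, 1.115)
```

(Doubling on the log scale would be slightly more natural for relative risks, but for intervals this close to 1 the difference is negligible.)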

Similar results were had for saturated fats (0.91 to 1.10), monounsaturated fats (0.78 to 0.97), long-chain ω-3 polyunsaturated fats (0.90 to 1.06), and even, glory be, trans fatty acids (1.06 to 1.27; but doubled is 0.96 to 1.38). The paper lists several more, but the results are similar to these. (See at the bottom of this page some minor numerical corrections admitted by Chowdhury, none of which change the conclusions.)

To repeat the juiciest findings (emphasis mine):

Our findings do not support cardiovascular guidelines that promote high consumption of long-chain ω-3 and ω-6 and polyunsaturated fatty acids and suggest reduced consumption of total saturated fatty acids.

They also say, “Nutritional guidelines on fatty acids and cardiovascular guidelines may require reappraisal to reflect the current evidence.”

But will they be reappraised? Doubtful. It would be too much like admitting a mistake.


1From Reason: “The Washington Free Beacon’s Elizabeth Harrington reported last week that NIH had spent nearly $3,000,000 in recent years to fund studies looking into the possibility of using text messages and web tools to treat obesity.”


On confidence intervals: (1) They don’t mean what frequentists say they mean; in practice everybody takes them as Bayesian credible intervals. A credible interval speaks of the guess of a probability model parameter: “There is a 95% chance the true value of the parameter lies in this interval, given all the data we have and assuming the model is true.”

(2) The Bayesian credible interval does not mean what it says either. In practice everybody takes the interval to speak of reality (about real risk, say) and not of a model parameter. Because of this, as a rough rule of thumb, always widen the stated interval by a factor of about (at least) 2. See this or this article for insight.

Thanks to @Mangan150 where I first learned of this study.

Update About the paper’s data corrections.

HANDY Not So Dandy: NASA-Funded Mathematical Model Of Doom


I’d pound Equality into the heads of my enemies.

Mathematically minded

Remember back in the 1990s when otherwise intelligent people would look to the scientific literature and say, “Those guys must be right. They used a computer model.”

A computer! How could they be wrong?

Yet once computers became ubiquitous, once they turned into mere “lifestyle” statements, once we started carrying the things everywhere in our pockets, this kind of talk died out. Results were no longer deemed extra-special-shiny just because they were formed inside a computer. Traditional notions of evidentiary goodness reasserted themselves, including the knowledge that computers only do what they’re told.

But when a result is based on a mathematical model—well! Just look at all those equations! Colorful graphs, too! We look at a paper written by mathematicians and think, “Anybody smart enough to go on and on about ‘dimensionless parameters’ and ‘optimal depletion factors’ must know what they’re talking about.” Right?

No, of course not.

All you need to understand about mathematical modeling is this: equations can be as error-free as Aristotelian syllogisms; nary a decimal out of place, but that does not imply that the use to which the equations are put is valid, or even sensible. Applying math to real life is not a mathematical operation, but a human act of interpretation.

For example, here is a perfectly respectable equation: y = x. We fill in numbers on the right-hand side and “solve” for numbers on the left-hand side. Simple. But what if I were to tell you—and recall I have a PhD in the subject from an Ivy League university—that “x” stands for Inequality and “y” for Distress, one an intolerably fuzzy notion and the other a raw emotion, and both scarcely measurable quantities? Would you be satisfied if I insisted that my mathematical model proved that inequality causes distress?

You might, if you had a stake in a political ideology which insisted on the relationship. And then you might not, if you realized that quantifying such complex, intricate, inchoate ideas into two arbitrary numbers was next to impossible.

Does anybody create these kinds of models? Do they offer trivial equations with unquantifiable entities which purport to demonstrate, say, how entire civilizations might collapse?

Yes, of course.

X = The end of the world

NASA in its wisdom, through the grant “NNX12AD03A”, which it now publicly regrets issuing, funded the study, “Human and Nature Dynamics (HANDY): Modeling Inequality and Use of Resources in the Collapse or Sustainability of Societies” by Safa Motesharrei, Jorge Rivas, and Eugenia Kalnay, which will appear in the journal Ecological Economics.

The authors note that Rome is no longer with us, and neither is the Han Dynasty (though many Chinese might dispute this). And since these, and other civil societies such as the Aztec Empire, Carthage, East Germany, Yugoslavia and many more, disappeared because of some cause, it makes sense to look for this cause.

Scholars are divided on why Rome collapsed, but a not unpopular answer is decadence coupled with an over-extended military, the rise of Christianity, and one or two other matters. Few would say a lack of food. Carthage had the bad habit of killing her newborns (good thing we don’t do this). The Aztec Empire let its population age a while before large numbers of them were sacrificed. (It was only later that science realized wholesale slaughter of one’s residents tends to cause demographic decline.)

Anyway, the authors note that “cultural decline and social decadence, popular uprisings, and civil wars” can cause or contribute to societal collapse. But none of these have to do with the environment, our modern obsession. So the authors ignored all possible causes except the environmental in “a simple model, not intended to describe actual individual cases, but rather to provide a general framework that allows carrying out ‘thought experiments’ for the phenomenon of collapse and to test changes that would avoid it.” They call this curiosity the “Human And Nature DYnamics (HANDY)” model.

It’s based on standard predator-prey models which work like this: a population of wolves eat the locally available deer, whose population necessarily declines, perhaps to the point where some wolves starve, decreasing their population; the concomitant reduced predation allows the deer to rebound, which gives the opportunity for more hot dinners for the wolves, which begins the cycle anew.
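The textbook predator-prey dynamics just described can be written in a few lines. This is a generic Lotka–Volterra illustration with made-up parameter values, not the HANDY model itself:

```python
def predator_prey(wolves, deer, steps=10_000, dt=0.001,
                  prey_growth=1.0, predation=0.1,
                  conversion=0.075, predator_death=0.5):
    """Euler integration of the classic Lotka-Volterra equations:
    deer grow and are eaten; wolves grow by eating and otherwise die."""
    for _ in range(steps):
        d_deer = (prey_growth * deer - predation * deer * wolves) * dt
        d_wolves = (conversion * deer * wolves - predator_death * wolves) * dt
        deer += d_deer
        wolves += d_wolves
    return wolves, deer

wolves, deer = predator_prey(5.0, 10.0)  # the two populations cycle
```

Because each population’s change is proportional to its own size, neither can cross zero: the cycle of feast and famine repeats rather than ending in extinction.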

Human wolves

HANDY swaps the wolves for human beings and the deer for “Nature.” Just how people prey on Nature is not too clear, especially since people are part of Nature. This difficulty is ignored. HANDY introduces a twist which allows mankind to accumulate Nature for future use, a surplus which the model calls “wealth”. This would be like the wolves discovering how to make and store venison jerky.

About that wealth: “Empirically, however, this accumulated surplus is not evenly distributed throughout society, but rather has been controlled by an elite. The mass of the population, while producing the wealth, is only allocated a small portion of it by elites, usually at or just above subsistence levels.” Some wolves are more equal than other wolves.

Some humans do eat less than others, but this is a culture-relative measure. For instance, “poor” people (what the authors call “Commoners”) in the USA have much higher obesity rates than the “rich” (“Elites”). In medicine-speak, being a Commoner is a “risk factor” for obesity. What do the authors say to this clear objection?

The meaning of life. Says the model.


They say this, the HANDY model, claiming that a culture’s fortunes are folded into these four simple equations.

The dots over the letters mean the thing represented by the letter changes over time. The names are: xC = the number of Commoners, xE = the number of Elites, y = the amount of Natural Resources, and w = Wealth. The latter two are expressed in units of “eco-Dollars”, a fictional entity which isn’t well described and doesn’t appear to map to any real thing.

The C’s are functions of w, xC, and xE, and the various Greek letters on the right-hand side allow the model to be tuned to give results the authors hope to see. Being able to put numbers to all these unquantifiable entities is what makes it Science.
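For the record, the four equations read roughly as follows (this is a reconstruction from published versions of the paper, so consult the original before relying on it):

```latex
\dot{x}_C = \beta_C \, x_C - \alpha_C \, x_C \\
\dot{x}_E = \beta_E \, x_E - \alpha_E \, x_E \\
\dot{y}   = \gamma \, y \, (\lambda - y) - \delta \, x_C \, y \\
\dot{w}   = \delta \, x_C \, y - C_C - C_E
```

Here the betas are birth rates, the alphas death rates (which depend on wealth), gamma is Nature’s regeneration rate, lambda its capacity, and delta the depletion rate.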


The authors fiddled with the parameters to tell three stories: (1) The first of an Egalitarian society, one with no Elites, and therefore necessarily meeting the goal of Equality (where all have the same number of “eco-Dollars”); (2) An Equitable society, one with Elites, but where all are Equal; (3) An Unequal society, one with Elites and Commoners and imperfect Equality (where Elites have more “eco-Dollars” than Commoners).

Now, without knowing the solutions to the equations (which are easy to come by), and without knowing the values of all the tweakable parameters, just you take a guess which of these three scenarios reached Sustainability and which led to inevitable Collapse.


Under the Unequal society “collapse is difficult to avoid”, because why? Because “Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society.” But notice the confusion: eco-Dollars have somehow been transformed into food. But man does not live on bread alone. He also needs energy and shelter. So if eco-dollars are proxies for food, then where in the model are these other necessary quantities?

Particularly absent is the idea that only in a society which admits Elites can there be technological progress (what else is a writer of scientific papers but an Elite?). Consider how much food production has swelled over the last century as the discoveries made by Elites are implemented to the benefit of Commoners. Nature, in this sense, is not static as the model assumes.

But as already noted, in Western culture the opposite of the HANDY model has been observed. Elites, skipping over the precise meaning of “eco-dollars”, eat fewer calories than Commoners. They breed less, too. Having access to more Nature strangely means less food consumption and fewer humans. Think of Japan.

And in non-Western cultures, many of which are more Egalitarian relative to the West, there are good arguments that the meddling of Western Elites is what causes some famines, many of which are artificial owing to local (and global) politics.

Yet there is no notion of politics, and no mechanisms for the more important senses of cultural change (such as population-reducing war, disease, abortion and contraception use) in HANDY, and no idea that, as happens in reality, Commoners become Elites and vice versa. There is no real-life strict dichotomy between an Elite and Commoner. And new forms of wealth and ways of increasing Nature’s bounty are often unanticipated.

The HANDY model says Unequal societies must collapse. But which societies, say over the last century, in reality gave up their ghosts? We must ignore those that collapsed because of war (such as the Austro-Hungarian Empire, Cambodia) or politics (e.g. Rhodesia, Czechoslovakia) because HANDY is silent on these important subjects. The remaining collapsees were those societies which were Egalitarian (e.g. the Soviet Union, Cambodia again?). And how many collapsed solely because of the non-Egalitarian use of “eco-Dollars”? It’s hard to make a case that any did. Ireland is still with us. There were large famines in, say, Uganda, but the culture is still extant and the famines were in large part caused by war.

I don’t mean this to be a complete history, but as in our “y = x” model, saying the symbols mimic real life does not make it so, especially when the model makes predictions opposite to reality. HANDY has no applicability to human cultural change. But it’s a matter of interest to see who is most anxious to believe the model.


The authors, whose faith in their model is strong, are ready with their political suggestions, which include recommending “major reductions in inequality and population growth rates”. The Guardian eagerly agrees and says “The NASA-funded HANDY model offers a highly credible wake-up call to governments, corporations and business—and consumers—to recognise that ‘business as usual’ cannot be sustained, and that policy and structural changes are required immediately.”

Once again, the triumph of theory over reality.

Update Ivy League Statistician Debunks NASA-Funded ‘Socialism or Extinction’ Study

Pick Your Lotto Numbers With a Greater Chance to Win? Paper From Brazil

The Blaze ran a story with headline “Mathematician Thinks There’s a Way to Pick Your Lotto Numbers With a Greater Chance to Win”.

Turns out Renato Gianella thinks he’s discovered a way to boost the odds of winning a lottery which, as far as I could discover, is no longer in existence.

But I don’t buy it; at least, I don’t think I do. Gianella wrote the paper “THE GEOMETRY OF CHANCE: LOTTO NUMBERS” in Revista Brasileira de Biometria, an obscure journal. I don’t mean any insult to Gianella, but the paper, which is (mostly) written in English, is difficult to follow. (Of course, if I wrote a paper in Portuguese the results would be dismal.) I gather the journal couldn’t afford a copy editor, because there are misspellings, many words running into one another (perhaps Gianella learned English from a German?), a lack of equation numbers, and similar difficulties.

Gianella writes of the Brazilian Super Sena, a lottery which appears to have folded up shop in 2001. I believe it was replaced then by the Mega Sena, a gamble which I’m guessing is operated similarly to the Super Sena, but with more numbers (i.e. a lower chance to win, but with higher jackpots).

That’s a lot of supposing, I know. But I’m not done guessing.

The Mega, and, I’m supposing, the Super, has two bins, the first with balls labeled 0-5, and the second with balls labeled 0-9. A ball from the first bin is drawn—say, 0—then one from the second—say, 3. The string “03” makes 3 (“00” becomes 60). Six times this is done. I haven’t been able to learn for certain, but it looks like if there is a duplicate number drawn, it is tossed out and there is another drawing, so that the final result gives 6 unique numbers. (Allowing duplicates would make an enormous difference in the probabilities.)
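My reading of the drawing procedure can be sketched in code; remember the redraw-on-duplicate rule is my guess, as stated above:

```python
import random

def draw_super_sena(rng=None):
    """Draw 6 unique numbers as the article describes the Super Sena:
    one ball labeled 0-5, one labeled 0-9, string them together
    ("0","3" makes 3), map "00" to 60, and redraw any duplicate."""
    rng = rng or random.Random()
    drawn = set()
    while len(drawn) < 6:
        n = rng.randrange(6) * 10 + rng.randrange(10)  # uniform on 0..59
        drawn.add(60 if n == 0 else n)  # "00" becomes 60; dups tossed out
    return sorted(drawn)
```

Under this reading every ticket of 6 distinct numbers from 1 to 60 is equally likely, which matters for what follows.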

One more cute twist: the Mega allows you to buy up to 15 numbers, but you still only have to match 6 to win. The cost of buying 6 numbers is (I think) 1 Brazilian real, but the cost of buying 15 is 5,005.
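That lopsided price has a plausible combinatorial explanation (my inference; the paper doesn’t spell it out): a 15-number ticket covers every 6-number subset of those 15 numbers, and there are exactly 5,005 of them.

```python
from math import comb

# A 15-number ticket wins if any 6 of its numbers match, so it is
# equivalent to comb(15, 6) separate six-number tickets -- which
# would explain a price of 5,005 reais at 1 real per combination.
combinations_covered = comb(15, 6)
```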

I couldn’t follow Gianella’s math, which has a funny feel to it (funny strange, not funny ha ha). I also think he’s made a logic mistake. This table from The Blaze shows Gianella’s method.

The so-called Lotto Rainbow


Gianella first forms “monochromatic” groups, those rows of colored numbers. And from these, he asks how numbers from different groups can combine in various ways, say one red with two greens. And from those various combinations, using some opaque (to me, but I’m lazy) combinatoric methods, he figures his probabilities.

Problem is, there’s no reason in the world to group the strings “00”, “01”, “02”, …, “09” into one color, and the strings “10”, “11”, …, “19” into another, and so on. The machines just spit out balls with writing on them. The groupings are what we humans see. So it appears that his results would change if we were to, say, swap the green “89” with a red “61”, because this would change the combinations. And if that’s so—if we can swap any string, which we obviously can—then his method, assuming all swaps, gives the standard result.
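To see why the colors cannot matter: under a fair draw, the chance of any color “template” follows from the multivariate hypergeometric distribution, whatever labels we paint on the balls. A sketch, assuming the colors are six equal groups of ten numbers each:

```python
from math import comb, prod

def template_prob(template, group_size=10, total=60, draw=6):
    """Chance that a fair 6-of-60 draw puts template[i] numbers into
    group i (multivariate hypergeometric). Any relabeling of the balls
    into equal-sized groups gives the same arithmetic, so no choice
    of grouping improves the odds of any particular ticket."""
    assert sum(template) == draw
    return prod(comb(group_size, k) for k in template) / comb(total, draw)

p_rainbow = template_prob((1, 1, 1, 1, 1, 1))  # one number per group
```

Some templates are indeed more probable than others—there are more tickets matching them—but every individual ticket remains a 1-in-comb(60, 6) shot.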

The third reason I don’t buy it is this table of results:

His comparative results.


The templates are his groupings of color combinations. The theoretical probabilities are what his model predicts and the observed frequencies are what was seen (over some period). Do these two columns really differ? Well, yes they do. But do they differ enough to be suspicious that he’s on to something? No: not really.

Just as I was about to give up on the paper entirely, I read his final words:

As a main aspect, it reveals that, although all bets are equally likely, behavior patterns obey different probabilities, which can make all the difference in the concept of games, benefitting [sic] gamblers that make use of the rational information revealed by the Geometry of Chance.

“Behavior patterns”? As in patterns in the behavior of the gamblers? Do the Sena payouts depend on the gambler’s behavior? The big lottery jackpot payouts here in the States (Mega Millions, Power Ball, etc.) do depend on gambler behavior—the more people who buy tickets the higher the chance of smaller winnings (winners might have to share jackpots). But payouts are different from chances of winning, which are the same for all. I admit to leaving the paper feeling very confused.

Gianella has set up a site to cash in on his tricks. He uses his methods on the American lotteries, too, but unless I’m badly mistaken, he’s fooling himself.


Thanks to reader Kent Clizbe for alerting us to this story.


© 2015 William M. Briggs
