The full version of the headline is this:
Current evidence does not clearly support cardiovascular guidelines that encourage high consumption of polyunsaturated fatty acids and low consumption of total saturated fats.
So says the team led by Rajiv Chowdhury in their “Association of Dietary, Circulating, and Supplement Fatty Acids With Coronary Risk: A Systematic Review and Meta-analysis” in the journal Annals of Internal Medicine.
If true, this is mighty bad news for those politicians, bureaucrats, and other busybodies who have made careers of nagging citizens to avoid cream, cheese, butter, ghee, suet, tallow, lard, and, of course, red meats (Wikipedia has a list of tasty fats). Examples of such folk include the government’s newly formed Dietary Guidelines Advisory Committee. Reason magazine noted that “A look through the transcript of last week’s hearing reveals the word ‘policy’ (or ‘policies’) appears 42 times. The word ‘tax’ appears three times.”
There is nothing the government likes better than telling you what to do1—it’s for your own good. But they do like to sound sciency about their dictates, which is why papers like Chowdhury’s will be disquieting. The paper takes the wind out of the sails of the low-fat and “good”-fat touts. And it is a sober reminder of how delicate and changeable the evidence on diet and health really is.
Before we discuss the results, if you don’t already know why you should (roughly) double every confidence interval you see, please first read the notes below.
Chowdhury’s paper is a meta-analysis, which is a way to group studies of a similar nature and say something about them in toto. There are two kinds of meta-analysis. The first groups studies the majority of which individually did not show “statistical significance”, i.e. showed no effect, but which when grouped (somehow) show the hoped-for effect. Because of the misinterpretation of things like confidence intervals, these kinds of meta-analyses should rarely be trusted.
The second kind of meta-analysis, and the kind which Chowdhury did, is to group studies the majority of which did not show significance but when grouped…also show insignificance. Because standard statistical evidence is designed to give positive results so easily, these kinds of meta-analyses can almost always be trusted.
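For readers curious about the mechanics of grouping, here is a minimal sketch of one standard pooling technique, fixed-effect inverse-variance weighting on the log relative-risk scale, using made-up study numbers; the paper’s actual method may well differ, so treat this only as an illustration of how individually “insignificant” studies combine into a pooled interval.

```python
import math

def pool_fixed_effect(rrs, cis):
    """Fixed-effect inverse-variance pooling of relative risks.

    rrs: per-study point estimates; cis: per-study (lo, hi) 95% CIs.
    Works on the log scale and weights each study by 1/variance.
    Returns the pooled (lo, point, hi) back on the relative-risk scale.
    """
    z = 1.96  # normal quantile for a 95% interval
    log_rr = [math.log(r) for r in rrs]
    # back out each study's standard error from its CI width
    se = [(math.log(hi) - math.log(lo)) / (2 * z) for lo, hi in cis]
    w = [1 / s ** 2 for s in se]
    pooled = sum(wi * x for wi, x in zip(w, log_rr)) / sum(w)
    pooled_se = math.sqrt(1 / sum(w))
    return (math.exp(pooled - z * pooled_se),
            math.exp(pooled),
            math.exp(pooled + z * pooled_se))

# three hypothetical studies, each individually "not significant"
lo, mid, hi = pool_fixed_effect([1.02, 0.99, 1.05],
                                [(0.90, 1.16), (0.85, 1.15), (0.92, 1.20)])
# the pooled interval still straddles 1: grouped, still "insignificant"
```

Note the pooled interval is narrower than any single study’s, yet in this example it still contains 1, which is exactly the second kind of meta-analysis described above.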
Of course, no meta-analysis is ever perfect: there are too many ways of going wrong; but this one seems fairly solid.
Our authors examined studies which paired cardiac outcomes and various kinds of fats. For example, the group of fatty acid supplementation observational studies gave a pooled relative risk for coronary disease of 0.98 to 1.07 (this is the 95% confidence interval, which, if it contains 1, is “not significant”). For use in real predictions, to first approximation, double this to get 0.93 to 1.12. In other words, fatty acid supplementation does squat for avoiding heart disease.
Similar results were found for saturated fats (0.91 to 1.10), monounsaturated fats (0.78 to 0.97), long-chain ω-3 polyunsaturated fats (0.90 to 1.06), and even, glory be, trans fatty acids (1.06 to 1.27; but doubled is 0.96 to 1.38). The paper lists several more, but the results are similar to these. (See the bottom of this page for some minor numerical corrections admitted by Chowdhury, none of which change the conclusions.)
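In code, the rule of thumb is just doubling the interval’s width about its midpoint. This hypothetical helper reproduces the doubled intervals quoted above (to the rounding used in the text):

```python
def double_interval(lo, hi):
    """Widen a confidence interval to twice its width about its midpoint,
    the rough real-world correction this post advocates."""
    mid = (lo + hi) / 2
    half = (hi - lo) / 2
    return (mid - 2 * half, mid + 2 * half)

# supplementation, 0.98 to 1.07, doubles to roughly 0.93 to 1.12
print(double_interval(0.98, 1.07))   # ~ (0.935, 1.115)
# trans fats, 1.06 to 1.27, doubles to roughly 0.96 to 1.38
print(double_interval(1.06, 1.27))   # ~ (0.955, 1.375)
```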
To repeat the juiciest findings (emphasis mine):
Our findings do not support cardiovascular guidelines that promote high consumption of long-chain ω-3 and ω-6 polyunsaturated fatty acids and suggest reduced consumption of total saturated fatty acids.
They also say, “Nutritional guidelines on fatty acids and cardiovascular guidelines may require reappraisal to reflect the current evidence.”
But will they be reappraised? Doubtful. It would be too much like admitting a mistake.
1From Reason: “The Washington Free Beacon’s Elizabeth Harrington reported last week that NIH had spent nearly $3,000,000 in recent years to fund studies looking into the possibility of using text messages and web tools to treat obesity.”
On confidence intervals: (1) They don’t mean what frequentists say they mean; in practice everybody interprets them as Bayesian credible intervals. A credible interval describes uncertainty about a probability model’s parameter: “There is a 95% chance the true value of the parameter lies in this interval, given all the data we have and assuming the model is true.”
(2) The Bayesian credible interval does not mean what it says either. Instead, everybody takes the interval to speak of reality (about real risk, say) and not of a model parameter. Because of this, as a rough rule of thumb, always multiply the stated interval’s width by about (at least) 2. See this or this article for insight.
Thanks to @Mangan150 where I first learned of this study.