A version of this post originally appeared on 20 October 2012. But after a Twitter conversation with our friend @Neuro_Skeptic, it’s time for an addition.
Comes a Salon article entitled “The Internet Blowhard’s Favorite Phrase: Why do people love to say that correlation does not imply causation?”.
That article uses a lot of words, more than I in my shaken state can assimilate. It also has pictures of Karl (but unfortunately not Egon) Pearson and some blurry lines whose meaning escaped my bleary eyes.
Anyway, here’s the short truth of it: if there is causation, there is correlation. If there is no causation, there might be aberrant correlation. If there is correlation, there might be causation. If there is no correlation, there is no causation.
Thus correlation implies, but does not prove, causation. It implies in the colloquial definition of that word; it suggests. Its presence does not prove implication in the logical sense, though. But since most people are unaware of the distinction in meanings, and most take “to imply” as a synonym of “to suggest”, it’s not improper to say correlation implies causation.
Suppose you see me shoot my pistol at a plate glass window and observe that it breaks. There is correlation: the two events are coincident. There is causation. The correlation implies the causation.
Later you see a second person doing the shooting. But you learn after the fact that he is a magician practicing the bullet-catching trick. The window indeed breaks after you hear the bang but it breaks because of a hidden device he activates on his person. The bullet is not causing the window to break, but to your mind there is the correlation, the coincidence between the bang and breaking. So there is correlation and no causation.
A correlation of X and Y is this: knowledge of X changes our judgment of the possibilities of Y, and vice versa. If X and Y are not correlated, then knowledge of X does not change our judgment of the possibilities of Y, and the other way round, too.
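To put a number on that definition, here is a minimal Python sketch with invented probabilities (nothing below comes from the post itself): if learning X shifts the probability we assign to Y, then X and Y are correlated in this sense.

```python
# A minimal sketch of the definition above: X and Y are correlated when
# learning X changes our judgment of the possibilities of Y.
# The probabilities (0.7 and 0.3) are made up purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Simulate a binary X and a Y whose chance of being 1 depends on X.
x = rng.integers(0, 2, size=100_000)
y = (rng.random(100_000) < np.where(x == 1, 0.7, 0.3)).astype(int)

p_y = y.mean()                    # judgment of Y knowing nothing of X
p_y_given_x1 = y[x == 1].mean()   # judgment of Y after learning X = 1
p_y_given_x0 = y[x == 0].mean()   # judgment of Y after learning X = 0

print(f"P(Y=1)       ~ {p_y:.2f}")
print(f"P(Y=1 | X=1) ~ {p_y_given_x1:.2f}")
print(f"P(Y=1 | X=0) ~ {p_y_given_x0:.2f}")
# The conditional judgments differ from the unconditional one, so knowledge
# of X changes our judgment of Y: correlated, in the sense used here.
```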
(Here’s the update.) It was suggested to me that there might not be correlation in the presence of causation. The example given was that X causes Y but Z counteracts X. But this isn’t right: in that case X did not cause Y, because Z counteracted it. X can cause Y but not all the time, because Z (or whatever) sometimes counteracts X. In that case the observed correlation between X and Y might not be recognized. This is true; but in those times that X did cause Y, there is correlation. We mustn’t confuse our statistical tests’ ability to recognize correlation with the presence or absence of correlation.
I threw a bunch of navy beans (X) onto the floor, where they remarkably self-ordered into the vague visage of our Dear Leader (Y). That’s aberrant correlation for you, for, as is well known, navy beans normally take the shape of farm animals.
Now along comes an academic anxious for a paper. He sees my X and Y, puts the beans into a statistical model, and out pops a wee p-value, which is nothing more than the same evidence of our eyes unfortunately quantified. I say unfortunate, because the unnecessary quantification gives more evidentiary weight than is due. The X/Y observed correlation has been given a number, and numbers are easier to believe and to reify than anecdote.
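For readers who want to see the mechanics, here is a hedged sketch of how such a wee p-value can arise from pure noise. It does not reproduce the bean experiment; it simply hunts through many unrelated candidate X’s for the one that best matches Y, which is roughly what correlation-hunting amounts to.

```python
# A sketch, with simulated noise, of how an aberrant correlation earns a
# wee p-value: draw a Y unrelated to anything, test many unrelated X's,
# and report the most impressive match.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

n, candidates = 20, 200
y = rng.normal(size=n)                 # "Y": noise with no cause in sight
xs = rng.normal(size=(candidates, n))  # 200 unrelated "X" variables

results = []
for x in xs:
    r, p = pearsonr(x, y)  # correlation coefficient and its p-value
    results.append((r, p))

best_r, best_p = min(results, key=lambda rp: rp[1])
print(f"best r = {best_r:.2f}, p-value = {best_p:.4f}")
# With enough candidates, some unrelated X will look "significant"; the
# resulting number is easier to believe and to reify than the coincidence it is.
```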
And there is where the story usually stops, until the peer-reviewed paper1 is published, where it catches the eye of one of my dear readers, who dutifully forwards it on to me, wherein I in vain point out once again that academics should be penalized for publishing more than one paper per decade.
It is surely true that, for the observed experiment, X and Y are correlated, and thus causation is implied or suspected. Which is why we have to wait for replication. The statistical model that claimed X and Y are mates can be, and should have been, turned into a predictive model (all statistical models can, and usually should, be so turned). And that predictive model should have been compared against new data, data never before seen or used in any way to form the original model. Only then can we know if the observed correlation is aberrant or lasting and real.
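As a rough illustration, assuming invented data and a simple straight-line model, that check might look something like this: fit on the original sample, then score the predictions only on data never used to build the model.

```python
# A sketch of the out-of-sample check described above. The data here are
# deliberately unrelated, standing in for an aberrant correlation.
import numpy as np

rng = np.random.default_rng(2)

def new_batch(n):
    """Fresh draws of X and Y (here: utterly unrelated to each other)."""
    return rng.normal(size=n), rng.normal(size=n)

# The original experiment: a small sample where X and Y happened to look related.
x_old, y_old = new_batch(15)
slope, intercept = np.polyfit(x_old, y_old, 1)  # the "statistical model"

# Replication: brand-new data, never used to form the model.
x_new, y_new = new_batch(1000)
pred = slope * x_new + intercept

out_sample_err = np.mean((y_new - pred) ** 2)
ignore_x_err = np.mean((y_new - y_old.mean()) ** 2)  # just guess the old mean of Y

print(f"new-data error using X:    {out_sample_err:.2f}")
print(f"new-data error ignoring X: {ignore_x_err:.2f}")
# If using X does no better on new data than ignoring X entirely, the
# original correlation was aberrant rather than lasting and real.
```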
Problem is, the academic who published the paper can’t be bothered to wait for new data: he is already out in the field on the hunt for new correlations, the curiouser or the more at odds with common sense the better. Find them he will, and so too will his brother and sister academics, who will flood the journals with their research.
This phenomenon will be noted by civilians who, since every time a spurious correlation is found causation is claimed by some academic, will form the opinion, wrong as we now know, that correlation does not imply causation.
The whole thing is rather depressing.
Update: Clearly there is more to be said about causation and correlation. We barely scratch the surface here.
——————————————————————————————
1 “The tendency of startled legumes to spontaneously form images of sainted politicians,” Journal of Psycho-Social Academic Research, vol. 72, pp. 513–589.
Aha! Statistical proof that Y is a farm animal.
Not really. Just another example of stepping outside of expertise. The use of X is the proper domain of bean counters.
The difference between implication and proof is several dozen light years at least. That article is a lot of huffing and puffing about statistics when misuse of the word “implies” is the culprit. Don’t kids learn vocabulary words in school any more?
I would have thought that it would take far fewer than 76 pages to show the tendency of startled legumes to spontaneously form images of sainted politicians.
But then again I thought that legumes were smarter than that.
Too often the correlation is not that X causes Y, but that X is a proxy or indicator of the true cause of Y, leading leaders to attempt to gain Y by manipulating X.
Example A-1: Stalin, believing that steel production was a _cause_ of industrial growth instead of an indicator of factors supporting growth. This error was repeated by Mao. Both leaders slashed and burned the real sources of economic gain pursuing gains in X (steel production), resulting in _falls_ in Y (national wealth). Epic tragic fail.
Example B — School lunches (or breakfasts). It was correctly observed that kids with lunches got better grades and test scores. It was hypothesized that well-fed kids could concentrate on learning rather than their next meal. So gov’t bought kids lunch, then breakfast, then summer meals, then supper, and now is tinkering with the assortments of nutrients … In fact kids with “with it” parents who cared enough to pack lunch, or ensure a hearty breakfast, also care about homework, grades, test scores, etc. Kids with parents who care also care, and do better. But we’re still focused on the menu at school and can do nothing to get parents into “with it-ness.”
Example C — It has long been observed that drinking wine may be followed by increased incidence of singing silly songs, dancing without inhibition, and making risqué remarks to members of the opposite sex. Many believe, therefore, that wine makes the drinker happy. In fact, the wine makes the drinker _more_ of whatever he was to begin with. Singers sing, dancers dance, flirts flirt, but lazy bums become bigger bums, and thugs become more thuggish.
I suspect the term is “co-factor” but there may be another better.
So often in science reporting, I see some wacky explanation as to why X might have caused Y when there was no research on this causation.
Mr. Briggs,
Why did you make such a ridiculous statement… based on a Hotair report?
Please correct the last word of the fourth paragraph
X: Pollution increases
Y: Life expectancy increases
Does pollution increase life expectancy?
… or do elderly pollute more?
We need a peer revised paper on that…
Those with a wee knowledge of statistics will know that there may be hidden variables that are the cause of two variables being correlated. Here’s an example from my graduate statistics class. You look at a scatterplot of height versus hair length for students in a statistics class in 1951 (you’ll see below why it’s important that the 1951 was chosen as the year) and there is a strong negative correlation. Now you can’t say that being short causes your hair to grow long, but if you look at the sex (not gender) of the students you’ll see that those with shorter hair and greater height are male, while those with longer hair and shorter height are female. Hence, the hidden variable in this example is sex (not the act, but the quality). This is why statisticians say correlation is not causation.
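(If it helps to see that classroom in numbers, here is a small simulation with made-up heights and hair lengths; the hidden variable manufactures the correlation, and holding it fixed makes the correlation all but vanish.)

```python
# A made-up version of the 1951 classroom: sex drives both height and hair
# length, so the two correlate even though neither causes the other.
import numpy as np

rng = np.random.default_rng(3)

n = 500
male = rng.integers(0, 2, size=n).astype(bool)

# Taller with short hair on average if male; shorter with long hair if female.
height = np.where(male, 70, 64) + rng.normal(0, 3, n)  # inches
hair = np.where(male, 2, 12) + rng.normal(0, 2, n)     # inches

print("overall corr(height, hair):", round(np.corrcoef(height, hair)[0, 1], 2))
print("corr among men:            ", round(np.corrcoef(height[male], hair[male])[0, 1], 2))
print("corr among women:          ", round(np.corrcoef(height[~male], hair[~male])[0, 1], 2))
# The strong negative correlation overall nearly disappears within each sex:
# the hidden variable, not height, accounts for hair length.
```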
Reynolds’ Law points out one of the great correlation errors of our time.
Reynolds’ Law: “Subsidizing the markers of status doesn’t produce the character traits that result in that status; it undermines them.”
http://philoofalexandria.wordpress.com/2010/09/25/reynolds-law/
Does this work at the quantum level? It sometimes seems, when physicists talk about quantum levels, causation does not work the same way as it does for us. I’m way out of my league here…
I learned an example in my intro class of correlation from a proxy variable that some may wish to hold onto for an example. The example is that higher reading comprehension/ability is correlated well with arm length. A little thought reveals that arm length is just a proxy for age, and people generally become better readers with age.
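(A quick simulated version of that proxy effect, with invented numbers: age drives both arm length and reading score, so the two correlate; hold age roughly fixed and the apparent relationship largely evaporates.)

```python
# Arm length as a proxy for age: both arm length and reading ability grow
# with age, so they correlate without either causing the other.
# All constants are invented for illustration.
import numpy as np

rng = np.random.default_rng(4)

age = rng.uniform(5, 15, size=1000)                   # years
arm_length = 40 + 2.5 * age + rng.normal(0, 3, 1000)  # cm, grows with age
reading = 10 * age + rng.normal(0, 15, 1000)          # score, improves with age

band = (age > 9) & (age < 10)  # hold age roughly fixed
print("corr(arm length, reading), all ages: ",
      round(np.corrcoef(arm_length, reading)[0, 1], 2))
print("corr(arm length, reading), ages 9-10:",
      round(np.corrcoef(arm_length[band], reading[band])[0, 1], 2))
# Arm length predicts reading only because it stands in for age.
```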
The proxy/hidden variable problem is especially problematic with anything that varies in time.
A pal once told me, “Correlation does not make for causation…. but it is the only way to bet.”
Briggs wrote: “Problem is, the academic who published the paper can’t be bothered to wait for new data: he is already out in the field on the hunt for new correlations, the curiouser or the more at odds with common sense the better. Find them he will, and so too will his brother and sister academics, who will flood the journals with their research.”
My impression of how science works tells me that you are correct. Max Planck, who was closer to science’s inner workings than I’ll ever be, said it nicely: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
I add some comments to Planck’s observation. Every year cranks out the new generation of scientists anxious to make their mark. One way to get noticed (i.e. generate grant money) is to ask, and test, the kinds of questions their elders left unasked. Or, as is also commonly the case, asked but not pursued.
The holy grail of science is to overturn established theory by some new line of inquiry and/or evidence. I would expect younger scientists to be more motivated by this than older ones. Nevertheless, I believe this is also true of how science works, and you leave it out at the peril of appearing to succumb to exactly the same thing which you critique.
Look to Richard Feynman (and many others) for a really pernicious feature of research and publishing: consider all the research gathering dust in desk drawers, file cabinets, and floppy disks that went unpublished because it was never even submitted to a journal … for the reason that the pee values were not wee. Publishing the failures would keep other researchers from independently and unknowingly making the same mistakes, thus generating more of the exact same “unpublishable” negative conclusion. And even if the results were “bad”, the data may still be “good”, or at least relevant to someone else, to whom it might give other ideas or at least save duplication of data gathering. And think of the impact on meta-studies that could rely not only on a broader set of data, but on some hints about both “right” and “wrong” lines of inquiry.