Reader Ken pointed us to Space.com’s click-me-click-me! article “The 10 Mistakes People Make When Arguing Science”. Since everybody loves numbered lists, I’ll duplicate their efforts here, using their mistakes. Isn’t it curious that most are probabilistical (yes, probabilistical)?
1. Wait! That’s Just One Study!
In some cases, one study is enough. In some, none is plenty. Gort the caveman is out in the rain and sees a bolt of yellow light blast a stand of trees. A tree catches fire and a deer which had been standing nearby keels over. Gort reasons, “Nasty business those bolts, what? what?” He then advises Qweeloc, another member of the tribe, to keep clear. Gort needn’t know how or why the bolt bangs to understand its deadliness. And all Qweeloc has to do is trust Gort.
A lightning bolt is a physical thing, and physical things are easy to understand. Consider: Gort brought the skin of the dead deer home to Fuh, his darling mate, thinking she would be well pleased. She wasn’t and complained about the smell. This perplexed Gort extremely. He reasoned, “Sometimes my gifts please her, and sometimes they don’t. I can’t figure out what works.” And that’s because human behavior is hideously difficult to predict.
Update: Since Gort only has a sample of 1, he’d never get a wee p-value; thus he could never prove that lightning kills, and his paper would be rejected.
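For the curious, a minimal sketch in Python (scipy assumed; Gort, alas, lacked it) of why a sample of 1 dooms the paper: the classical t-test cannot even compute a p-value from one observation.

```python
# Gort's predicament, sketched: with n = 1 the sample standard deviation
# (ddof=1) is undefined, so the t-test yields nan for both the statistic
# and the p-value. No wee p-value, no publication.
from scipy import stats

observations = [1]  # one bolt, one dead deer
result = stats.ttest_1samp(observations, popmean=0)
print(result.statistic, result.pvalue)  # nan nan
```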
2. Significant Doesn’t Mean Important
But human behavior is shockingly easy to mistakenly claim to have understood. Had Gort treated this latest gift as part of a “random” experiment for which he calculated a p-value, he could have claimed “statistical significance”. He then would have been able to publish a peer-reviewed drum song in Caves and Hovels and secure his success as a researcher.
“My results are statistically significant!” sounds juicy and sounds like truth. It isn’t. Significance means finding a wee p-value and nothing else. This is why your better sort of statistician says Die P-Value, Die Die Die.
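To see how cheap “significance” is, here is a toy simulation (my sketch, nothing from Space.com): hand a t-test a huge sample carrying a true effect too small for anybody to care about, and out pops a wee p-value.

```python
# Toy demonstration that "statistically significant" need not mean important:
# a mean shift of 0.005 (negligible by any practical standard) becomes
# "significant" once the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
data = rng.normal(loc=0.005, scale=1.0, size=1_000_000)

t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)
print(f"p-value: {p_value:.1e}")            # wee, hence "significant"
print(f"observed mean: {data.mean():.4f}")  # still trivially small
```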
3. And Effect Size Doesn’t Mean Useful
Knowing that having a second weekly doughnut “doubles the risk” of splenetic fever is frightening. Until you realize that the risk goes from 1 to 2 in 10 million. As I wrote:
Not a trick question: what’s the difference between a risk of one in ten million and one of two in ten million? The official answer is “Not much.” Though I would also have accepted “Almost none”, “Close enough to be the same,” and “Who would care?”
Effect sizes communicate risk in terms of parameters, little mathematical bits inside models which are of no interest to anybody who wants to know the risk of a real thing. The risk of real things is necessarily smaller than the risk in the reported effect sizes. Effect sizes are a sure way to produce over-confidence. Selling fear is a risky but profitable business.
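The doughnut arithmetic, spelled out in a few lines (splenetic fever being, of course, this post’s invented disease):

```python
# Relative risk makes the headline; absolute risk tells the story.
baseline = 1 / 10_000_000  # risk of splenetic fever: 1 in 10 million
doubled = 2 / 10_000_000   # after that second weekly doughnut

print(f"relative risk: {doubled / baseline:.0f}x")     # 2x -- scary!
print(f"absolute increase: {doubled - baseline:.0e}")  # 1e-07 -- who would care?
```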
4. Are You Judging the Extremes by the Majority?
A scientist stuffs a rat, which lives in an artificial environment, full of chemical X and watches it develop cancer. The researcher then suggests that humans “exposed to” X will develop it, too. A reporter then writes the headline, “X Causes Cancer! When Will The Government Do Something?”
The government then does something. But the government also ignores hormesis, which describes the benefit low doses of X provide. Worried about your kid developing asthma? Some say, “Send him outside to play in the mud and dust and dirt.” Outside, incidentally, is that vast uncovered place far from “devices.”
5. Did You Maybe Even Want to Find that Effect?
Every working scientist knows about confirmation bias. Just as every working scientist knows it only happens to the other guy.
“Still—it is possible…for a historian or a scientist or, indeed, for any thinking man to present evidences, from a proper employment of sources, that are contrary to his prejudices, or to his politics, or indeed to inclinations of his mind. Whenever this happens, it manifests itself in his decision to present (which usually means: not to exclude) evidences not supporting his ideas or theses.” So says John Lukacs in his At the End of an Age. He calls this class of behavior in scientists not objectivity, but honesty.
How much of it do we have in science today?
6. Were you Tricked by Sciencey Snake Oil?
The science is settled! It’s a Consensus! And what says Science better than Consensus?
Read: The Consensus Fallacy. Or, even better, read The Consensus In Philosophy.
7. Qualities aren’t Quantities and Quantities aren’t Qualities
How much do you feel, on a scale of -3.4 to 117 2/3, that bulleted lists make you happy? Be precise. I intend to collect the answers to this instrument and write a scientific paper on how I feel about your feelings.
Improper and absurd quantification is a plague on science. See this growing list of Asinine Uses of Statistics.
8. Models by Definition are Not Perfect Representations of Reality
Don’t tell physicists who are on the hunt for the multiverse this. It will make them sad. And that could be a hate crime.
See this video on the love of theory. And remember what that fairly often sober statistician said, “The love of theory is the root of all evil.” Also see Theory confirmation and disconfirmation.
9. Context Matters
Thinking that it doesn’t is one of the great causes of over-certainty. Scientific statements are usually (mostly? always?) probabilistic, and all probability is conditional, which is another way of saying context matters. Read There Is No Such Thing As Unconditional Probability or The Monty Hall Problem.
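For the doubters, a quick Monty Hall simulation (a sketch of my own, not from the linked post) showing that conditioning on the host’s opened door, which is to say the context, changes the probability:

```python
# Monty Hall: the host's knowledge (he never opens the car door) is a
# condition, and conditioning on it shifts the switcher's win rate to ~2/3.
import random

def play(switch: bool, trials: int = 100_000) -> float:
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door holding neither the contestant's pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {play(switch=False):.3f}")  # ~ 0.333
print(f"switch: {play(switch=True):.3f}")   # ~ 0.667
```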
10. And Just Because It’s Peer-Reviewed Doesn’t Make It Right
Peer review is, at this point in our watered-down world of science, nearly useless. If you don’t believe this, look at the asinine list or at these peer-reviewed studies.
3.14159….
I hope that helps you in your research.
My desire (87 on a scale of -2 to 117 1/2) to click on a link promising a numbered list is greater than the satisfaction that said list provides (54 on the same scale). Nonetheless it may be a good marketing move to increase page views.
Hope these numbers help with your study.
Hey Briggs: I’ve said it before, but once again won’t hurt. With respect to your category 8, much of theoretical physics/cosmology is mathematical metaphysics, not science. For example, with respect string theory, read “Not Even Wrong” by Peter Woit… (Title refers to Pauli’s comment about a paper so bad “it isn’t even wrong”).
“with respect TO string theory”…not “with respect string theory”…
PROOF READ! PROOF READ!! PROOF READ!
Peer reviewed papers are written to be read by other peers. This article should probably be titled “10 of the usual mistakes science reporters make.”
Well said Briggs
I may just have to quote you from time to time
Cheers
H
Isn’t calling someone a science denier sufficient to win a scientific argument?