
This Week In Doom: Bad Science Edition


3 out of 4 scientists agree

The bagatelle about how more than half of published research is wrong—a fact well known to regular readers—is garnering comment hither and yon.

Joanne Nova has “The bureaucratic science-machine broke science, and people are starting to ask how to fix it.”

Science is broken. The genius, the creative art of scientific discovery, has been squeezed into a square box, sieved through grant applications, citation indexes, and journal rankings, then whatever was left gets crushed through the press. We tried to capture the spirit of discovery in a bureaucratic formula, but have strangled it instead.

Junk Science weighs in, too. “It is our theme, sort of–our bête noire, our obsession—why can’t scientists and intellectual issue investigators tend to the evidence and insist on good methods?”

Joe Bast, Heartland’s chief, sent my piece around and John Droz responded (via email) saying Horton and I did not “adequately distinguish between Science and scientists.” Droz says, “Science is (at its core) a process” and he asks, “can Science be ‘wrong?'” answering “I think not.”

Well, sure. It takes a scientist to do science, bad or good. It can be said science is a process, but then so are history, theology, and literature. Knowledge does not stand still. It is added to and subtracted from continuously. We happen to be in a period where the subtractions outweigh the additions.

Science can be wrong. That is, propositions which are believed by all or most scientists can be false. The belief is “science is self-correcting”. Always? How can you prove it? Answer: you cannot. We may forever be stuck with ideas that are wrong.

Chocolate is good for you

It’s pretty darned easy to pull off a nutritional “science” hoax. Or any kind of science hoax, really.

John Bohannon teamed up with a German documentary crew to undertake a crappy junk-science study on the effects of bitter chocolate on weight loss, and managed to push their hoax to major media outlets all over the world — here’s how.

First, they created a fake science institute, The Institute of Diet and Health, and then recruited a friendly MD to help them recruit a small number of volunteers for a weight-loss trial…

…researchers paged through 18 other factors they’d measured in the study — “weight, cholesterol, sodium, blood protein levels, sleep quality, well-being, etc” — and cherry picked a couple of factors that looked better in the chocolate than in the low-carb group. On this basis, they were able to assert that adding chocolate to a low-carb diet made you lose weight 10 percent faster.

Statistics! “They wrote up a paper that contained obvious statistical canards — small sample size, bad sampling methodology, p-hacking, poor control group analysis” and got the thing published in a “premiere journal”.
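
How easy is it? Here is a minimal sketch of the trick (my simulation, not Bohannon's actual analysis): run a tiny trial in which there is no real effect anywhere, measure 18 outcomes, and count how often at least one of them comes up "significant" at the magic p < 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies = 10_000   # simulated hoax "studies"
n_per_group = 8      # tiny groups, as in the chocolate trial
n_outcomes = 18      # weight, cholesterol, sodium, sleep quality, ...

hits = 0
for _ in range(n_studies):
    # No real effect: both groups drawn from the same distribution.
    chocolate = rng.normal(0, 1, size=(n_outcomes, n_per_group))
    control = rng.normal(0, 1, size=(n_outcomes, n_per_group))
    # Test every outcome, keep whichever happens to look "significant".
    p_values = stats.ttest_ind(chocolate, control, axis=1).pvalue
    if (p_values < 0.05).any():
        hits += 1

print(f"At least one p < 0.05 in {hits / n_studies:.0%} of studies")
# Roughly 1 - 0.95**18, i.e. about 60%, despite zero true effect.
```

With 18 shots at the target, a wee p-value is nearly guaranteed. The hoaxers needed only one.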

And then mainstream reporters, self-described geniuses to a man, swallowed it whole: “headlines in media outlets from Huffington Post to The Times of India.”

Is there any field in which the egos of its denizens outstrip ability to such a degree as journalism?

Read the whole story from the trickster.

A gay old time

Speaking of fooling reporters, a study so outrageous that even the New York Times was forced to cover it.

In 2012, as same-sex marriage advocates were working to build support in California, Michael LaCour, a political science researcher at the University of California, Los Angeles, asked a critical question: Can canvassers with a personal stake in an issue — in this case, gay men and women — actually sway voters’ opinions in a lasting way?

Since the study—published in Science, the queen of American journals—accorded with the ideological convictions of the elite, it was gobbled up, its author feted, lionized. Turns out the “researcher” made up his data, which certainly makes the statistics easier.

More details here (sort of long and boring, though): The Case of the Amazing Gay-Marriage Data: How a Graduate Student Reluctantly Uncovered a Huge Scientific Fraud.

Mark Regnerus, who is often accused of fraud by the same sort of people who believed in the real fraud, has a piece: “A recent paper on same-sex marriage appears fabricated; an earlier, disparaged one on same-sex parents was not.”

I know how it feels to be accused of scientific malfeasance and sampling and data manipulation. I do not, however, know what it feels like to actually be guilty of those things. And yet over at the New York Times, my 2012 studies have been opportunistically lumped in with Mr. LaCour’s in an effort to tag my New Family Structures Study as tainted data. (They are not.)…

It’s the latest in a very long string of efforts to criticize the data, together with its sample, its author (and his friends), its funders, its measures, its analytic approach, its terminology, its data-collection organization, its reviewers, its journal’s editor, and its supporters. First one, then another, university inquisition has come to naught.

Moral of the story

Letting the public and the mainstream media decide what’s right and wrong is folly.

————————————————————

Update: Forgot to say thanks to Paul Martin for the chocolate study.


  1. Perhaps any paper using statistics as a modus operandi (other than to give error limits for measurements) should not be classified as “science” but …maybe socio-statistics or medico-statistics or ??? In any case, that classic video comes to mind (this is a different version–Buzz Aldrin and Thomas–no commercial prelim):
    https://www.youtube.com/watch?v=BpRgY9GXLO0

  2. @Bob,

    One very simple thing that might help curb bad statistics in the scientific literature would be to require that at least one reviewer for any submitted paper whose conclusions depend on statistics be a statistician.

  3. MattS,

    Even that is no guarantee. The real problem is that much of the scientific literature is built on small p-values. Observe the number of frequentists who think p-values say anything about P(model | data). The first step would be to abolish the p-value as a measure of evidence; the toy simulation at the end of this comment shows why.

    But even that may not help. Many of the -ologies dress up as science, but down deep they don't adhere to the scientific method. A case in point: all of my psych courses centered on the who/where/when and rarely on the what/why. Perhaps that is partly because there isn't much evidence supporting the whats and whys in psychology, so most of it is WAG. Voodoo and astrology seem more like sciences than psych.
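
    A toy simulation (my own numbers, under the assumption that real effects are rare among those tested) shows the gap between a p-value and the probability the hypothesis is true:

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_experiments = 100_000
    prior_real = 0.10   # assume only 10% of tested effects are real
    effect_size = 0.5   # modest true effect when one exists
    n = 30              # observations per group

    is_real = rng.random(n_experiments) < prior_real
    shift = np.where(is_real, effect_size, 0.0)
    a = rng.normal(0.0, 1.0, size=(n_experiments, n))
    b = rng.normal(shift[:, None], 1.0, size=(n_experiments, n))
    p = stats.ttest_ind(a, b, axis=1).pvalue

    significant = p < 0.05
    false_alarms = np.mean(~is_real[significant])
    print(f"P(no real effect | p < 0.05) = {false_alarms:.0%}")
    # With these numbers roughly half the "significant" findings are
    # false alarms, an order of magnitude away from the 5% the p-value
    # seems to promise.
    ```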

  4. Matt, what I’m saying is that statistics is used in the quasi- or pseudo- sciences–sociology, economics, psychology (much of it)–to give the quantification that is necessary to call it science. If qualitative judgments were made, then they might be useful, but it wouldn’t be science. I’m following my hero, Fr. Stanley Jaki, who has written that only that which can be quantified can dwell in the halls of science. If you go to physics or chemistry, the only time you need statistics (I’m not talking about probability or statistical mechanics) is to ascertain limits of error for measurement. So let’s come right out and remove the sacred cloak of science from all the disciplines pretending to the rigor of physics and chemistry (and allied stuff–biophysics, etc.). Down with climate science! Down with psychology! Down with sociology! Down with economics! (Or at least, show them the door.)

  5. The abandonment of truth, honesty, and personal effort.

    Who cares what the truth is, when people are all too happy to accept whatever anyone says, without verification, as long as it agrees with their own bias?

    Test everything, hold on to what is good.

  6. Matt: The corruption described by Jo Nova, Horton, and you is a direct result of the failure of the educational systems to adequately educate people about science. Carl Sagan explained it well in his last interview in 1996.
    Sagan expressed great concern that “if the general public doesn’t understand science and technology, then who is making all of the decisions about science and technology that are going to determine what kind of future our children live in, some members of congress? There are only a handful who have any background in science at all, and some of them don’t even want to know about it.” – Carl Sagan
    He also said, “We’ve arranged a society on science and technology in which nobody understands anything about science and technology, and this combustible mixture of ignorance and power sooner or later is going to blow up in our faces. I mean, who is running the science and technology in a democracy if the people don’t know anything about it.” – Carl Sagan
    Thus whoever controls the distribution, interpretation, and acceptance of science information will easily be able to control the uneducated electorate by influence and corruption.

  7. Part of Carl Sagan quote: “I mean, who is running the science and technology in a democracy if the people don’t know anything about it”

    No one should be running science, technology, or the economy. They should all be free and unfettered. The democracy quip is irrelevant. Sadly, Sagan shows that, like many academics, he was seduced by the siren call of central planning.

  8. jon shively: “The corruption described by Jo Nova, Horton, and you are a direct result of the failure of the educational systems to adequately educate people about science.”

    Perhaps that’s part of it. But I think the big thing is that science is hard. It’s hard for us laymen, yes, but it’s hard for scientists, too. All scientists are human, and most humans–scientists included–don’t have the inclination and/or the time really to dig into everything scientific.

    Our host’s being duped by Christopher Monckton to sign on to their recent paper is an example. Their Eq’n (1), on which the whole paper was ostensibly based, is an appalling bit of pseudo-science that an undergraduate student of control systems could have recognized as bogus if it had not been camouflaged in cumbersome notation. Anyone who had worked through the paper’s examples would have seen that it wildly miscalculates model responses.

    Yet I've seen no evidence that anyone (other than me) has gone through that exercise. Why? The same reason I usually don't do that type of thing myself: it's hard and takes time. And that's true not only of us lay observers. It appears to have been true of the authors themselves.

    No matter how well people are educated in science, science will remain so hard that large numbers of mistakes will go undetected.

  9. Joe,

    That explains why his Lordship got me liquored up before I agreed to sign on to the paper.

    On the other hand, maybe that equation is all it says it is and nothing more: a crude cartoon whose purpose is to emphasize the failures of the real models.

  10. Joe,

    And yet their model results more accurately match real world observations than the pretty, shiny models informing the IPCC reports.

    Results or Bullsh!t!

  11. Briggs: “On the other hand, maybe that equation is all it says it is and nothing more: a crude cartoon whose purpose is to emphasize the failures of the real models.”

    That’s fair enough in the case of equilibrium results or memoryless models. Yes, the paper claims only to be taking the outputs of Roe’s models as rough cuts at what the IPCC models might generate. I’m not trying to hold the paper to a standard any stricter than that.

    But the equation treats a model of a memory-implementing, time-invariant system as though it were a model of a system that's memoryless and time-variant. As a consequence, in that paper's Section 8.2 the equation gets results that aren't even close to what the Roe models' results actually are. The inferences to be drawn from the Section 7 calculation differ wildly from what Roe's outputs would imply, too.

    If the paper takes the IPCC to task for ECS values half again too high, then it’s hard to see why results off by more than a factor of three in calculating the chosen models’ responses can be considered acceptable.

    (Also, the heading of Table 2 says all its entries were “derived” from Roe, yet there is nothing identifiable in Roe that implies, as Table 2’s entries do, that lower feedback would result in higher transient response in the early years. I think it’s misleading, at the very least, to give the impression that Roe dictates the memorylessness that Table 2’s first row implies for lower feedback values.)
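
    (To see concretely why the memory matters, here is a generic sketch, mine and not the paper's actual equation: a memoryless gain answers a step in forcing instantly, while a system with memory, modeled as a first-order lag with the same equilibrium response, gets there only over decades.)

    ```python
    import numpy as np

    # Generic illustration (not the paper's equation): a memoryless gain
    # versus a first-order lag with the same equilibrium response to a
    # unit step in forcing. Numbers are arbitrary, for shape only.
    gain = 3.0    # equilibrium response (arbitrary units)
    tau = 30.0    # illustrative lag time constant, in years
    years = np.arange(0, 101)

    memoryless = np.full(years.shape, gain)       # jumps at once
    lagged = gain * (1.0 - np.exp(-years / tau))  # creeps up over decades

    for t in (10, 30, 100):
        print(f"year {t:3d}: memoryless = {memoryless[t]:.2f}, "
              f"lagged = {lagged[t]:.2f}")
    # year  10: memoryless = 3.00, lagged = 0.85
    # year  30: memoryless = 3.00, lagged = 1.90
    # year 100: memoryless = 3.00, lagged = 2.89
    ```

    Early-year responses differ by large factors, which is the flavor of error I'm describing.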

  12. Umm, let me offer “Murphy’s moral” to augment yours:

    The likelihood that the majority will be wrong on any issue varies directly with the complexity of the issue.

  13. The problem with any proposed solution to the current low standard for published scientific work is that people on the internet will simply pee on it. This is because it's much easier to pee on something than to fix it.

    Each scientific field has its own set of problems. The journals are the gatekeepers here. Perhaps a professional standards board for each field would help. This would be a strictly optional exercise, and each journal might receive a grade for its quality control.

    But what do you do with a field such as climate science, where very little is well understood yet the public wishes to pretend that things are well understood? I don't think a problem like that can be fixed. People have to learn the hard way. If they fail to learn, the best we can do is try to remind them.

    Here is an example from climate science. Your paper proposes X is true. What's the basis of the claim that X is true? Is it a model? OK, how well does the model hindcast? How well does the model forecast? While providing proof of accurate forecasting would be difficult, what is the point of allowing a paper to be published when there is no need even to demonstrate hindcasting?
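
    The check itself is simple enough to sketch (made-up data, with naive persistence as the skill baseline):

    ```python
    import numpy as np

    # Sketch of a basic hindcast check, on made-up data: fit on the
    # early part of a series, "predict" the held-out part, and compare
    # the error against a naive persistence forecast.
    rng = np.random.default_rng(0)
    years = np.arange(1950, 2016)
    series = 0.01 * (years - 1950) + rng.normal(0, 0.1, years.size)

    train = years < 1990                  # fit on 1950-1989 only
    slope, intercept = np.polyfit(years[train], series[train], 1)
    hindcast = slope * years[~train] + intercept
    persistence = np.full((~train).sum(), series[train][-1])

    def rmse(pred):
        return np.sqrt(np.mean((series[~train] - pred) ** 2))

    print(f"model RMSE:       {rmse(hindcast):.3f}")
    print(f"persistence RMSE: {rmse(persistence):.3f}")
    # A model that can't beat persistence on data it never saw has
    # demonstrated no skill, whatever its p-values say.
    ```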

  14. Steve E.: “And yet their model results more accurately match real world observations than the pretty, shiny models informing the IPCC reports.”

    You can take their word for that, or you can do some critical thinking.

    Their claim of skill comes in their Section 9. Read it carefully. They make a projection based on forcings probably half again what forcings were during the previous 67 years, and they infer skill from the fact that their projection is less than the previous 67 years’ trend–even though they’re basing this on a memoryless model.

    Be honest. Does this make any sense to you?
