Drug Companies Tweaking Results To Produce Publishable P-values?

A person whom Nature is calling a “whistle blower” has written a brief confessional in the British Medical Journal admitting to what might be termed statistical fiddling at a “major” drug company. (The just-over-one-page article requires a subscription to view.)

It is always difficult to trust fully any gossip which is reported anonymously, for if a man can hide behind a letter he may say anything, even that which is not so, without fear of reprisal. Too, the person or organization who publishes the gossip has no way of verifying it.

Now Mr X, if we may call him that, speaks of a company’s post-FDA-approval observational studies on drugs, studies whose primary purpose is to tout the drugs. “Patented Xlimicorconaphil is better than the generic! Ask your doctor if Xlimicorconaphil is right for you. Hint: it is.”

Mr X criticizes the statistical methods of these observational studies. He claims, “the truth is that these studies had more marketing than science behind them.” Worse:

Since marketing claims needed to be backed-up scientifically, we occasionally resorted to “playing” with the data that had originally failed to show the expected result. This was done by altering the statistical method until any statistical significance was found. Such a result might not have supported the marketing claim, but it was always worth giving it a go to see what results you could produce. And it was possible because the protocols of post-marketing studies were lax, and it was not a requirement to specify any statistical methodology in detail. On the other hand, the studies were hypothesis testing (such as cohort studies, case-control studies) rather than hypothesis generating (such as case reports or adverse events reports), so playing with the data felt uncomfortable.

The dreadful, should-be-banned term “statistical significance” means a publishable p-value, i.e. one less than the magic, never-to-be-questioned number, a number given to us (rumor has it) by Merlin himself. The number is sacrosanct, it is written into the law. Studies which cannot produce the required number are shunned. Those that find wee p-values are glorified.

Now especially in observational studies, this desirable creature, the wee p-value, can always be found, as long as one is willing to rummage around the data for a sufficient length of time. Mr X claims that is what his drug company has done. He appears to think this practice unusual and a bit shifty. Shifty it may be, but unusual it is not. It is not confined to observational studies, but appears even in designed experiments. And this is to be expected when success is defined in terms of p-values.

Statistics in this way is like a machine into which is fed data, a crank is turned, and out pops a rotten egg or one made of gold. Turning the crank longer increases the chance of gold. Success is trivially identified, but so is failure. The process requires no thinking (except by the nameless mechanics who keep the machine running).
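The crank-turning is easy to demonstrate. Below is a minimal sketch in Python (the data, the subgroup splits, and the normal-approximation t test are all invented for illustration; nothing here comes from the article): both "arms" are drawn from the very same distribution, yet testing enough arbitrary subgroups usually coughs up at least one wee p-value.

```python
import math
import random

random.seed(1)

def two_sample_p(a, b):
    """Two-sided p-value for a two-sample t test, using the
    normal approximation (adequate for subgroups of n ~ 50)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # 2 * (1 - Phi(|z|))

# Pure noise: "drug" and "generic" arms from the SAME distribution,
# so there is no true effect to find.
drug    = [random.gauss(0, 1) for _ in range(200)]
generic = [random.gauss(0, 1) for _ in range(200)]

# "Rummage around the data": test 20 arbitrary subgroups and keep
# every p-value produced along the way.
p_values = []
for _ in range(20):
    idx = random.sample(range(200), 50)
    p_values.append(two_sample_p([drug[i] for i in idx],
                                 [generic[i] for i in idx]))

# With 20 independent-ish looks, the chance that the smallest
# p-value dips below 0.05 is roughly 1 - 0.95**20, about 64%.
print(min(p_values))
```

Report only the winning subgroup, stay quiet about the other nineteen looks, and the gold egg is ready for publication.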

Mr X also claims:

Other practices to ensure the marketing message was clear in the final publication included omission of negative results, usually in secondary outcome measures that had not been specified in the protocol, or inflating the importance of secondary outcome measures if they were positive when the primary measure was not.

Which sounds like standard politics. But I wonder. How often do drug companies try to hide negative results? Truly negative, I mean. Like discovering that widows who eat Xlimicorconaphil stroke out at rates exceeding the general population? What happens when this aberration finally outs? Smells like jail time.

Instead it’s more likely that the kind of “negative” results Mr X means are slight increases of slightly higher blood pressures in some subset of a subset of the population of those who take especially high doses of Xlimicorconaphil. Not a good thing, but not as awful as death or disfigurement.

Anyway, those negative findings are just that: findings. Produced using the same questionable statistical procedures as the positive findings which Mr X isn’t so keen about. How robust are they then? Probably not very.

The truth for most drugs is usually something like this: Xlimicorconaphil was found, via the usual FDA process, to be marginally better than the generic in some subset of the population. Xlimicorconaphil produces slightly different side effects, or of different intensity or frequency. The drug company, having to recoup its investment, takes this information, dresses it up, and sells the pill as New and Improved!

Nothing shady about this, especially in our all-marketing-all-the-time culture where such behavior is expected of everyone. The real worry is if doctors cease being skeptical gatekeepers.


Thanks to Brad Tittle who suggested this topic.


  1. What a great addition to your indefatigable campaign against the tyranny of the p-value!

    However, it’s just the tip of the iceberg. Today I coached a class of undergrads through a simple exercise in meta-analysis, using Fisher’s and Stouffer’s methods for combining the p-values from independent studies. Wow! Under the rubric of “gaining strength by consolidating information,” these techniques let you combine large (not so good) p-values with small (definitely publishable) ones to get…an even smaller p-value! This would appear to be Really Good News, and an antidote to File Drawer Bias. (Of course, my cynical wife would call this “making fried chicken out of chickens**t”.) My students were impressed.

    All was not lost however, in the campaign to unmask the dastardly p-value. My students applied a third meta-analysis recipe (the Weighted Z-Method) to the same collection of studies, and arrived at a not-so-hot p-value of 0.08, whereupon they recognized the potential for statistical jiggery-pokery, as described above by Whistleblower X. My students now have a better, and more realistic, appreciation for frequentist statistics as currently practiced.
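The commenter's classroom exercise is easy to reproduce. Here is a sketch of Fisher's and Stouffer's combining recipes using only the Python standard library (the example p-values are invented; the closed-form chi-square tail below works because the degrees of freedom, 2k, are always even): combining one "definitely publishable" 0.03 with a "not so good" 0.10 yields a combined p-value smaller than either input, under both recipes.

```python
import math
from statistics import NormalDist

N = NormalDist()

def fisher_combine(ps):
    """Fisher's method: -2 * sum(ln p_i) ~ chi-square with 2k df.
    For even df the survival function has a closed form:
    exp(-x/2) * sum_{i<k} (x/2)^i / i!."""
    x = -2.0 * sum(math.log(p) for p in ps)
    k = len(ps)  # df = 2k
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                  for i in range(k))

def stouffer_combine(ps):
    """Stouffer's method: sum the one-sided z-scores, divide by
    sqrt(k), and convert back to a (one-sided) p-value."""
    z = sum(N.inv_cdf(1 - p) for p in ps) / math.sqrt(len(ps))
    return 1 - N.cdf(z)

ps = [0.03, 0.10]
# Both combined p-values come out below 0.03, i.e. smaller than
# either study managed on its own.
print(fisher_combine(ps), stouffer_combine(ps))
```

Which is the "fried chicken" trick in two functions: neither method asks whether the underlying studies were any good, only what their p-values were.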

  2. “a number given to us (rumor has it) by Merlin himself. ”
    I thought the great statistician Dr. Fisher came up with the magic number that could turn dross into gold. It’s the statistical version of alchemy. Mr. X confesses they would resort to the data dredge to obtain something publishable. What a stunning revelation. I’m shocked, shocked I tell you.

  3. Briggs:

    Related to the wonderful insights of statistics, I’m wondering if you might be willing to comment on the following story (reported here and many other places):



    Granted, it is not about hats (your typical subject of regalement), but you have in the past discussed shoes too, if memory serves. I thought, in light of your sartorial excellence, you might enlighten us!

  4. “all marketing all the time” – So what’s the problem with that?

    “making fried chicken out of chickens**t” – I don’t understand why I never heard of this one. I even lived in Texas for a while where this surely originated.

  5. Only want to mention that this type of article is what brings me back to your blog daily. Thank you.

  6. “ill no doubt come more earlier again as exactly the similar nearly very regularly inside of case you defend this increase.” I had that a couple of weeks ago but I got rid of it with large doses of vitamin C. Generic vitamin C too!
