Peer Review Fails? Sympathetic Priest? What’s Going On Here?

Duke Scores Another Own Goal

According to Science, Duke University oncologist Anil Potti quit after being caught cheating:

Anil Potti had published papers in prominent journals identifying gene signatures in tumors that could predict how a patient would respond to treatment. But his work came under scrutiny after two biostatisticians at M.D. Anderson Cancer Center in Houston, Texas, spent years trying and failing to replicate it. The case broke wide open this summer when The Cancer Letter discovered that Potti had falsely claimed to have won a Rhodes scholarship. Duke placed Potti on administrative leave soon after.

I do not want to emphasize the theme of Statisticians To The Rescue—as heartening and as obvious as that would be—but instead want to ask why the Journal of Clinical Oncology had to retract one (so far) of Potti’s papers. (It is impossible to resist saying Potti paper; I have not resisted.)

What’s odd is that this particular Potti paper was peer-reviewed—peer-reviewed Potti paper!—and what follows from that is, of course, the peer reviewers got it wrong. They accepted when they should have rejected. Why?

Part of the reason must be that the peer reviewers reviewing Potti’s paper wanted to believe the results. The Potti paper examined “gene expression signatures” in the hopes that personalized cancer treatments could be designed, a different one for every genetic signature. Not a bad idea, that.

“But the gene signatures used to define tumor types—and there are many candidates out there—have been difficult to replicate. (link)” Until up stepped Anil Potti who flushed the old ideas away and reported that he could do what none before him could. Evidently, this was welcome news to the reviewers of the Journal of Clinical Oncology, and of The New England Journal of Medicine and Nature Medicine where Potti also mouthed off in print (I had to work that in).

Thus, we have yet another incident where results that were devoutly to be wished for were uncritically accepted. What’s unknown is how often this happens elsewhere.

A Sanctimonious Priest

Sanctimonious wasn’t always a bad word. It originally meant “possessing sanctity”, which is to say describing one who demonstrates holiness or who is devout. But the dictionary says this meaning is obsolete and that sanctimonious is a mere synonym of “hypocritical.”

The wordsmiths at Merriam-Webster cannot imagine somebody actually meaning what he says when he speaks about matters spiritual. They figure that the speaker must be lying for effect.

But I mean it in the original and better sense when I say that Father Jack Landry is sanctimonious. He’s the priest on the television show V, mysteriously a product of Hollywood. It’s a mystery how any show which features a sympathetic priest ever made it past the television censors.

According to the New York Post, the actor who portrays Father Landry, Joel Gretsch, was himself raised Catholic.

Priests on TV are, these days, child molesters or behind-the-collar schemers. It’s only natural considering recent headlines.

But “V” having a heroic priest a [sic] as one of its main characters? That makes Father Jack Landry practically one of kind in prime time.

That makes him one-of-a-kind in any time. On film, that is—since roughly 1970. If you don’t know any Catholic priests, I can tell you that the Father Landry character is, strange as it might seem, modeled after real-life priests, the majority of whom are just what they try to be: sanctimonious. To say that not all succeed is merely to say that they are human.

What about the show itself? Let’s not forget that this is a series about evil aliens come to Earth to deliver “free universal health care”, and who announce that all they want to do is take care of us. Their real plan is to at least enslave us, probably to eat us. Everything about this program runs counter to traditional Hollywood ethics, yet it survives. I can find no suitable explanation for why this is so.

About that free health care. It’s not that unusual that those who wish to conquer us would offer it. Every good farmer ensures his livestock are visited by the veterinarian.


  1. “Part of the reason must be that the peer reviewers reviewing Potti’s paper wanted to believe the results”

    The peer review process is far from perfect, but I would not put wishful thinking on its list of failings. Reviewers rarely hold back with criticism. It seems that the paper was a skilled forgery that deceived the experts, including for several years the two biostatisticians who tried to replicate the results.

    As it is unlikely that anyone could get away with high-profile scientific fraud in the long term, I suppose the author must have been convinced that it was just a matter of time before someone else would find the evidence that eluded him.

    Why are there so many of these fraud cases in the medical field?

  2. I have not yet seen Father Jack on V, but I approve of him. There are a lot of good men in the priesthood, and caricatures never really helped anyone.

    What about the show itself? Let’s not forget that this is a series about evil aliens come to Earth to deliver “free universal health care”, and who announce that all they want to do is take care of us. Their real plan is to at least enslave us, probably to eat us.

    I never pondered this particular facet of the series; then again, I never watched it apart from the odd episode (I liked the old series, but I was too young to understand anything back then).

    To conclude that health care is “out to get you” because some aliens use it to win your trust is silly. The very fact that they can use it “to get you” is because it is a good thing. And our health care isn’t being provided by aliens. At least I think so.

  3. The thing to remember about “news” is that it’s presumably about something novel or at least unusual. This must also apply to stories about wayward priests. You will never see “Beach Party Tanned by Sun!” headlines. You won’t hear about an accident on the expressway unless it ties up traffic for hours. Why people think that things in the news are common (planes crashing; hijackings; whatnot) is beyond me.

    I watched V last season. It’s full of preposterous premises. Not the least is where members of a non-caring and cold species suddenly begin to fall in love despite millennia of apparently contrary evolution. Sounds like Hollywood (or maybe just Star Trek) tradition to me. OTOH, we do have people who hug trees. Father Landry is the least preposterous of all of the premises.

    Sorry, Luis, “free universal health care” means aliens are going to eat you and your children. Heinlein got it right: there ain’t no such thing as a free lunch. Health care may become universal but you can count on it not being free.

  4. Sorry, Luis, “free universal health care” means aliens are going to eat you and your children.

    Dammit! Well… OK, I’ll stick to “Universal Health Care” then.

    Heinlein got it right: there ain’t no such thing as a free lunch.

    Heinlein? Good sources you got there.

  5. The purpose of peer review is not to detect and expose fraud. It’s supposed to be a quality bar for publication, nothing more than that. Only in extremely unusual cases (cold fusion, say) would reviewers try to establish whether the authors were being untruthful.

    You’ve obviously been involved on both sides of peer review and you know this, so I’m not sure what honest point you’re trying to make.

  6. OK, then let’s ignore the peer review part for a minute. This research was being actively tested in clinical trials. The trials were suspended once because of the issues around the research. The biostatisticians published their critique in Nov 2009.

    ‘However, even though Duke suspended those trials in October 2009, they were restarted again in January 2010 after an internal investigation by Duke’s Institutional Review Board confirmed the research and concluded that this approach was “viable and likely to succeed.”

    When contacted by The Cancer Letter and shown documents obtained under the Freedom of Information Act, the 2 statisticians from M.D. Anderson who had questioned the technology said they were not satisfied by the internal review. “Duke’s statement implies that other members of the scientific community should be able to replicate the reported results with the data available,” they told the publication. “Having tried, we can confidently state that this is not yet true.”’

    This is way beyond peer review issues, it’s hundreds of cancer patients undergoing therapy based on predictions from this research. What finally stopped the trials was not the shoddy research, but the revelation of resume padding.

  7. And yet those who should know better keep denying a “wishful thinking” motivation. What higher impulse than benefiting society would explain most scientists going astray this way? The road to hell is paved with good intentions, etc., etc. And yes, there will be an odd exception or two.

  8. I can’t imagine what Potti and his research team have gone through. You might say that the entire ordeal was self-inflicted, but don’t believe everything you read.

    I have a good friend who got caught up in a “scandal” in which one of the PIs was accused of plagiarism. The accused is now working at another institution. None of the reports accurately summarized the facts, including those by an independent panel and by the university, which were clumsily padded to prevent further damage to the university. The issue could’ve been solved without being blown out of proportion, but someone had a vendetta against the accused. And you can bet that I avoid confiding in this someone, since he/she could use my words against me in the future.

  9. You will find a lot more sympathetic portrayals of priests in movies and tv shows than you will of protestant preachers (especially preachers with a southern accent). Preachers are portrayed as uniformly evil, much like business executives and real estate developers.

  10. Regarding the Potti paper, I wouldn’t say that Potti was caught cheating. I’d say he was caught publishing a bungled analysis, something that unfortunately isn’t remarkable. Of course the Rhodes scholarship claim is another matter.

    I submit that bioinformatics articles are usually not peer reviewed, not in the sense of someone examining the statistics in the detail that my colleagues applied to Potti’s paper. Keith Baggerly and Kevin Coombes invested hundreds of hours not only trying to reproduce Potti’s results but also reverse-engineering the analysis to determine what calculations were actually done. It’s not possible to give anywhere near as much attention to reviews routinely.

    Also, because journals don’t want to print lengthy statistical details, papers do not carry the necessary information to reproduce the analysis, even if a reviewer were willing to try. Of course full details could be published as online supplements, but this isn’t often done.

    These analyses are so complex, so error prone, that they should be automated and the automation code published. That is just what Baggerly and Coombes did. They didn’t just claim Potti was wrong, they published the source code of their analysis for anyone to examine.
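    A minimal sketch of what such a scripted, publishable analysis might look like (the gene names, expression values, and signature below are entirely hypothetical, not Potti's actual data): every step lives in code, so a reviewer can rerun it exactly rather than reverse-engineer what calculations were done.

```python
# Hypothetical expression data: sample -> {gene: expression level}.
samples = {
    "patient_1": {"GENE_A": 2.0, "GENE_B": 4.0, "GENE_C": 1.0},
    "patient_2": {"GENE_A": 6.0, "GENE_B": 2.0, "GENE_C": 3.0},
}

# A made-up "gene signature": the subset of genes whose expression
# is averaged into a single score per sample.
signature = ["GENE_A", "GENE_B"]

def signature_score(expression, genes):
    """Average expression over the signature genes."""
    return sum(expression[g] for g in genes) / len(genes)

# Because the whole pipeline is deterministic code, anyone with the
# data and this script gets byte-identical results.
scores = {name: signature_score(expr, signature)
          for name, expr in samples.items()}

for name in sorted(scores):
    print(f"{name}\t{scores[name]:.2f}")
```

    Publishing the analysis script alongside the paper, as the comment notes Baggerly and Coombes did for their critique, lets anyone check the arithmetic rather than trust a prose summary of it.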

  11. I thought I had something useful to contribute, but Ceri nailed it better and more succinctly than I would have. Seconded.

    Your anti-science bent of late is a bit strange.

  12. Kevin,

    No anti-science; no, sir. Just documenting an instance of high-profile peer review gone wrong. One would think, listening to others, that peer review is a near perfect process.

    And I sincerely hope to God nobody is thinking I say cigarettes are conducive to good health.

  13. Mr. Briggs, but… the peer review process is never meant to stop once the paper is published. A researcher often wishes to replicate and build upon existing results. Wouldn’t you say that this is a great example of why the scientific method is self-correcting? ^_^

  14. Where’s the incentive for a thorough peer review? Why should you try to reproduce someone else’s analysis? If you conclude “Yep. Jones was right.” who would want to publish that? Worse, if you conclude Jones was *wrong*, you may not be able to publish that either. Journals are loath to publish articles saying they let a bad paper slip through.

    The only reason Baggerly and Coombes went to the effort that they did is that they work for a cancer center. Their work was done as part of their service to physicians who were interested in the results. A typical academic statistician would have no incentive to put in that kind of time reviewing a paper.

  15. John,

    Why should you try to reproduce someone else’s analysis? …A typical academic statistician would have no incentive to put in that kind of time reviewing a paper.

    Why? Let me give you a reason. When a new statistical methodology is proposed, one (probably an academic statistician) would need to compare its performance to that of existing methods, which can be done either by simulations or by applications to real data. That is, one would need to reproduce someone else’s analysis. A referee (who may not be a typical academic statistician) would ask why the comparison hasn’t been done, and the paper might end up being rejected or having to be resubmitted with major revisions. And I am not talking about application papers.
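    As a toy illustration of that kind of head-to-head comparison (not any specific published methodology), here is the sort of simulation a referee would expect: estimate the mean squared error of two competing location estimators, the sample mean and the sample median, under Gaussian data.

```python
import random

random.seed(42)  # fixed seed so the comparison is reproducible

def mse(estimator, n=50, reps=2000, true_value=0.0):
    """Monte Carlo mean squared error of an estimator of location,
    using Gaussian samples of size n centered at true_value."""
    errs = []
    for _ in range(reps):
        sample = [random.gauss(true_value, 1.0) for _ in range(n)]
        errs.append((estimator(sample) - true_value) ** 2)
    return sum(errs) / reps

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    ys = sorted(xs)
    m = len(ys) // 2
    return ys[m] if len(ys) % 2 else (ys[m - 1] + ys[m]) / 2

print(f"MSE(mean)   = {mse(mean):.4f}")
print(f"MSE(median) = {mse(median):.4f}")
```

    For Gaussian data the mean should come out with the lower MSE, matching the classical efficiency result; a real methods paper would repeat this across many scenarios (heavy tails, contamination, varying n) before claiming an advantage.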

  16. JH: I agree that if you’re publishing a methodology paper, you have to do some comparison. There are ways to game this comparison, but that’s another topic.

    It’s hardly ever possible to reproduce a microarray analysis without some trial-and-error guesswork. At best, the original analysis will be correct but incompletely documented. This best-case scenario happens about half the time. As Keith Baggerly has said, the most common errors are simple, and the most simple errors are common. (These statements here are based on my conversations with colleagues who work in this area, not my personal experience.)

  17. “Why are there so many of these fraud cases in the medical field?”

    From my understanding, a lot of these medical papers are long on clinical trials and statistics but short (or nonexistent) on the theory behind why the results end up the way they are.

    So if peer review is focused on the theory, there is much less for reviewers to examine prior to publication compared to, say, a physics paper on gravitational lensing or some such.
