
The Controversy Over Randomization And Balance In Clinical Trials

There was a paper a short while back, “Why all randomised controlled trials produce biased results”, by Alexander Krauss, in the Annals of Medicine. Raised some academic eyebrows.

Krauss says, “RCTs face a range of strong assumptions, biases and limitations that have not yet all been thoroughly discussed in the literature.”

His critics say, “Oh yes they have.”

Krauss says that his assessment of the “10 most cited RCTs worldwide” “shows that [RCT] trials inevitably produce bias.”

His critics say, “Oh no they don’t.”

Krauss says, “Trials involve complex processes — from randomising, blinding and controlling, to implementing treatments, monitoring participants etc. — that require many decisions and steps at different levels that bring their own assumptions and degree of bias to results.”

His critics say, “No kidding, genius.”

Those critics—Andrew Althouse, Kaleab Abebe, Gary Collins, and Frank E Harrell—were none too happy with Krauss, charging him with not doing his homework.

The critics have the upper hand here. But I disagree with them on a point or two, about which more below.

The piece states that the simple-treatment-at-the-individual-level limitation is a constraint of RCTs not yet thoroughly discussed and notes that randomization is infeasible for many scientific questions. This, however, is not relevant to the claim that all RCTs produce biased results; it merely suggests that we should not use randomized controlled trials for questions where they are not applicable. Furthermore, the piece states that randomized trials cannot generally be conducted in cases with multiple and complex treatments or outcomes simultaneously that often reflect the reality of medical situations. This statement ignores a great deal of innovation in trial designs, including some very agile and adaptable designs capable of evaluating multiple complex treatments and/or outcomes across variable populations.

They go on to note some of these wonders. Then they come to one of the two key points: “there is no requirement for baseline balance in all covariates to have a valid statistical inference from [a statistical] trial”, calling such a belief a “myth”, meaning (as moderns do) a falsity.

It is false, too. Balance is not necessary. Who cares if the patients in group A used to own just as many marbles as the patients in group B when they were all six? And, of course, you can go on and on like that practically ad infinitum, which brings the realization that “randomization” never brings balance.
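
To make the “never brings balance” point concrete, here is a minimal sketch of my own (not from Krauss or his critics; the patient count, covariate count, and 0.25-SD cutoff are arbitrary). Randomize patients into two arms, measure a large pile of baseline covariates, and count how many end up noticeably different between the arms by chance alone.

```python
# Sketch: "randomization" cannot balance every covariate. With many
# baseline measurements, some differ between groups by chance alone.
# All numbers here are arbitrary, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(42)

n_patients = 200          # hypothetical trial size
n_covariates = 1000       # childhood marble counts, shoe sizes, etc.

# Baseline covariates measured before treatment assignment.
covariates = rng.normal(size=(n_patients, n_covariates))

# "Randomize": assign half the patients to group A by lot.
assignment = rng.permutation(n_patients) < n_patients // 2

group_a = covariates[assignment]
group_b = covariates[~assignment]

# Count covariates whose group means differ by more than 0.25 SD.
mean_gap = np.abs(group_a.mean(axis=0) - group_b.mean(axis=0))
print(f"covariates with |mean difference| > 0.25 SD: {(mean_gap > 0.25).sum()}")
# Typically dozens of the 1000 covariates come out noticeably "imbalanced",
# and measuring more covariates only produces more such imbalances.
```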

Control is what counts. But control is, like probability itself, conditional on the premises we bring to the problem. The goal of all experimentation is to discover, to the closest possible extent, the cause of the thing studied. If we knew the cause, we would not need to do the study. If we do not know the cause, a study may enlighten us, as long as we are measuring things in what I call the “causal path” of the item of interest. Also see this, for the hippest, most modern analysis ever!

We don’t control for past marble ownership in most clinical trials, nor do we wish to, because we cannot bring ourselves to believe the premise that marble ownership is in the causal path of the thing under study. If we knew, really knew, the exact cause, we could run an experiment with perfect controls, since what should be controlled is a known part of the cause.

That we know so little, except in the grossest sense, about the right and proper controls, is why we have to do these trials, which are correlational. We extract, in our minds, the usable, and sometimes false, essences in these correlations and improve our understanding of cause.

Another reason balance isn’t needed: probability conditions on the totality of our beliefs about the proposition of interest (models, on the other hand, condition on a tiny formal fraction). Balance doesn’t provide any special insight, unless the proposition of interest itself involves balance.

Notice that medical trials are not run like physics experiments, even though the goals of both are the same, and the nature of evidence is identical in both setups, too. Both control, and physics controls better, because physical knowledge is of vastly simpler systems, so knowledge of cause is greater.

The differences are “randomization” and, sometimes, “blinding”.

Krauss’s critics say, “It is important to remember that the fundamental goal of randomization in clinical trials is preventing selection bias”.

Indeed, it is not just the fundamental, but the only goal. The reason “randomization” is used is the same reason it is the referee who flips the coin at the start of ballgames, and not a player or coach or fan from one of the sides. “Randomization” provides the exact same control—yes, the word is control—that blinding performs. Both make it harder to cheat.

There is nothing so dishonest as a human being. The simplest and most frequent victim of his mendacity is himself. Every scientist believes in confirmation bias, just as every scientist believes it happens to the other guy.

“Randomization” and “blinding” move the control from the interested scientist to a disinterested device. It is the disinterestedness that counts here, not the “randomness”. If we had a panel of angelic judges watching over our experiment and control assignments, angels (the good kind) finding it impossible to lie, well, we would not need “randomness” nor blinding.
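
A second sketch, again mine and not the critics’ (the sample size, the zero true treatment effect, and the “treat the healthiest-looking half” rule are all invented for illustration), shows why handing assignment to a disinterested device matters: an interested assigner can manufacture an apparent effect out of nothing, while a coin flip, on average, cannot.

```python
# Sketch: selection bias from an "interested" assigner versus a coin flip.
# The true treatment effect is zero by construction; any apparent effect
# comes from how patients were steered into the treatment arm.
import numpy as np

rng = np.random.default_rng(7)
n = 500

health = rng.normal(size=n)              # unmeasured prognosis
outcome = health + rng.normal(size=n)    # true treatment effect is zero

# Interested assignment: treat the patients who look healthiest.
treated_cheat = health > np.median(health)

# Disinterested assignment: flip a coin for each patient.
treated_coin = rng.random(n) < 0.5

for label, treated in [("interested assigner", treated_cheat),
                       ("coin flip", treated_coin)]:
    effect = outcome[treated].mean() - outcome[~treated].mean()
    print(f"{label:>20}: apparent effect = {effect:+.2f}")
# The interested assigner conjures a large "effect" out of nothing;
# the coin flip, on average, does not.
```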

The problem some (not all) have with “randomization” is that they believe it induces a kind of mystical condition where certain measurements “take on” or are imbued with probability, which things can do because (to them) things “have” probability. And that if it weren’t for “randomization”, the things wouldn’t have the proper probability. Randomization, then, is alchemy.

Probability doesn’t exist, and “random” only means unknown (or unknown cause), so adding “unknownness” to an experiment does nothing for you, epistemologically speaking.

There are some interesting technical details about complex experiments in the critics’ response that are also worth reading, incidentally.

Replies

  1. Many thanks for that, Dr. Briggs.

    I am reminded of a man who learned that 73% of fatal traffic accidents occur within 2 miles of home, so he moved.
    Ba-da-BOOM!

  2. This post, and today being election day, brings to mind a recent episode involving an independent candidate running for the office of governor of my state who trails badly in the published polls. He just released the results of his own internal polling showing him within striking distance of the other two candidates. His method? Sending his campaign workers out to shopping malls and other public places to ask people approached “randomly” who they would be voting for. The workers did not identify themselves as affiliated with the candidate. They may even have tried some protocol like asking every fifth person encountered; we don’t know. Critics of the candidate were not impressed and scoffed at the effort. He is supremely confident his maverick image will rule the day and that his poll is correct. We will see tonight how badly he has fooled himself.

  3. The candidate fooled himself quite badly. His poll said 31%; he got less than 5% of the vote. Besides the flawed method, the poll’s premises, that the respondents were telling the truth and would stick with their reported choice, are shadowed with doubt.
