Forget Randomization

I am only six months behind in answering thoughtful emails from readers. Here is one from Stephen Puryear:

Dr. Briggs, please go back and elaborate on your ideas about why randomizing is such a bad idea. So far I’m intrigued but I don’t really understand your position. The fact that it’s the “gold standard” that everyone bows down to with no further thought makes me even more curious to hear your complete thoughts and analysis.

It should be clear at this late date that that which everybody bows to is no longer a blanket recommendation in a thing’s favor. We have science pozzing, we have the replication crisis, and we have the slo-mo collision of physics with metaphysics. All old ideas are fair game.

Randomization was never a good idea for science, but it sometimes can be a good idea for scientists.

I give the example in Uncertainty of the Apostles casting lots to decide on a replacement. Sounds like randomization, or worse.

Were the apostles hoping God would use (cause) the most minimal physical force necessary to shift the lots in His preferred direction in order to guide the apostles to the best choice? Or were the apostles equally split among choices and, wanting to avoid strife and acrimony, removed the choice (cause) to something which they could not predict or influence (cause)?

Cause is emphasized because, of course, the event of choosing a new apostle had a cause, or causes. There will be all kinds of causes associated with the casting of the lots (as there are with dice etc.), all of which are there even when we don’t know or can’t control them. It’s possible God directly intervened in a miraculous way in those causes to affect the outcome. But it’s also possible the ordinary secondary causes operated in their usual way to cause the outcome.

The apostles could have voted, or had several rounds of voting, but with voting there are winners and losers, with both sides unhappy with the result. The losers want to win, and the winners realize they could have lost. Look what voting does to a democracy.

How much better to say “Let the cause be other than us!”

The key to this randomization was that the causes could not be manipulated, or could not be (largely) known. Whatever causes affected the outcomes, these would not be the apostles themselves, except for the trivial causes of collecting the lots and so forth. If there was instead an election, the causes would have been naked and obvious.

This kind of randomization can and should be used whenever there is suspicion that the causes of the effect can be manipulated in undesired and undesirable ways. The stock example is the referee doing the coin flip.

Referees doing coin flips would be an awful way to learn the causes of coin flips, though. Yes, we could set up a grand “randomized controlled experiment”, but why not instead very carefully control the known or suspected physical causes? Just as physicists actually do in their real experiments?

What are called “controlled” trials usually aren’t. They’re found in medicine and agriculture, and fields like that, for the simple and obvious reason that the causes of effects are largely unknown. The causes of an electron taking a certain path are far fewer and simpler than whatever stopped a man from developing liver cancer, but they are still somewhat unknown, or the experiments on the electron would never need to be run. Main causes might be known, but myriad small ones aren’t. The controls, when they exist, are for aspects that are probable or known partial causes, like sex in medicine (well, in the old days before transanity hit) and plot placement in agriculture.

The only reason randomization should be used in any trial is because we cannot trust the interested experimenter. The classic example is of a doctor interested in a new treatment. He may, scrupulously or not, send the sicker patients to the old treatment. Forcing him to randomize—i.e. removing control of patient placement to some unpredictable cause—removes the causes that arise from his interest. Of course, if he is the attending physician (or biologist etc.) he may still treat the patients differently, hence the need for blinding.
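
To make the allocation-bias point concrete, here is a minimal Python sketch (not from the post; every number in it is invented) comparing an interested doctor who steers sicker patients to the old treatment against a coin-flip allocator. Even when the two treatments are in truth identical, the interested allocator manufactures an apparent benefit for the new one; handing placement to an unpredictable cause does not.

```python
import random
import statistics

random.seed(1)

def trial(allocator, n=200):
    """Simulate one toy trial. Each patient has a baseline 'sickness' score;
    the outcome is sickness plus noise, and the two treatments are in truth
    identical (no real effect). All numbers are invented for illustration."""
    new, old = [], []
    for _ in range(n):
        sickness = random.gauss(0, 1)           # baseline severity
        arm = allocator(sickness)               # who decides placement?
        outcome = sickness + random.gauss(0, 0.5)
        (new if arm == "new" else old).append(outcome)
    # lower outcomes are better, so this is the apparent advantage of "new"
    return statistics.mean(old) - statistics.mean(new)

def interested_doctor(sickness):
    return "old" if sickness > 0 else "new"     # sicker patients sent to the old arm

def coin_flip(sickness):
    return random.choice(["new", "old"])        # placement removed from the doctor

print("apparent benefit, interested allocator:", round(trial(interested_doctor), 2))
print("apparent benefit, coin-flip allocator: ", round(trial(coin_flip), 2))
```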

Blinding is exactly the same as randomization: both remove the ability to manipulate known or suspected causes of an effect. Or try to. Men are natural con artists.

Now, as for whether randomization does anything mystical to the experiment, as frequentists attest: this we can dismiss as superstitious nonsense. There is no guarantee it helps; indeed, there is a positive chance that randomization will screw up control of a known or suspected cause. It could happen, for example, in small trials, that all the men end up in one group and all the women in the other. So we “control” for sex by forcing a split.

Well, since we don’t know all the causes of the effect, hence the experiment, randomization could just as easily stack the deck for any or all of these uncontrolled causes. And we’d never know it! If we knew it, we would have controlled for them, like we did with sex. Randomization, therefore, does nothing. It is control that is wanted.
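
As a quick arithmetic check on the small-trial worry, the following sketch (my own toy numbers, not the post's) enumerates every way simple randomization can split 4 men and 4 women into two equal arms and counts how often all the men land in one arm; forcing the split by sex, as described above, makes that imbalance impossible by construction.

```python
from itertools import combinations

# Invented toy trial: 4 men (M) and 4 women (W), two arms of 4 apiece.
subjects = ["M"] * 4 + ["W"] * 4

# Enumerate every equal split into arm A / arm B and count the ones
# where simple randomization happens to put all the men in one arm.
splits = list(combinations(range(len(subjects)), len(subjects) // 2))
bad = sum(
    1
    for arm_a in splits
    if {subjects[i] for i in arm_a} in ({"M"}, {"W"})  # arm A is all one sex
)
print(f"chance all men land together: {bad}/{len(splits)} = {bad/len(splits):.1%}")

# "Controlling" for sex forces 2 men and 2 women into each arm,
# so this particular imbalance cannot happen at all, by design.
```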

I stand by my recommendation that the best way to design experiments is this: blind when you can, and have a disinterested panel control the placement of each patient (or crop or whatever). Certainly do not be tied to the old, proven-wrong methods of statistics. Understanding cause is the reason for the experiment: acknowledge that.


12 Comments

  1. Well said, sir.

    Too many people do not think. They learn by rote, repeat upon command, and congratulate themselves for their cleverness.

    This principle, when applied to the sciences, results in scientists and researchers who do not understand. To use the old phrase, “Science is full of bottle washers and button counters.”

  2. Sheri

    Use my blood sugars to define randomization. I guarantee they are absolutely random!

  3. Sylvain Allard

    « « « « Boom » » » »

    Bolton’s grenade just exploded

  4. Theodoros

    The Coptic Orthodox Church to this day selects new Popes by lot.

  5. Mariner

    One of your best ever. Should go on the classics.

  6. Bill_R

    Randomization defines the reference set. See Kempthorne (1955) or Senn (2004). None of your imaginary parameters or distributions are then needed. No pretending about “normal” errors or sampling frames (if you know what that is). In practice, experimenters want to mess with the assignments to bias the results in their favor, just like their post-hoc identification of “outliers” is in their favor.

  7. Bill_R

    “blind when you can, and have a disinterested panel control the placement of each patient”

    That really should be block, blind, and allocate. It’s really hard to have a “disinterested panel” that’s more disinterested than an RNG. Your recommendation suggests you’ve never really dealt with an IRB. They love to stick their fat greasy thumbs into experimental designs, “in the interest of fairness, of course”.

  8. Kalif

    “…Yes, we could set up a grand “randomized controlled experiment”, but why not instead very carefully control the known or suspected physical causes? Just as physicists actually do in their real experiments?…”

    But that’s what IS done. Randomization in experiments is not the same thing as ‘random sampling’ as known to the average Joe. People are not ‘randomly’ pulled into one of the two or more groups. Various inclusion/exclusion criteria serve as careful controls, and all threats to validity are usually checked well before randomization into groups is done. Also, when you have double/triple blind and cross-over versions, the nuisance variables are usually kept isolated.

    Physicists don’t need to do much, except precise calibration and careful observation/counting. The stable natural laws of physics do the job for them, just as bees do the pollination but farmers take the credit for the result. Basic descriptive stats are all many physicists use. Take your average experimental physicist and give him a hard problem from epidemiology, sociology, cognitive psychology, or neuroscience and see how he fares. Let’s see how he controls for all the nuisance variables and such.

  9. DaveW

    Thank you WMBriggs. My first exposure to a randomised experimental design was in a Population Ecology practical. The class was divided into teams which randomly selected starting points for transects through mudflats to estimate shellfish population size. Overall, the various teams pretty thoroughly sampled the mudflat (so a reasonable idea of the number of clams there was available), but my team drew a cluster of transects in the highest population density area. That was annoying because it meant a lot more muddy digging and counting for our team than the others, but also because it tended to undercut the theory of the prac. The prof argued that a stratified random design would be less likely to result in a biased sample, but how does one stratify a mudflat in terms relevant to a clam? Anyway, nothing over the next 40 years of research has convinced me that randomisation is other than an, often vain, attempt to keep the scientist honest. Nice to know a statistician feels the same.

  10. I think the evidence is clear that randomness (whether true randomness, whatever that means, or mere pseudorandomness) helps. Specifically, it can:

    help infer cause and effect
    help infer from sample to population
    help eliminate bias
    help ensure privacy
    create simulations

    On the privacy point, from “The Algorithmic Foundations of Differential Privacy” (https://www.cis.upenn.edu/~aaroth/Papers/privacybook.pdf), there is this strong, and mathematically provable, statement:

    “Randomization is essential; more precisely, any non-trivial privacy guarantee that holds regardless of all present or even future sources of auxiliary information, including other databases, studies, Web sites, on-line communities, gossip, newspapers, government statistics, and so on, requires randomization.”

    Three additional points:
    -it has been estimated that medical trials with inadequate or unclear randomization tended to overestimate treatment effects by up to 40% compared with those that used adequate randomization
    -in experimental design, if you happen to get a pattern from randomization that is not desirable, this can be taken care of by ‘restricted randomization’, which was developed in the 1950s (a small sketch of one such scheme follows this list)
    -it has been shown over and over that overall error from random sample surveys is smaller than overall error from censuses
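
    For that restricted-randomization point, here is a rough sketch of one common form, permuted-block allocation; the block size and arm labels are just placeholders chosen for illustration.

```python
import random

random.seed(2)

def permuted_blocks(n_patients, block_size=4, arms=("A", "B")):
    """Restricted randomization via permuted blocks: within each block,
    every arm appears equally often, so the groups never drift far out
    of balance even though each block's order is still unpredictable."""
    assignments = []
    while len(assignments) < n_patients:
        block = list(arms) * (block_size // len(arms))
        random.shuffle(block)
        assignments.extend(block)
    return assignments[:n_patients]

print(permuted_blocks(10))  # e.g. a balanced-but-shuffled sequence of A's and B's
```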

    And that’s just for science. Not to mention numerous other non-science applications of randomness or pseudorandomness, such as:

    spicing up exercise routines
    overcoming boredom
    choosing a restaurant to eat out at
    making flash cards for studying any topic
    revitalizing chess with randomized starting positions
    casinos
    lotteries
    making fair decisions
    making scatterplots more readable by jittering
    making video game experiences different with each play
    shuffling the music you listen to
    generating poetry and art

    Cheers,
    Justin

  11. BillR wrote

    “No pretending about “normal” errors or sampling frames (if you know what that is).”

    The normal distribution is used because:
    -a large number of things are in fact observed to have an approximately normal distribution
    -the normal distribution approximates the more difficult randomization distribution

    Justin

  12. Bill_R

    Hi @Justin,

    Point 1 is only approximately true for simple physical measurements, when you’re measuring aggregates of things that can be aggregated/concatenated (e.g. weights, lengths, bushels of wheat from a largish plot). The aggregation is what pushes it towards normality. It totally falls apart in much of biology, psychology, etc., where measurement is difficult and the units can’t be meaningfully aggregated.

    Point 2 is usually correct. That’s what “Randomization defines the reference set” does (frequently). For a contrary case, consider real-life survival analysis with heterogeneous strata. Context, design, and randomization determine the appropriate reference set. Julian Simon (1968) comments on normality and its origin.

    Are you familiar with Patrick Laurie-Davies? His “Data Analysis…” book is an interesting read on the relation between data and abstract distributions. See also his arxiv.org papers.
