November 9, 2018 | 16 Comments

Swear Fealty To Diversity, Or You’re Out

I’ve told the story a dozen times, but allow me to bore you with it once more for the sake of new readers, and to set up the story below.

The last—and final—time I interviewed for a university professorship was in a math department. I flew through the interview fine. Until the last question, asked of me by two mathematicians: “Were you ever involved in any diversity initiatives?”

This was like asking a steakhouse owner if he had ever been involved in any vegan initiatives.

Any employment in any western university is, for me, thus impossible. To be fair, I not only loathe the god Diversity, but just one week’s Insanity & Doom installment is enough to torpedo my chances. Not that I’d want to voluntarily place myself among the heathen, thank you very much.

Mathematicians can no longer hide behind their theorems. The Diversity Thought Police are after them. The story is mathematicians at UCLA “have to pledge in writing a commitment to diversity, equity, and inclusivity.”

In fact, all professors applying for a tenure-track position at UCLA must write a statement on their commitment to diversity, showing, for example, their “record of success advising women and minority graduate students,” according to UCLA’s Office of Equity, Diversity and Inclusion.

Such mandated statements reflect a push by college bureaucrats “to ratchet up the requirements” to achieve more diverse campuses, said Peter Wood, president of the National Association of Scholars. Until recently, diversity programs tended to focus on mandatory training and sanctions for policy violators.

Now “you have to make a public confession of faith,” said Wood. “You’re essentially citing a creed,” and “all the more effectively, they force you to put that creed into your own words.”

I’m not against statements of faith and oaths of fealty, mind you. I am against putting incense into the fire to worship false gods. And there are no more false gods than Equity, Diversity and Inclusion. The Triumvirate of Evil. The three-headed gatekeeper of Hell.

It’s not only UCLA: “UC Riverside, UC San Diego, and UC Berkeley all require such statements. UC Santa Cruz requires them for candidates for faculty Senate positions.”

The written pledges are used to “identify candidates who have the professional skills, experience, and/or willingness to engage activities that will advance our campus diversity and equity goals,” said Judy Piercey, senior director of strategic communications at UC San Diego.

There are big incentives to achieving tenure. Not only do tenured professors typically make tens of thousands of dollars more annually than their non-tenured colleagues (full professors can make more than twice as much as instructors and lecturers) but they cannot be fired except in the most extraordinary cases.

Judy girl there nails it: willingness to engage activities that will advance our campus diversity and equity goals. This isn’t so much a pinch of incense, but the requirement of routine shovelfuls.

Now it is obvious to any non-university employee that goals of Equity, Diversity and Inclusion are the opposite of the goals of mathematics. Mathematics is unequal, non-diverse, and exclusive to the nth degree. It’s hard.

Well, we all know that.

Wood says “Most will probably regard the requirement as no big deal and write a statement”. They’ll think their complicity is nothing; besides, what’s a little Equality, Diversity & Inclusion? They’ll still be able to work, right?

Do you really think it will stop with some dumb statement?

UC Merced sociologist Tanya Golash-Boza advises professors, “Do not write a throwaway diversity statement.” In her experience, job candidates’ diversity statements are “scrutinized.” Strong statements reflected candidates’ “experiences teaching first-generation college students, their involvement with LGBTQ student groups, their experiences teaching in inner-city high schools and their awareness of how systematic inequalities affect students’ ability to excel.”

Scrutinized, as in the less able but more ideological mathematicians are preferred over the abler ones. And just what does LGBTQWERTY have to do with proving theorems? Nothing is no longer an acceptable answer. The identity of the person making a claim carries more weight than truth.

Mathematicians, like those in every other field, had better start handing out awards, editorships, positions, perks, and whatnot to official victims, and they had better do it fast. Of course, doing so is slitting their own throats, but only slowly. Because the more Equal, Diverse, and Inclusive a department becomes, for the sake of Equality, Diversity, and Inclusivity, the worse it becomes.

So hurry and create new categories of awards, and even redefine what “good” mathematics is. Or you can find yourself out beyond the gate. Bit-by-bit compromise is better than speaking out and losing your job all at once.

November 8, 2018 | 22 Comments

Science Is Magic & Miracles Aren’t

What is it a witch is doing when she mixes up some foul concoction, or lights a black candle, or casts a spell? I am not asking what her intent is, but by what mechanism does she hope to bring about the intended effect?

Well, by magic. So what is magic?

Magic is an attempt to harness a natural, but occult, mechanism, to bring about an effect. Occult means hidden, or rather (in this context) known only by adepts. So magic is science, or a kind of technology.

This also follows if the witch is calling on a “spirit” or “entity” to do her bidding. She expects that this spirit will use the means at its disposal, its natural means, to bring about the effect.

It is not that this natural mechanism is easy to implement or approachable by every person. It does not even have to be a known mechanism. Most people have no idea how cars work. They know that if they (these days) press the ON switch, the motor starts and the car goes. In the same way, the witch can, in the absence of any theory how her magic works, press a “button” and hope the spell goes.

Of course, witches are wrong about how effects come about. Their magic doesn’t work (I do not dismiss that people can contact spirits or entities, i.e. demons, which can bring about effects by natural means). But that doesn’t matter, because they think they are right. We’re only interested in what they believe they are doing. And what they believe they are doing is obscure or arcane science.

Arthur C Clarke, as every literate person knows, said, “Any sufficiently advanced technology is indistinguishable from magic.” This is almost right. He could have said science is magic, or magic is science, and have been done with it.

By natural means I have in mind a process that exists, that can be “tapped”, like starting a car is a process that can be tapped if one has the proper fob. Magic does not create the process; it uses processes that are thought (incorrectly, as all evidence attests) to exist.

Contrast magic with miracles. When Jesus turned the water into wine, he did not use magic. It is not that there is not some obscure, hugely energy-expensive mechanism to transform the mass of water molecules (and trace chemicals) into ethanol and other molecules. This might exist. But Jesus certainly did not use it, not having the means to employ such a thing.

Instead, Jesus changed the essence of the material, the form of it, into something new. Changing the essence of a thing requires unnatural, supernatural powers; indeed, abilities no science can ever reach. Science (or technology) can only twist the pre-existent dials of nature. It can’t create those dials. Miracles aren’t interferences in the “laws of physics”, they are changing of the very nature of nature.

This is why you have to pray for a miracle, because you can never do it yourself. Miracles by definition require the cooperation of God.

Superstition is thus obviously a form of magic, of science. It (and even magic) works variously well, depending on how closely the superstitious act accords with nature. It fails when there is no accord, where the user has mistaken correlation for causation.

Is a Christian lighting a candle attempting superstition? Certainly this is not an attempt at magic. But perhaps superstition is a good charge.

In some cases the charges of superstition are probably true. None of us are perfect. But most of the time the Christian uses the candle as a means of prayer, a devotional object, therefore there is no sin; that is, no attempt at magic.

November 7, 2018 | 3 Comments

Making P-values Weer To Achieve Significance Won’t Help

Mini-paper out in JAMA by Matt Vassar and pals: “Evaluation of Lowering the P Value Threshold for Statistical Significance From .05 to .005 in Previously Published Randomized Clinical Trials in Major Medical Journals”. Thanks to Steve Milloy for the tip.

Authors scanned JAMA, Lancet, and NEJM for wee ps, and then asked how many studies’ p’s survived being wee after dividing the magic number by 10. Seventy percent was their answer. Meaning 30% of official findings would have to be tossed for not achieving super significance.
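The paper’s tallying exercise amounts to a simple filter. A minimal sketch, using invented p-values for illustration (not the actual JAMA/Lancet/NEJM data):

```python
# Of results "significant" at the old 0.05 cutoff, what fraction
# survives the proposed 0.005 cutoff? P-values below are made up.
reported_p = [0.048, 0.03, 0.004, 0.0001, 0.02,
              0.001, 0.045, 0.0004, 0.01, 0.003]

significant_old = [p for p in reported_p if p < 0.05]
survive_new = [p for p in significant_old if p < 0.005]

fraction = len(survive_new) / len(significant_old)
print(f"{fraction:.0%} of 'significant' results survive the stricter threshold")
```

In the real data, of course, the survival rate was the reported 70%; the exercise changes nothing about what any of the p-values mean.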

Somewhat amusingly, and unnecessarily, they computed regression models on the results and reported 95%—and not 99.5%—confidence intervals.

Never mind. Making ps weer does not solve any of the logical and philosophical difficulties of p-values, as is partly explained in this peer-reviewed (and therefore perfectly true and indisputable) paper: Manipulating the Alpha Level Cannot Cure Significance Testing.

As a bonus, here is just one of a dozen or two criticisms of p-values that will appear in a new peer-reviewed (and therefore true and indisputable) paper in January. This is not the strongest criticism, nor even in the top five. But it alone is enough to quash their use.

(I’m leaving it in LaTeX format so you can get a hint about the citations.)

Excerpt

P-values are Not Decisions

If the p-value is wee, a decision is made to reject the null hypothesis, and vice versa (ignoring the verbiage “fail to reject”). Yet the consequences of this decision are not quantified using the p-value. The decision to reject is just the same, and therefore just as consequential, for a p-value of 0.05 as one of 0.0005. Some have the habit of calling especially wee p-values “highly significant”, and so forth, but this does not accord with frequentist theory, and is in fact forbidden by that theory because it seeks a way around the proscription of applying probability to hypotheses. The p-value, as frequentist theory admits, is not related in any way to the probability the null is true or false. Therefore the size of the p-value does not matter. Any level chosen as “significant” is, as proved above, an act of will.

A consequence of the frequentist idea that probability is ontic and that true models exist (at the limit) is the idea that the decision to reject or accept some hypothesis should be the same for all. Steve Goodman calls this idea “naive inductivism”, which is “a belief that all scientists seeing the same data should come to the same conclusions,” \cite{Goo2001}. That this is false should be obvious enough. Two men do not always make the same bets even when the probabilities are deduced from first principles, and are therefore true. We should not expect all to come to agreement on believing a hypothesis based on tests concocted from {\it ad hoc} models. This is true, and even stronger, in a predictive sense, where conditionality is insisted upon.

Two (or more) people can come to completely different predictions, and therefore different decisions, even when using the same data. Incorporating decision in the face of uncertainty implied by models is only partly understood. New efforts along these lines using quantum probability calculus, especially in economic decisions, are bound to pay off, see e.g. \cite{NguSri2019}.

A striking and in-depth example of how using the same model and same data can lead people to {\it opposite} beliefs and decisions is given by Jaynes in his chapter “Queer uses for probability theory”, \cite{Jay2003}.

November 6, 2018 | 5 Comments

The Controversy Over Randomization And Balance In Clinical Trials

There was a paper a short while back, “Why all randomised controlled trials produce biased results”, by Alexander Krauss, in the Annals of Medicine. Raised some academic eyebrows.

Krauss says, “RCTs face a range of strong assumptions, biases and limitations that have not yet all been thoroughly discussed in the literature.”

His critics say, “Oh yes they have.”

Krauss says that his examination of the “10 most cited RCTs worldwide” “shows that [RCT] trials inevitably produce bias.”

His critics say, “Oh no they don’t.”

Krauss says, “Trials involve complex processes — from randomising, blinding and controlling, to implementing treatments, monitoring participants etc. — that require many decisions and steps at different levels that bring their own assumptions and degree of bias to results.”

His critics say, “No kidding, genius.”

Those critics—Andrew Althouse, Kaleab Abebe, Gary Collins, and Frank E Harrell—were none too happy with Krauss, charging him with not doing his homework.

The critics have the upper hand here. But I disagree with them on a point or two, about which more below.

The piece states that the simple-treatment-at-the-individual-level limitation is a constraint of RCTs not yet thoroughly discussed and notes that randomization is infeasible for many scientific questions. This, however, is not relevant to the claim that all RCTs produce biased results; it merely suggests that we should not use randomized controlled trials for questions where they are not applicable. Furthermore, the piece states that randomized trials cannot generally be conducted in cases with multiple and complex treatments or outcomes simultaneously that often reflect the reality of medical situations. This statement ignores a great deal of innovation in trial designs, including some very agile and adaptable designs capable of evaluating multiple complex treatments and/or outcomes across variable populations.

They go on to note some of these wonders. Then they come to one of the two key points: “there is no requirement for baseline balance in all covariates to have a valid statistical inference from [a statistical] trial”, calling such a belief a “myth”, meaning (as moderns do) a falsity.

It is false, too. Balance is not necessary. Who cares if the patients in group A used to own just as many marbles as the patients in group B when they were all six? And, of course, you can go on and on like that practically ad infinitum, which brings the realization that “randomization” never brings balance.
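That randomization cannot balance everything can be seen in a toy simulation (an assumed setup, not data from any trial discussed here): randomize patients to two arms, measure enough irrelevant baseline covariates, and some will come out “imbalanced” by chance alone.

```python
# Toy simulation: 100 patients, 50 irrelevant binary covariates
# (e.g. "owned marbles at age six"). Randomize to two arms of 50,
# then count covariates whose group means differ noticeably.
import random

random.seed(1)
n_patients, n_covariates = 100, 50
covariates = [[random.randint(0, 1) for _ in range(n_covariates)]
              for _ in range(n_patients)]

order = list(range(n_patients))
random.shuffle(order)                      # the "randomization"
group_a, group_b = order[:50], order[50:]

imbalanced = 0
for j in range(n_covariates):
    mean_a = sum(covariates[i][j] for i in group_a) / len(group_a)
    mean_b = sum(covariates[i][j] for i in group_b) / len(group_b)
    if abs(mean_a - mean_b) > 0.15:        # arbitrary "imbalance" cutoff
        imbalanced += 1

print(f"{imbalanced} of {n_covariates} covariates imbalanced by chance")
```

The more covariates you dream up, the more “imbalance” you will find, which is the point: balance on everything is unattainable, and not the thing that matters.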

Control is what counts. But control is, like probability itself, conditional on the premises we bring to the problem. The goal of all experimentation is to discover, to the closest possible extent, the cause of the thing studied. If we knew the cause, we would not need to do the study. If we do not know the cause, a study may enlighten us, as long as we are measuring things in what I call the “causal path” of the item of interest. Also see this, for the hippest most modern-year analysis ever!

We don’t control for past marble ownership in most clinical trials, nor do we wish to, because we cannot bring ourselves to believe the premise that marble ownership is in the causal path of the thing under study. If we knew, really knew, the exact cause, we could run an experiment with perfect controls, since what should be controlled is a known part of the cause.

That we know so little, except in the grossest sense, about the right and proper controls, is why we have to do these trials, which are correlational. We extract, in our minds, the usable, and sometimes false, essences in these correlations and improve our understanding of cause.

Another reason balance isn’t needed: probability conditions on the totality of our beliefs about the proposition of interest (models, on the other hand, condition on a tiny formal fraction). Balance doesn’t provide any special insight, unless the proposition of interest itself involves balance.

Notice that medical trials are not run like physics experiments, even though the goals of both are the same, and the nature of evidence is identical in both setups, too. Both control, and physics controls better, because physical knowledge is of vastly simpler systems, so knowledge of cause is greater.

The differences are “randomization” and, sometimes, “blinding”.

Krauss’s critics say, “It is important to remember that the fundamental goal of randomization in clinical trials is preventing selection bias”.

Indeed, it is not just the fundamental, but the only goal. The reason “randomization” is used is the same reason referees flip the coin at the start of ballgames and not a player or coach or fan from one of the sides. “Randomization” provides the exact same control—yes, the word is control—that blinding performs. Both make it harder to cheat.

There is nothing so dishonest as a human being. The simplest and most frequent victim of his mendacity is himself. Every scientist believes in confirmation bias, just as every scientist believes it happens to the other guy.

“Randomization” and “blinding” move the control from the interested scientist to a disinterested device. It is the disinterestedness that counts here, not the “randomness”. If we had a panel of angelic judges watching over our experiment and control assignments, angels (the good kind) finding it impossible to lie, well, we would not need “randomness” nor blinding.

The problem some (not all) have with “randomization” is that they believe it induces a kind of mystical condition where certain measurements “take on” or are imbued with probability, which things can do because (to them) things “have” probability. And that if it weren’t for “randomization”, the things wouldn’t have the proper probability. Randomization, then, is alchemy.

Probability doesn’t exist, and “random” only means unknown (or unknown cause), so adding “unknownness” to an experiment does nothing for you, epistemologically speaking.

There are some interesting technical details about complex experiments in the critics’ response that are also worth reading, incidentally.