
Author: Briggs

October 11, 2018

On Witch Hunts

Help me, here. We need a new term. Witch hunt has an entirely negative connotation, and this should not be. Genuine witches should be hunted, as doubtless all agree. Witches, by definition—real ones, I mean—are evil. And evil should be hunted. But those that are not witches shouldn’t.

Rather, those who are accused of having non-existent occult powers should not be so accused. What somebody means by a witch hunt is a ferreting out and persecuting of illogically or wrongly charged people.

Example headline: Google conducts a witch hunt for non-progressive employees.

Google is anti-reality and calls those who espouse reality witches, and it hunts them. Think James Damore. We don’t want to say what Google is doing is witch hunting. Realists are the only heroes we have left.

How about snark hunting? Snarks are imaginary beasts which are hunted by those to be initiated. Unfortunately, snark also means caustic sarcasm, the continuous practitioners of which are not far removed from witches (yes, I know). Snark hunts, then, are not always bad.

Chimera hunt has no ring to it, not the least because chimera has more than one syllable and few will know what it means. Antirevolutionary hunt? Innocent chase? Take-down?

Witch hunt is not always used incorrectly, even for non-witches. Infamous homosexualist Fr James Martin says “The witch hunt for gay priests must end. Now.” Since Martin is an inverse barometer, we know the opposite is true. Those men who enjoy sodomy and seek it out, especially with teenage parishioners, should be hunted and chased from the priesthood.

Martin uses the term knowing his sympathetic listeners will understand that there are no such things as witches, and that the accused should therefore not be hunted.

Again, this shows the need to restore witch hunt to its proper sense and former glory. Thanks to KA Rogers for the tip on the story So Just What Was It That Caused The Witch Hunts? They mean the genuine ones, the late Sixteenth and early Seventeenth Century witch hunts, conducted during a period in which people still retained their belief in witches.

Popular opinion has long held that Europe’s ‘witch craze’, which between 1520 and 1700 claimed the lives of at least 40,000 people and prosecuted twice as many, resulted from bad weather. Not without reason: European witch hunting overlapped with the ‘Little Ice Age’. During this period, dropping temperatures damaged crops and thus citizens economically, and disgruntled citizens often search for scapegoats – in the 16th and 17th centuries, literal witches…

Crop failures, droughts, and disease were hardly unknown in Europe before the witch craze. In the early 14th century, for instance, the Great Famine decimated populations in Germany, France, the British Isles, and Scandinavia; yet there were no witch hunts. Further, while weather could not have varied dramatically between neighboring locales in 16th- and 17th-century Europe, the number of people prosecuted for witchcraft often did…

In a recent paper, Jacob Russ and I hypothesise a different source of historical Europe’s witch hunts: competition between Catholicism and Protestantism in post-Reformation Christendom (Leeson and Russ 2018). For the first time in history, the Reformation presented large numbers of Christians with a religious choice: stick with the old Church or switch to the new one. And when churchgoers have religious choice, churches must compete.

One way to deal with competitors is to ban them legally; another is to annihilate them violently. The Catholic Church tried both approaches with its Protestant competitors but had little success…. In an effort to woo the faithful, competing confessions advertised their superior ability to protect citizens against worldly manifestations of Satan’s evil by prosecuting suspected witches.

Leeson and Russ have a sharp infographic of the numbers accused and executed as witches over time. A brief version heads this post, but click over to the main page for a better view.

I don’t buy their theory in whole. The “wars of religion” were not wars over religion, but wars over territory and power, coming at the time of the Great Protest (a.k.a. Protestant Revolution) when a weakness was sensed and exploited. An analogy is the pegging of Archduke Ferdinand, which unleashed the war everybody wanted (before it happened).

The witch hunts began shortly after the Great Protest, but then they dipped. Notice when? Right: at the beginning of the Thirty Years War, when the slaughter became especially nasty and vindictive. People were too busy killing each other in earnest, and in ways most creative, to chase witches.

When that was over, the hunting of witches peaked and subsided as Catholic power reasserted itself. And, since many were undoubtedly falsely accused, witch hunting got a bad name.

October 10, 2018

Road Rage: Paper Says Living Near Road Causes Dementia

Dust is thrilling researchers these days. Dust, or the more scientific sounding PM2.5, is the next greatest struggle after global warming. We’ve already seen attempts to prove dust, as measured by the proxy of living next to roads, causes heart disease. The attempt failed, for all the usual statistical reasons: wee p-values, no causal link, the epidemiologist fallacy, and so forth.

Same thing with our latest paper, “Living near major roads and the incidence of dementia, Parkinson’s disease, and multiple sclerosis: a population-based cohort study” by Hong Chen and a slew of others in the Lancet.

Because I don’t want the point to get lost, I want to emphasize that PM2.5 papers—or actually papers about its proxies; it’s almost always proxies—assume the dust does nothing except wreak havoc. In this they are similar to global warming papers. Researchers cannot fathom that living near a “major” roadway can do anything except cause harm, that it can do no good. So they never check for the good, just like with global warming, which is everywhere malevolent.

Skip that. It’s on to the paper!

Emerging evidence suggests that living near major roads might adversely affect cognition. However, little is known about its relationship with the incidence of dementia, Parkinson’s disease, and multiple sclerosis. We aimed to investigate the association between residential proximity to major roadways and the incidence of these three neurological diseases in Ontario, Canada.

So what about the data for this statistical scavenger hunt? People who had medical records and who were diagnosed with certain comorbidities are in the data; people without records, or with undiagnosed illnesses, are not. Obvious points, but they undercut the idea that disease states were “controlled” in the statistical models.

Income was not measured; they used an error-prone proxy instead, assigning people the income buckets of their neighborhoods.

Then came the weird measures. “To control for regional differences in the incidence of dementia, Parkinson’s disease, and multiple sclerosis, we created a variable for urban residence (yes/no), density of neurologists using the ICES Physician Database to represent accessibility to neurological care, and the latitude of residence given the reported latitude gradient with multiple sclerosis.”

Density of neurologists? Yes, sir. Density of neurologists.

We now enter the realm of the epidemiologist fallacy. “Briefly,” our authors say, “estimates of ground-level concentrations of PM2.5 were derived from satellite observations of aerosol optical depth in combination with outputs from a global atmospheric chemistry transport model (GEOS-Chem CTM). The PM2.5 estimates were further adjusted using information on urban land cover, elevation, and aerosol composition using a geographically weighted regression….Similarly, we derived long-term exposure to NO2 from a national land-use regression (LUR) model…”

Actual PM2.5 and NO2 exposure was not measured. The proxies were assumed to be the real exposures. This is the epidemiologist fallacy. But wait! That isn’t always bad. There’s a possibility of saving the day: if (a) the uncertainty in the proxies as measures of exposure was used in the statistical models at all stages, and (b) the uncertainty in the chemistry-transport and other models was used in the statistical models at all stages.

And were these crucial uncertainties—large ones, too: “The final LUR model explained 73% of the variation in annual 2006 measurements of NO2”—used in the statistical models at all stages?

Alas, dear reader, they were not (at least, I did not find any evidence they were).
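What would using them look like? Below is a minimal sketch, assuming (hypothetically) a simple linear stand-in for their hazard models and a known error spread for the LUR-style proxy. It is a cartoon of carrying proxy uncertainty through a model, not the full measurement-error treatment, and every number in it is invented.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Invented toy data: the true exposure is never observed; only a proxy is.
n = 5000
true_exposure = rng.gamma(2.0, 5.0, size=n)                    # NO2-ish levels
proxy = true_exposure + rng.normal(0.0, 3.0, size=n)           # LUR-style proxy
outcome = 0.02 * true_exposure + rng.normal(0.0, 1.0, size=n)  # tiny real effect

# Naive fit: treat the proxy as if it were the real exposure.
naive = sm.OLS(outcome, sm.add_constant(proxy)).fit()

# Error-aware fit: jitter the proxy by its assumed error, refit many times,
# and let the spread across refits widen the reported uncertainty.
proxy_sd = 3.0  # assumed known from the proxy model's validation
slopes = []
for _ in range(200):
    plausible = proxy + rng.normal(0.0, proxy_sd, size=n)
    slopes.append(sm.OLS(outcome, sm.add_constant(plausible)).fit().params[1])

print(f"naive slope: {naive.params[1]:.4f} (se {naive.bse[1]:.4f})")
print(f"with proxy error: {np.mean(slopes):.4f} (extra spread {np.std(slopes):.4f})")
```

Do that at every stage (the satellite model, the land-use regression, the exposure assignment) and watch how much wider the final uncertainty becomes.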

There were more uncertainties, mostly involving complex measures of distances of this and that, which are important but would distract us (you can read the paper). I’m more interested in the uncertainties in the outcomes themselves, and the comorbidities. I’ve already mentioned the difficulty that only diagnosed maladies were in the databases, and that undiagnosed ones weren’t.

But that doesn’t mean that what appeared in the databases was error free. As the authors say: “These databases have been validated previously using chart review, with sensitivity of 78–84% and specificity of 99–100%.”

Dude. That doesn’t translate into an unimpeachable accuracy rate. Meaning, as should be obvious, the outcomes and some measures had uncertainty, too. Which is not damning. Measurement-error models exist to handle these kinds of things.
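For a flavor of what such a model does, take the classic Rogan-Gladen correction, which backs out a true rate from an apparent rate given a test’s sensitivity and specificity. The apparent rate below is invented for illustration; the sensitivities and specificities are the ends of the quoted validation ranges.

```python
def rogan_gladen(apparent: float, sensitivity: float, specificity: float) -> float:
    """True rate = (apparent rate + specificity - 1) / (sensitivity + specificity - 1)."""
    return (apparent + specificity - 1.0) / (sensitivity + specificity - 1.0)

# Suppose (invented number) the database shows a 2% apparent dementia rate.
apparent = 0.02
print(rogan_gladen(apparent, sensitivity=0.78, specificity=0.99))  # ~0.013
print(rogan_gladen(apparent, sensitivity=0.84, specificity=1.00))  # ~0.024
```

Depending on where in the validated ranges the truth sits, the same database rate is consistent with true rates from about 1.3% to 2.4%, nearly a factor of two. That uncertainty belongs in the model.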

So. Was the uncertainty in the outcomes and measures incorporated into the exquisitely complex statistical models?

Alas again, my dear ones, it does not appear to be so.

It was worse than that, because after all this came the wee p-values (or confidence intervals, which amount to the same thing).

They found that the hazard rate of dementia, but of none of the other maladies, was highest for those with addresses nearest “major” roadways, after “controlling” for that other stuff (in an incomplete way).

Curiously, for multiple sclerosis, having a database address nearest “major” roads was as dangerous as living farthest away, with addresses in the middle range faring best. That is a signal something screwy is going on. But since none of the p-values for MS were wee, this oddity was dismissed.

Why the wee ps? Well, the datasets were huge. A major (huge!) failing of p-values is that ps are always wee for large enough sample sizes, even in the complete absence of cause. Here, the effects weren’t so big, either.
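Don’t take my word for it; simulate. A toy demonstration (all numbers invented): bake in a between-group difference of two hundredths of a standard deviation, which is clinically nothing, and watch the p-value collapse as the sample grows.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

tiny_effect = 0.02  # a 0.02-standard-deviation difference: clinically nothing
for n in (1_000, 100_000, 1_000_000):
    a = rng.normal(0.0, 1.0, size=n)
    b = rng.normal(tiny_effect, 1.0, size=n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9,}   p = {p:.2g}")
# The p-value marches toward zero as n grows, though the effect never mattered.
```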

The confidence intervals were parametric, not predictive. Huge sample sizes make for short CIs, just as they make for small ps. What’s wanted are actual predictive intervals, but we don’t see them.
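The difference is easy to see in a sketch (toy numbers again, normal approximations, nothing from the paper): the parametric confidence interval for a mean shrinks like one over the square root of n, but the predictive interval, which says where a new observation will actually fall, barely shrinks at all.

```python
import numpy as np

rng = np.random.default_rng(7)

for n in (100, 10_000, 1_000_000):
    x = rng.normal(50.0, 10.0, size=n)
    m, s = x.mean(), x.std(ddof=1)
    ci_width = 2 * 1.96 * s / np.sqrt(n)              # parametric 95% CI for the mean
    pi_width = 2 * 1.96 * s * np.sqrt(1.0 + 1.0 / n)  # approx. 95% predictive interval
    print(f"n = {n:>9,}   CI width = {ci_width:6.3f}   PI width = {pi_width:5.2f}")
```

Huge samples buy certainty about a model parameter, not about the next patient.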

Their conclusion: “In this large population-based cohort, living near major roadways was associated with increased dementia incidence. The associations seemed stronger among urban residents, especially those living in major urban centres and those who never moved.” Who lives near “major” roadways in major urban centers and doesn’t move? People who usually can’t afford to, and who might not be in as good health as their more mobile, not-so-near neighbors.

“We observed that exposures to NO2 and PM2.5 were related to dementia and that adjusting for these two pollutants attenuated its association with roadway proximity, suggesting that the effect of traffic exposure might, at least in part, operate through this mechanism.”

At best this translates to “Breathing pollution isn’t good for you,” which isn’t a major breakthrough. On the other hand, exposure to NO2 and PM2.5 was not measured. Proxies were.

My bet, or really my challenge to the authors, is to redo the whole shebang, only this time incorporate all the uncertainties I mentioned (and some which were too boring to bring up) and recast the results in predictive, not parametric, terms. I’ll lay fifty bucks that all the associations disappear or become trivial.

October 9, 2018

“Doctors” Suggest Hacking Up Live Patients For Their Organs, Then Killing Them

The paper in the oh-so-prestigious New England Journal of Medicine is “Voluntary Euthanasia — Implications for Organ Donation” by Ian M. Ball, Robert Sibbald, and Robert D. Truog, a couple of docs and somebody else. We shall see that this paper at least proves that knowing the knee bone is connected to the thalamus, or whatever, does not train one especially well to make ethical decisions.

Now doctors don’t kill patients, except by accident or neglect. Executioners kill people by design, on purpose, and with no legal culpability. When a person who was formerly a doctor kills somebody on purpose or by design (and is not in the military engaged in war and the like), he is no longer a doctor but an executioner. You can never again trust that this individual has your best interests in mind when you suspect he might be smiling at you because he likes the shape of your liver.

Doctors, as I know by many, many years’ association with them, really do think well of themselves. Because they are, mostly, engaged in enhancing and saving lives, this excess ego can be forgiven them. Unless it causes them to start believing their own press.

We can look to doctors regarding the ethics and morality of organ donation in the same way we look to physicists about the capabilities of nuclear weapons. The physicist can tell us what will happen, and of the nature of the effects, but the physicist is in no way especially competent to say when and under what circumstances such weapons should be used. Physicists are not moralists. Neither are physicians, though they do gain some practical experience in the area.

This means we cannot leave physicians to themselves to decide what is best and what worst and what is anathema about killing somebody to take their organs. For the least proof of this we see that few or no doctors yet (to my knowledge) have embraced the term executioner, even as they advocate the active killing of patients.

Secondly, they never say killing and always employ a euphemism. Euphemisms, except those used for comedic effect, always indicate somebody is hiding something. The euphemism (in this paper and elsewhere) “voluntary euthanasia” is interesting. Why the “voluntary”? Why its emphasis? (These are rhetorical questions.)

If you’re in the market for used spleens, you can’t be thrilled when a spleen holder dies at home, far from a hospital and its facilities for spleen removal. Bodies left to linger for even small amounts of time are like fish left in the sun. The first thing you can do, then, if you hunger for used spleens, is to encourage people to come to (warm, quiet) hospitals to die. Dying, it seems, requires expertise (just like birth). The authors of this paper do not say “Do not die at home”, but the bias for a hospital death is there.

The dead donor rule — a traditional ethical principle guiding organ procurement — states that vital organs may not be retrieved before the patient’s death and that the procurement of organs may not cause the patient’s death. This principle assures patients and the public that physicians will be bound to the interests of their patients before the interests of potential organ recipients.

The dead donor rule doesn’t do that for me, because of the suspicion a doctor turned executioner will hasten the patient’s death. Assisted suicide is the euphemism. Our authors aren’t keen on the rule, either, because of the possibility of spoilage (my emphasis) done by the killing method.

Although some patients may want to be sure that organ procurement won’t begin before they are declared dead, others may want not only a rapid, peaceful, and painless death, but also the option of donating as many organs as possible and in the best condition possible. Following the dead donor rule could interfere with the ability of these patients to achieve their goals. In such cases, it may be ethically preferable to procure the patient’s organs in the same way that organs are procured from brain-dead patients (with the use of general anesthesia to ensure the patient’s comfort).

Whose goals? Drug ’em up and start cuttin’. What can’t be used is easily disposed of.

Patients who want a rapid, painless, and peaceful death while optimizing the number of organs they can donate are best cared for in an operative setting, where they can be fully anesthetized and where optimal organ procurement is supported.

There’s the death-in-hospital preference, even for patients the doctors kill.

The authors also recognize the idea of “non-therapeutic practices” has to be jettisoned. Pumping chemicals into a body you’re about to go shopping in is not by definition therapeutic.

Well, once you’ve given up on the idea physicians should do no harm, abandoning the rest of traditional medical ethics is far less painful.

October 8, 2018

Proof Cause Is In The Mind And Not In The Data

Pick something that happened. Doesn’t matter what it is, as long as it happened. Something caused this thing to happen; which is to say, something actual turned the potential (of the thing to happen) to actuality.

Now suppose you want to design a clever algorithm, as clever as you like, to discover the cause of this thing (in all four aspects of cause, or even just the efficient cause). You’re too busy to do it yourself, so you farm out the duty to a computer.

I will take, as my example, the death of Napoleon. One afternoon he was spry, sipping his grand cru and planning his momentous second comeback, and the next morning he was smelling like week-old brie. You are free to substitute an event of your own liking.

Plug into the computer, or a diagram in the computer, or whatever you like, THE EVENT.

Now press “GO” or “ACTIVATE” or whatever it is that launches the electronic beastie into action.

What will be the result?

If you said nothing, you have said much. For you have said your “artificial intelligence” algorithm cannot discern cause. Which is saying a bunch. Indeed, more than a bunch, because you have proven lifeless algorithms cannot discover cause at all.

End of proof.

“Very funny, Briggs. Most amusing. But you know you left out the most important element.”

I did? What’s that?

“The data. No algorithm can work without data. It’s the data from which the cause is extracted.”

Data? Which data is that?

“Why, the data related to the event your algorithm is focused on.”

Say, you might be right. Okay, here’s some data. The other day I was given a small bottle of gin. In the shape of a Dutch house in delft blue. You weren’t supposed to drink it, but I did. In defense, I wasn’t told until after I drank it that I shouldn’t have.

“What in the name of Yorick’s skull are you talking about? That’s not data. You have to use real data. Something that’s related to your event. What’s this Dutch gin house have to do with that?”

Well, you know what Napoleon did in Holland. And what’s my choice have to do with anything? We want the algorithm to figure out the cause, not me. Shouldn’t it be the business of the algorithm to identify the data it needs to show cause?

“I’m not sure. That’s a tall order.”

An infinite one, or practically so. Everything that’s ever happened, in the order it happened, is data. That’s a lot of data. That tall order is thus not only tall, but impossible, too, since everything that’s ever happened wasn’t, for the most part, measured. And even if it had been (by us men), no device could store all this data or manipulate it.

“Of course not! Why in the world are you bringing in infinity and all this other silly business? You can be obtuse, Briggs. No, no. The data we want are those measurements related to the event you picked.”

Related? But don’t you mean by related those measures which are the cause of the event, or which are not the direct causes, but incidental ones, perhaps measures caused by the event itself, or measures that caused the cause of the event, and that sort of thing? Those measures which a prominent writer called in his award-eligible book (chap. 9) “the causal path”?

“They sound like it, yes.”

Then since it is you who have partial or full knowledge of the full or partial cause of the event, or of other events in the causal path of the event itself, isn’t it you and not the algorithm that is discerning the cause? Any steps you take to limit the data available to the algorithm in effect make the algorithm’s finding of cause (or correlation) a self-fulfilling prophecy. Not putting in my gin means you are doing all the work, not the algorithm. It means you have figured out the cause and not the algorithm. That puts the cause in your mind and not in the data, doesn’t it?

“Perhaps.”

The best any algorithm can do is to find prominent correlations, which may or may not be directly related to the cause itself, using whatever rules of “correlation” you pre-specify. Your algorithm is doing what it was told, in the same way as your toaster. These correlations will be better or worse depending on your understanding of the cause, and therefore of what “data” you feed your algorithm. The only way we know these data are related to the cause, or are the cause, is because we have a power algorithms can never have, which is the extraction and understanding of universals.

“I guess.”

And all that is even before we consider predictive ability or, more devastating to your cause (get it? get it?), under-determination, Duhem, Quine, and all that: the idea that even if we think we have grasped the correct universal, and have indeed used our algorithm to make perfect predictions, we may be in error, and another, better explanation is the truly true cause.

“That seems to follow.”

Then it also follows that the only reason we think algorithms can find cause is because we forgot the cause of causes, or rather the cause of comprehending causes, which is our own minds.

Note that this explanation, which is a proof, does not explain why most use algorithms in the hope of finding “causes” of repeated events, or events which are claimed to be repeated. That’s a whole ’nother story, which involves, at the end, abandoning the notion that probability is a real thing.
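For the curious, the toaster point can be made concrete. Here is a toy correlation-miner (every name and number in it invented): it can only rank correlations among the columns a human hands it, by the rule a human picks.

```python
import numpy as np

rng = np.random.default_rng(0)

def mine_correlations(candidates: dict, event: np.ndarray):
    """Rank the supplied columns by |Pearson correlation| with the event.
    Both the rule (Pearson) and the list of candidate 'causes' are
    pre-specified by the human; nothing outside them can be found."""
    scored = ((name, abs(np.corrcoef(col, event)[0, 1]))
              for name, col in candidates.items())
    return sorted(scored, key=lambda pair: -pair[1])

event = rng.normal(size=200)  # THE EVENT, rendered as a toy series
candidates = {
    "arsenic_in_wallpaper": 0.5 * event + rng.normal(size=200),  # built to correlate
    "delft_gin_houses_emptied": rng.normal(size=200),            # my gin, duly included
    "weeks_since_waterloo": rng.normal(size=200),
}
for name, r in mine_correlations(candidates, event):
    print(f"{name:>26}   |r| = {r:.2f}")
# Whichever column wins, it wins because of which columns were offered and
# which rule was chosen: decisions made in a mind, not in the data.
```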