Category: Statistics

The general theory, methods, and philosophy of the Science of Guessing What Is.

October 15, 2018 | 11 Comments

Amazon’s AI Proves Men Better Than Women At Tech Jobs

Item Amazon scraps secret AI recruiting tool that showed bias against women (Thanks to Mark Charters for the tip.)

Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters…

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

We’ve seen before that “algorithms” are called “racist.” Read it.

Feed the algorithm, the curve-fitting AI, measures such as use of purple hair dye, tampons purchased, or video games bought, and so forth, and it will, for the painfully obvious reasons, pick out men from women. Not with perfection, of course, but it will be pretty good.

Then, since, as everybody knows but many don’t like knowing, men at the extremes are better at analytic tasks than non-men, an algorithm built to maximize a candidate’s ability to code, fed not sex itself but measures highly predictive of sex, will pick out more men than women. The algorithm will be “biased” (toward reality).

There are only two ways to avoid the algorithm suggesting more men than women: (1) feed the algorithm only measures which are in no way predictive of sex; but, since men (at the extremes) are better than non-men at coding, the algorithm will then do a lousy job predicting coding success; or (2) instruct the algorithm to spit out Equality, which will also force the algorithm to do a rotten job.
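To see the mechanism, here is a toy simulation (a sketch only; every number in it is invented for illustration, and nothing here is Amazon’s actual model). The selector is never shown sex, only a measure correlated with it, yet its top picks skew male.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Latent sex (0 = female, 1 = male); never shown to the selector.
sex = rng.integers(0, 2, n)

# A resume "measure" that happens to correlate with sex.
proxy = sex + rng.normal(0, 0.8, n)

# Coding skill: same mean for both sexes, fatter tails for men
# (the "at the extremes" claim, assumed here for illustration).
skill = rng.normal(0, 1.0 + 0.3 * sex, n)

# The engine ranks on observed measures only: a skill score plus the proxy.
score = skill + 0.5 * proxy
top = np.argsort(score)[-n // 20:]  # take the top 5%, as a recruiter would

print("Fraction male among top picks:", sex[top].mean())
# Well above 0.5, though sex was never an input feature.
```

Strip out the proxy and the skew shrinks but, given the fatter tails, does not vanish; that is option (1)’s lousy trade in miniature.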

Equality is defined as the hope, in the absence of all evidence, that men and women are equal. But if men and women were equal, we would not even know to say “men” and “women.”

Bias is defined as a politically unacceptable result.

October 11, 2018 | 19 Comments

On Witch Hunts

Help me, here. We need a new term. Witch hunt has an entirely negative connotation, and this should not be. Genuine witches should be hunted, as doubtless all agree. Witches, by definition—real ones, I mean—are evil. And evil should be hunted. But those that are not witches shouldn’t.

Rather, those who are accused of having non-existent occult powers should not be so accused. What somebody means by a witch hunt is a ferreting out and persecuting of illogically or wrongly charged people.

Example headline: Google conducts a witch hunt for non-progressive employees.

Google is anti-reality and calls those who espouse reality witches, and it hunts them. Think James Damore. We don’t want to say what Google is doing is witch hunting. Realists are the only heroes we have left.

How about snark hunting? Snarks are imaginary beasts which are hunted by those to be initiated. Unfortunately, snark also means caustic sarcasm, the continuous practitioners of which are not far removed from witches (yes, I know). Snark hunts, then, are not always bad.

Chimera hunt has no ring to it, not the least because chimera has more than one syllable and few will know what it means. Antirevolutionary hunt? Innocent chase? Take-down?

Witch hunt is not always used incorrectly, even for non-witches. Infamous homosexualist Fr James Martin says “The witch hunt for gay priests must end. Now.” Since Martin is an inverse barometer, we know the opposite is true. Those men who enjoy sodomy and seek it out, especially with teenage parishioners, should be hunted and chased from the priesthood.

Martin uses the term knowing his sympathetic listeners will understand there are no such things as witches, and that the hunted should therefore be left alone.

Again, this shows the need to restore witch hunt to its proper sense and former glory. Thanks to KA Rogers for the tip on the story So Just What Was It That Caused The Witch Hunts? They mean the genuine ones: the late Sixteenth and early Seventeenth Century witch hunts, conducted during a period in which people still retained their belief in witches.

Popular opinion has long held that Europe’s ‘witch craze’, which between 1520 and 1700 claimed the lives of at least 40,000 people and prosecuted twice as many, resulted from bad weather. Not without reason: European witch hunting overlapped with the ‘Little Ice Age’. During this period, dropping temperatures damaged crops and thus citizens economically, and disgruntled citizens often search for scapegoats – in the 16th and 17th centuries, literal witches…

Crop failures, droughts, and disease were hardly unknown in Europe before the witch craze. In the early 14th century, for instance, the Great Famine decimated populations in Germany, France, the British Isles, and Scandinavia; yet there were no witch hunts. Further, while weather could not have varied dramatically between neighboring locales in 16th- and 17th-century Europe, the number of people prosecuted for witchcraft often did…

In a recent paper, Jacob Russ and I hypothesise a different source of historical Europe’s witch hunts: competition between Catholicism and Protestantism in post-Reformation Christendom (Leeson and Russ 2018). For the first time in history, the Reformation presented large numbers of Christians with a religious choice: stick with the old Church or switch to the new one. And when churchgoers have religious choice, churches must compete.

One way to deal with competitors is to ban them legally; another is to annihilate them violently. The Catholic Church tried both approaches with its Protestant competitors but had little success…. In an effort to woo the faithful, competing confessions advertised their superior ability to protect citizens against worldly manifestations of Satan’s evil by prosecuting suspected witches.

Leeson and Russ have a sharp infographic of the numbers accused and executed as witches over time. A brief version heads this post, but click over to the main page for a better view.

I don’t buy their theory in whole. The “wars of religion” were not wars over religion, but wars over territory and power, coming at the time of the Great Protest (a.k.a. Protestant Revolution), when a weakness was sensed and exploited. An analogy is the assassination of Archduke Ferdinand, which unleashed the war everybody wanted (before it happened).

The witch hunts began shortly after the Great Protest, but then they dipped. Notice when? Right: at the beginning of the Thirty Years War, when the slaughter became especially nasty and vindictive. People were too busy killing each other in earnest, and in ways most creative, to chase witches.

When that was over, the hunting of witches peaked and subsided as Catholic power reasserted itself. And, since many were undoubtedly falsely accused, witch hunting got a bad name.

October 10, 2018 | 7 Comments

Road Rage: Paper Says Living Near Road Causes Dementia

Dust is thrilling researchers these days. Dust, or the more scientific sounding PM2.5, is the next greatest struggle after global warming. We’ve already seen attempts to prove dust, as measured by the proxy of living next to roads, causes heart disease. The attempt failed, for all the usual statistical reasons: wee p-values, no causal link, the epidemiologist fallacy, and so forth.

Same thing with our latest paper, “Living near major roads and the incidence of dementia, Parkinson’s disease, and multiple sclerosis: a population-based cohort study” by Hong Chen and a slew of others in the Lancet.

Because I don’t want the point to get lost, I want to emphasize that in PM2.5 papers—or really papers about its proxies; it’s almost always proxies—the dust is assumed to do nothing except wreak havoc. In this they are similar to global warming papers. Researchers cannot fathom that living near a “major” roadway can do anything except cause harm, that it can do no good. So they never check for the good, just as with global warming, which is everywhere malevolent.

Skip that. It’s on to the paper!

Emerging evidence suggests that living near major roads might adversely affect cognition. However, little is known about its relationship with the incidence of dementia, Parkinson’s disease, and multiple sclerosis. We aimed to investigate the association between residential proximity to major roadways and the incidence of these three neurological diseases in Ontario, Canada.

So what about the data for this statistical scavenger hunt? People who had medical records and who were diagnosed with certain comorbidities have those comorbidities recorded. People without records, or with undiagnosed illnesses, do not. Obvious points, but they cut against the idea that disease states were “controlled” in the statistical models.

Income was not measured; they used an error-prone proxy instead, assigning people to income buckets belonging to their neighborhoods.

Then came the weird measures. “To control for regional differences in the incidence of dementia, Parkinson’s disease, and multiple sclerosis, we created a variable for urban residence (yes/no), density of neurologists using the ICES Physician Database to represent accessibility to neurological care, and the latitude of residence given the reported latitude gradient with multiple sclerosis.”

Density of neurologists? Yes, sir. Density of neurologists.

We now enter the realm of the epidemiologist fallacy. “Briefly,” our authors say, “estimates of ground-level concentrations of PM2.5 were derived from satellite observations of aerosol optical depth in combination with outputs from a global atmospheric chemistry transport model (GEOS-Chem CTM). The PM2.5 estimates were further adjusted using information on urban land cover, elevation, and aerosol composition using a geographically weighted regression….Similarly, we derived long-term exposure to NO2 from a national land-use regression (LUR) model…”

Actual PM2.5 and NO2 exposure was not measured. The proxies were assumed to be the real exposures. This is the epidemiologist fallacy. But wait! That isn’t always bad. There’s a possibility of saving the day. That’s if (a) the uncertainty in the proxies as measures was carried through the statistical models at all stages, and (b) the uncertainty in the chemistry transport and other models was carried through the statistical models at all stages.

And were these crucial uncertainties—large ones, too: “The final LUR model explained 73% of the variation in annual 2006 measurements of NO2”—used in the statistical models at all stages?

Alas, dear reader, they were not (at least, I did not find any evidence they were).

There were more uncertainties, mostly involving complex measures of distances of this and that, which are important but would distract us (you can read the paper). I’m more interested in the uncertainties in the outcomes themselves, and the comorbidities. I’ve already mentioned the difficulty that only diagnosed maladies were in the databases, and that undiagnosed ones weren’t.

But that doesn’t mean that what appeared in the databases was error free. As the authors say: “These databases have been validated previously using chart review, with sensitivity of 78–84% and specificity of 99–100%.”

Dude. That doesn’t translate into an unimpeachable accuracy rate. Meaning, as should be obvious, the outcomes and some measures had uncertainty, too. Which is not damning. Measurement-error models exist to handle these kinds of things.
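For the curious, here is a minimal sketch of classical measurement error, with all numbers invented and no claim this matches the paper’s models. Regress an outcome on a noisy proxy while pretending the proxy is the true exposure, and the fit is both biased and blind to the proxy’s error.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 5_000
beta = 0.3                                   # true exposure effect (invented)

true_exposure = rng.normal(0, 1, n)
proxy = true_exposure + rng.normal(0, 1, n)  # proxy carries its own error
outcome = beta * true_exposure + rng.normal(0, 1, n)

# The naive fit: treat the proxy as if it were the exposure itself.
fit = stats.linregress(proxy, outcome)
print(f"true beta = {beta}, naive estimate = {fit.slope:.3f}, "
      f"reported se = {fit.stderr:.3f}")
# The slope is attenuated (here toward half its true value), and the
# reported standard error knows nothing of the proxy's measurement error.
```

A proper measurement-error model would widen the uncertainty to account for the proxy; the naive fit instead reports a crisp answer to the wrong question.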

So. Was the uncertainty in the outcomes and measures incorporated into the exquisitely complex statistical models?

Alas again, my dear ones, it does not appear to be so.

It was worse than this, because after all this came the wee p-values (or confidence intervals, which amount to the same thing).

They found that the hazard rate of dementia, but of none of the other maladies, was highest for those with addresses nearest “major” roadways, after “controlling” for that other stuff (in an incomplete way).

Curiously, for multiple sclerosis, having a database address nearest “major” roads was as dangerous as living farthest away, while addresses in the middle range fared best. Which is a signal something screwy is going on. But since none of the p-values for MS were wee, this oddity was dismissed.

Why the wee ps? Well, the datasets were huge. A major (huge!) failing of p-values is that ps are always wee for large enough sample sizes, even in the complete absence of cause. Here, the effects weren’t so big, either.
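A quick simulation shows the failing (a sketch with invented numbers): fix a trivially small true difference, let the sample size grow, and watch p shrink to nothing.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
tiny = 0.02                    # a trivially small true difference (invented)

for n in (100, 10_000, 1_000_000):
    a = rng.normal(0.0, 1, n)
    b = rng.normal(tiny, 1, n)
    _, p = stats.ttest_ind(a, b)
    print(f"n = {n:>9,}   p = {p:.3g}")
# The p-value marches toward zero as n grows, though the effect
# stays just as trivial as it ever was.
```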

The confidence intervals were parametric, not predictive. Huge sample sizes make for short CIs, just as they make for small ps. What’s wanted are actual predictive intervals, but we don’t see them.
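The difference is easy to demonstrate (a sketch under a simple normal model, with invented numbers and a large-sample approximation): the parametric interval for the mean collapses as n grows; the interval for a new observation does not.

```python
import numpy as np

rng = np.random.default_rng(3)

for n in (100, 10_000, 1_000_000):
    x = rng.normal(10, 2, n)
    s = x.std(ddof=1)
    ci_half = 1.96 * s / np.sqrt(n)          # parametric CI for the mean
    pi_half = 1.96 * s * np.sqrt(1 + 1 / n)  # approx. interval for a new obs
    print(f"n = {n:>9,}   CI half-width = {ci_half:.4f}   "
          f"predictive half-width = {pi_half:.4f}")
# The CI shrinks toward zero; the predictive interval stays near 1.96 * sigma.
```

Huge n buys you certainty about a parameter, not about the next person who moves in near the road.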

Their conclusion: “In this large population-based cohort, living near major roadways was associated with increased dementia incidence. The associations seemed stronger among urban residents, especially those living in major urban centres and those who never moved.” Who lives near “major” roadways in major urban centers and never moves? People who usually can’t afford to move, and who might not be in as good health as their more mobile not-so-near neighbors.

“We observed that exposures to NO2 and PM2.5 were related to dementia and that adjusting for these two pollutants attenuated its association with roadway proximity, suggesting that the effect of traffic exposure might, at least in part, operate through this mechanism.”

At best this translates into “Breathing pollution isn’t good for you,” which isn’t a major breakthrough. On the other hand, exposure to NO2 and PM2.5 was not measured. Proxies were.

My bet, or really my challenge to the authors, is to redo the whole shebang, only this time incorporate all the uncertainties I mentioned (and some which were too boring to bring up) and recast the results in predictive not parametric terms. I’ll lay fifty bucks that says all the associations disappear or become trivial.

October 8, 2018 | 15 Comments

Proof Cause Is In The Mind And Not In The Data

Pick something that happened. Doesn’t matter what it is, as long as it happened. Something caused this thing to happen; which is to say, something actual turned the potential (of the thing to happen) to actuality.

Now suppose you want to design a clever algorithm, as clever as you like, to discover the cause of this thing (in all four aspects of cause, or even just the efficient cause). You’re too busy to do it yourself, so you farm out the duty to a computer.

I will take, as my example, the death of Napoleon. One afternoon he was spry, sipping his Grand cru, and planning his momentous second comeback, and the next morning he was smelling like week-old brie. You are free to substitute an event of your own liking.

Plug into the computer, or a diagram in the computer, or whatever you like, THE EVENT.

Now press “GO” or “ACTIVATE” or whatever it is that launches the electronic beastie into action.

What will be the result?

If you said nothing, you have said much. For you have said your “artificial intelligence” algorithm cannot discern cause. Which is saying a bunch. Indeed, more than a bunch, because you have proven lifeless algorithms cannot discover cause at all.

End of proof.

“Very funny, Briggs. Most amusing. But you know you left out the most important element.”

I did? What’s that?

“The data. No algorithm can work without data. It’s the data from which the cause is extracted.”

Data? Which data is that?

“Why, the data related to the event your algorithm is focused on.”

Say, you might be right. Okay, here’s some data. The other day I was given a small bottle of gin. In the shape of a Dutch house in delft blue. You weren’t supposed to drink it, but I did. In defense, I wasn’t told until after I drank it that I shouldn’t have.

“What in the name of Yorick’s skull are you talking about? That’s not data. You have to use real data. Something that’s related to your event. What’s this Dutch gin house have to do with that?”

Well, you know what Napoleon did in Holland. And what’s my choice have to do with anything? We want the algorithm to figure out the cause, not me. Shouldn’t it be the business of the algorithm to identify the data it needs to show cause?

“I’m not sure. That’s a tall order.”

An infinite one, or practically so. Everything that’s ever happened, in the order it happened, is data. That’s a lot of data. That tall order is thus not only tall, but impossible, too, since everything that’s ever happened wasn’t, for the most part, measured. And even if it was (by us men), no device could store all this data or manipulate it.

“Of course not! Why in the world are you bringing in infinity and all this other silly business? You can be obtuse, Briggs. No, no. The data we want are those measurements related to the event you picked.”

Related? But don’t you mean by related those measures which are the cause of the event, or which are not the direct causes, but incidental ones, perhaps measures caused by the event itself, or measures that caused the cause of the event, and that sort of thing? Those measures which a prominent writer called in his award-eligible book (chap. 9) “the causal path”?

“They sound like it, yes.”

Then since it is you who have partial or full knowledge of the full or partial cause of the event, or of other events in the causal path of the event itself, isn’t it you and not the algorithm that is discerning the cause? Any steps you take to limit the data available to the algorithm in effect make the algorithm’s finding of cause (or correlation) a self-fulfilling prophecy. Not putting in my gin means you are doing all the work, not the algorithm. It means you have figured out the cause and not the algorithm. That makes the cause in your mind and not in the data, doesn’t it?

“Perhaps.”

The best any algorithm can do is to find prominent correlations, which may or may not be directly related to the cause itself, using whatever rules of “correlation” you pre-specify. Your algorithm is doing what it was told in the same way as your toaster. These correlations will be better or worse depending on your understanding of the cause, and therefore of what “data” you feed your algorithm. The only way we know these data are related to the cause, or are the cause, is because we have a power algorithms can never have, which is the extraction and understanding of universals.

“I guess.”

And all that is even before we consider predictive ability or, more devastating to your cause (get it? get it?), under-determination, Duhem, Quine, and all that: the idea that even if we think we have grasped the correct universal, and have indeed used our algorithm to make perfect predictions, we may be in error, and another, better explanation is the truly true cause.

“That seems to follow.”

Then it also follows that the only reason we think algorithms can find cause is because we forgot the cause of causes, or rather the cause of comprehending causes, which is our own minds.

Note that this explanation, which is a proof, does not explain why most use algorithms in the hope of finding “causes” of repeated events, or events which are claimed to be repeated. That’s a whole ’nother story, which involves, at the end, abandoning the notion that probability is a real thing.