Category: Statistics

The general theory, methods, and philosophy of the Science of Guessing What Is.

October 18, 2018 | 7 Comments

Proof Education Is Bad For You

Headline The Age That Women Have Babies: How a Gap Divides America

First-time mothers are older in big cities and on the coasts, and younger in rural areas and in the Great Plains and the South. In New York and San Francisco, their average age is 31 and 32. In Todd County, S.D., and Zapata County, Tex., it’s half a generation earlier, at 20 and 21, according to the analysis, which was of all birth certificates in the United States since 1985 and nearly all for the five years prior…

The difference in when women start families cuts along many of the same lines that divide the country in other ways, and the biggest one is education. Women with college degrees have children an average of seven years later than those without — and often use the years in between to finish school and build their careers and incomes.

People with a higher socioeconomic status “just have more potential things they could do instead of being a parent, like going to college or grad school and having a fulfilling career,” said Heather Rackin, a sociologist at Louisiana State University who studies fertility. “Lower-socioeconomic-status people might not have as many opportunity costs — and motherhood has these benefits of emotional fulfillment, status in their community and a path to becoming an adult.”

Here it is in pictures. Age of first birth in 1980:

Age of first birth in 2016:

The scooping out of the data for women in their 20s is caused by education. Not education itself, of course, but the encampment at degree-producing centers.

At least with a degree you have earned the privilege to sit in a soul-sucking cubicle and devote your life to creating meaningless PowerPoint “decks.”

October 15, 2018 | 11 Comments

Amazon’s AI Proves Men Better Than Women At Tech Jobs

Item Amazon scraps secret AI recruiting tool that showed bias against women (Thanks to Mark Charters for the tip.)

Amazon.com Inc’s (AMZN.O) machine-learning specialists uncovered a big problem: their new recruiting engine did not like women.

The team had been building computer programs since 2014 to review job applicants’ resumes with the aim of mechanizing the search for top talent, five people familiar with the effort told Reuters…

“Everyone wanted this holy grail,” one of the people said. “They literally wanted it to be an engine where I’m going to give you 100 resumes, it will spit out the top five, and we’ll hire those.”

But by 2015, the company realized its new system was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.

That is because Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry.

We’ve seen before that “algorithms” are called “racist.” Read it.

Feed the algorithm, the curve-fitting AI, measures such as use of purple hair dye, tampon purchases, or video game purchases, and so forth, and it will, for painfully obvious reasons, pick out men from women. Not with perfection, of course, but it will be pretty good.

Then, since, as everybody knows but many don't like knowing, men at the extremes are better at analytic tasks than non-men, an algorithm built to maximize predicted coding ability, fed not sex itself but measures highly predictive of sex, will pick out more men than women. The algorithm will be "biased" (to reality).

There are only two ways to avoid the algorithm suggesting more men than women: (1) feed the algorithm only measures that are in no way predictive of sex; but, since men (at the extremes) are better than non-men at coding, the algorithm will do a lousy job predicting coding success; (2) instruct the algorithm to spit out Equality, which also will force the algorithm to do a rotten job.
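To make this concrete, here is a minimal sketch in Python with entirely invented purchase rates and the hypothetical proxy measures named above (hair dye, tampons, video games): a classifier that is never shown sex still recovers it from sex-correlated proxies. This is an illustration, not Amazon's system.

```python
# Minimal sketch with invented data: a classifier never shown sex
# still recovers it from sex-correlated proxy measures.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 10_000
sex = rng.integers(0, 2, n)  # 0 = man, 1 = woman (hypothetical labels)

# Proxy measures correlated with sex; all rates are invented for illustration.
tampons     = rng.poisson(np.where(sex == 1, 3.0, 0.05))
hair_dye    = rng.poisson(np.where(sex == 1, 1.0, 0.2))
video_games = rng.poisson(np.where(sex == 1, 0.5, 2.0))
X = np.column_stack([tampons, hair_dye, video_games])

X_tr, X_te, s_tr, s_te = train_test_split(X, sex, random_state=0)
model = LogisticRegression().fit(X_tr, s_tr)  # sex itself is never a feature
print("accuracy recovering sex from proxies:", model.score(X_te, s_te))
# Roughly 95% on this toy data: not perfection, but pretty good.
```

Withhold the sex column all you like; the proxies carry it in anyway.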

Equality is defined as the hope, in the absence of all evidence, that men and women are equal. But if men and women were equal, we would not even know to say "men" and "women."

Bias is defined as a politically unacceptable result.

October 11, 2018 | 26 Comments

On Witch Hunts

Help me, here. We need a new term. Witch hunt has an entirely negative connotation, and this should not be. Genuine witches should be hunted, as doubtless all agree. Witches, by definition—real ones, I mean—are evil. And evil should be hunted. But those that are not witches shouldn’t.

Rather, those who are accused of having non-existent occult powers should not be so accused. What somebody means by a witch hunt is a ferreting out and persecuting of illogically or wrongly charged people.

Example headline: Google conducts a witch hunt for non-progressive employees.

Google is anti-reality and calls those who espouse reality witches, and it hunts them. Think James Damore. We don’t want to say what Google is doing is witch hunting. Realists are the only heroes we have left.

How about snark hunting? Snarks are imaginary beasts which are hunted by those to be initiated. Unfortunately, snark also means caustic sarcasm, the continuous practitioners of which are not far removed from witches (yes, I know). Snark hunts, then, are not always bad.

Chimera hunt has no ring to it, not the least because chimera has more than one syllable and few will know what it means. Antirevolutionary hunt? Innocent chase? Take-down?

Witch hunt is not always used incorrectly, even for non-witches. Infamous homosexualist Fr James Martin says "The witch hunt for gay priests must end. Now." Since Martin is an inverse barometer, we know the opposite is true. Those men who enjoy sodomy and seek it out, especially with teenage parishioners, should be hunted and chased from the priesthood.

Martin uses the term knowing his sympathetic listeners will understand there are no such things as witches, and should therefore not be hunted.

Again, there is a need to restore witch hunt to its proper sense and former glory. Thanks to KA Rogers for the tip on the story So Just What Was It That Caused The Witch Hunts? They mean the genuine ones, the late Sixteenth and early Seventeenth Century witch hunts, which took place in a period in which people still retained their belief in witches.

Popular opinion has long held that Europe’s ‘witch craze’, which between 1520 and 1700 claimed the lives of at least 40,000 people and prosecuted twice as many, resulted from bad weather. Not without reason: European witch hunting overlapped with the ‘Little Ice Age’. During this period, dropping temperatures damaged crops and thus citizens economically, and disgruntled citizens often search for scapegoats – in the 16th and 17th centuries, literal witches…

Crop failures, droughts, and disease were hardly unknown in Europe before the witch craze. In the early 14th century, for instance, the Great Famine decimated populations in Germany, France, the British Isles, and Scandinavia; yet there were no witch hunts. Further, while weather could not have varied dramatically between neighboring locales in 16th- and 17th-century Europe, the number of people prosecuted for witchcraft often did…

In a recent paper, Jacob Russ and I hypothesise a different source of historical Europe’s witch hunts: competition between Catholicism and Protestantism in post-Reformation Christendom (Leeson and Russ 2018). For the first time in history, the Reformation presented large numbers of Christians with a religious choice: stick with the old Church or switch to the new one. And when churchgoers have religious choice, churches must compete.

One way to deal with competitors is to ban them legally; another is to annihilate them violently. The Catholic Church tried both approaches with its Protestant competitors but had little success… In an effort to woo the faithful, competing confessions advertised their superior ability to protect citizens against worldly manifestations of Satan’s evil by prosecuting suspected witches.

Leeson and Russ have a sharp info-graphic on the number of accused and executed as witches by time. A brief version heads this post, but click over to the main page for a better view.

I don’t buy their theory in whole. The “wars of religion” were not wars over religion, but wars over territory and power, coming at the time of the Great Protest (a.k.a. Protestant Revolution) when a weakness was sensed and exploited. An analogy is the pegging of Archduke Ferdinand, which unleashed the war everybody wanted (before it happened).

The witch hunts began shortly after the Great Protest, but then they dipped. Notice when? Right: at the beginning of the Thirty Years War, when the slaughter became especially nasty and vindictive. People were too busy killing each other in earnest, and in ways most creative, to chase witches.

When that was over, the hunting of witches peaked and subsided as Catholic power reasserted itself. And, since many were undoubtedly falsely accused, witch hunting got a bad name.

October 10, 2018 | 7 Comments

Road Rage: Paper Says Living Near Road Causes Dementia

Dust is thrilling researchers these days. Dust, or the more scientific sounding PM2.5, is the next greatest struggle after global warming. We’ve already seen attempts to prove dust, as measured by the proxy of living next to roads, causes heart disease. The attempt failed, for all the usual statistical reasons: wee p-values, no causal link, the epidemiologist fallacy, and so forth.

Same thing with our latest paper, “Living near major roads and the incidence of dementia, Parkinson’s disease, and multiple sclerosis: a population-based cohort study” by Hong Chen and a slew of others in the Lancet.

Because I don't want the point to get lost, I want to emphasize that PM2.5 papers, or really papers on its proxies (it's almost always proxies), allow dust to do nothing except wreak havoc; in this they are similar to global warming papers. Researchers cannot fathom that living near a "major" roadway can do anything except cause harm, that it can do no good. So they never check for the good, just like with global warming, which is everywhere malevolent.

Skip that. It’s on to the paper!

Emerging evidence suggests that living near major roads might adversely affect cognition. However, little is known about its relationship with the incidence of dementia, Parkinson’s disease, and multiple sclerosis. We aimed to investigate the association between residential proximity to major roadways and the incidence of these three neurological diseases in Ontario, Canada.

So what about the data for this statistical scavenger hunt? People who had medical records and who were diagnosed with certain comorbidities appear in it. People without records, or with undiagnosed illnesses, do not. Obvious points, but they cut against the idea that disease states were "controlled" in the statistical models.

Income was not measured; they used an error-prone proxy instead, assigning people the income buckets of their neighborhoods.

Then came the weird measures. "To control for regional differences in the incidence of dementia, Parkinson's disease, and multiple sclerosis, we created a variable for urban residence (yes/no), density of neurologists using the ICES Physician Database to represent accessibility to neurological care, and the latitude of residence given the reported latitude gradient with multiple sclerosis."

Density of neurologists? Yes, sir. Density of neurologists.

We now enter the realm of the epidemiologist fallacy. “Briefly,” our authors say, “estimates of ground-level concentrations of PM2.5 were derived from satellite observations of aerosol optical depth in combination with outputs from a global atmospheric chemistry transport model (GEOS-Chem CTM). The PM2.5 estimates were further adjusted using information on urban land cover, elevation, and aerosol composition using a geographically weighted regression….Similarly, we derived long-term exposure to NO2 from a national land-use regression (LUR) model…”

Actual PM2.5 and NO2 exposure was not measured. The proxies were assumed to be the real exposures. This is the epidemiologist fallacy. But wait! That isn't always bad. There's a possibility of saving the day. That's if (a) the uncertainty in the proxies as measures was carried through the statistical models at all stages, and (b) the uncertainty in the chemistry transport (etc.) models was carried through the statistical models at all stages.
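To see why (a) matters, here is a simulated sketch. All the numbers are invented, and this is a crude stand-in for a proper measurement-error model, not a claim about the authors' actual setup: a naive regression treats the proxy as the real exposure, attenuating the slope and reporting a standard error that ignores the proxy's own error.

```python
# Simulated sketch (all numbers invented): the cost of treating a modeled
# proxy as the actual exposure. No claim this is the authors' model.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
true_exposure = rng.normal(10, 2, n)            # never observed in practice
proxy = true_exposure + rng.normal(0, 1.2, n)   # modeled estimate, R^2 ~ 0.73
outcome = 0.05 * true_exposure + rng.normal(0, 3, n)

def slope_and_se(x, y):
    """OLS slope and its standard error for a one-predictor regression."""
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    return b, np.sqrt(resid.var(ddof=2) / ((x - x.mean()) ** 2).sum())

b_t, se_t = slope_and_se(true_exposure, outcome)
print(f"with the (unobservable) true exposure: slope {b_t:.3f}, se {se_t:.3f}")

# Naive fit: the proxy stands in for the real exposure, its error ignored.
b, se = slope_and_se(proxy, outcome)
print(f"naive proxy fit: slope {b:.3f} (attenuated), se {se:.3f}")

# Crude stand-in for a measurement-error analysis: refit under many
# plausible proxy errors; this spread is invisible to the naive se.
draws = [slope_and_se(proxy + rng.normal(0, 1.2, n), outcome)[0]
         for _ in range(200)]
print(f"extra slope spread from proxy error: sd {np.std(draws):.3f}")
```

A real analysis would use a proper measurement-error or Bayesian model; the point is only that acknowledging the proxy's error changes both the estimate and the uncertainty, and the naive fit silently drops all of it.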

And were these crucial uncertainties—large ones, too: “The final LUR model explained 73% of the variation in annual 2006 measurements of NO2”—used in the statistical models at all stages?

Alas, dear reader, they were not (at least, I did not find any evidence they were).

There were more uncertainties, mostly involving complex measures of distances of this and that, which are important but would distract us (you can read the paper). I’m more interested in the uncertainties in the outcomes themselves, and the comorbidities. I’ve already mentioned the difficulty that only diagnosed maladies were in the databases, and that undiagnosed ones weren’t.

But that doesn’t mean that what appeared in the databases was error free. As the authors say: “These databases have been validated previously using chart review, with sensitivity of 78–84% and specificity of 99–100%.”

Dude. That doesn’t translate into an unimpeachable accuracy rate. Meaning, as should be obvious, the outcomes and some measures had uncertainty, too. Which is not damning. Measurement-error models exist to handle these kinds of things.
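To see what those quoted figures can mean, a back-of-the-envelope sketch, taking 78% sensitivity and 99% specificity from the paper and an invented 2% true prevalence:

```python
# Back-of-the-envelope: what 78% sensitivity and 99% specificity do to
# observed case counts. The 2% true prevalence is an invented figure.
sens, spec = 0.78, 0.99
true_prev = 0.02

observed = sens * true_prev + (1 - spec) * (1 - true_prev)
false_frac = (1 - spec) * (1 - true_prev) / observed
print(f"observed prevalence: {observed:.4f} (true: {true_prev})")
print(f"share of recorded 'cases' that are false positives: {false_frac:.0%}")
# ~0.0254 observed vs 0.02 true; roughly 39% of recorded cases are false.
```

With a rare outcome, even 99% specificity lets false positives make up a fair chunk of the recorded "cases". That uncertainty should flow into the model.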

So. Was the uncertainty in the outcomes and measures incorporated into the exquisitely complex statistical models?

Alas again, my dear ones, it does not appear to be so.

It was worse than that, because after all this came the wee p-values (or confidence intervals, which amount to the same thing).

They found that the hazard rate for dementia, but not for any of the other maladies, was highest for those with addresses nearest to "major" roadways, after "controlling" for that other stuff (in an incomplete way).

Curiously, for multiple sclerosis, having a database address nearest "major" roads was as dangerous as living farthest away, and addresses in the middle range fared best. Which is a signal something screwy is going on. But since none of the p-values for MS were wee, this oddity was dismissed.

Why the wee ps? Well, the datasets were huge. A major (huge!) failing of p-values is that ps are always wee for large enough sample sizes, even in the complete absence of cause. Here, the effects weren’t so big, either.
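A simulated illustration (effect size and noise invented): give x a trivially small, practically meaningless association with y, and watch the p-value collapse as the sample grows.

```python
# Sketch: a trivially small association yields a "wee" p-value once the
# sample is large enough. Effect size and noise are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
for n in (1_000, 10_000, 100_000, 1_000_000):
    x = rng.normal(0, 1, n)
    y = 0.01 * x + rng.normal(0, 1, n)  # effect too small to matter to anyone
    r = stats.linregress(x, y)
    print(f"n={n:>9,}  slope={r.slope:+.4f}  p={r.pvalue:.1e}")
# The slope stays trivial; the p-value marches toward zero regardless.
```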

The confidence intervals were parametric, not predictive. Huge sample sizes make for short CIs, just as they make for small ps. What’s wanted are actual predictive intervals, but we don’t see them.
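The difference in one simulated picture (numbers invented): the parametric confidence interval for the mean collapses as n grows, while the predictive interval for a new observation stays wide, because the variability in people does not shrink just because you sampled more of them.

```python
# Sketch: parametric CI for the mean vs. predictive interval for a new
# observation, under a simple normal model with invented numbers.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
for n in (100, 10_000, 1_000_000):
    y = rng.normal(50, 10, n)
    m, s = y.mean(), y.std(ddof=1)
    ci = stats.norm.interval(0.95, loc=m, scale=s / np.sqrt(n))          # parameter
    pi = stats.norm.interval(0.95, loc=m, scale=s * np.sqrt(1 + 1 / n))  # new person
    print(f"n={n:>9,}  CI width {ci[1] - ci[0]:7.3f}  PI width {pi[1] - pi[0]:7.3f}")
# The CI shrinks toward zero; the predictive interval stays near 39 units.
```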

Their conclusion: "In this large population-based cohort, living near major roadways was associated with increased dementia incidence. The associations seemed stronger among urban residents, especially those living in major urban centres and those who never moved." Who lives near "major" roadways in major urban centers and doesn't move? People who usually can't afford to move, and who might not be in as good health as their more mobile not-so-near neighbors.

“We observed that exposures to NO2 and PM2.5 were related to dementia and that adjusting for these two pollutants attenuated its association with roadway proximity, suggesting that the effect of traffic exposure might, at least in part, operate through this mechanism.”

At best this is translated into “Breathing pollution isn’t good for you,” which isn’t a major breakthrough. On the other hand, exposure to NO2 and PM2.5 was not measured. Proxies were.

My bet, or really my challenge to the authors, is to redo the whole shebang, only this time incorporating all the uncertainties I mentioned (and some which were too boring to bring up) and recasting the results in predictive, not parametric, terms. I'll lay fifty bucks that all the associations disappear or become trivial.