February 10, 2018 | 15 Comments

Insanity & Doom Update XXI

Readers will have to help me. I’ve been posting insane and doom-laden articles for so long they all start to look alike. I worry that I am repeating links. But it may just be an acceleration in doom. Let me know if you spot duplicates.

Item SAS may make entry tests easier for women

The SAS is reportedly considering changes to its notoriously difficult entry tests to ensure women have a fair chance of making the grade.

All applicants to the special forces must complete an initial fitness test before they are considered further. The tasks, which involve various long marches over mountainous terrain while carrying heavy rucksacks, are understood to be prohibitive to women and inessential to selecting candidates with the attributes required to succeed in the SAS.

The government said it wanted to see all close combat units in the British military open to women by 2019. Last year, then-prime minister David Cameron announced he was removing a ban on women in close combat roles, upending hundreds of years of tradition.

As regular readers know, Equality and Diversity (1) always lead to mandatory quotas, (2) always result in a weakening of standards, and (3) always result in an argument that the standards were never really necessary. These are the ironclad rules of progressivism and the reason any group, state, nation or military which embraces Equality and Diversity is doomed.

Want more proof? Today’s your unlucky day.

Item Top firms given four years to appoint ethnic minority directors

Britain’s biggest companies have been given four years to appoint one board director from an ethnic minority background as part of a package of measures outlined in a government-backed review into the lack of diversity at the top of corporate Britain.

Sir John Parker, the chairman of the mining company Anglo American, who conducted the review, said it should be a wake-up call for major companies that only 85 of the 1,050 director positions in the FTSE 100 companies were held by people from ethnic minorities.

The companies in the FTSE 100 have been told to end their all-white boardrooms by 2021 while those in the FTSE 250, the next tier of companies on the stock market, have until 2024. The target is voluntary, but companies that fail to comply will have to explain why.

Parker was first linked to the diversity campaign in 2014 when Sir Vince Cable was business secretary and set a goal to end all all-white boards by 2020.

No word on whether the same quotas will be applied to those companies that have no whites on their boards. I’ll take any comers at fifty bucks and say they will not only not be required to have a token white, but they’ll be praised (in some way) for their “diversity”.

Item Berkeley’s Judith Butler wants to ban socially conservative speech

At the University of California, Berkeley, last week, Judith Butler said that while she considers herself to be an “advocate” of the First Amendment, the articulation of socially conservative viewpoints should be banned…

Butler argues that unless we restrict speakers who reject the left’s social ideology, “We should perhaps frankly admit that we have agreed in advance to have our community sundered, racial and sexual minorities demeaned, the dignity of trans people denied, that we are, in effect, willing to be wrecked by this principle of free speech, considered more important than any other value.”

First off, note that Butler takes as a given the notion that socially conservative speech splits apart a “community.” Here we see the implicit understanding that a community is only defined by those who subscribe to the left’s righteous orthodoxy.

Not being a tenured professor, we’d hate to disagree with Butler’s conclusion, which may well be true. Injecting sanity and Reality into Berkeley would split the community apart. Some of the inmates would realize what they’ve been missing out on and would, in the spirit of Truth, try to inform other Berkeleyites. Result: pandemonium. Best to remove anybody with any association with Truth and Reality and let the inmates eat themselves. When they’re finished, we can move back in and clean the place up.

February 9, 2018 | 5 Comments

Equality of Opportunity Always Masks Desire For (More Than) Equality Of Outcome

Last November, Spiked magazine held a discussion panel on “Is the left eating itself?” at which appeared ex-Evergreen State College professor and self-described progressive Bret Weinstein. You may recall that Weinstein was chased off campus by a mob of social justice warrior students.

Weinstein answered yes to the question, the only possible response. Except for the amplification that they’re eating everything else, too.

Why the voracious appetite?

“I recognized that there was a hidden dichotomy between two populations within the left.” He continued, “One of those populations earnestly wishes equality, and there can be some debate over what it is that is being equalized, but virtually everybody on the left would say that they are for equality of opportunity.

“Then there is another population that does not wish equality of opportunity, what it wishes to do is to turn the tables of oppression…you would discover that some of the people who had been pursuing some nominal version of equality were really about some radical version of inequity with new people at the head. And I do think that is what we are facing.”

Genuine equality of opportunity is rare, found only in carefully controlled situations. Take runners toeing the line in a race. Everybody starts in the exact same position, measured down to the millimeter. Any runner found edging off the mark before the gun, even by a fingertip, is disqualified, or causes a re-do.

But this careful scrutiny only occurs because the runners have proven themselves eligible to participate in the race in the first place. Years of inequality (training, biology, etc.) went into creating a moment of controlled equality of opportunity.

It’s also plain that this controlled equality is expensive. The groomed track, trained judges, even the audience: it all adds up. What’s maybe not as obvious is the glaring inequality necessarily created in this mini-equalitarian scenario. Not just that only the best runners will be there, but they will either be all men or all women.

True, some of the women might be men pretending to be women, as in this race, but the natural and ineradicable inequality between the sexes will be manifest. Who would host a race pitting the best men against the best women? Who could doubt the outcome? Only somebody who is convinced of genuine equality and who desires equality of outcome.

There is no evidence of genuine equality. All outcomes, except in specialized or trivial circumstances, are unequal. Men and women do not race equally in the sense the top runners will be on average male, nor do they take math tests equally in the sense the top and bottom scores will on average be male. Men and women have never produced equal outcomes (in these senses). There is no observation that confirms equality. Yet some still believe in it. This can only be the result of ideology, which is the only possible way thousands of years of observation can be dismissed in favor of theory.

Those who preach equality of opportunity generally believe in equality in general, though they will claim this equality is occult: genuine or true equality really does exist, but it is hidden or suppressed, and there would be genuine equality of outcome if not for forces holding equality back. That these forces exist is, of course, proof of inequality, at least in the ability to wield these forces. Believing in the forces thus disproves equality.

At any rate, nothing but equality of outcome will do for some. And by equality of outcome, what supporters mean is the superior result of some favored group or groups.

Here’s the headline: Oxford University gives women more time to pass exams.

Students taking maths and computer science examinations in the summer of 2017 were given an extra 15 minutes to complete their papers, after dons ruled that “female candidates might be more likely to be adversely affected by time pressure”. There was no change to the length or difficulty of the questions.

Equality demands men and women are no different, therefore equality of outcome should result. When it does not, forces are at work. At the least, it must be that men are better at suppressing women who take math and science tests, or that women can’t face the pressures of testing as men can. True inequality must exist. So equality is false. Thus there is no reason to expect equality of outcome.

In this case, changing the test time changed nothing: “Men continued to be awarded more first class degrees than women in the two subjects.”

The next step is to change the tests, and make them so that equality of outcome occurs. Equality, since it doesn’t exist, must always be enforced by artificial means. And, of course, this force proves the inequality. Satisfaction will only be announced when more women than men produce top scores.

February 8, 2018 | 64 Comments

You Don’t Have Free Will, Which Is Why You Make Such Bad Choices

Stream: You Don’t Have Free Will, Which Is Why You Make Such Bad Choices

There is a special kind of stupid achievable only by the intelligent (I resemble this remark). I’d ask you to pardon me for such a harsh statement, except that I can’t.

I didn’t have a choice but to say it. You didn’t have a choice in how you reacted to it. And if philosophy professor Tamler Sommers is right, nobody has any choice in anything they do.

Sommers says “recent advances in cognitive neuroscience” show that we must “abandon the deeply problematic concept of free will and ultimate moral responsibility.” We “feel free” and “We feel responsible”, but we are not.

One reason we don’t choose to leave behind the belief we can make choices is “that the ethical implications of denying free will and moral responsibility seem terrifying.”

That sentence might not have been clear, so let me restate it. Sommers argues we have to abandon the idea we choose our actions. Only then will we make better choices. If we accept we are not morally responsible for our behavior, then our behavior will become more moral.

Be Not Afraid

There is not much sense in those renditions, either. Because there is no sense in Sommers’s position. If we cannot make choices, we cannot make choices. We can’t freely acknowledge we can’t make choices if we can’t make choices. If we are not morally responsible for what we do, then there are no immoral or moral acts.

Sommers is not alone in disbelieving in free will. Many modern philosophers agree with him. They acknowledge we common folk feel like we have free will, but they argue we are suffering an illusion.

Yet this is impossible. In order to have the “illusion” of making a free choice, a person had to have the ability to freely make a choice. As Alfred R. Mele says in Free: Why Science Hasn’t Disproved Free Will, “If there is an illusion…it’s the illusion that there’s strong scientific evidence for the nonexistence of free will.”

There just is no philosophically consistent argument against free will. The acres of paper darkened with ink on this subject always end in absurd spectacle: a philosopher arguing why you have to freely choose to not believe in free will. And the implied farcical cry, “I do not have free will!”

The First Mistake

Why do philosophers like Sommers make this mistake? For two reasons.

The first is []

You have no choice: you must click here to read the rest.

February 7, 2018 | No comments

Free Probability-Statistics Class: Predictive Case Study 1, Part XI

Review!

We left off with comparing the standard, out-of-the-box linear regression with our multinomial predictive observable model. The great weaknesses of the regression were probability leakage (giving positive probability to impossible values) and that the normal gives densities and not probabilities. We can’t fix the leakage with this model (it’s a built-in shortcoming of this model), but we can generate probabilities.

Now the densities are predictions from the normal regression model, but they are not in a form that can be used. In order to create probabilities from densities, we need to make a decision. The densities are of course easily transformed into cumulative distributions, which are probabilities, but they will give positive probabilities to an infinity of results (all numbers along the continuum). We only care about our decision points, which for our fictional Dean are the five values 0, 1, 2, 3, 4.

The decision we need to make is how to cut the infinity into 5 blocks. There is of course no unique way to do this. But it may be reasonable to cut at the midpoints between the five values. For example, given the regression, the decision probability of having a CGPA = 0 would be the regression probability of being between 0 and 0.5. For 1, it would be 0.5 to 1.5, and so on. That’s easy to do.
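The midpoint-cut step can be sketched on its own. Here is a minimal example using a hypothetical normal predictive distribution with a central parameter of 2.8 and scale 0.6; these are invented numbers for illustration, not the regression output:

```r
# The Dean's decision points
d <- c(0, 1, 2, 3, 4)

# Cut points: the midpoints between decision points, capped at the min and max
caps <- c(min(d), d[-length(d)] + diff(d)/2, max(d))
# caps is 0.0 0.5 1.5 2.5 3.5 4.0

# Turn the (hypothetical) predictive density into probabilities of each
# block by differencing the CDF at the caps
cdf  <- pnorm(caps, mean = 2.8, sd = 0.6)
prob <- diff(cdf)
names(prob) <- d
round(prob, 3)
```

Note the probabilities will not sum to 1, because any mass the normal puts below 0 or above 4 is simply dropped: this is the leakage in miniature.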

Mixing code and output in the obvious way:

d=as.numeric(levels(x$cgpa))   
caps = d[-length(d)] + diff(d)/2
caps = c(min(d),caps,max(d))

# output
> caps
[1] 0.0 0.5 1.5 2.5 3.5 4.0

I used caps because R’s native cut() function is inconvenient here (it outputs factors, and we want numbers). Let’s next recreate the picture of the predictions for good and bad students. Then we’ll add on top of it the regression probabilities.

y = data.frame(cgpa = c("4","4"), sat = c(400,1500), hgpa = c(1,4)) # bad and good student
a = predict(fit, newdata = y, type='prob')$p # our multinomial model's predictions

plot(levels(x$cgpa), a[1,], type='h', xlab='College GPA', ylab='Probability', ylim=c(0,1.1*max(a)), lwd=5)
  lines(levels(x$cgpa), a[2,], type='h', lty=2, col=2, lwd=3)
  grid()
  legend('topright', c("SAT =  400, HGPA = 1","SAT = 1500, HGPA = 4"), lty=1:2, col=1:2, bty='n', lwd=2)

s1 = obs.glm(fit.n, y[1,]) # the prediction from the regression
s2 = obs.glm(fit.n, y[2,])

# plotting the probability DENSITIES, NOT PROBABILITIES (scaled by .7 to fit)
s = sqrt(s1$scale^2 + s2$scale^2)
w = seq(min(s1$central, s2$central) - 3*s, max(s1$central, s2$central) + 3*s, length.out = 100)
lines(w, .7*dt((w - s1$central)/s1$scale, s1$df)/s1$scale, lty=1, lwd=2)
lines(w, .7*dt((w - s2$central)/s2$scale, s2$df)/s2$scale, lty=2, col=2, lwd=2)

# regression probabilities at the caps
p = matrix(0, 2, length(levels(x$cgpa))) # storage
for (i in 1:2){
  b = obs.glm(fit.n, y[i,])
  d = pnorm(caps, b$central, b$scale) # CDF of the prediction at each cap
  p[i,] = diff(d) # d[1] is the prob below 0; 1 - (sum(diff(d)) + d[1]) is the prob above 4
  lines(as.numeric(levels(x$cgpa)) + .05, p[i,], col=i+2, type='h', lty=i, lwd=3)
}

Green solid is for the bad student, regression model; blue dashed is for the good student, regression model. You can see the spikes follow, more or less, the densities. The black solid is our original multinomial model for the bad student, and the red dashed the good student.

Which model is better? There is no way to tell, not with this plot. They are simply two different predictions from two different models. The only way to gauge goodness is to—all together now—wait for new data which has never been seen or used in any way, and then compare both models’ predictions against what happened.

Nevertheless, we can gather clues and build another, different predictive model, which predicts how well our original models will perform. Be sure you understand the distinction! We have an original predictive model. Then we create a model on how well this original predictive model will perform. These are not the same models! This right here is another major departure of the predictive method over classical counterparts.
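As a sketch of what “compare both models’ predictions against what happened” can look like, here is one common scoring rule, the logarithmic score, where smaller penalties are better. This is only an illustration of the idea, not necessarily the verification we will use later, and all the numbers are invented:

```r
# Log score: the penalty is -log of the probability the model gave to
# the outcome that actually happened
log_score <- function(pred, outcome_index) -log(pred[outcome_index])

# Hypothetical predictions from two models over CGPA = 0, 1, 2, 3, 4
model_a <- c(0.05, 0.10, 0.20, 0.40, 0.25)
model_b <- c(0.10, 0.15, 0.25, 0.30, 0.20)

# Suppose a new student's observed CGPA turned out to be 3 (index 4)
log_score(model_a, 4)  # model A's penalty
log_score(model_b, 4)  # model B's penalty is larger, so A did better here
```

In practice each model would be scored over many new students and the accumulated penalties compared.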

Let’s first look at the regression predictions a little more closely (mixing code and output):

> p
             [,1]        [,2]       [,3]        [,4]         [,5]
[1,] 2.152882e-01 0.568404758 0.12265228 0.002511169 4.024610e-06
[2,] 1.109019e-06 0.001036698 0.07554629 0.511387677 2.646423e-01

> rowSums(p)
[1] 0.9088604 0.8526141

The first row is the bad student (low SAT and HGPA) at each of the five CGPAs (here labeled by their matrix notation), and the second the good student (high SAT and HGPA). The rowSums(p) gives the total probability of all possibilities. This should be 1 (right?). It isn’t; it is less, and that is because of probability leakage.

You can see that leakage is conditional on the assumptions, just like probability. Leakage isn’t constant. (It also depends on how we define our caps/cut points.) We earlier guessed, looking only at the densities, that it wasn’t that bad. But it turns out to be pretty large. We’re missing about 10% probability from the first prediction, and about 15% from the second. These are the probabilities for impossible events.
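The leakage can also be computed directly as the mass the normal puts outside the possible 0 to 4 range. A sketch, using invented central and scale parameters as stand-ins for the obs.glm() output:

```r
# Hypothetical predictive parameters for the bad and good student
# (invented numbers, not the fitted model's output)
central <- c(0.9, 3.4)
scale   <- c(0.7, 0.6)

leak_low  <- pnorm(0, central, scale)      # mass below the impossible CGPA of 0
leak_high <- 1 - pnorm(4, central, scale)  # mass above the impossible CGPA of 4
leakage   <- leak_low + leak_high
round(leakage, 3)  # roughly 0.099 and 0.159 for these invented parameters
```

Notice the bad student leaks mostly below 0 and the good student mostly above 4, which is why leakage is conditional on the assumptions and not constant.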

The leakage does not mean the model is useless, but it is evidence the model is suboptimal. Nor does it mean our multinomial model is better, though at least the multinomial model has no leakage. The multinomial model, for instance, might give large probabilities to events that did not happen, while the leaky regression model gives decent probabilities to what happened. We’ll have to see.

And that’s what we’ll do next time. It’s too much to try and begin formal verification this week.

Homework Assume the Dean wants to do CGPAs at 0, 0.5, …, 3.5, and 4. Rerun the code from the beginning, and end with the plot seen above, complete with the regression model using the obvious cut points. Notice anything different?