William M. Briggs

Statistician to the Stars!


Race Norming At Universities And The Doctrine Of Unintended Consequences


The Los Angeles Times had a story about how hard East Asians have to work to be admitted to college. Problem is these kids are too smart, an attitude which is not in the best spirit of Diversity.

The most interesting (to us) part comes in discussing a study:

Lee’s next slide shows three columns of numbers from a Princeton University study that tried to measure how race and ethnicity affect admissions by using SAT scores as a benchmark. It uses the term “bonus” to describe how many extra SAT points an applicant’s race is worth…

African Americans received a “bonus” of 230 points, Lee says…

“Hispanics received a bonus of 185 points.”…

Asian Americans, Lee says, are penalized by 50 points — in other words, they had to do that much better to win admission.

No word about whites, but presumably their scores remain unadjusted, a natural assumption given that more East Asians score higher on IQ tests than whites, and that more Hispanics and blacks score lower. Plus, one group has to be the comparator or “base”.

IQ and SAT are related, of course, and it’s of some interest to understand how knowledge of a person’s SAT score gives information about his IQ.

There are several sites that give “maps” of SAT to IQ, but since I’m not especially familiar with the literature on this topic, I’m taking—and you should take—their guidance with a grain or two of salt.

One site (which uses a table found in several places) says an SAT change of 50 points corresponds to an IQ change of about 4 points, whereas a change of 185 points corresponds to an IQ change of some 13 points, and a change of 230 points corresponds to about 18 points.
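As a sketch only, the conversions just quoted can be written as a small interpolation function. The anchor pairs below come from the numbers above; the piecewise-linear interpolation between them is my assumption, not anything the table claims.

```python
# Rough SAT-points-to-IQ-points conversion. The anchor pairs come from the
# conversions quoted above; linear interpolation between them is an assumption.
ANCHORS = [(0, 0), (50, 4), (185, 13), (230, 18)]  # (SAT delta, IQ delta)

def sat_delta_to_iq_delta(sat_delta):
    """Interpolate an approximate IQ-point change from an SAT-point change."""
    for (x0, y0), (x1, y1) in zip(ANCHORS, ANCHORS[1:]):
        if x0 <= sat_delta <= x1:
            return y0 + (y1 - y0) * (sat_delta - x0) / (x1 - x0)
    # Beyond the last anchor, extrapolate at the final slope.
    (x0, y0), (x1, y1) = ANCHORS[-2], ANCHORS[-1]
    return y0 + (y1 - y0) * (sat_delta - x0) / (x1 - x0)
```

Feeding it the 230-point “bonus” gives back the roughly 18 IQ points mentioned above; remember the whole mapping deserves its grain or two of salt.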

These conversions seem in the ballpark given Rushton and Jensen, who, using data from 2000, report (Table 3) blacks have an average IQ of 85, whites 102, and East Asians 106.

In other words, the study quoted in the LA Times looks to have it about right.

Consequence? Accepting Rushton and Jensen’s figures and the IQ-SAT map, the average East Asian kid has an IQ 21 points higher than the average black. That’s in the street. In the incoming Elite U. freshman class, Asians have to be smarter than average, thus they’ll have IQs not 21 but about 25 points higher than blacks. The exact numbers are open to question, but not the direction of effect: Asians have to score higher.

Race norming thus exacerbates any real-life intellectual discrepancy, putting things even more in favor of the East Asian college student. Meaning, any acrimony caused by differences in intellectual achievement will be magnified by these do-gooding (yes, do-gooding) policies.

Isn’t that curious?

This is also premised on more-or-less constant percentages of each racial group in elite colleges, i.e. quotas. On the other hand, if a given university wants to up its percentage of blacks and Hispanics, the discrepancies will become wider, and whites and East Asians will do even better in comparison. And, if a college education has any meaning, these wider discrepancies will carry forward past college to, say, hiring decisions. This in turn puts pressure on society, which wants to maintain population-percentage-based quotas, to indulge in more affirmative action do-gooding.

Mitigating these effects is the expansion in the percentage of kids going to college, which has the effect of lowering the average IQ of incoming students. Why? In the days of yore when college was a special treat for a certain layer of society, the average IQ was high. Contrast this to our rapidly approaching Egalitarian Future where every single kid will or must enroll. The average IQ must necessarily shrink.

Colleges have been tackling the diminishing intellectual capital problem by creating easier courses and novel majors. These include “African American Studies”, “Science in Society”, “Feminist, Gender, and Sexuality Studies” and so forth.

For instance, Barnard College “is considering slashing its foreign-language and science requirements while requiring students to complete a new requirement on ‘diversity’”. Why? “During hearings last fall, several students had complained that taking mid-level classes to fulfill the language requirement was ‘too hard.’” Poor things. And don’t forget a “diversity” requirement brings the thrill of allowing “students to escape an exclusively ‘Western’ worldview.”

If you find yourself short of breath upon reading this, recall that any member of any race can have any possible IQ, and that we have done much hand waving.

Sampling Variability Is A Screwy, Misleading Concept

A statistician about to draw snakes from an urn.

Because of travel and jet lag, exacerbated by “springing forward”, we continue our tour of Summa Contra Gentiles next week.

If you can’t read the tweet above, it says “Pure data analysis cannot kill inference. Sampling variability cannot be hidden!!!” This was in response to my “Journal Bans Wee P-values—And Confidence Intervals! Break Out The Champagne!” post.

Sampling variability is a classical concept, common to both Bayesian and frequentist statistics, a concept which is, at base, a mistake. It is a symptom of the parameter-focused (obsessed?) view of statistics and causes a mixed-up view of causation.

There are 300-some million citizens living in these once United States. Suppose you were to take a measure of some characteristic of fifty of them. Doesn’t matter what, just so long as you can quantify it. Step two is to fit some statistical model to this measurement. Don’t ask why, do it. Since we love parameters, make this a parameterized probability model: regression, normal or time series model, whatever. Form an estimate (using whichever method you prefer) for the parameters of this model.

Now go out and get another fifty folks and repeat the procedure. You’ll probably get a different estimate for the model parameters, as you would if you repeated this procedure yet again. Et cetera. These differences are called “sampling variability.” There is no problem thus far.
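A minimal simulation of the procedure just described (the normal stand-in population and the mean-as-lone-parameter model are my own choices for illustration):

```python
import random
import statistics

# Repeatedly sample fifty "citizens" from a stand-in population and refit a
# trivial model (the mean as the lone parameter). The estimates wobble from
# sample to sample: that wobble is "sampling variability".
random.seed(42)
population = [random.gauss(100, 15) for _ in range(100_000)]

estimates = [statistics.mean(random.sample(population, 50)) for _ in range(5)]
print(estimates)  # five slightly different parameter estimates
```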

Next step is to imagine collecting our measurement on all citizens. At this point there would be no need for any statistical model or probability. Our interest was this group of citizens and none other. And we now know everything about them, with respect to the measurement of interest. Of course it depends on the measurement, but it’s not likely that every citizen has the same measurement (an exception is “Is this citizen alive?” which can only be answered yes for members of the group now living). The inequality of measurement, if it exists, is no matter: the entire range of measurements is available and anything we like can be done with it. Probability is not needed.

So why do I say sampling variability is screwy?

Why did we take the measurements in the first place? Was it to learn only about the fifty citizens polled? If that’s true, then again we don’t need any statistical models or probability, because we would then know everything there was to know about these fifty folks with respect to the measurement. There is no need to invoke sampling variability, and no need for probability.

If our goal wasn’t to say something about only these fifty, then the measurements and models must have been to say something about the rest of the citizenry, n’est-ce pas? If you agree with this, then you must agree that sampling variability is not the real interest.

To emphasize: the models are created to say something about those citizens not yet seen. There is information in the parameters of the model about those citizens, but it is only indirect and vague. There can be information in the internal metrics like p-values, Bayes factors, or other model fit appraisals, but these are either useless for our stated purpose or they overstate, sometimes quite wildly, the uncertainty we have in the measurement for unseen citizens.

That means we don’t really care about the parameters, or the uncertainty we have in them, not if our true interest is the remaining citizens. So why so much passionate focus on them, then? Because of the mistaken view that the measures (of the citizens) are “drawn” from a probability distribution. It is these “draws” that produce, it is said, the sampling variability.

The classical (frequentist, Bayesian) idea is that the measures are “drawn” from a probability distribution—the same one used in the model—that the measures are “distributed” according to the probability distribution, that they “follow” this distribution, that they are therefore caused, somehow, by this distribution. This distribution is what creates the sampling variability (in the parameters and other metrics) on repeated measures (should there be any).

And now we recall de Finetti’s important words:

PROBABILITY DOES NOT EXIST.


If this is so, and it is, how can something which does not exist cause anything? Answer: it cannot.

The reality is that some thing or things, we know not what, caused each of the citizens’ measures to take the values they do. This cannot be a probability. Probability is a measure of uncertainty, the measure between sets of propositions, and is not physical. Probability is not causality. If we knew what the causes were we would not need a probability model, we would simply state what the measurements would be because of this and that cause.

Since we don’t know the causes completely, what should happen is that whatever evidence we have about the measurements leads us to adopt or deduce a probability model which says, “Given this evidence, the possible values of the measure have these probabilities.” This model is updated (not necessarily in the sense of using Bayes’s theorem, but not excluding it either) to include the set of fifty measures, and then the model can and should be used to say something about the citizens not yet measured.

Since I know some cannot think about these things sans notation, I mean the following. We start with this:

     [A] Pr( Measures take these values in the 300+ million citizens | Probative Evidence),

where the “probative evidence” is what leads us to the probability model; i.e. [A] is the model which tells us what probabilities the measures might take given whatever probative evidence we assume. After observations we want this:

     [B] Pr( Measures take these values in remaining citizens | Observations & Probative Evidence).

This gives the complete picture of our uncertainty given all the evidence we decided to accept. Everybody accepts observations, unless doubt can be cast upon them, but the “Probative evidence” is subject to more argument. Why? Usually the model is decided by tradition or some other non-rigorous manner; but whatever method of deciding the initial premises is used, it produces the “Probative evidence.”

There is thus no reason to ever speak of “sampling variability.” If we do happen upon another set of measurements—no matter the size: only theory insists on equal “n”s each time—then we move to this:

     [C] Pr( Measures take these values in remaining citizens | All Observations thus far & Probative Evidence).

Once we measure all citizens, this probability “collapses” to probability 1 for each of the measures: e.g., “Given we measured all citizens, there is a 100% chance exactly 342 of them have the value 14.3,” etc.
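To make [B] and [C] concrete, here is a minimal sketch for a yes/no measurement. The beta-binomial model below is my illustration of a parameter-free predictive probability, not a model from the post: with a flat prior the parameter integrates out, and what remains is a probability for unseen citizens.

```python
# Posterior predictive probability for a yes/no measurement, with the
# Beta(a, b) parameter integrated out. With a = b = 1 (a flat prior) this
# reduces to Laplace's rule of succession: the output speaks of unseen
# citizens, never of any parameter.
def predictive_prob_yes(successes, n, a=1.0, b=1.0):
    """Pr(next citizen measures 'yes' | n observations & probative evidence)."""
    return (a + successes) / (a + b + n)

print(predictive_prob_yes(30, 50))  # 31/52, about 0.596
```

With no observations at all, the function returns the pure [A]-style probability of 0.5; each new batch of measurements just moves us along to the next [C].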

Sampling variability never enters into the discussion because we always make full use of the evidence we have to say something about the propositions of interest (here, the measurement on all citizens). We don’t care about the interior of the models per se, the parameters, because they don’t exist in [C] (either they never exist, which is ideal, or they do as an approximation and they are “integrated out”). Neither does [C] say what caused the measures; it only mentions our uncertainty in unseen citizens.

The measure is not “distributed” by or as our model; instead, our model quantifies the uncertainty we have in the measure (given our probative premises and observations).


The incorrect idea of “drawing from” probability distributions began with “urn” models, an example of which is this. Our evidence is that we have an urn from which balls are to be drawn. Inside the urn are 10 black and 15 white marbles. Given this evidence, the probability of drawing a white marble is 15/25.

Suppose we drew out a black; the 10/25 probability did not cause us to select the black. The physical forces that mixed the balls from their initial placement, together with the constitution of the marbles themselves and the manner of our drawing, caused the draw. This is why we do not need superfluous and unduly mystical words about “randomness.”

We don’t need sampling variability here either. If we draw more than one marble, we can deduce the exact probability of drawing so-many whites and so-many blacks, with or without considering we replace the marbles after each draw. This isn’t sampling variability, merely the observational probability [C]. And, of course, there are no parameters (and never were).
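The exact probabilities just mentioned can be counted directly. For draws without replacement from the urn in the text (10 black, 15 white), the count of whites follows the hypergeometric formula; the function below is a sketch of that counting.

```python
from math import comb

# Exact probability of drawing a given number of whites from an urn of
# 15 white and 10 black marbles, without replacement: pure counting,
# no parameters and no "sampling variability".
def prob_whites(n_white, n_draws, whites=15, blacks=10):
    """Pr(exactly n_white whites in n_draws draws without replacement)."""
    total = whites + blacks
    return comb(whites, n_white) * comb(blacks, n_draws - n_white) / comb(total, n_draws)

print(prob_whites(1, 1))  # 15/25 = 0.6
```

This is just the observational probability [C] in miniature: the probabilities over all possible counts sum to one, deduced from the premises alone.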

If you get stuck, as many do, thinking about “randomness” and causality, change the urn to interocitors which can only take two states, S1 or S2, with 10 possibilities for the fictional device to take S1 and 15 for S2. Probability still gives us the (same) answer because probability is the study of the relations between propositions, just like logic, even though interocitors don’t exist. Think of the syllogism: All interocitors are marvelous and This is an interocitor; therefore, This is marvelous. The conclusion given the premises is true, even though there are no such things as interocitors. See these posts for more.


Reports are that the Japanese are hard at work inventing new orientation letters for export.

I admit it. I am afraid of the sort of person who boasts of being a card-carrying LGBTTQQFAGPBDSM member. I would not want to be in a room alone with one of them, nor would I leave any under thirty in their care unsupervised.

It’s not that I’m afraid of my person or my soul, you understand, but I would positively dread having to listen to a LGBTTQQFAGPBDSM enthusiast chatter away on “gender theory”, perhaps the most stultifying subject known to mankind—excepting speeches by Nancy Pelosi, of course. Can you even bring yourself to imagine being lectured on, say, “Feminist solidarity after queer theory: the case of transgender“?

It is by now clear that feminist politics needs to speak to (and be spoken by) many more subjects than women and men, heterosexual women and lesbians. How—in theory and in practice—should feminism engage bisexuality, intersexuality, transsexuality, transgender, and other emergent identities that reconfigure both conventional and conventionally feminist understandings of sex, gender, and sexuality?

Maybe that passage would have been just tolerable as a way to pass the time before the warden came to get you for your final walk, except for a footnote appended onto “transsexuality”:

There is some political debate about whether “transexual” is a spelling preferable to “transsexual.” Some critics have suggested that an integrated rather than a compound noun avoids the problematic implication that transsexuals “cross sexes.”

Incidentally, that’s exactly the sort of thing many predicted we could get as a substitute for scholarship when universities went on their female-hiring binges to fulfill their quota of that sex (or is it ssex?) and so avoid charges of femaleaphobia.

All this was on my mind when I contemplated the new LGBTTQQFAGPBDSM theme house at Wesleyan university, a real place. Wesleyan, I said, named after John “Method in his Madness” Wesley. Would he be proud? Boy.

Open House is a safe space for Lesbian, Gay, Bisexual, Transgender, Transsexual, Queer, Questioning, Flexual, Asexual, Genderf**k, Polyamourous, Bondage/Disciple, Dominance/Submission, Sadism/Masochism (LGBTTQQFAGPBDSM) communities and for people of sexually or gender dissident communities. The goals of Open House include generating interest in a celebration of queer life from the social to the political to the academic. Open House works to create a Wesleyan community that appreciates the variety and vivacity of gender, sex and sexuality. [Edited]

Flexual? Must have something to do with those sweat-soaked cushy yoga napkins carried everywhere by women wearing ridiculously colored stretchy pants. Fashion hint: Ladies, no, you do not look good.

And, say, are pedophiles and woofies considered members of “gender dissident communities”? I’d guess yes.

That’s a better question than it first appears. Are there any behaviors that are still considered perversions? You can’t go by us uneducated civilians who still suffer from our various phobias, but do any academically trained gender theorists admit the existence of perversion? I don’t know the answer. Maybe one of you could wander into the Wesleyan LGBTTQQFAGPBDSM safe house and ask one of its habitues about that subject. If this person grows angry with you, punch this person in the nose. If this person complains, tell her/him/hyr/em/it you’re a sadist and got sexual satisfaction out of it. This person will have to let you go.

Now for the real reason for this column. To boast and brag. A year ago a contest to guess the next “orientation” letter was launched here.

What will be the next letter? Until recently, smart money was on a second B for “bestiality.” Letters do not have to be unique; but if they did, then Z for “zoophile”, their preferred term. In Sweden and Germany, “Here, boy!” carries a different connotation than Stateside. But bestiality, while legal in those countries, is under fire from animal “rights” activists who are concerned about emotional scars on the Fluffies of the world.

I later modified that Z to W for woofies, which is a politer label.

None of us guessed the exact string used by Wesleyan—who could have?—but that there would be an agglomeration to LGBT most of us correctly forecast. And since our forecast was right, it is good evidence that the theory which drove the forecast is true. That theory is the loss of essence, about which more another time.

We end with a quiz. Without peeking above, what were the exact letters and order of those letters in the new Wesleyan house of non-judgmentalism? Bet you don’t remember!

Journalist Bias For Sale Vs. Academic Freedom: More On The Soon Pseudo-Controversy

“Organizations cannot make a genius out of an incompetent.”

Two parts to this. An email exchange with a reporter, and the column he eventually wrote. Regular readers know I’ve been documenting these so other people can see just how honest, bright, unbiased, and flexible—or their opposites—members of the fourth estate are.

The reporter’s eventual column is duller than a rebuttal speech to a State of the Union address, so I quote from it only lightly. All I want to demonstrate is that reporters aligned with the mainstream are nearly unteachable.

Email exchange with yet another reporter

Dante Ramos of the Boston Globe emailed on 23 February and said:

I’m a columnist for the Globe’s op-ed page, and I hoped to check in with you about the recent stories in our paper and the New York Times. I was particularly interested in how the debate about your work relates to the concept of academic freedom.

About the column he wrote, more in a minute. First, I answered:

Mr Ramos,

Now that is an excellent question.

The topic is large, but consider the effect government money has on science. Somehow, and against direct evidence to the contrary, funds from government are seen as pure and uncorrupting. And the same is largely true about the lucre from non-profits, such as the cult-like Greenpeace, an organization with an extreme ideology and long history of despicable (and illegal) behavior.

Yet money from private individuals and companies is uniformly seen as tainting.


People who receive (say) NIH or NSF grants pass a review process, but that is also true of private disbursements. The difference is that the government grant recipients of today are tomorrow’s government grant reviewers, and vice versa. The cliché Old Boys’ Club is not inappropriate.

The government hands out billions, and it is it that largely decides the course of science. This can be and has been for the good. But also for the bad, as when politics enters the system. Nowhere is this more obvious than with global warming. Even good scientists feel pressure to conform to The Consensus. Better to keep quiet and keep your reputation—and keep the grants flowing.

You’ve doubtless heard about President Eisenhower’s farewell speech in which he warned of the “military-industrial complex.” But less well known, and in that same speech, he also said this:

“Today, the solitary inventor, tinkering in his shop, has been overshadowed by task forces of scientists in laboratories and testing fields. In the same fashion, the free university, historically the fountainhead of free ideas and scientific discovery, has experienced a revolution in the conduct of research. Partly because of the huge costs involved, a government contract becomes virtually a substitute for intellectual curiosity. For every old blackboard there are now hundreds of new electronic computers.

The prospect of domination of the nation’s scholars by Federal employment, project allocations, and the power of money is ever present — and is gravely to be regarded.

Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.”

Gravely to be regarded indeed.

Now as I have told your Sylvan Lane, and many many other reporters, for our paper “Why models run hot”, we received no money from anybody. We asked nobody for anything. Nobody offered us anything. We did this paper on our own time driven only by our intellectual curiosity. We had no conflict of interest of any kind. And we said so. This is real academic freedom.

The whole flap over this paper is ideologically generated, as I have extensively documented. Why?

To distract. It is a fundamental unquestionable scientific principle that bad theories make bad predictions. Climate model predictions are lousy. It is thus necessarily true that the theories which underlie these models are bad.

We proposed one reason why this might be so. We might be wrong. But even if we are, it is still utterly certain that climate models are broken and cannot be trusted.

Every scientist used to not only know this, but they said it loudly and clearly. Not so now. Now everybody pretends not to notice how rotten climate models have become. Too dangerous.

There is no intellectual freedom left in climatology except by going outside the system as we did.

I’m happy to answer other questions you might have.

William M. Briggs

He responded:

Dear Dr. Briggs,

Thanks for setting out your thoughts at length. I gather from the
email chain that Dr. Soon forwarded along my message. Just so I
understand, are you speaking on his behalf?

If so, I suppose an obvious question is why he wouldn’t just disclose the funding sources for the papers in question up front, and let the journals involved draw their own conclusions? I gather that Harvard-Smithsonian was aware of where Dr. Soon’s grants were coming from, so that wasn’t a secret…


Finally, me:

I speak for the four of us on the paper “Why models run hot”. Your second set of questions (1) assume the conclusion you wish to ascertain and (2) are not unlike the “Have you stopped beating your wife?” ploy. Try again.

Anyway, you don’t understand. There was NO FUNDING SOURCE for “Why models run hot.” Where by “NO FUNDING SOURCE” I mean “NO FUNDING SOURCE.” I’m not sure how I can make it any plainer.

I don’t want to paint with too broad a brush, but reporters can be awfully lazy and narrow-sighted. You are aware, are you not, that there were four authors on the “Why models run hot” paper? Why are you focusing on just one? Why are you so eager to change the subject from Climate Models Stink to a false accusation about a conflict of interest?

Did you get the point of science I made? If so, this is your big chance. You can start a revolution in the public understanding of science! Now that would make a difference. Every reporter’s dream, eh?

You’ll take a lot of heat from your colleagues, sure—but hey, that’s what bravery is all about.

Ramos’s article

Ramos’s “Academic freedom for sale” appeared on the 24th (and I only saw it two days ago).

Ramos starts by saying some fool (Gillis of the NYT) reported Willie Soon had done wrong. Ramos does not attempt to correct this lie, letting the fact that Soon was accused imply his guilt, a standard journalistic cheap trick. He then mentions academic freedom:

…such freedom is most vital for those who, like Soon, do research that rubs their colleagues the wrong way. But it doesn’t mean researchers should never have to answer for who funds them or how they conduct themselves. “Academic freedom” isn’t an all-purpose excuse, behind which anything goes. And for institutions, the term shouldn’t be bureaucratese for “looking the other way.”

A lot of nonsense. Ramos was told we received nothing for “Why models run hot”, but this factual news was of no interest to him. Ramos then began walking away from the charge that Soon’s employer remained unaware of Soon’s activities (that walk back will be back to the beginning, as it were, and Soon will, of course, be vindicated, because it was Soon’s employer who signed all contracts).

Anyway, Ramos said:

Soon didn’t respond to my request for comment; I did receive an odd e-mail, signed by an associate of his, quibbling with the premise that government grants confer more credibility than funding from corporate interests.

You can see what I wrote above. Ramos could not be taught that, inside the academy, government grants do confer great status—and great monetary and professional rewards. Ramos was also uninterested in how the vast sums injected into the system, as Eisenhower warned, corrupt science.

And science is indeed corrupt. As any disinterested examination of global warming would reveal.


© 2016 William M. Briggs
