
Author: Briggs

October 6, 2018 | 5 Comments

Insanity & Doom Update LIX

Item Student editor who tweeted that ‘women don’t have penises’ is fired from university journal in transgender row

A student editor at a top university has been fired in a transphobia row after he tweeted that ‘women don’t have penises’.

Angelos Sofocleous, assistant editor at Durham University’s philosophy journal ‘Critique’, was sacked from his post after just three days for writing a tweet deemed ‘transphobic’ by fellow students.

Mr Sofocleous, 24, from Cyprus, faced disciplinary action last month after he re-tweeted an article by The Spectator on his Twitter titled ‘Is it a crime to say women don’t have penises?’, with the comment: ‘RT if women don’t have penises’.

The postgraduate philosophy and psychology student was dismissed from his position at the university after the tweet sparked outrage.

We’ve had many other items proving Reality is now illegal in England. This one is not surprising.

What’s unique about this item is that the “outrage” (acknowledging most who use the word are lying) came from the “free speech society Humanist Students.” Seems “former chair of LGBT Humanists Christopher Ward” said Sofocleous’s tweet “was ‘factually incorrect’ and not ‘worthy of a debate’.”

Factually incorrect? Yes, sir. Factually.

Everybody knows, and it’s true here anyway, that “humanist” is a euphemism for atheist. So we have atheists leading the way in denying Reality. Now we have atheists insisting it is a fact that some women have penises.

Atheism is, of course, a triumph of the will. If you can willfully deny God, you can deny Reality with ease. Even a moment’s thought will confirm to you that it is those people who deny God who lead the way in denying Reality.

Item Beware: transparency rule is a Trojan Horse (Thanks to Jonathan Witt for the tip.)

Last month, the US Environmental Protection Agency (EPA) proposed a new rule to “ensure that the regulatory science underlying Agency actions is fully transparent, and that underlying scientific information is publicly available in a manner sufficient for independent validation”. The alleged justification is a crisis in science over replicability and reproducibility.

At face value, the proposal might seem reasonable. It isn’t.

Many EPA watchers believe that the rule targets long-term epidemiological studies that linked air pollution to shorter lives and were used to justify air-quality regulations. In my view, the rule could keep that and other high-quality evidence from being used to shape regulations, even if there are legitimate reasons, such as patient privacy, why some data cannot be made public. It could potentially retroactively exclude an enormous amount of respected evidence. This would make the EPA less able to serve its function “to protect human health and the environment”. The window for speaking up is closing fast.

In other words, please don’t let them look at the evidence, because when they do we might have to abandon the policies we claimed flowed from that evidence. “Trust us,” she seems to be saying, “For we know what is best for you. And we don’t want you to know why we know.” Also, it can’t be a coincidence that a woman who looks like a horse warns of horsiness.

Item One-third of adults may need blood pressure drugs under new guidelines (Thanks to Forbes Tuttle for the tip.)

One out of every three U.S. adults has high blood pressure that should be treated with medication, under guidelines recently adopted by the two leading heart health associations.

The American College of Cardiology and American Heart Association redefined high blood pressure at 130/80 in November, down from the previous level of 140/90, based on new evidence supporting a lower threshold.

Under the new guidelines, nearly 46 percent of U.S. adults now would be considered to have high blood pressure, a new study reported.

Further, 36 percent would be recommended for blood pressure medication, the study authors said…

Full implementation of the new guidelines would mean 156,000 fewer deaths each year, and 340,000 fewer heart attacks, strokes and other heart-related ailments, the researchers concluded.

Well, why not lower it to 125/75 and be even extra super safe? We will have 156,387 fewer deaths each year (based on my statistical estimate). And 156,387 is more than 156,000. And really, isn’t it worth it if we save even one life? Of course, even more will need medication, which either they or the government will have to pay for. Americans are increasingly fat and unfit, as all know. But why not insist they get off their duffs and stop eating crap instead of ingesting expensive drugs?

October 5, 2018 | 4 Comments

Just When You Thought It Was Safe To Go Outside, It’s Gaia 2.0!

James Lovelock’s Gaia hypothesis, via Wikipedia, is that “living organisms interact with their inorganic surroundings on Earth to form a synergistic and self-regulating, complex system that helps to maintain and perpetuate the conditions for life on the planet.”

That life “interacts” with its environment is impossible-not-to-be true. Even dead things “interact” with the environment. Simply to exist is to interact. A rock sitting alone on a far hillside interacts with the hill. Self-regulation is another thing entirely. That implies a goal for all life, which might be true, but is an idea disconcerting to rabid evolutionists who are frightened by teleology. To others, the Gaia hypothesis says the earth itself is alive, which is scientific paganism. Given the many major extinction events, before man, the earth is a sickly creature if it’s alive, and unsure of its goal.

Gaia 1.0 came, had a heyday, and slowly petered out, as these things do. Yet there are those who would resurrect it. Like Vatican favorite Hans Schellnhuber, who not only suggests the earth is alive, but that it is rational—and pissed off.

Then there are Timothy M. Lenton and Bruno Latour, who are responsible for Gaia 2.0, an article they managed to foist on the inaptly named Science magazine (that publication mainly does social justice with scientific asides these days).

The earth, they say, “has now entered a new epoch called the Anthropocene”, which is a propaganda term meaning man is the dominant species, which everybody already knew. When buffalo were the dominant species 1,000 years ago, or whatever beastie ruled in number, it was not called the Bisonopocene. Yet we are asked to be surprised that man interacts with his environment. Which he cannot but do. It’s well to insist that every creature remakes his environment to suit him: ants, aardvarks, and acacia, as well as man.

“By emphasizing the agency of life-forms and their ability to set goals, Gaia 2.0 may be an effective framework for fostering global sustainability.”

This is an example of the sort of gibberish that now passes for science. The goals of most life forms are humble. A blade of grass wants sun, water, and carbon dioxide to eat. The grass doesn’t set this goal, though: whatever agency was responsible for creating the universal form of grass did. (And that couldn’t have been “evolution”.) What exactly precisely is “sustainability”? I have an answer, but if Lenton and Latour have one, they never give it. It functions as yet another propaganda word. Anything can be proclaimed as “unsustainable”.

“Gaia was built by adaptive networks of microbial actors that exchanged materials, electrons, and information, the latter through ubiquitous horizontal gene transfer. These microbial networks form the basis of the recycling loops that make up global biogeochemical cycles.”

Apparently these fellows believe that only man can use up resources. Have they never heard of locusts? Forest fires? That other beasties spoil or soil their environments is no excuse for rapacity, of course. Nor does eschewing SJW science mean embracing “consumerism”, man’s form of a locust plague. Man is not unique except for his rationality, and the question must be “how much” instead of “if.”

Some of Earth’s climate self-regulation mechanisms are purely physical and chemical, but many involve biology. On time scales of hundreds of thousands of years, changes in global temperature are counteracted by biologically amplified changes in the removal of CO2 by silicate weathering. On intermediate time scales of millennia…[blah blah blah]

In all this they have forgotten the sun. Odd, that. They recall high CO2 levels from long ago, but forget the direction of cause and effect (as many do: usually increases in temperature presaged increases in CO2: golly).

What is Gaia 2.0 really about? Here’s a hint.

Implementation of alternative forms of climate control to reduce production of CO2 or augment existing feedbacks depends on who is in charge of such voluntary activity. The results would clearly be different if the Intergovernmental Panel on Climate Change, President Putin, the California legislature, or President Trump had their finger on the proverbial thermostat. In reality, all these agents and many others have some grip on the thermostat, and their combined effect is not simple to predict.

Look who their hero is. Gaia 2.0 is nothing more than standard progressive politics. How boring.

They say “Human flourishing is not possible without a biodiverse, life-sustaining Earth system. This is recognized in the United Nations’ 17 Sustainable Development Goals.” Just what is the best state of “biodiversity”? Are more species always better than fewer? Why? Exactly why. And what precisely exactly does “life-sustaining Earth system” mean? I’ll tell you: more propaganda.

“Yet, maintaining a self-regulating, human life—supporting planet is not the primary goal of some dominant modes of collective human activity today.”

That’s because some humans recognize these ploys for what they are: power grabs and attempts to foist an unwanted ideology on us.

October 4, 2018 | 18 Comments

Insanity & Doom Update LVIII — Special Midweek Doom

There is more Doom than we can shake a holy-water-doused crucifix at. Maybe we should devote both Saturdays and Wednesdays to Doom.

Item How three scholars gulled academic journals to publish hoax papers on ‘grievance studies.’

Such hoaxes are unethical, and The Wall Street Journal doesn’t condone them.

Good grief, what effeminacy. But of course we all remember the effect of the Sokal hoax. That’s right: besides a few giggles, none at all.

Item The Senate Should Not Confirm Kavanaugh: Signed, 1,000+ Law Professors (and Counting)

The left is now screaming like sissies that Kavanaugh reacted angrily after being hit. This is more effeminacy. I pray you see that. Anyway, that “professors” are against him is, these days, a sign he must be a good man.

Item California, working hard, discovers a new way to destroy culture, industry.

“CA Gov. Jerry Brown signs bill requiring corporate boards to include women, saying despite flaws in measure “recent events in Washington DC– and beyond– make it crystal clear that many are not getting the message” on gender equality.”

It’s one small step from mandating the testosterone-deficient be given board memberships to mandating, say, trannies join the club.

Item Britain’s most senior female bishop says Church should stop calling God ‘he’ because it can put young people off religion (Thanks to reader Mavis Emberson for the tip.)

Britain’s most senior female bishop said the church should avoid referring to God only as ‘he’ after a survey found young Christians assume God is male…

The Bishop of Gloucester, the Rt Rev Rachel Treweek, the Church’s first female diocesan bishop, told The Telegraph: ‘I don’t want young girls or young boys to hear us constantly refer to God as he,’ adding that it was important to be ‘mindful of our language’.

It is not the first time Rt Rev Rachel Treweek has made these claims having said: ‘God is not to be seen as male. God is god,’ in the past.

In 2015 Rev Treweek refused the title of ‘right reverend father’. She has now claimed that non-Christians could feel a sense of alienation from the church if the image of God is painted as solely male, and public announcements are made in only male language to describe God.

Of course, after re-re-re-…-re-acknowledging God is neither a man nor a woman, a fact which nobody denies, we’ll have to toss out all references to God the Father, and forget the painful fact that Jesus was a non-female. The real question is: who could have foreseen that putting a woman in charge of religion would result in her calling for the purging of male metaphors?

Item Mormon blogger says men are ‘100% responsible for unwanted pregnancies’ in powerful Twitter thread (Thanks to reader Kunzipjn for the tip.)

A blogger from Oakland, California has become a viral sensation after sharing her views on abortion on Twitter, in which she argues that “all unwanted pregnancies are caused by the irresponsible ejaculations of men”…

Blair points out that biologically, men cannot impregnate women without experiencing an orgasm and therefore concludes that “getting a woman pregnant is a pleasurable act for men.”

Yes, we have long passed the point at which banalities and idiocies are passing for wisdom. But this is real bottom-of-the-barrel stupidity. A foolish woman discovers that sex can lead to pregnancy, is shocked, and conveys her shock to Twitter, whose readers also convey shock.

We began, as a species, with the full knowledge that sex was for procreation. We eventually came to the idea that sex was for pleasure and procreation an inconvenient side effect, one that could be cured by harmful drugs or by killing the side effect. But this foolish woman, and the explosion of myriad “orientations”, signal we are reaching the point where pregnancy is becoming a mystery. “I have no idea how I got pregnant,” said Thot One. “Did you have sex?” asked Thot Two. “Yeah,” replied Thot One. “But what’s that got to do with it?”

Item Pedophile’s Decapitated Corpse Found On Judge’s Doorstep After Bail Hearing In Aurora, Illinois

William Smith, 28, from Aurora, Illinois was discovered in the early hours of Tuesday morning, decapitated and slumped against the front door of the judge who had granted him bail in August.

Smith was arrested last month following allegations by his then girlfriend that he had raped her 8-year-old daughter.

After a police investigation in which Smith was found in possession of child pornography, he was arrested on two counts related to child pornography and one count of child molestation.

After being charged, Smith walked free from the court after the judge controversially ruled that he did not pose a threat to the local community, and he raised the $30,000 bail required to trigger his freedom…

Aurora police say they are currently “following leads” but have yet to make any arrests for the murder.

Note carefully that this was the perp’s corpse and not the judge’s. This is a case of the government not being seen to do justice. We wonder how this judge will rule on the next similar case.

Item In attempting to rebut Rusty Reno’s dismissal of his book, Jonah Goldberg manages to invoke Hitler. He did wait a few paragraphs, though.

Item Eight of Iran’s women’s football team ‘are men’

Eight of Iran’s women’s football team are actually men awaiting sex change operations, it has been claimed.

It is impossible to change your sex. And all the men are ugly.

October 3, 2018 | 1 Comment

How To Do Predictive Statistics: Part IX Stan — Logistic & Beta Regression

Review!

We’re doing logistic and beta regression this time. These aren’t far apart, because the observable for both lives between 0 and 1; for logistic it is 0 or 1; for beta, any fraction or ratio (but not probability) that is on (0,1) works. We don’t model probability; we use probability to model.

That brings up another blunt point. In these demonstrations I do not care much about the models themselves. I’m skipping over all the nice adjustments, tweaks, careful considerations, and other in-depth details about modeling. Most of that stuff is certainly of use, if tainted by the false belief, shared by both frequentists and Bayesians, that probability exists.

I am not here to teach you how to create the best model for this or that kind of observable. I am not here to teach you best coding practices. I am here to teach you the philosophy of uncertainty. Everything takes an economy-class nonrefundable-ticket seat to that. Because that’s true, it’s more than likely that missed code shortcuts and model crudities will be readily apparent to experts in these areas. You’re welcome to put corrective tips in the comments.

On with the show!

Logistic

Let’s use, as we did before, the MASS package dataset birthwt.


library(MCMCpack)
library(rstanarm)

x=MASS::birthwt # we always store in x for downstream ease
x$race = as.factor(x$race)

fit.m =  MCMClogit(low ~ smoke + age + race + ptl + ht + ftv, data=x)
fit.s = stan_glm (low ~ smoke + age + race + ptl + ht + ftv, family=binomial, data=x)

Last time we didn’t put all those other measures in; this time we do. Notice that we specify the data in both methods: rstanarm needs this, and it’s good practice anyway.



# This is a copy-paste from the MCMClogit lesson; only changing to p.m
p.m = NA
for(i in 1:nrow(x)){
  p.m[i] = MCMClogit.pred(fit.m,x[i,])
}

plot(x$age,p.m,col=x$race,ylab='Pr(low wt|old data,M)', pch=x$smoke+1,main='MCMC')
grid()
legend('topright',c('r1','r2','r3','s0','s1'), col=c(1,2,3,1,1), pch = c(3,3,3,1,2), bty='n')
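
The helper MCMClogit.pred comes from the earlier MCMClogit lesson. If you don't have it in memory, here is a minimal sketch of the idea (my reconstruction, not necessarily the original code): MCMClogit hands back a matrix of posterior coefficient draws, so the predictive probability is the inverse logit of the linear predictor, averaged over the draws.

MCMClogit.pred = function(fit, newdata){
  # build the model-matrix row to match the coefficient order in fit;
  # this assumes the same formula used in fit.m above
  X = model.matrix(~ smoke + age + race + ptl + ht + ftv, data=newdata)
  eta = fit %*% t(X)       # one linear-predictor value per posterior draw
  mean(1/(1+exp(-eta)))    # average inverse logit = predictive probability
}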

Save that plot. Recall that the three races and two smoking states are given different colors and plotting characters. There is more to each scenario than just these measures, as the model statements show. But this is a start.



# This is new
p.s = posterior_predict(fit.s)
plot(x$age,colMeans(p.s),col=x$race,ylab='Pr(low wt|old data,M)', pch=x$smoke+1,main='Stan')
grid()
legend('topright',c('r1','r2','r3','s0','s1'), col=c(1,2,3,1,1), pch = c(3,3,3,1,2), bty='n')


Notice in the plot we have to do colMeans(p.s) to get the probability estimates—this is the tweak I mentioned last time. That’s because p.s contains nothing but 189 columns (same as the original data length) of 0s and 1s. Remember these are predictions! We take the average of the predictions, at each scenario, to get the probability estimate.
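
To see this for yourself (the number of rows depends on the sampler’s defaults; rstanarm’s is 4,000 draws):

dim(p.s)        # draws by scenarios; here something like 4000 x 189
mean(p.s[,1])   # Pr(low wt|old data,M) for the first scenario
                # colMeans(p.s) does every scenario at once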

Stare and see the differences in the plots. We can’t put them all on one so easily here. While there are some differences, my guess is that no one would make any different decision based on them.

For homework, you can use rstanarm to check for measure relevancy and importance. Recalling, as you do so, that these are conditional concepts! As all probability is.

What’s that? You don’t remember how to do that? You did review, right? Sigh. Here’s one way of checking the measure ptl, the number of previous premature labours.



# The commented out lines we already ran; they're in memory
#fit.s = stan_glm (low ~ smoke + age + race + ptl + ht + ftv, family=binomial, data=x)
fit.s.2 = stan_glm (low ~ smoke + age + race +  ht + ftv, family=binomial, data=x)

p.s = posterior_predict(fit.s)
p.s.2 = posterior_predict(fit.s.2)

a.1 = colMeans(p.s) # needed below; earlier we only used colMeans(p.s) inline
a.2 = colMeans(p.s.2)

plot(a.1,a.2, xlab='Full model', ylab='Sans ptl', main='Pr(low wt|old data, Ms)')
  abline(0,1)

Obviously ptl is relevant in the face of all these other measures. Would it be excluding others? Or with new observations? You check. That’s an order: you check. Would you say, as a decision maker interested in predicting low birth weight, that the probabilities change enough such that different decisions would be made using the different models? If so, then ptl is important, and should be kept in this model; i.e. in this form of the model with all the other measures in, too. If not, then it should be chucked.
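
For instance, here is a sketch of such a check, with made-up scenarios that differ only in ptl (the values are invented for illustration):

y = x[c(1,1),]   # duplicate one row as a template
y$ptl = c(0, 3)  # the two scenarios now differ only in ptl
colMeans(posterior_predict(fit.s, newdata=y))   # full model: probabilities differ
colMeans(posterior_predict(fit.s.2, newdata=y)) # sans ptl: identical probabilities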

There is no longer any such thing as hypothesis testing, or model building without reference to the decisions to be made using the model. Beyond raw relevancy, which is trivial to see, model building cannot be independent of decision making.

Beta regression

We can’t do comparisons here, because only rstanarm has this kind of model. It’s for observables living on (0,1), things like ratios, fractions, and the like. The idea (which you can look up elsewhere) is that uncertainty in the observable y is characterized with a beta distribution. These have two parameters—normal distributions do, too. Unlike with normals, here we can model linear functions of both (suitably transformed) parameters. This is done with normals, too, in things like GARCH time series. But it’s not usual for regular regressions.

For beta regression, one or both parameters is transformed (by logging or identity, usually), and this is equated to a linear function of measures. The linear function does not have to be the same for both transformed parameters.

In the end, although there are all kinds of considerations about the kinds of transforms and linear functions, we are really interested in predictions of the observable, and its uncertainty. Meaning, I’m going to use the defaults on all these models, and will leave you to investigate how to make changes. Why? Because no matter what changes you make to the parameterizations, the predictions about observables remain the same. And that is really all that counts. We are predictivists!

We last time loaded the betareg package. We’re not going to use it except to steal some of its data (the rstanarm package doesn’t have any suitable).

library(betareg)
data('GasolineYield', package='betareg')
x = GasolineYield

attr(x$batch, "contrasts") <- NULL # odd line

?GasolineYield # to investigate the data

We're interested in yield: "proportion of crude oil converted to gasoline after distillation and fractionation" and its uncertainty relating to temperature and experimental batch. There are other measures, and you can play with these on your own.

The "odd line" removes the "contrasts" put there by the data collectors, and which are used to create contrasts in the parameters; say, checking whether the parameter for batch 2 was different than for batch 10, or whatever. We never care about these. If we want to know the difference in uncertainty in the observable for different batches, we just look. Be cautious in using canned examples, because these kinds of things hide. I didn't notice it at first and was flummoxed by some screwy results in some generated scenarios. People aren't coding these things with predictions in mind.


fit = stan_betareg(yield ~ batch + temp | temp, data=x)
p = predictive_interval(fit)


The first transformed parameter is a linear function of batch and temp. The second, everything to the right of the "|", is a linear function of temperature alone. This was added because, the example makers say, the lone-parameter model wasn't adequate.
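
In symbols: the logit-transformed mean gets the first linear function, the log-transformed precision the second. You can spell the links out explicitly; I believe these are the defaults, but check ?stan_betareg:

# Same model with the (assumed) default links made explicit;
# "link" transforms the mean, "link.phi" the precision.
fit = stan_betareg(yield ~ batch + temp | temp,
                   link = "logit", link.phi = "log", data=x)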

How do we check adequacy? Right: only one way. Checking the model's predictions against new observables never before used in any way. Do we have those in this case? No, we do not. So can we adequately check this model? No, we cannot. Then how can we know the model we design will work well in practice? We do not know.

And does everything just said apply to every model everywhere for all time? Even by those models released by experts who are loaded to their uvulas with grant money? Yes: yes, it does. Is vast, vast, inconceivable over-certainty produced when people just fit models and release notes on the fit as if these notes (parameter estimates etc.) are adequate for judging model goodness? Yes, it is. Then why do people do this?

Because it is easy and hardly anybody knows better.

With those true, sobering, and probably to-be-ignored important words, let's continue.


plot(x$temp,p[,2],type='n',ylim=c(0,.6),xlab='Temperature',ylab='Yield')
for(i in 1:nrow(p)){
   lines(c(x$temp[i],x$temp[i]),c(p[i,1],p[i,2]))
   text(x$temp[i],mean(c(p[i,1],p[i,2])), as.character(x$batch[i]))
}
grid()


This picture heads up today's post.

We went right for the (90%) predictive intervals, because these are easy to see, and plotted up each batch at each temperature. It depends on the batch, but it looks like, as temperature increases, we have some confidence (and I do not mean this word in its frequentist sense) that yield increases.
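
The 90% is the default; if your decision calls for different coverage, predictive_interval takes a prob argument:

p.50 = predictive_interval(fit, prob=0.5)  # 50% predictive intervals instead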

Let's do our own scenario, batch 1 at increasing temperatures.


y = data.frame(batch="1",temp=seq(200,450,20))
p.y = predictive_interval(fit,newdata=y)
plot(y$temp,p.y[,2],type='n',ylim=c(0,.6),xlab='Temperature',ylab='Yield')
for(i in 1:nrow(p.y)){
   lines(c(y$temp[i],y$temp[i]),c(p.y[i,1],p.y[i,2]))
}


This is where I noticed the screwiness. With the contrasts I wasn't getting results that matched the original data, when, of course, if I make up a scenario that is identical to the original data, the predictions should be the same. This took me a good hour to track down, because I failed to even (at first) think about contrasts. Nobody bats a thousand.

Let's do our own contrast. Would a decision maker do anything different regarding batches 7 and 8?


y = data.frame(batch="7",temp=seq(200,450,20))
  p.y.7 = predictive_interval(fit,newdata=y)
y = data.frame(batch="8",temp=seq(200,450,20))
  p.y.8 = predictive_interval(fit,newdata=y)

plot(p.y.7[,1],p.y.8[,1],type='b')
  lines(p.y.7[,2],p.y.8[,2],type='b',col=2)
  abline(0,1)


This only checks the 90% interval. If the decision maker has different important points (say, yields greater than 0.4, or whatever), we'd use those. Different decision makers would do different things. A good model to one decision maker can be a lousy one to a second!
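
For example, with the raw predictive draws you can compute the probability of beating any decision point directly (the 0.4 is only an illustration):

y = data.frame(batch="7",temp=seq(200,450,20))
  pp.7 = posterior_predict(fit,newdata=y)
y = data.frame(batch="8",temp=seq(200,450,20))
  pp.8 = posterior_predict(fit,newdata=y)
colMeans(pp.7 > 0.4)   # Pr(yield > 0.4|scenario,M), batch 7
colMeans(pp.8 > 0.4)   # same for batch 8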

Keep repeating these things to yourself.

Batch 7 gives slightly higher upper bounds on the yields. How much? Mixing code and output:


> p.y.7/p.y.8
         5%      95%
1  1.131608 1.083384
2  1.034032 1.067870
3  1.062169 1.054514
4  1.129055 1.101601
5  1.081880 1.034637
6  1.062632 1.061596
7  1.068189 1.065227
8  1.063752 1.048760
9  1.052784 1.048021
10 1.036181 1.033333
11 1.054342 1.028127
12 1.026944 1.042885
13 1.062791 1.037957


Say 3%-13% higher. Is that difference enough to make a difference? Not to me. But what the heck do I know about yields like this? Answer: not much. I am a statistician and am incompetent to answer the question, as is each statistician who attempts to answer it with, God help us, a p-value.
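
Or, to get the range without eyeballing the table:

range(p.y.7/p.y.8)   # about 1.027 to 1.132, per the output above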

At this point I'd ask the client: keep these batches separate? If he says "Nah; I need 20% or more of a difference to make a difference", then relevel the batch measure:


levels(x$batch) = c('1','2','3','4','5','6','7-8','7-8','9','10')

Then rerun the model and check everything again.
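
That is, something like this (a sketch; the checks are the same plots and ratios as above):

fit.2 = stan_betareg(yield ~ batch + temp | temp, data=x)
p.2 = predictive_interval(fit.2)  # then redo the plots and comparisons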