February 2, 2018 | 9 Comments

Who Is Q & Why Might He Matter? — Guest Post by The Blonde Bombshell

The internet is a cauldron of competing ideas. The ones that get the most play are promoted by the legacy media. Under the surface, there is a swarm of websites that hawk so-called “alternative” ideas–alternative in the sense that they do not toe the line that has been plainly and loudly laid out by the MSM mandarins. Questions bubbling beneath the surface are either largely ignored or tamped down by the gatekeepers, and so remain largely hidden from public view.

The activity of one “Q” could be thrown in the pile of “nothing to see here,” but intriguingly there may well be something, and something potentially earthshaking and paradigm-shifting, to see here. Mysterious messages started showing up on 4chan on October 28, 2017, suggesting the detention (not arrest) of HRC and asking questions about recent donations made by a prominent globalist billionaire. Q did not sign his communications until November 2, and a trip code for verification was not put in place until November 9. It is purely speculative whether Q was the author of the earlier posts.

October started with the puzzling shooting in Las Vegas and continued with the baffling deaths of some witnesses of that shooting. Catalonia voted for independence but was blocked by Spain. There was a plague in Madagascar. Raqqa was liberated from ISIS. There was the run-up to the run-off election in Alabama for a senate seat. The president, posing for pictures after a meeting with military leaders, suggested that this period was the “calm before the storm.” There was more going on, of course, but this is just a refresher. In terms of the news cycle, October was a thousand years ago and almost as forgotten.

At the end of this confusing month, in comes Q. The choice of “Q” is thought to be a reference to a high-level security clearance. Given the content of Q’s posts, it seems likely that if Q is not in the inner circle of the president, he is very, very close. If Q has such a high-level clearance, what is he doing blabbing on the chans? He hasn’t overtly divulged classified information. Q poses questions, and his followers dissect and try to interpret the multiple meanings to which Q could be alluding. There is a predictive quality to some of Q’s posts.

Sometimes Q tries to point people in a particular direction to do further research, but the people are deaf to his suggestions. Other times, Q’s meaning is more evident. Q posts can be as short as a few words or a few lines, or longer, heavy with text or code. Q also posts images. A recent image reminded followers of the deep, warm relationship one former candidate for president had with an acknowledged and avowed KKK member. The answers to the questions (“crumbs” or “bread crumbs” in Q-speak) are publicly available and can be found with a little ingenuity and digging. Q will sometimes communicate with anons on the boards, and answer questions or calls for clarification.

Reddit hosts a board dedicated to discussion about Q called CBTS (Calm Before The Storm), referring to the president’s once-cryptic remark. Some made the leap that this had something to do with North Korea. In light of recent events (such as the lost-and-found FBI texts and the still-secret four-page FISA memo, for starters), it seems likely that the storm has to do with dismantling the corrupt power structure of DC. Q’s posts seem to underscore that this is a reality, that much of the dismantling is going on behind the scenes, and that the public is being purposefully kept in the dark to preserve the republic (much to the chagrin of avid followers).

Q speaks in riddles: “Future proves past.” “Do you believe in coincidences?” “Follow the wives.” “Expand your thinking.” “Alice & Wonderland.” “Who took an undisclosed trip to SA?” “Why would the Chairman of GOOG travel to NK?”

Very early in the Q story there were questions regarding Q’s authenticity. Some said that Q was the result of sophisticated AI (as in the movie WarGames; Q said, “Shall we play a game?”). Others posited that it was a LARP (Live Action Role Play)—that Q wasn’t some big patriot, but just another frustrated video game player having some fun. Still others cast aspersions on the Q followers as gullible rubes seeking a savior.

Because of the nature of the information being dealt with, there are mistakes, and there is some barking up the wrong tree. There was a kerfuffle when Q posted “DEFCON [1]” on January 8. Those with a military background jumped to the only conclusion they could have, but the Q-speak, as later decoded, seems to be: DEFinitive CONfirmation in 1 minute.

Q—which could be one person or a group—offers some solace for frustrated, law-abiding citizens who are tired of seeing their future being swatted away by the globalist agenda. Q offers hope that Something Will Be Done, and Is, in fact, Being Done. Q isn’t only for Americans. The Reddit board attracts comments from people around the world who are watching the storm very carefully, and who have a fervent hope that some of the winds will blow their way and clean up their governments.

There also is a spiritual aspect to Q. Q is prone to quoting scripture and urges people to pray. Oddly, he posted the text of the Lord’s Prayer before the Pope started musing publicly that he thought the crusty piece of text needed an upgrade. On the Reddit CBTS board, a new person often stumbles in and posts something like, “Is it me, or are there a lot of Christians here?” They are informed that yes, indeed, this is a battle of good versus evil, light versus dark, God versus Lucifer. Followers are praying and fasting for the president and the republic. Whatever Q has done, he has reawakened a spirit in the American people that has been slumbering for far too long.

Link: Reddit CBTS board. If you are unfamiliar with Q, take a look at the “Book of Q” which is posted on the right-hand side of the page. Check out the FAQs and other resources.

February 1, 2018 | 14 Comments

Pedophile Says He’s A 9-Year-Old Trapped In Man’s Body. So He Is

Stream: Pedophile Says He’s A 9-Year-Old Trapped In Man’s Body. So He Is

The Daily Wire reports on the story of 38-year-old Joseph Roman, who was “accused of sexually assaulting two six-year-olds and an eight-year-old on repeated occasions.”

Roman was “charged with repeated predatory criminal sexual assault.” The kicker is that he told police that he’s really “a 9-year-old trapped in an adult’s body.”

If Roman were a 9-year-old, he obviously would not be guilty of sexual assault on the same scale a 38-year-old would be. The best the police could do is to call 9-year-old Roman’s parents and ensure that he gets a good talking-to. He couldn’t even be spanked, because that’s abuse.

Well, Roman says he’s 9. His birth certificate says 38. Who’s right?

The Right to be 9

Roman is. That’s what the transgender movement is all about. The right to self-define who we are. Transgender activists insist we should not be hemmed in by externalities forced on us against our will. Reality cannot be allowed to trump our desires.

Besides, the birth year used to calculate Roman’s 38 years was assigned to him at birth (probably by some patriarchal doctors). He had no choice in the matter.

He now has the right to make a choice. If he says he’s 9, we have to honor that right. It is our duty to agree with him about his age. To do otherwise risks ageophobia, in the same way that calling a man who thinks he’s a woman a man is transphobic.

The Supreme Anthony

Recall the words of U.S. Supreme Court Justice Anthony Kennedy who wrote into the law of the land that “At the heart of liberty is the right to define one’s own concept of existence, of meaning, of the universe, and of the mystery of human life.”

Roman has defined the concept of his existence of being a 9-year-old. That’s his truth. It must therefore be our truth, too.

You may object to all this, but just think. If what we have been told by transgender advocates is true, then we cannot rely on science or measurement to decide what “gender” somebody is. We can only go by what people tell us they are. The same reasoning must apply to any biological characteristic.

Missed Genders

A man believes he is a woman, and says he is a woman. Science and all external, objective measurement says he is a man. We must discard this evidence. It must form no part of our judgement. All that is left is the man’s claim that he is a woman. That claim makes him a woman. Not only that, it creates the burden on us to recognize his womanhood.

If we were to call this man a man, we would be guilty of “misgendering” him. According to Healthline,

[]

Identify as a Stream reader and click here to read the rest.

January 31, 2018 | 4 Comments

Free Probability-Statistics Class: Predictive Case Study 1, Part X

Review!

Last time we created four models of CGPA. Which is correct? They all are. Why? I should ask as a homework question, but I’ll remind us here. Since all probability is conditional, and these are all ad hoc models, all are correct, given the assumptions. Which one is best? Depends on the decisions you want to make and on what you mean by “best.” Let’s see.

We discovered last time that by itself HGPA was relevant to CGPA, but not by much. SAT was also by itself relevant. The contour plots (which I have decided not to redo, since we did them last week) showed that SAT and HGPA when considered together are also relevant. We also created the “null” model (remember our terminology does not match the classical usage), which only used past data (and grading rules, etc.). We now have to see how useful each of these models is. (If you can’t remember what all these terms mean — review!)

In one sense, we cannot do too much more, because we have the models and have made predictions using them. That was our goal, remember? Now we have to sit back and wait for new values of HGPA/SAT and CGPA to come in. Then we can see how each model’s predictions match reality. This is the True Test.
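When those new values do arrive, one way to check the match is a proper score. Here is a hedged sketch, not the class’s official method (scoring model performance is the step (2) we get to later); the objects pred and actual are hypothetical stand-ins for data we do not yet have.

# Sketch: log score for bucketed predictions. 'pred' is a matrix of predictive
# probabilities (one row per new student, one column per CGPA bucket);
# 'actual' is the integer index of the bucket each student actually landed in.
log.score = function(pred, actual) {
  p.obs = pred[cbind(seq_along(actual), actual)]  # probability given to what happened
  sum(log(p.obs))  # closer to 0 is better; -Inf if a model called the truth impossible
}

Models that routinely give high probability to what actually happens score well; models that confidently predict the wrong buckets are punished.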

Here, for instance, is an excerpt of predictions of the full model (HGPA/SAT) (I’m assuming we left off just where we were last time, so that y and a are still in R’s memory; if not, go over last week’s code):

> cbind(y[,1:2],round(a,2))
    sat hgpa    0    1    2    3    4
1   400    0 0.55 0.26 0.19 0.00 0.00
2   500    0 0.52 0.27 0.21 0.01 0.00
3   600    0 0.47 0.28 0.24 0.01 0.00
4   700    0 0.42 0.30 0.28 0.01 0.00
5   800    0 0.35 0.32 0.32 0.01 0.00
6   900    0 0.28 0.33 0.37 0.01 0.00
7  1000    0 0.22 0.35 0.41 0.02 0.00
8  1100    0 0.16 0.36 0.45 0.03 0.01
9  1200    0 0.11 0.36 0.48 0.04 0.01
10 1300    0 0.08 0.34 0.51 0.06 0.02
11 1400    0 0.05 0.31 0.52 0.08 0.03
12 1500    0 0.04 0.28 0.53 0.11 0.04
13  400    1 0.52 0.24 0.23 0.01 0.00
14  500    1 0.46 0.25 0.27 0.01 0.00
15  600    1 0.40 0.27 0.32 0.01 0.00
16  700    1 0.32 0.29 0.37 0.02 0.00
...
57 1200    4 0.00 0.04 0.22 0.58 0.15
58 1300    4 0.00 0.04 0.18 0.63 0.15
59 1400    4 0.00 0.04 0.14 0.66 0.15
60 1500    4 0.00 0.05 0.12 0.69 0.14

We could publish this, or the whole table, and we’d be done! Anybody could take these predictions and implement them. They wouldn’t have to know the details of your model, or of your original data. There is your bold theory, exposed for the world to see! That, after all, is how science should work.

Of course, all the shortcomings of your model will be obvious to anybody who tries to use it, too. Which, again, is just how it should be.
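To make “anybody could implement them” concrete, here is a minimal sketch; the file name cgpa_predictions.csv and the data frame tab are hypothetical stand-ins for the published table above.

# Hypothetical: 'tab' holds the published prediction table, with the same
# columns as the printout above (sat, hgpa, one probability column per bucket).
tab = read.csv("cgpa_predictions.csv")

# Look up an applicant at the published breakdowns: SAT 1100, HGPA 0.
# No model, no original data, no statistician required.
tab[tab$sat == 1100 & tab$hgpa == 0, ]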

Make sure you understand what we’ve done so far. If you are the Dean and want to classify students into one of five CGPA buckets, we have a predictive model accounting for HGPA and SAT. But suppose you didn’t want to account for HGPA. Well, we have a model of just SAT: use that. And so on. The breakdowns of SAT (every 100) and HGPA (every 1) we used were also geared to the decision. Change the decision, change the breakdown, change the model.

In any case, this is it! This is what we wanted. We wanted to know, given the grading rules and old obs, and values of SAT and HGPA, what is the probability of having CGPA in one of the buckets. And this is what we did! We are done. All those people who wanted practical examples of the predictive way, well, here you go. In the end, it’s pretty simple, isn’t it?

But we can do two more things. (1) We can compare our predictive model (perhaps varying it here and there) with old-fashioned NHST/parameter-estimating models, and (2) we can create a new model that predicts the performance of our current model. Number (2) is the really important step, but we won’t get to it today. Let’s do (1).

What model would most statisticians use in this situation? A linear regression. Here it is (mixing code and output again). The cgpa.o was the original CGPA, not classified into buckets. It is the raw score (the “o” is for original).

fit.n = glm(cgpa.o ~ sat + hgpa, data=x) # note the cgpa.o! which is the original data
summary(fit.n)

Call:
glm(formula = cgpa.o ~ sat + hgpa, data = x)

Deviance Residuals: 
     Min        1Q    Median        3Q       Max  
-1.12153  -0.44120   0.00954   0.38198   1.80356  

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.0881312  0.2866638  -0.307 0.759169    
sat          0.0012167  0.0003011   4.041 0.000107 ***
hgpa         0.4071133  0.0905946   4.494 1.94e-05 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Wee p-values galore! The asterisks give the glory. Feel it. So, the classical statistician would say, SAT and HGPA are “linked to” CGPA. Researchers will speak of “statistical significance” and say “SAT drives high college grade point” and so on. How much does SAT influence (they’ll say “impact”) CGPA? Well, they might say for every increase in SAT by 1, CGPA goes up on average 0.0012 points. And so on.

Not much more would be done with this model, especially since everything is “significant”. Maybe the modeler throws in an interaction. Whatever. No matter what, this model exaggerates the evidence we have, and is in substantial error, even though it doesn’t look like it. Here’s why.

This model implies a prediction: all models imply predictions, even though they are not routinely made. It’s written in classical form, but the prediction is there, hidden away. Let’s look at it. We do so by integrating out the parameters, picking a “flat” prior, which, for this model anyway, gives us the exact same results for the parameters as the frequentist estimates.
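For reference, here is the standard result being invoked, sketched in common notation and assuming the flat prior $p(\beta,\sigma)\propto 1/\sigma$. Integrating out the parameters of the normal regression leaves a shifted, scaled Student-t predictive:

$$ y^\ast \mid x^\ast, D \;\sim\; x^{\ast\prime}\hat\beta \;+\; s\,\sqrt{1 + x^{\ast\prime}(X'X)^{-1}x^{\ast}}\;\cdot\; t_{n-p}, $$

where $\hat\beta$ is the least-squares estimate (which, with this prior, matches the frequentist numbers, as promised), $s^2$ is the usual residual variance, and $n-p$ the residual degrees of freedom.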

Recall fit is our multinomial (like) model, including both SAT and HGPA. Let’s pick two hypothetical students, a good one and a bad one, and compare our predictive model with the prediction based on the ordinary regression. Before running this, first re-download the class code, which has been updated to include the code which calculates probability predictions from normal regression models. This is obs.glm, which outputs a central, spread, and degrees-of-freedom parameter for the predictive distribution of the regression (this turns out to be a non-central T).
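For the curious, here is a hedged sketch of roughly what such a function computes; obs.glm.sketch is hypothetical, and briggs.class.R remains the authority. It just fills in the t-predictive parameters from the formula above.

# Hypothetical sketch, NOT the actual class code: central, scale, and df
# parameters of the flat-prior t predictive for a normal-regression fit.
obs.glm.sketch = function(fit, newdata) {
  X  = model.matrix(fit)                                   # original design matrix
  x0 = model.matrix(delete.response(terms(fit)), newdata)  # new design row(s)
  df = fit$df.residual
  s2 = sum(residuals(fit)^2) / df                          # residual variance
  list(central = as.numeric(x0 %*% coef(fit)),
       scale   = as.numeric(sqrt(s2 * (1 + diag(x0 %*% solve(t(X) %*% X) %*% t(x0))))),
       df      = df)
}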

y = data.frame(cgpa = c("4","4"), sat=c(400,1500), hgpa = c(1,4)) # bad and good student
a = predict(fit, newdata = y, type='prob')$p # our multinomial model

plot(levels(x$cgpa), a[1,],type='h',xlab='College GPA',ylab='Probability',ylim=c(0,1.1*max(a)),lwd=5)
  lines(levels(x$cgpa), a[2,],type='h',lty=2,col=2,lwd=3)
  grid()
  legend('topright', c("SAT =  400, HGPA = 1","SAT = 1500, HGPA = 4"),lty=1:2,col=1:2,bty='n',lwd=2) 
 
s1 = obs.glm(fit.n,y[1,]) # the prediction from the regression
s2 = obs.glm(fit.n,y[2,])
 
 m = s1$central-s2$central  # plotting up the probability DENSITIES, NOT PROBABILITIES
 s = sqrt(s1$scale^2+s2$scale^2)
 w = seq(min(s1$central,s2$central)-3*s, max(s1$central,s2$central)+3*s, length.out = 100)
lines(w,.7*dt((w-s1$central)/s1$scale,s1$df)/s1$scale,lty=1,lwd=2) 
lines(w,.7*dt((w-s2$central)/s2$scale,s2$df)/s2$scale,lty=2,col=2,lwd=2) 

Our multinomial-like model gives the spikes; the regression densities, which are not probabilities, are the curves. We’ll fix the densities into probabilities later. But it’s densities, because the normal lives on the continuum, and so does the predictive distribution from the normal (the t). Notice anything odd? The regression gives probabilities to impossible values, i.e. CGPA less than 0 and greater than 4. I call this probability leakage.

It’s not terrible here, but it does exist. It means the model is predicting impossibilities. The standard regression model is at the least inefficient. Interestingly, this leakage model cannot be falsified. It gives positive probability to events we’ll never see, but it never gives 0 probability anywhere! Falsifiability isn’t that interesting.
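As a rough check, the leakage itself can be computed from the t predictive; this sketch reuses the s1 object above for the weak student.

# Leakage = Pr(CGPA < 0) + Pr(CGPA > 4) under the t predictive for s1.
leak = pt((0 - s1$central)/s1$scale, s1$df) +
       (1 - pt((4 - s1$central)/s1$scale, s1$df))
leak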

That’s enough for this time. Next time, we turn the densities into probabilities, and make modifications to our multinomial model.

Homework Try the normal predictions for the singular models with SAT and HGPA alone, and for the “null” model, which is had by glm(cgpa.o ~ 1 , data=x). Which has the most leakage? Are there huge differences?

Because you asked You can now download briggs.homework.R, which contains all the code used in the lectures. Note that this is different than briggs.class.R, which can be treated like black-box helper code.

January 30, 2018 | 20 Comments

Should We Worry Artificial Neurons Can Now Compute Faster Than The Human Brain?

Stream: Should We Worry Artificial Neurons Can Now Compute Faster Than The Human Brain?

The report from last week’s Nature magazine is that “artificial neurons” can now “compute faster than the human brain”. We owe congratulations to the inventors of the mouth-twisting nanotextured magnetic Josephson junctions, which can zip along at over 100 gigahertz, a speed “several orders of magnitude faster than human neurons.”

This is some accomplishment. But it remains to be seen what kind.

Nature believes these artificial neurons can be used in “neuromorphic” hardware, which, it is said, will mimic the human nervous system. The inventors are hopeful their creation might soon be configured to reach “the level of complexity of the human brain.”

When that happens, here comes true artificial intelligence. Computerized minds that are human-like, or even advanced beyond them, but without the burden of fallible bodies. Or so they say.

But is it really speed or computational ability that differentiates humans from computers? The answer is no.

At the Sound of the Beep, it Will be 1 PM

It was 1978. We were sitting in the back of geometry class and Brian brought over his new toy. A Texas Instruments hand-held electronic calculator.

Brian was the first to own one of these marvels. We weren’t surprised. Weeks earlier he caused waves of envy by sporting a digital watch. You pressed a button and it showed the time, glowing red. It beeped on every hour, lest you miss this momentous twenty-four-times-a-day event. By the end of the year digital watches were everywhere, serenading schoolrooms hourly—beep-beep-beep—because nobody could figure out how to shut the sound off.

The calculator was equally fancy. It could, for example, figure the cube root of 513,537,536,512 in a flash. (This is what passed for a teenage boy’s math joke.) Just try it by hand and see how long it takes you. A minute, at least, and probably longer.
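For the impatient: the answer is 8008, which on a calculator’s display reads as a word; hence the joke. A quick check in R:

round(513537536512^(1/3))   # 8008
8008 * 8008 * 8008          # 513537536512; doubles are exact well past this size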

Hurry Up and Calculate

Because it was fast, was that calculator alive, in the sense of possessing a mind? Was it aware it was computing numbers? Did it even understand what a number was? As crude as it was, it could calculate faster than any human. If mere calculation speed is the criterion for awareness, that calculator was more “woke” than we were.

Yet speed does not create awareness. By the time pocket calculators showed up, computers had already been faster than people for more than thirty years. The “electronic brain” ENIAC was processing bits faster than any man by 1946. Adding machines based solely on levers, gears, and cogs were faster than men even before that. Why, the humble abacus, already thousands of years old and composed of nothing but some wooden beads on slides, was far faster than people. But nobody would []

Fire up your calculators and click here to read the rest.