
Author: Briggs

April 8, 2018 | 5 Comments

Summary Against Modern Thought: Become More Godlike

Previous post.

Review! We skipped a week because of Easter, and memories are short.

That things naturally tend to become like God inasmuch as He is a cause

1 As a result, it is evident that things also tend toward the divine likeness by the fact that they are the cause of other things.

2 In fact, a created thing tends toward the divine likeness through its operation. Now, through its operation, one thing becomes the cause of another. Therefore, in this way, also, do things tend toward the divine likeness, in that they are the causes of other things.

Notes Don’t get cocky. You become “like” God because you can cause things, and God is cause itself. But animals also cause things, and so do bags of rocks.

3 Again, things tend toward the divine likeness inasmuch as He is good, as we said above. Now, it is as a result of the goodness of God that He confers being on all things, for a being acts by virtue of the fact that it is actually perfect. So, things generally desire to become like God in this respect, by being the causes of other things.

4 Besides, an orderly relation toward the good has the formal character of a good thing, as is clear from what we have said. Now, by the fact that it is the cause of another, a thing is ordered toward the good, for only the good is directly caused in itself; evil is merely caused accidentally, as we have shown. Therefore, to be the cause of other things is good. Now, a thing tends toward the divine likeness according to each good to which it inclines, since any created thing is good through participation in divine goodness. And so, things tend toward the divine likeness by the fact that they are causes of others.

5 Moreover, it is for the same reason that the effect tends to the likeness of the agent, and that the agent makes the effect like to itself, for the effect tends toward the end to which it is directed by the agent.

The agent tends to make the patient like the agent, not only in regard to its act of being, but also in regard to causality. For instance, just as the principles by which a natural agent subsists are conferred by the agent, so are the principles by which the effect is the cause of others. Thus, an animal receives from the generating agent, at the time of its generation, the nutritive power and also the generative power.

So, the effect does tend to be like the agent, not only in its species, but also in this characteristic of being the cause of others. Now, things tend to the likeness of God in the same way that effects tend to the likeness of the agent, as we have shown. Therefore, things naturally tend to become like God by the fact that they are the causes of others.

6 Furthermore, everything is at its peak perfection when it is able to make another thing like itself; thus, a thing is a perfect source of light when it can enlighten other things. Now, everything tending to its own perfection tends toward the divine likeness. So, a thing tends to the divine likeness by tending to be the cause of other things.

Notes And this is why our saint can teach us so well.

7 And since a cause, as such, is superior to the thing caused, it is evident that to tend toward the divine likeness in the manner of something that causes others is appropriate to higher types of beings.

8 Again, a thing must first be perfect in itself before it can cause another thing, as we have said already. So, this final perfection comes to a thing in order that it may exist as the cause of others. Therefore, since a created thing tends to the divine likeness in many ways, this one whereby it seeks the divine likeness by being the cause of others takes the ultimate place. Hence Dionysius says, in the third chapter of On the Celestial Hierarchy, that “of all things, it is more divine to become a co-worker with God”; in accord with the statement of the Apostle: “we are God’s coadjutors” (1 Cor, 3:9).

Notes Do God’s will.

April 7, 2018 | 7 Comments

Insanity & Doom Update XXVIII

Item City Council wants new schools chancellor to ‘take bold action’

City lawmakers are demanding that incoming schools chief Richard Carranza bring his focus on ethnic and LGBTQ studies with him from Houston to New York.

Joining activists at City Hall, City Council members called on Mayor de Blasio to give Carranza full authority to promote “bold” changes to city curriculums that emphasize “culturally responsive education.”…

The group demanded a new focus on the history and culture of African, Latino, Asian, Middle Eastern and Native American communities in city schools.

They also called for a focus on their “intersections with gender, LGBTQ and religious diversity.”

Not only are colleges and universities beyond hope, so too are high schools. Home school before it is made illegal.

Item Liberals to revamp ‘discriminatory’ age law for anal intercourse

Change comes as Justin Trudeau appoints new adviser to advance equality agenda

The Liberal government is repealing what it calls a “discriminatory” law that makes it illegal to have anal sex under the age of 18, unless it is between a husband and wife.

Right now, the age of consent for sexual activity is 16 but the Criminal Code prohibits anal intercourse for people under the age of 18 unless they are husband and wife, a discrepancy many have denounced as unconstitutional.

Justice Minister Jody Wilson-Raybould announced the change today, saying the “outdated” law violates equality rights.

“This section of the Criminal Code is discriminatory and the LGBTQ2 community has rightfully called for its repeal,” she said.

“Our society has evolved over the last few decades and our criminal justice system needs to evolve as well. This legislation will help ensure that the system is keeping pace with societal change and continuing to meet expectations of Canadians.”

Some like ’em young, which is, of course, how some of ’em are made. It’s also well to point out that evolution, as all good scientists tell us, is not always in a direction of improvement.

Item Catholic Church ‘an empire of misogyny’ – Mary McAleese

A former president of Ireland has criticised the Catholic Church as “an empire of misogyny”…

“The Catholic Church is one of the last great bastions of misogyny,” said Mrs McAleese. “It’s an empire of misogyny.

“There are so few leadership roles currently available to women.

Mrs McAleese said women do not have strong role models in the Church they can look up to.

A Church hierarchy that is “homophobic and anti-abortion is not the Church of the future”, she added.

Mary is as inapt a name for this poor soul as can be imagined. Next thing you know the poor dear will be complaining the Church still doesn’t allow men to become nuns.

Item University of Texas students launch ‘No Whites Allowed’ magazine

Students at the University of Texas, San Antonio are launching a new magazine called “No Whites Allowed.”

A Facebook event announcing Thursday’s launch says the magazine will “create a space” for “black and brown people, especially those who are queer,” who have been “told that they don’t have a space” or a “voice or a say.”

“The [magazine] specifically features and promotes black and brown lgbtqa creatives,” a description of the event reads. “We hope to showcase our talent and create an open space for our voices to be heard.”

White people are welcome to attend the launch party.

How big of them.

But since the mag is only for people “who have been ‘told that they don’t have a space'”, and this includes zero actual “black and brown people, especially those who are queer,” the magazine will have no possible readers except for whites. Ought to be a short run.

Item A Good Gay Myth is a Terrible Thing to Waste

To this day, gay elites dine out on the grisly murder of Matthew Shepard who they falsely claim was murdered by strangers because he was gay. In fact, according to voluminous research by award-winning gay journalist Stephen Jimenez–who spent months on the ground in Laramie, Wyoming–Shephard was murdered by a fellow drug dealer who was also his occasional gay sex partner…

In much the same way, the mass murder at Orlando’s Pulse nightclub has come to occupy the same emotional, political, and financial space. Recall that Muslim Omar Mateen invaded the gay-themed nightclub in Orlando, Florida and killed 49 mostly gay men. Ask practically anyone and they will tell you the killer was motivated by “homophobia.”…

A lengthy and explosive story in The Intercept by investigative reporter Glenn Greenwald shows that Mateen “went to Pulse only after having scouted other venues that night that were wholly unrelated to the LGBT community, only to find that they too were defended by armed guards and police, and ultimately chose Pulse after a generic Google search for ‘Orlando nightclubs’—not ‘gay clubs’—produced Pulse as the first search result.” [etc. etc. etc.]

Lies make for good fund raising points, though.

April 6, 2018 | 19 Comments

Readers’ Help With Definitions Needed

Dear Readers,

Could you help me by discovering, if possible, official progressive (Cathedral) definitions of these (and similar) words?

  1. homophobia
  2. homophobe
  3. transphobia
  4. transphobe
  5. Islamophobia
  6. Islamophobe
  7. anti-semitism
  8. anti-semite
  9. alt-right
  10. white supremacist
  11. Nazi
  12. fascist
  13. racist
  14. sexist

We want official non-satirical non-sardonic non-humorous earnest precise definitions supported by Cathedral members. We’re not after instances where these aspersions/labels were cast, for though these are plentiful they are empty. For instance, citing a source saying “So-and-so is a white supremacist” is of no use, because if one was ignorant of what a “white supremacist” was, knowing that So-and-so is one is of no interest.

Without the definitions, what are we to make of cases like this, where a black man was called a “white supremacist” for “opposing jihad terror and Islamization”?

“Oh, Briggs, you troll. Everybody knows what these words mean!”

They do?

“They do. You’re just stirring up trouble.”

If everybody knows what these words mean, can you tell me precisely what, say, Islamophobia means? For as you know, once you have told me what it is, I will then know what it is not. Yes?

“I’m not talking to you. The answers are obvious.”

They aren’t. And that’s the problem. In many cases they seem more like secular curses, meant only to frighten and not enlighten. Besides, if all these labels identify actual horribleness, what’s the harm in defining them?

If you know somebody who might know, please send them this request.



April 5, 2018 | 20 Comments

The Gremlins Of MCMC: Or, Computer Simulations Are Not What You Think

I don’t think we’re clear on what simulation is NOT. RANDOMNESS IS NOT NECESSARY, for the simple reason randomness is merely a state of knowledge. Hence this classic post from 12 June 2017.

“Let me get this straight. You said what makes your car go?”

“You heard me. Gremlins.”

“Gremlins make your car go.”

“Look, it’s obvious. The car runs, doesn’t it? It has to run for some reason, right? Everybody says that reason is gremlins. So it’s gremlins. No, wait. I know what you’re going to say. You’re going to say I don’t know why gremlins make it go, and you’re right, I don’t. Nobody does. But it’s gremlins.”

“And if I told you instead your car runs by a purely mechanical process, the result of internal combustion causing movement through a complex but straightforward process, would that interest you at all?”

“No. Look, I don’t care. It runs and that it’s gremlins is enough explanation for me. I get where I want to go, don’t I? What’s the difference if it’s gremlins or whatever it is you said?”


That form of reasoning is used by defenders of simulations, a.k.a. Monte Carlo or MCMC methods (the other MC is for Markov Chain), in which gremlins are replaced by “randomness” and “draws from distributions.” Like the car run by gremlins, MCMC methods get you where you want to go, so why bother looking under the hood for more complicated explanations? Besides, doesn’t everybody agree simulations work by gremlins—I mean, “randomness” and “draws”?

Here is an abbreviated example from Uncertainty which proves it’s a mechanical process and not gremlins or randomness that accounts for the success of MCMC methods.

First let’s use gremlin language to describe a simple MCMC example. Z, I say, is “distributed” as a standard normal, and I want to know the probability Z is less than -1. Now the normal distribution’s cumulative distribution function is not an analytic equation, meaning I cannot just plug in numbers and calculate an answer. There are, however, many excellent approximations to do the job near enough, meaning I can with ease calculate this probability to reasonable accuracy. The R software does so by typing pnorm(-1), which gives 0.1586553. This gives us something to compare our simulations to.

I could also get at the answer using MCMC. To do so I randomly—recall we’re using gremlin language—simulate a large number of draws from a standard normal, and count how many of these simulations are less than -1. Divide that number by the total number of simulations, and there is my approximation to the probability. Look into the literature and you will discover all kinds of niceties to this procedure (such as computing how accurate the approximation is, etc.), but this is close enough for us here. Use the following self-explanatory R code:

n = 10000
z = rnorm(n)
sum(z < -1)/n

I get 0.158, which is peachy keen for applications not requiring accuracy beyond the third digit. Play around with the size of n: e.g., with n = 10, I get for one simulation 0.2, which is not so hot. In gremlin language, the larger the number of draws the closer will the approximation "converge" to the right answer.
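The convergence claim is easy to watch happen. A minimal sketch (the loop and the seed are mine; set.seed is only to make the illustration repeatable, not because the point needs one):

```r
# Estimate Pr(Z < -1) at several sample sizes.
# The estimate settles toward pnorm(-1) as n grows.
set.seed(1) # only to make this illustration repeatable
for (n in c(10, 100, 1e4, 1e6)) {
  z = rnorm(n)
  cat("n =", n, " estimate =", sum(z < -1)/n, "\n")
}
pnorm(-1) # the target, about 0.1587
```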

All MCMC methods are the same as this one in spirit. Some can grow to enormous complexity, of course, but the base idea, the philosophy, is all right here. The approximation is not seen as legitimate because we can match it against a near-analytic answer; we can't do that for any situation of real interest (if we could, we wouldn't need simulations!). It is seen as legitimate because of the way the answer was produced. Random draws imbued the structure of the MCMC "process" with a kind of mystical life. If the draws weren't random---and never mind defining what random really means---the approximation would be off, somehow, like in a pagan ceremony where somebody forgot to light the black randomness candle.

Of course, nobody speaks in this way. Few speak of the process at all, except to say it was gremlins; or rather, "randomness" and "draws". It's stranger still because the "randomness" is all computer-generated, and it is known computer-generated numbers aren't "truly" random. But, somehow, the whole thing still works, like the randomness candle has been swapped for a (safer!) electric version, and whatever entities were watching over the ceremony were satisfied the form has been met.


Now let's do the whole thing over in mechanical language and see what the differences are. By assumption, we want to quantify our uncertainty in Z using a standard normal distribution. We seek Pr(Z < -1 | assumption). We do not say Z "is normally distributed", which is gremlin talk. We say our uncertainty in Z is represented using this equation by assumption.

One popular way of "generating normals" (in gremlin language) is to use what's called a Box-Muller transformation. Any algorithm which needs "normals" can use this procedure. It starts by "generating" two "random independent uniform" numbers U_1 and U_2 and then calculating this creature:

Z = \sqrt{-2 \ln U_1} \cos(2 \pi U_2),

where Z is now said to be "standard normally distributed." We don't need to worry about the math, except to notice that it is written as a causal, or rather determinative, proposition: "If U_1 is this and U_2 is that, Z is this with certainty." No uncertainty enters here; U_1 and U_2 determine Z. There is no life to this equation; it is (in effect) just an equation which maps points on the unit square (pairs of numbers between 0 and 1) to a line with a certain shape which runs from negative infinity to positive infinity.
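That the proposition is determinative is easy to check: wrap the transformation in a function (bm is my own name for it) and feed it the same pair twice; the same Z comes out every time.

```r
# Box-Muller is a plain function: identical inputs, identical output.
bm = function(u1, u2) sqrt(-2*log(u1)) * cos(2*pi*u2)
bm(0.01, 0.01)                   # about 3.028866, the first Z in the text
bm(0.01, 0.01) == bm(0.01, 0.01) # TRUE: no randomness anywhere in this step
```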

To get the transformation, we simply write down all the numbers in the paired sequence (0.01, 0.01), (0.01, 0.02), ..., (0.99, 0.99). The decision to use two-digit accuracy was mine, just as I had to decide n above. This results in a sequence of pairs of numbers (U_1, U_2) of length 9801. For each pair, we apply the determinative mapping of (U_1, U_2) to produce Z as above, which gives (3.028866, 3.010924, ..., 1.414971e-01). Here is the R code (not written for efficiency, but transparency):

ep = 0.01 # the (st)ep
u1 = seq(ep, 1-ep, by = ep) # gives 0.01, 0.02, ..., 0.99
u2 = u1

z = NA # start with an empty vector
k = 0 # just a counter
for (i in u1){
  for (j in u2){
    k = k + 1
    z[k] = sqrt(-2*log(i))*cos(2*pi*j) # the transformation
  }
}
z[1:10] # shows the first 10 numbers of z

The first 10 numbers of Z map to the pairs (0.01, 0.01), (0.01, 0.02), (0.01, 0.03), ..., (0.01, 0.10), since the inner loop runs through U_2 first. There is nothing at all special about the order in which the (U_1, U_2) pairs are input. In the end, as long as the "grid" of numbers implied by the loop are fed into the formula, we'll have our Z. We do not say U_1 and U_2 are "independent". That's gremlin talk. We speak of Z in purely causal terms.


We have not "drawn" from any distribution here, neither uniform nor normal. All that has happened is some perfectly simple math. And there is nothing "random". Everything is determined, as shown. The mechanical approximation is got the same way:

sum(z < -1)/length(z) # the denominator counts the size of z

which gives 0.1608677, which is a tad high. Try lowering ep, which is to say, try increasing the step resolution and see what that does. It is important to recognize the mechanical method will always give the same answer (with same inputs) regardless of how many times we compute it. Whereas the MCMC method above gives different numbers. Why?
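A sketch of that contrast (expand.grid and the two helper functions are my own shorthand): the grid estimate is a fixed function of fixed inputs, so rerunning it cannot change the answer, while rnorm() carries hidden internal state that silently advances on every call.

```r
# Mechanical grid estimate: the same number every single run.
ep = 0.01
u = seq(ep, 1-ep, by = ep)
g = expand.grid(u1 = u, u2 = u) # every (U_1, U_2) pair at once
est = function() {
  z = sqrt(-2*log(g$u1)) * cos(2*pi*g$u2)
  sum(z < -1)/length(z)
}
est() == est() # always TRUE: same inputs, same output

# "Random draw" estimate: a (slightly) different number each call,
# because rnorm() advances its hidden state between calls.
mc = function(n = 1e4) sum(rnorm(n) < -1)/n
mc() # wobbles around 0.1587 from call to call
```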

Gremlins slain

Here is the gremlin R code, which first "draws" from "uniforms", and then applies the transformation. The ".s" are to indicate simulation.

n = 10000
u1.s = runif(n)
u2.s = runif(n)
z.s = sqrt(-2*log(u1.s))*cos(2*pi*u2.s)
sum(z.s < -1)/n

The first time I ran this, I got 0.1623, which is much worse than the mechanical, but the second time I got 0.1589, which is good. Even in the gremlin approach, though, there is no "draw" from a normal. Our Z is still absolutely determined from the values of (u1.s, u2.s). That is, even in the gremlin approach, there is at least one mechanical process: calculating Z. So what can we say about (u1.s, u2.s)?

Here is where it gets interesting. Here is a plot of the empirical cumulative distribution of U_1 values from the mechanical procedure, overlaid with the ECDF of u1.s in red. It should be obvious the plots for U_2 and u2.s will be similar (but try!). Generate this yourself with the following code:

plot(ecdf(u1), xlab="U_1 values", ylab="Probability of U1 < value", xlim=c(0,1), pch='.')
lines(ecdf(u1.s), col=2)
abline(0, 1, lty=2)

The values of U_1 are a rough step function; after all, there are only 99 values, while u1.s is of length n = 10000.

Do you see it yet? The gremlins have almost disappeared! If you don't see it---and do try and figure it out before reading further---try this code:
sort(u1.s)[1:20]


This gives the first 20 values of the "random" u1.s sorted from low to high. The values of U_1 were 0.01, 0.02, ... automatically sorted from low to high.

Do you see it yet? All u1.s is is a series of ordered numbers on the interval from 1e-6 to 1 - 1e-6. And the same for u2.s. (The 1e-6 is R's native display resolution for this problem; this can be adjusted.) And the same for U_1 and U_2, except the interval is a mite shorter! What we have are nothing but ordinary sequences of numbers from (roughly) 0 to 1! Do you have it?

The answer is: The gremlin procedure is identical to the mechanical!

Everything in the MCMC method was just as fixed and determined as the other mechanical method. There was nothing random, there were no draws. Everything was simple calculation, relying on an analytic formula somebody found that mapped two straight lines to one crooked one. But the MCMC method hides what's under the hood. Look at this plot (with the plot screen maximized; again, this is for transparency not efficiency):

plot(u1.s, u2.s, col=2, xlab='U 1 values', ylab='U 2 values')
u1.v = NA; u2.v = NA
k = 0
for (i in u1){
  for (j in u2){
    k = k + 1
    u1.v[k] = i
    u2.v[k] = j
  }
}
points(u1.v, u2.v, pch=20) # these are (U_1, U_2) as one long vector of each

The black dots are the (U_1, U_2) pairs and the red the (u1.s, u2.s) pairs fed into the Z calculation. The mechanical is a regular grid and the MCMC-mechanical is also a (rougher) grid. So it's no wonder they give the same (or similar) answers: they are doing the same things.

The key is that the u1.s and u2.s themselves were produced by a purely mechanical process as well. R uses a formula no different in spirit from the one for Z above, which if fed the same numbers always produces the same output (stick in a known seed W which determines u1.s, etc.). The formula is called a "pseudorandom number generator", where by "pseudorandom" they mean not random; purely mechanical. Everybody knows this, and everybody should know this, too: there is no point at which "randomness" or "draws" ever comes into the picture. There are no gremlins anywhere.
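R's default generator is the Mersenne Twister, whose formula is too long to show; a toy stand-in (this is the classic Park-Miller linear congruential generator, not what R actually uses) makes the same point: a "pseudorandom" stream is nothing but a fixed recurrence, so the same seed always yields the same "draws".

```r
# A minimal linear congruential generator: purely mechanical.
# x[k+1] = (a * x[k]) mod m, scaled to the interval (0, 1).
lcg = function(n, seed, a = 16807, m = 2^31 - 1) {
  x = numeric(n)
  s = seed
  for (i in 1:n) {
    s = (a * s) %% m # the whole "randomness": one fixed recurrence
    x[i] = s/m
  }
  x
}
lcg(5, seed = 1) # first value is 16807/(2^31 - 1), about 7.8e-06
identical(lcg(5, 1), lcg(5, 1)) # TRUE: same seed, same "draws", no gremlins
```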

Now I do not and in no way claim that this grunt-mechanical, rigorous-grid approach is the way to handle all problems or that it is the most efficient. And I do not say the MCMC car doesn't get us where we are going. I am saying, and it is true, there are no gremlins. Everything is a determinate, mechanical process.

So what does that mean? I'm glad you asked. Let's let the late-great ET Jaynes give the answer. "It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought."

We can believe in gremlins if we like, but we can do better if we understand how the engine really works.

There's lots more details, like the error of approximation and so forth, which I'll leave to Uncertainty (which does not have any code).

Bonus code

The value of -1 was nothing special. We can see the mechanical and MCMC procedures produce normal distributions which match almost everywhere. To see that, try this code:

plot(ecdf(z), xlab="Possible values of Z", ylab="Probability of Z < value", main="A standard normal")
s = seq(-4, 4, by = ep)
lines(s, pnorm(s), lty=2, col=2)
lines(ecdf(z.s), lty=3, col=3)

This is the (e)cdf of the distributions: mechanical Z (black solid), gremlin (green dot-dashed), analytic approximation (red dashed). The step in the middle is from the crude step in the mechanical. Play with the limits of the axis to "blow up" certain sections of the picture, like this:

plot(ecdf(z), xlab="Possible values of Z", ylab="Probability of Z < value", main="A standard normal", xlim=c(-1,1))
s = seq(-4, 4, by = ep)
lines(s, pnorm(s), lty=2, col=2)
lines(ecdf(z.s), lty=3, col=3)

Try xlim=c(-4,-3) too.


Find the values of U_1 and U_2 that correspond to Z = -1. Using the mechanical language, what can you say about these values in relation to the (conditional!) probability Z < -1? Think about the probabilities of the Us.

What other simple transforms can you find that correspond to other common distributions? Try out your own code for these transforms.