
Category: Philosophy

The philosophy of science, empiricism, a priori reasoning, epistemology, and so on.

The Spirit World Is Alive And Well And Surrounds Us


I ever tell you this story? This is Detroit when I was a boy, maybe 5. Sunday, watching golf on TV with my dad, my mom in the kitchen making dinner, my sister tottering around. My dad asked me to go in the bedroom and grab a pillow off the bed so that he’d be more comfortable lying on the floor.

I went into my parents’ bedroom and there at the head of the bed, on either side, stood two large Xs. Best way to describe them. They were hairy, a fine hair, black. Three feet or so tall. They made no sound. The one at my left threw a pillow at me. I was so astonished I ran out with the pillow and gave it to my dad.

Now this happened. That is, I have always remembered it. But the distance between me and five is great. But what happened? One explanation is childhood imagination. That’s comforting, but false. The second explanation, because I lived it, is that it happened.

What were they? I haven’t the slightest idea, now or then. They never came back. But there were other times and other stories, as many of us have.

Christianity insists such episodes are at least possible. Which is to say, the spirit world, the world of the immaterial that can, at times, interact with the material world is real. Angels are real, demons are real. Strange things are real.

These angelic creatures are not abstract objects, set far away in the universe to be dispatched on occasion. They are here, now, everywhere. And active, manifesting themselves in a variety of ways.

The pagans used to know this far better than we do. As I’ve said before, the Egyptian who was sure a chariot was carrying the sun on its daily route across the sky was closer to the truth than we are with our lifeless orbital mechanics. Both make the same prediction, but the Egyptian knew a deeper truth than the scientist is capable of understanding. What is at base is not science, which has no intrinsic base, but Being itself.

Now there are good theoretical, theological, and scriptural reasons for making this claim, which I will defer to another time, and which should anyway be obvious to believing Christians (how many times in the Bible did God or an angel talk to us in dreams?). For now, we cast our eyes to the Bronze Age Pervert, who had an essay in American Sun (please read all of it first; and ignore his affectation of writing badly to write well).

Passing over the true and often verified Nietzschean lament that Christianity can breed weak men, especially among protesting Christian sects and latterly Jesuits, we come to the well known longing for a restoration of manly men among the dissident right. Some seek this via neopaganism, but, like transgenderism, this often devolves into LARPing. Some try tradcath (as you might have heard it called), yet this is often a strategy not fully implemented.

We look high, or try to, but for most of us, as BAP says, “everything becomes a toothless allegory and symbol.” It’s hard to believe. His approach is different (ellipsis original):

You could call this an innocent apprehension of the hidden demons and gods inside nature and things. Maybe it is the precursor, this feeling, to the artistic revival of paganism, I don’t know. But it is an innate sensation, a natural animism. I tried briefly to discuss it in book. I have had images appear to me in daydreams since I was small boy, that were very vivid and specific. I know others who have had the same. Then also a sensation of spirits inhabiting animals and inanimate objects too, with a reverence for some, and a pity when others are mistreated — Houellebecq mentions his pity for a line of coats. This is not a sensation informed by any rational doxy or theoretical or theological belief. And I’ve always firmly rejected any such interpretations of these events because I found they didn’t fit. So to such sensibility it is very offensive or stupid when people try to actually go to forest and pretend they worship Wotan or Hermes or whoever…nor have I seen those old gods or any other such specifically that I can name. Those gods are dead or asleep. If you want to see what this feeling is like, watch some of David Lynch. I believe that is what this natural, innocent and innate paganism looks like in our time, when presented naively: he simply shows the demons and acknowledges them, doesn’t pretend to know who they are, what they want, or even how to worship or assuage them. And they look terrifying and surreal for us, and not at all like what you’d expect.

I have seen some things like this since I was small boy, and have felt the presence of one in particular. I feel that he will make a great show one day and erupt into the world.

I know what BAP means about this presence.

Now we differ only in our definition of what the “gods” are and in how they should be treated. That they are here, here now, we do not differ. If we want a restoration of manly Christianity, we need to grasp and act on the truth that (for lack of a better phrase) the spirit world is real and is more important than the material world. Not in some distant past or on some rare occasion, but everywhere and at all times.

This is not a call to superstition. This is a plea to believe what we believe.

An Argument Against The Multiverse


The multiverse might be real. God, in His wisdom and love of completeness and true diversity and the joy of filling all possible potentials with actuality, might have created such a thing. Indeed, the wondrous complexity and size of the known universe is traditionally taken as an argument for the existence of God. The multiverse simply carries that idea to its limit—in the mathematical and philosophical sense.

Still, there is something appalling about the idea. The multiverse takes parsimony by the short hairs and kicks it in the ass. Talk about multiplying entities beyond necessity! An uncountable infinity of universes that cannot be seen might sound good on paper, but only because we have trouble grasping how truly large an uncountable infinity is.

What follows is not a proof against the multiverse, but an argument which casts doubt on the idea. It was inspired by Jeremy Butterfield’s review of Sabine Hossenfelder’s Lost in Math: How Beauty Leads Physics Astray (hat tip: Ed Feser; I haven’t read Hossenfelder’s book), a review we need to examine in depth first.

Her book “emphasizes supersymmetry, naturalness and the multiverse. She sees all three as wrong turns that physics has made; and as having a common motivation—the pursuit of mathematical beauty.” Regular readers have heard similar complaints here, especially about all those wonderful limit results for distributions of test statistics which give rise to p-values, which aren’t what people thought they were. Also time series. But never mind all that. On to physics!

“…Hossenfelder’s main criticism of supersymmetry is, in short, that it is advocated because of its beauty, but is unobserved. But even if supersymmetry is not realized in nature, one might well defend studying it as an invaluable tool for getting a better understanding of quantum field theories…A similar defence might well be given for studying string theory.”

How about the multiverse?

Here, Hossenfelder’s main criticism is, I think, not simply that the multiverse is unobservable: that is, the other pocket universes (domains) apart from our own are unobservable. That is, obviously, ‘built in’ to the proposal; and so can hardly count as a knock-down objection. The criticism is, rather, that we have very little idea how to confirm a theory postulating such a multiverse.

We discussed non-empirical confirmation of theories earlier in the week. We need to understand what is meant by fine-tuning, a crucial concept.

As to supersymmetry, which is a family of symmetries transposing fermions and bosons: the main point is not merely that it is unobserved. Rather, it is unobserved at the energies recently attained at the LHC at which—one should not say: ‘it was predicted to be observed’; but so to speak—‘we would have been pleased to see it’. This cautious choice of words reflects the connection to Hossenfelder’s second target: naturalness, or in another jargon, fine-tuning. More precisely, these labels are each other’s opposites: naturalness is, allegedly, a virtue: and fine-tuning is the vice of not being natural.

Butterfield says “naturalness” is “against coincidence”, “against difference”, “for typicality.”

By against coincidence he means “There should be some explanation of the value of a fundamental physical parameter.” This is the key thought for us. There has to be a reason—a cause—of the value of the electron charge or fine structure constant; indeed any and every constant. Butterfield says “the value [of any constant] should not be a ‘brute fact’, or a ‘mere matter of happenstance’, or a ‘numerical coincidence’.”

The against difference concept is related to how parameters, i.e. constants, are estimated. And typicality means the value of the parameter must be typical within a rigorously defined “theoretical framework.”

Namely: there should be a probability distribution over the possible values of the parameter, and the actual value should not have too low a probability. This connects of course with orthodox statistical inference. There, it is standard practice to say that if a probability distribution for some variable is hypothesized, then observing the value of a variable to lie ‘in the tail of the distribution’—to have ‘a low likelihood’ (i.e. low probability, conditional on the hypothesis that the distribution is correct)—disconfirms the hypothesis that the distribution is the correct one: i.e. the hypothesis that the distribution truly governs the variable. This scheme for understanding typicality seems to me, and surely most interested parties—be they physicists or philosophers—sensible, perhaps even mandatory, as part of scientific method. Agreed: questions remain about:

(a) how far under the tail of the distribution—how much of an outlier—an observation can be without disconfirming the hypothesis, i.e. without being ‘atypical’;

(b) how in general we should understand ‘confirm’ and ‘disconfirm’, e.g. whether in Bayesian or in traditional (Neyman-Pearson) terms; and relatedly

(c) whether the probability distribution is subjective or objective; or more generally, what probability really means.

That “standard” statistical practice is now being jettisoned (follow the link above for details of this joyful news). Far better to assess the probability a proposition is true given explicitly stated evidence. For that is exactly what probability is: a measure of truth.

Again, never mind that. Let’s discuss cause. Butterfield says he follows Hume and his “constant conjunctions”, which is of course the modern way. But that way fails when thinking about what causes parameters. There are no conjunctions, constant or otherwise.

Ideally, what a physicist would love is a mathematical-like theorem with rigorous premises from which are deduced the value of each and every physical constant/parameter. That would provide the explanation for each constant, and an explanation is a lovely thing to have. But an explanation is not a cause, and knowing only an effect’s efficient cause might not tell you about its final cause, or reason for being.

Now in the multiverse (if it exists) sits our own universe, with its own set of constants with specific values which we can only estimate and which are, as should be clear, theory dependent. A different universe in the unimaginably infinite set could and would have different values for all or some of the constants.

An anthropic-type argument next enters which says we can see what we can see because we got lucky. Our universe had just the right values needed to produce beings like us—notice the implicit and enormous and unjustified assumption that only material things exist—beings that could posit such things as multiverses. But we had to get real lucky, since it appears that even minute deviations from the constants would produce universes where beings like us would not exist. We discussed before arguments against fine-tuning and parameter cause: here and here. Do read these.

Probability insinuates itself long about here. What is the probability of all this fine-tuning? It doesn’t exist. No thing has a probability. All probability is conditional on the premises assumed. And once we start on the premises of the multiverse we very quickly run into some deep kimchi. For one of these premises is, or appears to be (I ask for correction from physicists), uncountability. There is not just a countable infinity of universes, but an uncountable collection of them. This follows from the continuity assumption about the values of constants. They live on the real line; or, because there may be relations between them, the real hyper-cube.

Well, probability breaks down at infinities. We instead speak of limits, but that’s a strictly mathematical concept. What does it mean physically to have a probability approach a limit? I don’t know, but I suspect it has no meaning. Butterfield is aware of the problem.

“For all our understanding of probability derives from cases where there are many systems (coins, dice…or laboratory systems) that are, or are believed to be, suitably similar. And this puts us at a loss to say what ‘the probability of a cosmos’, or similar phrases like ‘the probability of a state of the universe’, or ‘the probability of a value of a fundamental parameter’ really mean” [ellipsis original].

I disagree, for all the reasons we’ve discussed many times. Probability is not a measure of propensity, though probability can be used to assess uncertainty of propensity, and to make predictions. Butterfield then rightly rejects naive frequentism. But he didn’t quite say he rejected it because counting multiverses is impossible. Such a thing can never be done. Still, probability as a measure of truth survives.

Back to fine-tuning and some words of Weinberg about fine-tuning quoted by Butterfield (all markings original):

We assumed the probability distribution was completely flat, that all values of the constant are equally likely. Then we said, ‘What we see is biased because it has to have a value that allows for the evolution of life. So what is the biased probability distribution?’ And we calculated the curve for the probability and asked ‘Where is the maximum? What is the most likely value?’ … [Hossenfelder adds: ‘the most likely value turned out to be quite close to the value of the cosmological constant which was measured a year later’.]…So you could say that if you had a fundamental theory that predicted a vast number of individual big bangs with varying values of the dark energy [i.e. cosmological constant] and an intrinsic probability distribution for the cosmological constant that is flat…then what living beings would expect to see is exactly what we see.

What premise allowed the idea of a “flat” prior on a constant’s value? Only improper probabilities, which is to say not probabilities at all, result from this premise. Unless we want to speak of limits of distributions again—but where is the justification for that?
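A minimal numerical sketch of why the flat prior is improper (the function and numbers here are illustrative assumptions, not anything from the physics): on a finite interval [-L, L] a flat density must have height 1/(2L) to integrate to one, and that height vanishes as L grows, so in the limit every finite value receives probability zero.

```python
# Sketch: a "flat prior" over the whole real line cannot be normalized.
# On [-L, L] the uniform density has height 1/(2L); as L grows without
# bound the height goes to 0, so no proper flat distribution exists
# on all of the reals.
def flat_density(L):
    """Height of the uniform density on [-L, L]."""
    return 1.0 / (2.0 * L)

for L in (10.0, 1e6, 1e12):
    print(L, flat_density(L))
```

Any “flat prior on the real line” is therefore only a formal device, not a probability.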

All right. Here’s where we are. No physicist has any idea why the constants which are measured (or rather estimated) take the values they do. The values must have a reason: Butterfield and Hossenfelder agree. That is, they must have a cause.

Now if the multiverse exists (and here we recall our previous arguments against fine-tuning), our universe, even though it is one of an uncountable infinity, must have a reason why it has these values for its constants. You cannot say “Well, all values are represented somewhere in the multiverse. We have these.” That’s only a restatement of the multiverse premises. We have to say why this universe was caused to have these values, and, it follows, why others were caused to have other values.

Well, so much is not new. Here is what is (finally!).

You’ll grant that math is used to do the figurings of all this work. Math itself relies on many constants, assumptions, and so on. Like the values of π and e. Something caused these constants to take the values they do, too (a longer argument about this is here). They cannot exist for no reason, and the reason cannot be “chance”, which is without power.

There is no hint, as far as I can discover, that multiverse theorists believe the values of these mathematical and logical constants differ, as do physical constants. That physical constants differ is only an assumption anyway. So why not assume math, logic, and truth differ? But if they do, then there is no saying what could happen in any multiverse. You can’t use the same math as in our universe to diagnose an imagined selection from the multiverse. You don’t know what math to use.

Everything is put into question. And we’re further from the idea of cause. That we run from it ought to tell us something important. The problem of cause is not solved by the multiverse. There has to be a reason each universe has its own parameter values, and there has to be a reason it has the values of mathematical constants. This might be the same reason; or again, it might not be. The cause has to be there, however. It cannot be absent.

It is of interest that we initially thought the physics might be variable but the math not. Math is deeper down, in an epistemological sense; so deep, that we have forgotten to ask about cause. At any rate, because it seems preposterous to assume math changes from universe to universe, because math seems best to be fixed and universal (to make a pun), there is reason to question whether physics changes, too.

The Anti-Christian New York Times Says God Not Coherent


That the New York Times is anti-Christian is obvious enough. Most of its founders, leaders, and top employees are not Christian, and thus the paper naturally has an in-built bias, which at times manifests itself as pique.

Other papers, such as, say, the Asahi Shimbun, also betray a natural anti-Christian bias, though perhaps non-Christian bias is a better term. The difference between the AS and the NYT is that the former is written by and for a predominately non-Christian nation, whereas the latter is penned largely by non-Christians (ex-Christians and other never-Christians) in a majority Christian land.

This leads to occasions like the opinion column “A God Problem: Perfect. All-powerful. All-knowing. The idea of the deity most Westerners accept is actually not coherent” by some academic philosopher (AP). We know this poor effort is designed to stick in the eye of Christians, because the AP comes to the opposite conclusions of the various Christian saints and eminences he quotes. And because the NYT accompanies the purported takedown with a broken and decaying statue of Jesus.

What are the AP’s arguments for the incoherence of the Creator?

First omnipotence.

Can God create a stone that cannot be lifted? If God can create such a stone, then He is not all powerful, since He Himself cannot lift it. On the other hand, if He cannot create a stone that cannot be lifted, then He is not all powerful, since He cannot create the unliftable stone. Either way, God is not all powerful.

Try to define what such a stone would be. Are you imagining a God with powerful legs? Just two? Is your god sweating and grunting? How big is the rock? Where would the rock be placed? What exactly is making it unliftable? Ten seconds is sufficient to prove the idea, not God, is incoherent.

If you perform the exercise you’ll find yourself in agreement with our great Saint Thomas, whom even the AP acknowledges had the best counter-argument: “that God cannot do self-contradictory things”.

So omnipotence is no problem. Yet the AP doesn’t deign to admit defeat on this point, and quickly moves to the well known Problem of Evil (he doesn’t call it that).

The AP, somehow, forgets the most important part of the Problem of Evil. If God doesn’t exist there is no evil. Oops. Oh, there’s lots of opinions and hurt feelings and subjective experiences of pain. But that’s just so much tough cookies. It isn’t evil that a flood washed your loved ones away, it’s just—nothing. It’s not even bad luck. Your entire being, life, wishes, longing, thoughts, are so much Cosmic Bullshit if there is no God.

Yet, of course, this too is absurd. So there is evil. Which, I note, the AP does not bother to define. It is the absence of the good. The ultimate good being God, the absence of God is thus the ultimate evil. Welcome to Hell.

The AP, though he quotes some nice words by Al Plantinga, correct words at that about the presence of free will and the creation of the absence of the good, forgets too that if God exists—please keep the qualifier in mind!—then our lives here on earth are only part of our existence. Why suffering? Hey, it’s all part of a bigger game than the threescore and ten we have here. I don’t know the ins and outs and whys, and I too wish it would not happen, but no pain, no gain.

That’s the answer! If God exists, you (and I) can complain about suffering, as Job was finally led to do, but this expresses our ignorance. That’s one hard lesson; indeed, the hardest. Yet if God does not exist, then all your plaints are so much hot air.

The last argument the AP marshals is the dumbest. I read it through twice to make sure I wasn’t deceiving myself: surely the AP couldn’t have been that dim?

Alas, yes. For we are now at omniscience. And…hold up. First, remember Lavrentiy Beria? Stalin’s secret police chief, murderer, and sadist? Nasty guy, right? You do remember his crimes? Well, that makes you no better than a murderer!

Hey, that’s not my argument. That’s the AP’s.

…if God knows all there is to know, then He knows at least as much as we know. But if He knows what we know, then this would appear to detract from His perfection. Why?

There are some things that we know that, if they were also known to God, would automatically make Him a sinner, which of course is in contradiction with the concept of God. As the late American philosopher Michael Martin has already pointed out, if God knows all that is knowable, then God must know things that we do, like lust and envy. But one cannot know lust and envy unless one has experienced them. But to have had feelings of lust and envy is to have sinned, in which case God cannot be morally perfect.

You can only know of something if you experienced it? A restive person who has never been slothful cannot know sloth? Since you know what murdering is like, via Beria, you too must be a murderer. Or something. Act and potential have no dividing lines here.

It follows that “God doesn’t know what it is like to be human”, says the AP. Yet if God exists, He created humans and thus knows very well what it is like to be human. What about Jesus? The AP runs away from trying to answer, saying only the Incarnation “presents us with its own formidable difficulties”.

It do, AP; it do.

Non-Empirical Confirmation Of Theories



“Fundamental physics today faces the problem that empirical testing of its core hypotheses is very difficult to achieve and even more difficult to be made conclusive,” says Richard Dawid in his paper “The Significance of Non-Empirical Confirmation in Fundamental Physics“. In olden days “it was plausible to focus on empirical confirmation as the only reliable basis for assessing a theory’s viability.”

That’s so: if a theory says X would happen or was likely, given conditions E, and if E is seen and X was not, the theory is doubted. But if E is seen and so is X, confidence in the theory is bolstered. If your theory insisted the sun would rise in the east on days ending in ‘y’, you had lots of empirical backing; but if it was west instead of east, you did not.
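That bolstering can be put in toy Bayesian form. The numbers below are hypothetical, chosen only to illustrate the direction of the update, not drawn from any actual theory:

```python
# Toy Bayesian confirmation: if the theory makes X likely and X is
# observed, the probability of the theory rises; if X was unlikely
# under the theory, observing not-X would lower it.
def update(prior, p_x_given_theory, p_x_given_not_theory):
    """Posterior Pr(theory | X observed) via Bayes' theorem."""
    joint_true = prior * p_x_given_theory
    joint_false = (1.0 - prior) * p_x_given_not_theory
    return joint_true / (joint_true + joint_false)

# Hypothetical values: start at 0.5; the theory makes X likely (0.9),
# the alternatives make it less likely (0.3).
print(update(0.5, 0.9, 0.3))  # about 0.75: confidence bolstered
```

The direction of the update, not the particular numbers, is the point: seeing what the theory leads you to expect raises its probability on that evidence.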

Sunrises are easy to see. Strings inside quarks are not; nor are multiverses. So if you said every day ending in ‘y’ creates a certain special kind of new universe with slip-knotted strings, orthogonal in every way to our own, there is no way to check your theory. Well, this is the problem with multiverses of any kind. Nobody can see them. Are they real? Which is to ask, is the theory which predicts them true?

String theory has been playing the role of a well established approach towards a universal theory of all interactions for over three decades and is trusted to a high degree by many of its exponents in the absence of either empirical confirmation or even a full understanding of what the theory amounts to…Multiverse scenarios in the eyes of critics raise the question to what degree they can be endorsed as scientific hypotheses at all, given that their core empirical implications to a large extent seem not empirically testable in principle.

Here comes the point. He says “a considerable degree of trust in an empirically unconfirmed theory could be generated based on ‘non-empirical theory confirmation’. Non-empirical confirmation denotes confirmation by evidence that is not of the kind that can be predicted by the theory in question, i.e. that does not lie within the theory’s intended domain.”

About string theory:

In the absence of empirical confirmation, exponents of the theory may rely on different kinds of reasoning. For example, they may argue that the theory is supported by the striking difficulties to come up with promising alternatives. Those difficulties clearly cannot be predicted by string theory itself. The observation that those difficulties exist is a contingent observation about the research process of which the development of string theory is just one part. Therefore, this observation does not constitute evidence within string theory’s intended domain. If one concludes, as I will, that the observation amounts to confirmation of string theory nevertheless, it can only be non-empirical confirmation.

The question is this: can there be non-empirical verification of theories? Which is to ask, can we know a theory is true before witnessing the predictions from the theory? The answer is yes, sometimes.

Theory Truth & Probability

Here’s how I see it. A theory—or model, there is no difference, so I’ll use M as shorthand and not T, which in logic often stands for truth—is a collection of premises or propositions, which are all taken jointly, as one complex proposition, like this:

    Pr( M | P_1 P_2 P_3 … P_m) = 1

The collection is “anded” together, where each component proposition may be very complicated indeed. Some P_i may be observational, others may be mathematical, still others other kinds of assumptions. The model M is deduced from these propositions; thus, given the propositions are true, so is the model.

All models, barring mistakes in calculation or logic, are locally true: true conditional on their premises. Whether a theory is universally true is the real question. A theory is universally (or necessarily) true given all its P_i are themselves universally true, meaning true following a chain of argument to a base which is known to be true based on sense impression. All mathematics and logic follows this rule; so will we.

We can’t tell if M is false or uncertain from the P_i, because we use these to create the model or theory. So if we want to say M is false or uncertain, we have to bring in external evidence E. E itself is usually composed of propositions.

Now we can do things like this

    Pr( P_j | E ) = p_j

We cannot insist each p_j can be quantified, though some can. For instance, if P_j is a known mathematical theorem, the “known” part is in the E somewhere, so that we deduce Pr( P_j | E ) = 1. If some P_j is an observation proposition, such as “X_j = (17, 32, … 50)”, then unless there were measurement error, we usually say Pr( P_j | E ) ~ 1.

If we knew that for some k Pr( P_k | E ) = 0, then

    Pr( P_1 P_2 … P_k … P_m | E ) = 0

And therefore

    Pr ( M | E ) = Pr( P | E) = 0,

since M is deduced from P = “P_1 P_2 … P_m”. We have proven the model is false, even though Pr(M|P) = 1 (notice the absence of E). That Pr(M|E) = 0 is not a proof that

    Pr (Y in s | M ) = q

is false. What this means is that probability statements conditional on assuming M is true are themselves true. Since M has no implicit propositions about human computational error, and indeed has the opposite, any statement we deduce from the model is true, assuming the model is true.
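Note that the step from one zero-probability premise to a zero-probability model needs no independence assumption: a conjunction can never be more probable than any of its conjuncts. A small sketch, with made-up premise probabilities:

```python
# Without assuming independence, Pr(P_1 P_2 ... P_m | E) <= min_j Pr(P_j | E),
# since a conjunction is never more probable than any single conjunct.
# One premise with probability 0 therefore forces the joint, and the
# model M deduced from it, to probability 0.
def joint_upper_bound(marginal_probs):
    """Best possible upper bound on the joint from the marginals alone."""
    return min(marginal_probs)

premise_probs = [1.0, 0.9, 0.0, 0.7]  # hypothetical values, for illustration
print(joint_upper_bound(premise_probs))  # 0.0, so Pr(M | E) = 0
```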

Yet, of course, if the model is false, because we know based on everybody-accepts knowledge E that P is false, then M must be false, too. The point is that

    Pr (E | sense impressions; deeper truths ) = 1.

Meaning that we just accept E as true, based on our intuitions, or via other axiomatic truths. Of course, if some do not accept E, then they are working with something else; thus it will be no surprise they could come to different judgments of M.

In general, assuming no Pr(P_j|E) = 0, which will likely be the case in most theories under consideration, calculating Pr( P | E ) = Pr( P_1 P_2 … P_m | E ) would be hideously difficult, if it could be calculated at all. Not all probability can be quantified!

The strategy, then, if we want to have some idea of the non-empirical confirmation of M is to attack the problem in pieces, if possible. For instance, we might be able to peel off a set of P_T = (P_a P_b … P_z) that we know conditional on E are true. These might be the mathematical derivations, some observations, some basics of logic, and the like. Then we’ll be left with

    Pr(M|E) = Pr( P_NT P_T | E ) = Pr(P_NT | P_T E) Pr (P_T | E)

and so

    Pr(M|E) = Pr(P_NT | P_T E)

since Pr (P_T | E) = 1. Interestingly, it does not matter how large P_T is: it could contain a million propositions, or only one. If it’s true, it’s true: and that it is true adds nothing to the truth or falsity of the model as long as there are any P_NT in the model! The P_NT are likely logically orthogonal to P_T in any case: any statements in P_NT that are deducible from statements in P_T we can lump into P_T.

The perhaps surprising implication is that piling on true evidence to a model does nothing to help improve its veracity. Neither, then, does removing truths detract from a model’s veracity. This is because

    Pr(M|E) = Pr(P_NT | E).

That means that if P_NT contains just one proposition which gives, say, Pr(P_NT | E) = 0.1, and Pr(P_T | E) = 1 no matter how many propositions P_T contains, then Pr(M|E) = 0.1!
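The arithmetic here is just the chain rule made explicit, with hypothetical numbers:

```python
# Chain rule: Pr(M | E) = Pr(P_NT | P_T, E) * Pr(P_T | E).
# When every proposition in P_T is certain given E, Pr(P_T | E) = 1,
# so only the not-known-true premises matter, however many certain
# truths are piled into P_T.
def model_prob(pr_pnt_given_pt_e, pr_pt_given_e):
    """Pr(M | E) from the two chain-rule factors."""
    return pr_pnt_given_pt_e * pr_pt_given_e

print(model_prob(0.1, 1.0))  # 0.1: a million certain premises change nothing
```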

The joint proposition P_NT will contain all those propositions which we know are not false and we know are not true, conditional on E. So, if we can make a stab at calculating, or approximating, or guesstimating Pr(P_NT | E) we have what we wanted: non-empirical “confirmation”, where “confirmation” is taken in the sense of having less than full proof.

Well, this is just a sketch of how to do non-empirical verification. Real-life efforts will come down to disputes over E. For instance, scholastic versus Humean notions of cause, about potentia and actual versus “potential worlds”. Meaning, I think, lack of agreement on whether theories have been non-empirically verified. After all, how many agreements are there on fundamental “theories” or philosophy, even though we’ve been working at it for thousands of years?

Summary Against Modern Thought: Divine Providence Does Not Exclude Evil



In which the age-old question is answered: If God exists, whence comes evil?


1 Now, from these conclusions it becomes evident that divine providence, whereby He governs things, does not prevent corruption, deficiency, and evil from being found in things.

2 Indeed, divine governance, whereby God works in things, does not exclude the working of secondary causes, as we have already shown. Now, it is possible for a defect to happen in an effect, because of a defect in the secondary agent cause, without there being a defect in the primary agent. For example, in the case of the product of a perfectly skilled artisan, some defect may occur because of a defect in his instrument.

And again, in the case of a man whose motive power is strong, he may limp as a result of no defect in his bodily power to move, but because of a twist in his leg bone. So, it is possible, in the case of things made and governed by God, for some defect and evil to be found, because of a defect of the secondary agents, even though there be no defect in God Himself.

3 Moreover, perfect goodness would not be found in created things unless there were an order of goodness in them, in the sense that some of them are better than others. Otherwise, all possible grades of goodness would not be realized, nor would any creature be like God by virtue of holding a higher place than another.

The highest beauty would be taken away from things, too, if the order of distinct and unequal things were removed. And what is more, multiplicity would be taken away from things if inequality of goodness were removed, since through the differences by which things are distinguished from each other one thing stands out as better than another; for instance, the animate in relation to the inanimate, and the rational in regard to the irrational.

And so, if complete equality were present in things, there would be but one created good, which clearly disparages the perfection of the creature.

Now, it is a higher grade of goodness for a thing to be good because it cannot fall from goodness; lower than that is the thing which can fall from goodness. So, the perfection of the universe requires both grades of goodness. But it pertains to the providence of the governor to preserve perfection in the things governed, and not to decrease it. Therefore, it does not pertain to divine goodness, entirely to exclude from things the power of falling from the good. But evil is the consequence of this power, because what is able to fall does fall at times. And this defection of the good is evil, as we showed above. Therefore, it does not pertain to divine providence to prohibit evil entirely from things.

Notes Not only does Equality not exist, it is not desirable.

4 Again, the best thing in any government is to provide for the things governed according to their own mode, for the justice of a regime consists in this. Therefore, as it would be contrary to the rational character of a human regime for men to be prevented by the governor from acting in accord with their own duties—except, perhaps, on occasion, due to the need of the moment—so, too, would it be contrary to the rational character of the divine regime to refuse permission for created things to act according to the mode of their nature. Now, as a result of this fact, that creatures do act in this way, corruption and evil result in things, because, due to the contrariety and incompatibility present in things, one may be a source of corruption for another. Therefore, it does not pertain to divine providence to exclude evil entirely from the things that are governed.

Notes It follows from this that tyranny, an evil, is when the governor does indeed prevent men from acting in accord with their duties and nature. As we know from increasing practical experience.

5 Besides, it is impossible for an agent to do something evil, unless by virtue of the fact that the agent intends something good, as is evident from the foregoing. But to prohibit universally the intending of the good for the individual on the part of created things is not the function of the providence of Him Who is the cause of every good thing. For, in that way, many goods would be taken away from the whole of things. For example, if the inclination to generate its like were taken away from fire (from which inclination there results this particular evil which is the burning up of combustible things), there would also be taken away this particular good which is the generation of fire and the preservation of the same according to its species. Therefore, it is not the function of divine providence totally to exclude evil from things.

6 Furthermore, many goods are present in things which would not occur unless there were evils. For instance, there would not be the patience of the just if there were not the malice of their persecutors; there would not be a place for the justice of vindication if there were no offenses; and in the order of nature, there would not be the generation of one thing unless there were the corruption of another. So, if evil were totally excluded from the whole of things by divine providence, a multitude of good things would have to be sacrificed. And this is as it should be, for the good is stronger in its goodness than evil is in its malice, as is clear from earlier sections. Therefore, evil should not be totally excluded from things by divine providence.

Notes Dear reader, please contemplate this last argument closely. For this is why we have trials.

7 Moreover, the good of the whole takes precedence over the good of a part. It is proper for a governor with foresight to neglect some lack of goodness in a part, so that there may be an increase of goodness in the whole. Thus, an artisan hides the foundations beneath the earth, so that the whole house may have stability. But, if evil were removed from some parts of the universe, much perfection would perish from the universe, whose beauty arises from an ordered unification of evil and good things. In fact, while evil things originate from good things that are defective, still, certain good things also result from them, as a consequence of the providence of the governor. Thus, even a silent pause makes a hymn appealing. Therefore, evil should not have been excluded from things by divine providence.

8 Again, other things, particularly lower ones, are ordered to man’s good as an end. Now, if no evils were present in things, much of man’s good would be diminished, both in regard to knowledge and in regard to the desire or love of the good. In fact, the good is better known from its comparison with evil, and while we continue to suffer certain evils our desire for goods grows more ardent. For instance, how great a good health is, is best known by the sick; and they also crave it more than do the healthy. Therefore, it is not the function of divine providence totally to exclude evils from things.

9 For this reason, it is said: “I make peace and create evil” (Is. 45:7); and again: “There is no evil in a city which God will not do” (Amos 3:6).

10 Now, with these considerations we dispose of the error of those who, because they noticed that evils occur in the world, said that there is no God. Thus, Boethius introduces a certain philosopher who asks: “If God exists, whence comes evil?” [De consolatione philosophiae I, 4]. But it could be argued to the contrary: “If evil exists, God exists.” For, there would be no evil if the order of good were taken away, since its privation is evil. But this order would not exist if there were no God.

11 Moreover, by the foregoing arguments, even the occasion of error is removed from those who denied that divine providence is extended to these corruptible things, because they saw that many evils occur in them; they said, moreover, that only incorruptible things are subject to divine providence, things in which no defect or evil part is found.

12 By these considerations, the occasion of erring is also taken away from the Manicheans who maintained two first agent principles, good and evil, as though evil could have no place under the providence of a good God.

13 So, too, the difficulty of some people is solved; namely, whether evil actions are from God. Indeed, since it has been shown that every agent produces its action by acting through the divine power, and, consequently that God is the cause both of all effects and all actions, and since it was also shown that evil and defects occur in things ruled by divine providence as a result of the establishment of secondary causes in which there can be deficiency, it is evident that bad actions, according as they are defective, are not from God but from defective proximate causes; but, in so far as they possess something of action and entity, they must be from God. Thus limping arises from the motive power, in so far as it possesses something of motion, but in regard to what it has by way of defect it is due to the crookedness of the leg.

A Failed Argument Against Free Will: Predicting Actions

36 Comments on A Failed Argument Against Free Will: Predicting Actions

It is always hilarious when people rail against free will. They are especially flummoxed that the Common Man believes in free will, and say, “If only people realized their choices weren’t free, they would make better choices.”

The fallacy is usually buried in a long string of propositions, the length of which causes one to forget where one began, which is the premise “actions aren’t free.” Yet if that’s true, then nothing matters, and no choices can ever be made, right or wrong. Indeed, there is no right or wrong: there is nothing. Even your desire that this should not be so is nothing.

Yet these fans of science still believe that better choices would be made if folks knew they couldn’t make choices, and that free will was wholly absent.

This silly argument is not limited to our progressive pals. It’s seen on the right (the Wrong Right), too. Z-man, God bless him, is a frequent proponent. He says “The concept of free will has been essential to Western thought since the Greeks and it is an essential element of Christianity.” True enough, but, Zed, it is essential everywhere if it is true. Which it is.

Not so, he claims. Free will is a “myth” (he uses the word as a synonym for false, as unfortunately many do) because the choices people make are “so easily predicted by behavioral genetics.” As evidence for this, he points us to somebody called Jayman (this makes me B-man, I suppose). “No, You Don’t Have Free Will, and This is Why” insists J. He opens with this bit of hilarity:

Slate recently featured an article written by Roy F. Baumeister, Do You Really Have Free Will? In it, he claims that humans do indeed have free will, something that regular readers will know that I have emphatically argued against.

Why, dude. If people don’t have free will, then there is no reason to argue, emphatically or like a lady, that they don’t. People can’t make better choices if they can’t make choices.

Anyway, Jayman quotes statistics showing “all human behavioral traits are heritable”. By heritable, he does not mean necessarily passed on, but merely sometimes, in certain measure, passed on, where most measurements of behavior are forced quantifications of the unquantifiable. Jayman appears to take weak correlation as complete causation.

It is true, and was always obvious until yesterday, that different races have different distributions of proclivities and behaviors, and that some of these differences are biological, i.e. innate. Thus that race of folks who have different spleens (“the Bajau takes free diving to the extreme, staying underwater for as long as 13 minutes at depths of around 200 feet”) will react differently, on average, than, say, Arabs to being tossed into the drink. But this does not eliminate free will. The words used upon finding oneself dunked are still freely chosen from a conditional subset of words. So free will is conditional, but so what? Most things are conditional, including cause, probability, and the rightness or wrongness of many acts. That right and wrong are sometimes conditional does not mean there is no right and wrong. There is an infinite gap between conditional and determined.

Now most of our bodily activities are given over to automation, including those activities, like walking, where robust and active free will was initially necessary to learn the activities. Where next do you place your foot? At first we think hard about it, but eventually not at all. Eventually there is no free will in each step. But you may, at any moment, decide to take a skip instead of a step. The potential for free will is always there.

If you built a model, using the latest deep-learning, neural-net, massively parallel AI, to predict with near certainty that, on a walk, after I take a step with my left foot I will then take a step with my right, and vice versa, you would not have proved the absence of free will. Nor would you have disproved free will if you hooked an fMRI to me while walking and asked, “When did you choose to use your right foot?”
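The point can be made with a toy predictor, a deliberately trivial stand-in for the AI (the walk sequence here is invented for illustration):

```python
# Toy sketch: predicting the next step from simple alternation.
# Near-perfect accuracy on a habitual walk says nothing about free will;
# it only says the walker has handed the walk over to automation.

def predict_next(last_foot):
    """Predict the opposite foot from the last one."""
    return "right" if last_foot == "left" else "left"

walk = ["left", "right"] * 10 + ["skip"]  # the walker may skip at any moment
hits = sum(predict_next(walk[i]) == walk[i + 1] for i in range(len(walk) - 1))
print(hits / (len(walk) - 1))  # 0.95: high, but not certain; the skip defeats it
```

High predictive accuracy falls out of the automation; the one freely chosen skip is exactly what the model misses.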

The fMRI would (more or less) show that part of the walking automation taking place in the brain, where the choice might seem to come after the “decision”. But this is because this is not a real instance of free will. You have to expend real mental effort to overcome the automation. You’re not really making a choice of step, even though the experimenter put the act in those terms. You might even try to stutter your steps, which is a free act of will, but then you have given the automation the order “stutter steps”, and again you’re not quite exactly making precise choices of each stuttered step. But you did use free will to start the process. The fMRI would not capture any of this.

Same kind of thing happens when you learn a video game. At first you carefully and freely plan which button to press, but after a while automation takes over. A good thing, too, for it frees the mind to think of other things. Like strategy.

This puts into proper context articles like “Decoding the contents and strength of imagery before volitional engagement” by Roger Koenig-Robert and Joel Pearson in Nature: Scientific Reports. They hooked a small group of folks to a video game asking them, in a horribly convoluted process (see Fig. 1), to press a button indicating a choice of what kind of interleaved stripes would appear on a screen. The people practiced, like in a video game, until they got good at it.

A model based on fMRI images was able to predict “choices” made at that game to a good but imperfect degree. The fMRI did not measure, and couldn’t, the process whereby the participants told the automation to do its thing. Free will was not disproven.

You may predict with even greater certainty what your wife will say when you forget to do the task assigned to you for the umpteenth time. But this does not mean your wife does not have free will. No, in order to prove the lack of free will, you need to demonstrate without error the causes of every action. And that will never be possible.

The Power To Kill Without Detection

18 Comments on The Power To Kill Without Detection

Suppose you discover a device which allows you to kill those whom you would. The device’s “batteries” never run out; it will always work and never disappoint. The device is of such excellence that you will never be caught in these killings nor even suspected in the deaths it causes. Nobody knows or will ever discover you have the device, not even after you are dead.

Perhaps the device uses the excess heat produced by global warming to run a cold-fusion zero-point energy quantum death ray, that when switched on tunnels through a wormhole from the device to the victim, and is thus untraceable.

Would you use the device? Would somebody else? Would you, if you could, ensure that this device is destroyed?

The temptation to zap enemies would be strong, perhaps overwhelming. Think of the good you could do with it! Cult leaders gone in the flick of a switch, maniacs with “gender-transforming” knives and chemical-filled syringes given the pronoun “deceased”, Planned Parenthood offices vacated; the world instantly a better, safer place.

This hypothetical scenario belongs in the class of academic “trolley problems”. The classic situation is to suppose you are confronted with a runaway trolley that will cream a group of unaware victims, unless you pull a lever which will redirect the trolley onto another track. The kick is that on this other track is a single person who will certainly be crushed.

Do nothing and let a handful die, or act and kill one. What to do!

Much has been written about this and similar scenarios, but all of the writing shares one thing in common. It’s the same thing it has in common with our Death Machine. It’s mostly irrelevant.

I do not know what I’d do in a real-life trolley emergency. Probably start smacking controls that I know nothing about, hoping that something good would happen. I’d probably derail the train and kill everybody. Or I might want to act but worry that if I touched anything I’d do greater harm. Or I might figure, in the heat of the moment, that surely somebody in the station will see the trolley coming and warn the others to hop out of the way. Or I might try to find a public address system to shout out a warning.

What answer I give now, sitting in the cool of the bar at cocktail hour, where I can puzzle out all my actions in the belief the problem itself is unambiguous, won’t give anybody much insight into what I or anybody would do for real.

The difficulty of creating unambiguous scenarios cannot be overestimated. In the academic trolley problem there are no words about the existence of a public address system. That means people are free to think there is. And thus they are free to think they might use it and save everybody’s lives.

Even if the academic trolley scenario is modified to include “No way of communicating with the people on the tracks is possible before the trolley hits”, it does not mean people asked to consider the modified scenario will believe it. People might say to themselves, “Oh, I’m sure there’s at least a window somewhere nearby.”

People bring all kinds of baggage to these hypotheticals, making the task of the researcher designing them doubly difficult. The scenario itself has to be crystalline, from which there cannot possibly be any deviation.

These scenarios can exist. But only in situations where all fuzziness and the potential for modification from the persons being quizzed can be removed. They work in math, for instance. If x + y = 12, and y = 7, what is x? There is only one right answer. Assuming only integer solutions, naturally. See? Another unspoken premise!

But if you modified the question into a scenario in which you hope to discover the hidden depths of citizens’ mathematical knowledge, you might be disappointed. Such a scenario might be, “You walk into a room with a chalkboard with the following math problem (as above). What number would you write?”

Then you could expect answers like, “42” (from a Douglas Adams fan), “My phone number” from a wag, and so on.

It’s not that nothing can be learned from scenarios. For instance, in the death-ray setup, I’d say I wouldn’t use it. Rather, I probably wouldn’t use it, because why? Because God is watching. That no other person sees would not excuse me from culpability (the same with all my sins!).

That means the real point of these scenarios is the hidden, tacit, or implied premises people bring to them. Learn them and you learn something interesting.

On True And False Theories

7 Comments on On True And False Theories

What makes a theory true? Bad question, that. Conflates too easily why a theory is true with our knowledge whether a theory is true. Both are subjects of great interest, but they are not the same thing.

It’s a hard question, too. Here’s a simpler one: how do we know a theorem is true? Notice the y has been swapped for an em.

Knowledge of true theorems comes from deductions from compound propositions, where each compound proposition is itself, taken as a whole, known to be true—knowledge which itself is based on other, usually simpler, compound propositions, themselves also known to be true, and so on in a chain tied to indubitable propositions, which themselves are known via certain kinds of inductions made from sense impressions. Mathematics, then, is a giant web of true propositions all tied together.
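The chain picture can be shown in miniature. Here is an illustrative fragment in Lean (the proposition names and the theorem itself are invented for the example):

```lean
-- Miniature of the web: a conclusion known true because it is deduced,
-- without gaps, from premises themselves known (here: assumed) true.
theorem chained (P Q R : Prop) (hPQ : P → Q) (hQR : Q → R) (hP : P) : R :=
  hQR (hPQ hP)
```

Each link (hPQ, hQR, hP) stands for a previously established truth; the theorem is the chain tied together.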

Even in math the why is different than the how (we know). We do not know why, for instance, π takes the values it does. Something caused π to be what it is (I discuss this more fully in Uncertainty). Yes, we can know lots of things about π that give hints to its cause, such as π equals this or that infinite series, and isn’t it curious how this infinite series makes use of other fundamental mathematical truths, etc. But why God chose the universe to be such that π took the value it does versus some other (in a continuous infinity of choices) we do not know.

Let’s return to our original question in its epistemological sense: how do we know this theory is true? Well, in just the same way we know this theorem is true. Theories, like theorems, are deduced from compound propositions. We know theorems are true because we learn no mistakes have been made in the deductions and because the propositions on which they are based are themselves true. We check the truth of theories in the same way.

The twist is that, with theorems, we are dealing with strict truth and falsity, whereas with theories we often have uncertainty. Not all the propositions from which a theory is deduced are known with certainty to be true; neither, then, can we know the compound proposition is true, even though we can know the deduction from the assumed-true compound proposition is itself true (supposing no mistakes have been made, which as a compound proposition grows becomes less and less believable; have you seen the computer code for our “best”, say, climate models?).

We’re done. That’s the answer. It’s not yet satisfying, though. Examples are necessary.

Let’s risk a dice example (though I once had a paper rejected in part because the reviewer was understandably sick up to his gills with dice examples). These aren’t real dice. They’re fictional. In fact, (as I do in Uncertainty) let’s make the dice states of an interocitor, a device manufactured on the planet Metaluna. We know—it is true—that all interocitors must take only one of n states. We also know—it is also true—that this is an interocitor before us. We deduce from these two true propositions that this interocitor must take only one of n states.

Our theory, which we deduce from true premises, is that this interocitor will be in state s with probability 1/n (where, of course, s can be only one of the allowed states).

Our theory is therefore true. We know it is true because it is based on a true compound proposition, and because the deduction is valid and sound. The theory is probabilistic, and that it is true means we cannot have better understanding, or make superior predictions, than this theory allows, given only the information of the number of possible states. That point must be stressed, and stressed again.
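A minimal sketch of the deduced theory (the interocitor being fictional, the premises are true by construction):

```python
# The deduced theory: given only "this interocitor takes exactly one of
# n states", probability 1/n attaches to each state s. Nothing in the
# premises licenses any sharper prediction.

def interocitor_theory(n):
    """Return the probability the theory assigns to each of the n states."""
    return {s: 1.0 / n for s in range(1, n + 1)}

probs = interocitor_theory(6)
print(probs[1])             # 1/n for any particular state
print(sum(probs.values()))  # the allowed states exhaust the possibilities
```

The theory says nothing about which state will obtain, only how uncertain we are given the premises; that is all a true-but-limited theory can do.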

The theory is true but limited; it is limited because we do not know why the states will be what they will be, and we cannot predict with perfection. That is, we do not understand the causes of the states. If we did, it would mean we had a whole new set of premises which allow us to deduce the states via the causes. We’d have a second true theorem, or even a true theory.

A true theory therefore does not mean a perfect theory. A true theory is one deduced from true premises, even true observational premises. These are universally true theories. Contrast these with locally true theories. A locally true theory (like any local or conditional truth) posits its own premises, which may merely be suppositions or guesses or even fictions, but where the conclusions are deduced (without error) from those premises.

Most statistical models fall into this category because most are built on ad hoc premises. The models are locally true, and may even be universally true if and only if the ad hoc premises themselves turn out to be universally true. They are only false—but still locally true (assuming no calculation errors)—if it is known that one of the model premises is itself false.

This means the quip, heard everywhere, that “all models are false but some are useful” is itself false. But it also means we have a hint about a land between truth and falsity, where we know a theory isn’t universally true, but where we also know it isn’t universally false. Models and theories can themselves be probable.

How this all works out we’ll save for our discussion of non-empirical confirmation of theories. Coming soon!