**Ontology does not recapitulate epistemology**

Let A be some apparatus or experimental setup: the things and conditions which work towards producing some effect P, where P is some proposition of interest. For example, A could be that milieu in which a coin is flipped, a milieu which includes the person doing the deed, the characteristics of the coin, the physical environment, and so forth. P = “A head shows.” A theory of physical chance might offer Pr(P | A) ≡ *p*, where *p* is considered a *property* of the system (I mark physical probability with the equivalence relation).

Few deny that the logical probability of P, perhaps also given A, would have the same value, but the logical probability is properly acknowledged as epistemological, whereas the equivalence is seen as a physical essence or quality in the same way length or color is. Indeed, some (like Richard Johns, Howson & Urbach) say Lewis’s so-called Principal Principle must be interpreted to state that the (logical or subjective) probability should equal the physical chance. Many authors speak of credence-chance functions to avoid use of the word probability, but I have no scruples about it; thus in my notation: Pr(P | Pr(P | A) ≡ *p*) = *p*.

The conditions A are not seen as causative *per se*, but only contributing to the (efficient, ontological) cause of P or not-P. What *actually* causes P? Well, *p*: the probability itself. Nobody says that outright, but that is what is logically implied by physical chance. This *p* does not act alone, of course, but requires the catalyst of A: A allows *p* to operate as it will, sometimes this way, sometimes that, or, as in some quantum theories, in one of an infinite number of ways. Just *how* *p* rises from its hidden lair and operates, how it chooses which cause—which side of P, so to speak—to invoke, is a mystery, or, as is again sometimes claimed in quantum theories, *there is no cause of P and *p* is its name*.

**Update** Based on some Twitter conversations I think the main point is getting lost. That point is: *probability cannot be a cause.*

The efficient cause of an event cannot be mathematics or logic, a claim with which any but strict idealists would agree. And since probability is logic, at least sometimes as physical-chance theorists admit, probability-as-logic cannot be a cause. Since it makes no sense to say “nothing caused P” (or not-P), therefore since something must have been the efficient cause of P, in some situations chance itself must be the cause or partial cause. Fortuna herself, still meddling in human affairs!

**More coins**

If we knew, for example, the initial conditions of the coin flip, we could predict with certainty whether P is true. This knowledge is not the cause. Neither are the initial conditions the cause. The equations with which we represent the motion of the spinning object are not the cause; these beautiful things are mere representations of our knowledge of part of the cause. No: it is the forces on the coin itself and on its handler, all depending on the environment, which are the cause.

It is obvious enough that because a given man does not know the cause of some effect, it is false that therefore the effect has no cause. Likewise, because two men do not know the cause, it is false that therefore the effect has no cause, and so on for all men. Ignorance itself cannot be a cause.

Something caused P to be ontologically true or false. If A were a complete description of the coin flip then *p* would be 0 or 1 and no other number. We would *know* whether P. In everyday coin flips we don’t know what the outcome will be, but that’s because our premises are limited to something less than A. But *some* complete A does exist. And that implies that physical chance is real enough, but it only has extreme probabilities (0 or 1).
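The claim that a complete A pins the outcome can be sketched with a toy deterministic flip model (a hypothetical, much-simplified Keller-style calculation; the function name, launch speed, and spin rate are all illustrative, not from the post):

```python
import math

def coin_outcome(v, omega, g=9.81):
    """Toy deterministic coin flip: given the complete initial
    conditions (launch speed v in m/s, spin rate omega in rad/s),
    the outcome is fully determined; its 'probability' is 0 or 1."""
    t_flight = 2 * v / g                        # time aloft, back to hand height
    half_turns = int(omega * t_flight / math.pi)
    return "H" if half_turns % 2 == 0 else "T"  # coin starts heads-up

# With A fully specified, identical conditions always give the identical result:
print(coin_outcome(2.5, 40.0))  # prints H every run: mechanics, not p, is the cause
```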

In everyday coin flips we have, or should have, the idea that the conditions change from flip to flip. There is a complete A in each case, but that A changes, and whatever is causing P or not-P is dependent on these changes. Our knowledge of these changes might be minimal or nonexistent, which is why we use coin flips to make decisions, but our ignorance is nothing to the causes operating on the coin. The study of chaos becomes instructive: sensitivity to initial conditions is a statement about A, not *p*.
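The chaos point can be illustrated with the same toy model (again a hypothetical sketch with illustrative constants): a small change in A, here the spin rate, flips the outcome, and averaging over our ignorance of that spin rate recovers the familiar 1/2.

```python
import math

def coin_outcome(v, omega, g=9.81):
    # Same toy deterministic model: the outcome is fixed by initial conditions.
    t_flight = 2 * v / g
    return "H" if int(omega * t_flight / math.pi) % 2 == 0 else "T"

# Sensitivity to initial conditions: a small change in spin rate flips the result.
print(coin_outcome(2.5, 43.0), coin_outcome(2.5, 43.3))  # H T

# Averaging over our ignorance of omega gives the familiar 1/2 --
# an epistemological 1/2, not a property of any single flip:
outcomes = [coin_outcome(2.5, 30 + 0.01 * k) for k in range(20000)]
print(round(outcomes.count("H") / len(outcomes), 2))  # roughly 0.5
```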

**And for something completely different**

And now for the quantum elephant, the precinct in which many are happy to abandon causality. Didn’t Bell prove there are no “hidden variables,” and that therefore either *p* must be a cause or else there is no cause? Well, since Bell’s arguments are probabilistic, and since (or assuming) probability is not ontological, then Bell’s theorem is a statement about the limitations of our knowledge (in a certain quantum setup) and not a proof that causality stops. Anyway, where does A stop in a quantum experiment? Given the lack of locality, isn’t it the whole universe?

How about Everett’s Many Worlds? This says that when the wavefunction of each and every object “collapses” (if it “collapses”), it does so across “many worlds,” such that each possible value of the “collapse” is realized in one of these worlds. The number of “worlds” thus required by this theory since the beginning of the universe is so large that it rivals infinity, especially considering that wavefunction equations are typically computed on a continuum. Even if this theory were true, and I frankly think it is not, it doesn’t change a thing. Many Worlds does not say, and cannot say, why *this* wavefunction “collapsed” to *this* value in *this* world.

This is already too long, but the same line of criticism can be offered for the so-called multiverse and modal realism.

**Update** Here’s another note on the Principal Principle (as much as I love puns, I would ban them from philosophical and scientific names: too much levity is not a good thing). But if you really like puns: if the logical probability of P given the evidence of its physical probability is the same as the physical probability, then Pr(P | Pr(P | A) ≡ *p*) = *p*. Actually: Pr(P | Pr(P | A) = *p*) = *p*, where the *logical* probability of P equals *p* given the *logical* probability of P given A equals *p*. In other words, there is nothing at all special about the Principal Principle; plus, it assumes what it seeks to prove, i.e. that physical chance exists.

Categories: Philosophy, Statistics

Briggs, “The efficient cause of an event cannot be mathematics or logic, a claim with which any but strict idealists would agree.” So where does this leave the cosmological argument?

I think that I will have to add quantum mechanics to natural selection in the visceral reaction category.

Scotian: It leaves the cosmological argument right where it has always been argued from – the efficient cause is God. No?

Scotian,

It leaves the cosmological logic sittin’ pretty, as Nick suggested.

I’d really like to have your thoughts on the epistemological v. ontological interpretations of Many Worlds and the Multiverse. I am, quite quite obviously, not an expert. But from my inexpert readings, I don’t see how these proposed solutions are solutions at all. They just, it seems to me, push the problem back one level deeper.

Briggs, since you have presented the cosmological argument as a logical deduction similar to Protagoras theorem then we do indeed have a problem or at least a contradiction between your two statements. Relabeling as God to hide the logical nature of the deduction seems like obfuscation to me.

“epistemological v. ontological”: I am not an expert on QM interpretation either but I think that these two quoted words should be banned in polite conversation. 🙂 I don’t think that the many worlds and the multiverse, which are two very different things, are meant as solutions. What problem do you think needs a solution? Is it that you want a classical description of quantum mechanics? I have never heard anyone claim that QM is acausal, but maybe it is the unmoved mover that you are looking for.

Scotian: “Briggs, since you have presented the cosmological argument as a logical deduction similar to Protagoras theorem then we do indeed have a problem or at least a contradiction between your two statements.”

Protagoras theorem? Do you mean in the sense of relativism, “Man is the measure of all things, of the things that are that they are, of the things that are not that they are not.”

“epistemological v. ontological”

Knowledge and existence, or knowing and being. I agree, much simpler.

And now, our resident exploding penguin.

Briggs: “It leaves the cosmological logic sittin’ pretty, as Nick suggested.”

And: “Since it makes no sense to say ‘nothing caused P’ (or not-P), therefore since something must have been the efficient cause of P, in some situations chance itself must be the cause or partial cause.”

It’s an elegant argument. Here is another I’ve seen:

1. Every finite and contingent P has a cause, P0.

2. A causal loop cannot exist.

3. A causal chain cannot be of infinite length.

4. There must be some P for which P0 is undefined.

“The conditions A are not seen as causative per se, but only contributing to the (efficient, ontological) cause of P or not-P. What actually causes P? Well, p: the probability itself. Nobody says that outright, but that is what is logically implied by physical chance.”

Perhaps that’s how the law of the excluded middle became so-named. p for P and not-P both equal 1.

Brandon, “Protagoras theorem? Do you mean in the sense of relativism …”

Actually, I was just referring to right triangles. Your quote is too ontological for me.

Scotian: That’s how I first read it, but when geometry failed me I reviewed your spelling. Only later did Pythagoras help, via Euclid, and now suddenly Gödel’s first incompleteness theorem is starting to make a little more sense. Funny things happen on the way to the otology clinic …….

Scotian (cont.):

The application of Pythagoras led me to Euclid since it has been shown that Euclid’s fifth postulate, the parallel postulate, cannot be proven using only the preceding four. This I accept on faith. It’s my understanding that the parallel postulate can be proved if one adds additional axioms. This I accept because I’ve cooked up a few of my own axioms and done it to my own satisfaction.

Now comes Gödel who states that “all consistent axiomatic formulations of number theory include undecidable propositions.” http://mathworld.wolfram.com/GoedelsIncompletenessTheorem.html

In other words, if you can prove that some theory is consistent, it must contain at least one unprovable axiom (statement). In the vernacular of Innertoob poleaxes: “Hah! That’s an *assumption*! Argument fails!” But if one considers the self-evidence of existence (go ahead, prove to yourself that you don’t exist), it can lead to some … interesting … discussions. Some say enlightened, even.

Protagoras may have some application here, which is why I asked. But it might divert, so I screech now to a full stop.

Brandon,

Thus we see the dangers of weak spelling skills. On the other hand Protagoras is strangely appropriate as shown here: http://www.ancient.eu.com/article/61/

Scotian:

For me it was poor reading but good rhyming, which I consider a useful mistake on my part. Any reason to pun is a good one.

“Whether a room is objectively cold, then, can never really be known since the experience of being cold is entirely subjective. This same claim was extended to knowledge of the gods, ‘Concerning the gods, I have no means of knowing whether they exist or not or of what sort they may be. Many things prevent knowledge including the obscurity of the subject and the brevity of human life.'”

Good, so Protagoras can be directly applied to the topic. I was backing into it as Briggs has this week; my main pitstop was at the (non)objectivity of morality and ethics via free will. Either direction resolves to one of two conclusions: God must be both good and evil. If not-God, Man must be both good and evil. From there I tinker with objective/subjective.

Briggs, I am certainly no expert either, and first I’ll say that overall I very much appreciate this post, but when it comes to Many Worlds, I thought the whole theory is meant to suggest a reality in which “why *this* wavefunction ‘collapsed’ to *this* value in *this* world” is a meaningless question. The issue isn’t whether it answers this question, but whether it successfully dismisses it.

Wait’ll folks realize that William of Ockham was being epistemological and not ontological…. 😀

Even if you were to flip a coin 10 times, and each time it came up heads, each flip would be a unique event. It is the person observing the event that establishes what constitutes similarity by selecting what he chooses to measure, and thus, probability is epistemological and not ontological – but this is not to say that it is subjective and not objective.

Briggs, I’m going to stick to quantum mechanics, since I am not a mathematician nor a logician, but a physicist. It seems to me that you’re waving fairy dust over what is in essence a fairly reasonable notion of probability in quantum mechanics, a frequentist definition of probability, if you will, i.e. a definition in terms of measurements over a large number of samples. To take a simple example, a spin system (S=1) passing through an inhomogeneous magnetic field: when one says the probability is 1/3 that the spin component in the direction of the field is parallel to the field direction, 1/3 that it is perpendicular, and 1/3 that it is anti-parallel, one is saying that if you take a sufficiently large number of measurements, close to 1/3 of those will land on a screen up, 1/3 straight ahead, and 1/3 down. And this is what Bell’s theorem resolves to: different measurement statistics if hidden variables (locality plus hidden variables) apply than if non-locality does.
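Bob’s frequentist reading can be mimicked numerically (a sketch with a made-up sample size; note the simulation reproduces only the statistics, and says nothing about what causes any single spin to land in the channel it does, which is Briggs’s point):

```python
import random

random.seed(42)  # fixed seed for a reproducible illustration

# The S=1 beam splits into three channels: up, straight ahead, down,
# each with chance 1/3. Count outcomes over many identically prepared systems.
N = 300_000
counts = {"up": 0, "straight": 0, "down": 0}
for _ in range(N):
    counts[random.choice(["up", "straight", "down"])] += 1

# Relative frequencies converge on 1/3 per channel:
for channel, n in counts.items():
    print(channel, round(n / N, 3))  # each close to 0.333
```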

Moreover, doesn’t the Conway-Kochen Free Will Theorem show that causality (in some sense) can only be applied in a limited way to quantum systems? That is to say, the results of the physical measurement cannot be totally predicted by past history. (See http://rationalcatholic.blogspot.com/2014/02/do-quantum-entities-have-free-will-and.html).

*Conway-Kochen Free Will Theorem–or see (closer to home): http://wmbriggs.com/blog/?p=11264

Bob,

But there is no problem between a frequency and a probability. Say your evidence is Q = “5 of 10 Frenchmen in this room wear hats and Jacques is a Frenchman in this room.” The probability of P = “Jacques wears a hat” is 1/2 *because* of the relative frequency. This probability did not *cause* Jacques to wear a hat (or not wear a hat) if we eventually see him in one.

Neither can probability in QM *cause* a “collapse” of a wavefunction. Now from what I can glean from Bell, all we know is we don’t, and can’t under the circumstances he outlined, know why a certain quantum system is caused to take the observables it eventually takes. Many Worlds and multiverses attempt to bypass Bell and “prove” causation by saying the quantum systems take *all* possible values. That seems silly to me. And like I said, it doesn’t prove anything. These theories don’t say why the wavefunction collapsed to *this* value in *our* branch/world. Certainly probability didn’t *cause* these values.

I am not saying that I have any idea what the cause is, of course. No fairy dust from me!

Matt, I will agree with you that probability isn’t a “cause” in the logical or, indeed, common sense. It is, however, a description of how things will happen. Science–equations, theories–is not/are not cause(s). They are descriptions/predictions of how things happen/will happen. But that seems so obvious!!

Bob,

Amen, brother, it does seem obvious. But…well, you know the rest.

“I’d really like to have your thoughts on the epistemological v. ontological interpretations of Many Worlds and the Multiverse. I am, quite quite obviously, not an expert. But from my inexpert readings, I don’t see how these proposed solutions are solutions at all. They just, it seems to me, push the problem back one level deeper.”

“Many Worlds” is really a misnomer, arising from Wheeler’s attempt to describe it intuitively. It’s more properly called the Everett interpretation, and simply asserts that there *is no wavefunction collapse* – it’s quantum mechanics all the way up. Everett showed that an observer who was themselves a quantum system following the accepted linear QM rules would experience the *illusion* of collapse to a random eigenstate. The Everett interpretation is in fact completely deterministic – there is no randomness at all in it.

In classical physics, when you hook two oscillators together they will commonly converge on a superposition of correlated oscillations called ‘normal modes’. Mathematically, these are determined as the eigenvectors of the matrix form of the differential equations describing the interaction. The same thing happens with quantum oscillators. When two quantum systems interact, their states become correlated, so that if the system started in a superposition of states, the observer will end up in a superposition of states each corresponding to the observer observing one of the eigenstate outcomes. Because eigenvectors are orthogonal, the quantum states are orthogonal and don’t interact, so none of the superposed observer states can perceive any of the others. Thus, it’s *as if* the universe split into many worlds, in each of which the observer observes only one outcome.
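The normal-mode claim above is easy to check numerically (a sketch with illustrative spring constants, not anything from the comment):

```python
import numpy as np

# Two identical unit-mass oscillators joined by a coupling spring: x'' = -K x.
# The normal modes are the eigenvectors of the stiffness matrix K.
k, kc = 1.0, 0.5          # outer-spring and coupling constants (illustrative)
K = np.array([[k + kc, -kc],
              [-kc, k + kc]])

freqs_sq, modes = np.linalg.eigh(K)  # eigenvalues are the squared mode frequencies
print(freqs_sq)   # [1. 2.]: the in-phase and anti-phase modes
print(modes)      # orthogonal columns: the modes do not mix
```

The orthogonality of the columns is the point of the comment: components of the superposition cannot perceive one another.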

The squared length of each component eigenvector acts like the ‘fraction’ of worlds where that outcome occurs, which gives rise to the illusion of probability. When you make an observation with two equally probable outcomes, you can think of that as there being a continuum of worlds, in half of which you see one outcome and in half of which you see the other. Experienced ‘from the inside’, each individual component of the observer sees only one outcome, but the observer as a whole sees *all* alternatives, simultaneously.

When you take a long chain of similar observations, the observer state splits into 2^n different components, one for each possible sequence, the vast majority of which sequences follow no particular pattern. This ensemble will follow the usual probabilistic axioms, and looks to every observer component like random choice.
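The 2^n branching arithmetic can be sketched directly (illustrative numbers, equal-amplitude case; this only illustrates the bookkeeping, not a full quantum calculation):

```python
from itertools import product
from math import comb

# A two-outcome system with squared amplitudes (a2, b2), measured n times
# (re-prepared identically each time). Each of the 2^n outcome sequences
# carries weight a2^h * b2^(n-h), h counting one of the outcomes; summing
# over sequences recovers binomial statistics -- what each
# observer-component experiences as randomness.
a2, b2 = 0.5, 0.5   # equal squared amplitudes (illustrative)
n = 10

weight_by_h = {}
for seq in product([0, 1], repeat=n):   # all 2^n sequences
    h = sum(seq)
    w = a2 ** h * b2 ** (n - h)
    weight_by_h[h] = weight_by_h.get(h, 0.0) + w

total = sum(weight_by_h.values())
print(round(total, 6))                  # 1.0: the weights exhaust the branches
print(round(weight_by_h[5], 4))         # 0.2461 = C(10,5) / 2^10
print(round(comb(10, 5) / 2 ** 10, 4))  # 0.2461, matching the binomial
```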

It’s called an ‘interpretation’ because there’s no way of proving or disproving it. The point is that linear QM with added wavefunction collapse looks *exactly* the same as linear QM – the collapse has *no* observable consequences. It might happen. It might not. We have no empirical way of telling.

The only guidance we have is aesthetics. Which solution do you find more mathematically elegant? Simple linear QM without the unexplained, vaguely defined, irreversible, non-linear, non-local ‘collapse’ process? Or a *very* large number of alternative copies of ‘you’ that you can’t ever see or sense?

Most physicists ignore the issue as meaningless, one of the unresolvable meanderings of philosophy, and use whichever approach works for them personally. Since the outcome is the same either way, it doesn’t really matter.