William M. Briggs

Statistician to the Stars!


The general theory, methods, and philosophy of the Science of Guessing What Is.

Randomness & God

Reader Omer Abid points us to Serkan Zorba’s article “God is Random: A New Perspective on Evolution and Creationism”, which has concepts of interest to all of us (Abid asked me to look into this not quite two years ago, so you can see I’m a tad behind in my emails).

Regular readers know, and I prove in Uncertainty, that random means unknown. Random is an epistemological concept and not an ontological one: there is no “random force” as there is, say, a gravitational force.

With that, here is Zorba (jumping in about half way down).

Thus I will propound that generation and “understanding” of absolute randomness requires infinite intelligence. I will dare to speculate that true randomness observed in nature is a strong indication, if not the “proof,” of the existence of an infinitely intelligent entity (God). Absolute randomness is a telltale sign of God.

One way of seeing this is as follows. Perfect randomness is when the result of an event is independent of the past and future influences. That means the event is not determined by any physical cause although it transpires in our physical universe, but rather by what I will call a ‘transcause,’ a cause originating beyond our phenomenal level.

When a “wave function” “collapses”, if that is what really happens, it collapses to a specific value. The (conditional) probability, a function of the wave (the conditions), can be calculated that this specific value will result. Now this value, before it results, is only a potential. Some thing actual must actualize this potential and so make the final state an actuality. If Bell is right, we cannot know what this actualizer is; but that it must exist is a truism. It cannot be that nothing actualized the potential, because nothing is not-a-thing, and nothing has no powers. It must be that some thing actual with power to actualize did the actualizing. Zorba will call this a “transcause”, which is as good a name as any, and maybe a better name than most.
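The probability calculation mentioned here is the Born rule: the chance of observing a given value is the squared magnitude of the wave function’s amplitude at that value. A minimal sketch, with invented amplitudes purely for illustration, showing what can and cannot be calculated:

```python
# Hypothetical (unnormalized) complex amplitudes for three possible outcomes
amplitudes = [1 + 1j, 2 + 0j, 0 - 1j]

# Born rule: the probability of each outcome is |amplitude|^2, normalized
weights = [abs(a) ** 2 for a in amplitudes]
total = sum(weights)
probs = [w / total for w in weights]

print(probs)  # the calculable part: a probability distribution over outcomes
# Which single outcome actualizes is exactly what the rule does not say.
```

The rule delivers a distribution over potentials; it is silent on what actualizes any one of them, which is the point being made above.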

Incidentally, Heisenberg spoke in exactly this Aristotelian language when he philosophized about quantum mechanics. Uncertainty has details on this.

I differ from Zorba in calling quantum events “independent.” I deprecate this word in statistics, too, and use instead relevancy. Prior knowledge of some proposition (event) is either relevant or irrelevant to some new proposition. To use “independent” is to say two events are not causally related, and in the case of transcauses (to use his fine word), since we have no idea of the reasoning behind the cause of the first event, we necessarily do not have it of the second. Two events may very well be, in the perspective of the transcause, dependent.
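Relevancy can be put operationally: knowledge of proposition B is relevant to proposition A exactly when conditioning on B changes the probability of A. A toy sketch, with a made-up joint distribution (the numbers are assumptions, chosen only to make the arithmetic clean):

```python
from fractions import Fraction

# Hypothetical joint distribution over two binary propositions A and B
joint = {
    (True, True): Fraction(3, 10),
    (True, False): Fraction(2, 10),
    (False, True): Fraction(1, 10),
    (False, False): Fraction(4, 10),
}

def pr_A():
    """Pr(A): sum over all states where A is true."""
    return sum(p for (a, b), p in joint.items() if a)

def pr_A_given_B():
    """Pr(A | B) = Pr(A and B) / Pr(B)."""
    pb = sum(p for (a, b), p in joint.items() if b)
    return sum(p for (a, b), p in joint.items() if a and b) / pb

print(pr_A())          # 1/2
print(pr_A_given_B())  # 3/4
# B is relevant to A because conditioning on it changes the probability
print(pr_A_given_B() != pr_A())  # True
```

This framing makes no claim about causes: relevance is a statement about our knowledge, which is the distinction being drawn against “independence.”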

Furthermore, the independence of such random behavior of the past and future influences—a sort of memorylessness—is, I assert, indistinguishable from having a timeless omniscience, as the knowledge of the past and the future must really be known to truly render a correlationless behavior. Thus the introduced ‘transcausality’, by virtue of its having infinite computational wherewithal, implies the existence and intervention of a metaphysical and categorically-different intelligence, which I will name ‘transintelligence’.

‘Transcausality’ necessarily implies non-locality, which is a fundamental feature of quantum mechanics. Furthermore, the discontinuous and seemingly non-algorithmic character of wavefunction collapse also dovetails well with the idea of ‘transcausality.’

Well, we cannot claim “memorylessness”, especially if we’re going to, as Zorba does, equate transcausality with God. And we have to be careful about correlation, too. If we use it in the sense of relevancy, we’re on solid epistemological ground; but as lacking-causal-connection, we are not. (Besides, statisticians have the bad habit of speaking of correlation as if it only involves straight-line, undefined—perhaps causal, perhaps not—“links”.)

Non-locality, of course, applies to our material world. Since God is at the base of all existence, the First and Sustaining Cause of all (see the beginning of this series), Zorba’s suggestion makes sense. God is not here now, and there later. In a crude analogy, if you think of the universe right now emanating from a single point, a singularity, all points and all times are present to this singularity at once, which puts this singularity in the perfect time-place to be a (the) transcause.

I’ll skip over the bits about how our intellects work, which brings up many subjects, such as induction (such as also discussed in Uncertainty).

I thus posit that the information-laden perfect randomness observed in nature at the microscopic level entails the existence of an “oracle,” a transintelligence, namely, an omniscient being. To further identify this Being with God–who is conceptually defined as omniscient, omnipotent, and morally perfect–is not facilely accomplished, albeit such identification is not uncommon[4].

The transintelligent being inferred in this article must be omniscient and omnipotent due to the proposed ontological (creation/selection of quantum events) and epistemological (information-theoretic nature of the irreducible randomness of the quantum world) connection. Linking omniscience/omnipotence to moral perfection, as assumed or done in various forms of ontological argument (e.g., in Plantinga’s modal argument[5]), is beyond the scope of this article[6].

(About Plantinga’s version of an ontological argument, click here; and don’t miss the comment by Paul Brandon Rimmer.)

Zorba’s kicker is this: “If God is, by definition, infinite, absolute and singular, then, generally speaking, in what other pattern will a finite being—such as a human being—perceive Him other than randomness?”

This is far from a proof, though I agree with Zorba’s aim. God is “mysterious” in the sense that we do not know why this wave “collapsed” to this point. Of course, this hinges on the absolute correctness of quantum mechanics as it is now known. If, say, next week string theorists finally convince the world they know of what they speak, then would Zorba’s argument be weakened? Probably not, because (as far as I understand it) string theorists have no answer to what is actualizing the potentials of strings, either.

Headline: We Used Terrible Science to Justify Smoking Bans. Amen: We Did

When they used to tell me I would shorten my life ten years by smoking, they little knew the devotee they were wasting their puerile word upon—they little knew how trivial and valueless I would regard a decade that had no smoking in it! —Mark Twain

Flabbergasted is the word we need. Or, if you hail from the Land Without Combs, gobsmacked. Taken aback—taken way back—and floored make acceptable substitutes.

For these words describe the emotion one experiences while reading “We Used Terrible Science to Justify Smoking Bans” in the magazine Slate, which is not on anybody’s list of traditionalist or even moderate publications.

All I want to do here is point to the article, which is long and has a wealth of observations. Including, if you can believe it, this one at the end.

While science can inform, though not fully determine, the boundaries of where people are allowed to smoke, the debunking of the previous decade’s heart miracles should provide some grounds for humility.

This is right. An admission that scientism is not the way, and another admission of previous, wild over-confidence, a state brought about by the misuse of statistics.

There are people who strictly deprive themselves of each and every eatable, drinkable and smokable which has in any way acquired a shady reputation. They pay this price for health. And health is all they get for it. How strange it is. It is like paying out your whole fortune for a cow that has gone dry. —Mark Twain

The story in brief: there was weak statistical evidence that second-hand smoke caused heart disease. But weak second-hand evidence provided by statistics cannot discern cause. The evidence that was once thought strong was gradually whittled down until it became clear that second-hand smoke—such as from a man smoking a cigar on a windy beach—was not going to kill scores of women and children.

Yet puritanical “activists” wanted smoking banned altogether. Most of these tolerant, freedom-loving activists were not on the right of the political spectrum. Consider that the same people who wanted to ban “second-hand” cigarette smoke generally supported smoke from other substances. A sort of political tremor and mini-moral panic swept the land, and smoking was banned everywhere, even where it couldn’t possibly do any harm, like in parks and on beaches. It became so idiotic there was even talk of “third-hand” smoke. Yes, really.

Under the heady effects of banning—I mean the bureaucratic satisfaction of non-appealable regulations well passed—folks began banning vaping, which produces no second-hand smoke. But it looks like smoking, and appearances count. And once you loose a bureaucracy, nothing but its violent dismantling will cause it to cease regulating.

So here we are, with anti-smoking zealots still with gleams in their eyes, and along comes this Slate article. It is smart money to bet against the article having much good effect, but it is not wrong to hope it does. Here are reasons for that hope:

And now that the evidence has had time to accumulate, it’s also become clear that the extravagant promises made by anti-smoking groups—that implementing bans would bring about extraordinary improvements in cardiac health—never materialized…The updated science debunks the alarmist fantasies that were used to sell smoking bans to the public, allowing for a more sober analysis suggesting that current restrictions on smoking are extreme from a risk-reduction standpoint…

In the paper’s admirably honest commentary, the authors reflected on the reasons that earlier studies, including their own, had overstated the impact of smoking bans. The first is that small sample sizes allowed random variances in data to be mistaken for real effects. The second is that most previous studies failed to account for existing downward trends in the rate of heart attacks. And the third is publication bias: Since no one believes that smoking bans increase heart attacks, few would bother submitting or publishing studies that show a positive correlation or null effect. Thus the published record is likely unintentionally biased toward showing a larger effect than truly exists.
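The first and third mechanisms the authors name can be demonstrated in a few lines of simulation. Below, the true effect of a hypothetical “ban” is exactly zero, yet small samples make individual studies noisy, and keeping only the studies that appear to show a decline (publication bias) leaves a published record that “shows” one. All numbers are invented for illustration:

```python
import random

random.seed(42)

def run_study(n):
    """Simulate one small before/after study where the ban truly does nothing.
    Returns the apparent percent change in heart-attack rate."""
    before = sum(random.gauss(100, 15) for _ in range(n)) / n
    after = sum(random.gauss(100, 15) for _ in range(n)) / n
    return (after - before) / before * 100

# Small samples (n=10) make each study's result very noisy
studies = [run_study(n=10) for _ in range(1000)]

# Publication bias: only results that look like "the ban worked" get submitted
published = [s for s in studies if s < -5]

print(sum(studies) / len(studies))      # near 0: the honest average
print(sum(published) / len(published))  # well below 0: the "effect" in print
```

No study here measured anything about cause; the “effect” in the published subset is manufactured entirely by noise plus selection, which is the paper’s own diagnosis.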

It goes happily on. But allow me to remind you that these studies cannot discover cause. It is always an outside assumption about what is causing the noticed reductions and increases. Why? Because the thing said to be doing the causing is never measured on individuals! Repeat that out loud, to yourself and to the nearest stranger. Asserting cause in these cases is always by fiat or direct assumption—which is cheating.

Raise your hand if you remember the epidemiologist fallacy. Here’s an example with PM2.5, which is also said to be a deadlier killer than that shark in Jaws. Read more about the fine subjects in Uncertainty: The Soul of Modeling, Probability & Statistics.

Induction & Essence

Not that kind of induction!

Suppose we observe a raven. It’s black. We see a second, also black. And so on for a few dozen more. We reason, or rather we argue with ourselves, “Since all the many ravens I’ve seen have been black, the next raven I see will be black.”

There are seeming problems with this self-argument, this induction-argument. It appears to be invalid since, as is probably obvious, it might be that a non-black raven, perhaps even an albino raven, exists somewhere. And if that’s true, then the next ravens I see might not be black. Also, the argument is incomplete—as written, though not as thought. As thought, it contains the implicit premise “All ravens are the same color.” That makes the entire argument: R = “All ravens are the same color and every raven I have seen was black; therefore the next raven I see will be black.” That argument is valid.

Therefore, it is a local truth that “The next raven I see will be black” given those premises. We are back to the same kind of situation as when we discussed Gettier problems. What is our goal here? Is it to assess the truth or falsity of the premises? Or to make predictions? Given the premises are true, then it necessarily follows we will make flawless predictions.

Now “every raven I have seen is black” is true (I promise), so the only question is “All ravens are the same color.” Where did that arise? That was an induction-intuition, arising from the judgment that having black feathers is the essence of being a raven, or at least part of the essence. If this judgment is true, if having black feathers is essential to being a raven, then this premise is also true and the conclusion to R follows.

The crux is thus the step, i.e. the induction, from the observations to an understanding of what it is to be a raven. But white ravens have been observed, and it is said (by biologists) that these suffer from a genetic defect. A defect is thus a departure from the “norm”, from what is expected, and what is expected is the form given by the essence.

With this in mind we can fix the argument. R’ = “All the ravens I’ve seen have been black and it is the essence of ravens to be black; therefore the next raven I see which is properly manifesting its essence will be black.” This is a valid argument, and sound if indeed, as induction tells us, ravens having black feathers is part of the essence of being a raven.

Some people have mistakenly identified features of things thought to be essential but which were instead accidents. It is not for instance essential that swans have white feathers; some have black. But because mistakes are made in the induction of essences does not prove that inductions are of no use, nor does it prove things do not have essences. Many people make mistakes in math—surely more than make mistakes in inductions of essences—yet we do not say math is a “problem”, where that word is used in its modern philosophical sense as an unresolved, unresolvable or paradoxical question; and we do not say math is invalid and not to be trusted. We do not seek for alternatives to math that explain how it could possibly be, given that some have erred in the calculation, that 1 + 1 = 2. We are not mathematical skeptics. Yet the mere possibility of mistake in induction is enough, for some, to cast doubt on the whole of induction.

Induction, as outlined in this must-have book, comes in various flavors. The kind of induction that extracts essences is not the same as the kind of induction that is statistical and that lets us make empirical predictions.

We did this before: we know via one kind of induction that dogs essentially (and I do not mean this word in its more-or-less connotation, but in the rigorous sense) have four legs, but we know via statistical induction that some dogs do not fully evince this essence. The kind of prediction we wish to make varies with the type of induction we have in mind. If we want to know via the essential-induction whether all dogs have four legs, the answer is always yes, since it is essential for dogs to have four legs. But if we want to know via statistical-induction how many dogs have four legs in some certain situation, then the answer will be different, and will instead be a counting of the departures from essence. Read the linked article for more.

The Big Bang, Eternal Inflation & Many Worlds

We’re back to our Edge series of ideas scientists wish more people knew about. Today is John C. Mather and the Big Bang.

Mather isn’t pleased with popular conceptions.

What astronomers actually have observed is that distant galaxies all appear to be receding from us, with a speed roughly proportional to their distance…[W]e can get the approximate age of the universe by dividing the distance by the speed; the current value is around 14 billion years. The second and more striking conclusion is that there is no center of this expansion, even though we seem to be at the center. We can imagine what an astronomer would see living in another distant galaxy, and she would also conclude that the universe appears to be receding from her own location. The upshot is that there is no sign of a center of the universe…A third conclusion is that there is no sign of an edge of the universe, no place where we run out of either matter or space…The actual universe appears to be infinite now, and if so it has probably always been infinite. It’s often said that the whole universe we can now observe was once compressed into a volume the size of a golf ball, but we should imagine that the golf ball is only a tiny piece of a universe that was infinite even then. The unending infinite universe is expanding into itself.
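Mather’s arithmetic can be checked directly. Since Hubble’s law says recession speed is proportional to distance, dividing distance by speed cancels into the reciprocal of the Hubble constant; taking H0 ≈ 70 km/s/Mpc (a typical round value, not given in the article) yields roughly 14 billion years:

```python
# Hubble's law: v = H0 * d, so the naive expansion age is t = d / v = 1 / H0
H0 = 70.0                      # km/s per megaparsec (a typical round value)
KM_PER_MPC = 3.086e19          # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

age_seconds = KM_PER_MPC / H0  # (km/Mpc) / (km/s/Mpc) leaves seconds
age_years = age_seconds / SECONDS_PER_YEAR

print(age_years / 1e9)  # ~14 billion years, matching Mather's figure
```

This is only the crude “constant-speed” estimate Mather describes; the accepted age folds in how the expansion rate has changed, but lands in the same neighborhood.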

Consider the idea of the multiverse coupled with eternal inflation. Inflation helped propel the “big bang”, that initial golf ball, into the roomy and expanding universe we see around us today. The idea of eternal inflation is that these golf balls are everywhere, popping into existence and swelling into local universes in their own right.

There isn’t any way to do justice to all the views and variations of multiverses in this short post, so I will comment on only one aspect, Max Tegmark’s second Level of multiverses. These are universes which have different parameters, or different physical constants. The first Level of multiverse is the same as ours, run by the same physics, but each has different initial conditions from whatever conditions existed at the start of ours. How these initial conditions are chosen, and why ours got the values it did, except by reference to anthropic principles, is never specified—for the very good reason that nobody knows anything about how the initial conditions were caused, except by some hand-waving about quantum mechanics. Nobody knows how any quantum mechanical result is specified. We do not know what causes QM events, so we cannot know why our universe had the initial conditions it did.

The multiverse is another hypothesis to solve the peculiarities of quantum mechanics, or rather move them back one level so they seem to disappear. Eternal inflation comes from the relativity side of things. Now in this universe (the one out your window), there exist certain physical reactions which physicists have described using parameterized equations. The parameters are not known but estimated; they are hypothetical, meaning they might be wrong. That is, it might be that there are no free parameters, and that the equations physicists have proposed are mere estimations of the universe’s true forces. It might be, for instance, that the descriptions of motion and change are entirely deducible from first principles (such as the principle of non-contradiction).

But suppose arguendo the parameterized equations are correct. The values of the parameters—as must the equations themselves!—have to come from somewhere. They must be chosen; a causal mechanism must exist which “assigns” the values (and causes the equations). It might appear that it is a solution to quantum ambiguity to say that a different universe is created which takes each possible value of parameters. Since parameters are assumed (there is no first-principles proof) to be continuous, the number of other universes is thus infinite, with the power of the continuum. That’s a lot of universes!

How? How are the parameters decided? Decided as in caused to be?

We earlier critiqued Tipler’s interpretation of Everett’s Many Worlds, which is a kind of multiverse. Readers will recall I did not buy the physical interpretation of Many Worlds which insisted upon infinite upon infinite upon infinite et cetera ad infinitum ad dudem literum (or whatever the Latin is for “Dude: literally”), and instead favored the epistemological, i.e. probabilistic, view. What’s fascinating is that Leonard Susskind and Raphael Bousso claim that, under certain conditions, Everett’s Many Worlds matches, or is, the multiverse. Or so says somebody at MIT Technology Review.

The author of that article says what is often said, “The reason many physicists love the many worlds idea is that it explains away all the strange paradoxes of quantum mechanics.” It does not. Neither Many Worlds nor multiverses does away with the peculiarities of QM: they simply push them back one or more levels, so that they seem to go away. Re-read the critique of Many Worlds to see why this was so there. Here, in multiverses, there is no solution to QM in saying an infinite number of universes are created with different parameterizations, because there is nothing that says which parameterization went where, how QM knew about all those parameterizations, or how it had the causal power to make the distinctions. It is true that in this universe we can say, “We’re just one of many, so QM is not strange.” But when pictured as a whole, QM is still strange.

About the multiverse-Many Worlds equivalence, the article says:

But Susskind and Bousso say there is a special formulation of the universe in which [experiments about other universes are] possible. This is known as the supersymmetric multiverse with vanishing cosmological constant.

If the universe takes this form, then it is possible to carry out an infinite number of experiments within the causal horizon of each other.

Now here’s the key point: this is exactly what happens in the many worlds interpretation. At each instant in time, an infinite (or very large) number of experiments take place within the causal horizon of each other. As observers, we are capable of seeing the outcome of any of these experiments but we actually follow only one.

Bousso and Susskind argue that since the many worlds interpretation is possible only in their supersymmetric multiverse, they must be equivalent. “We argue that the global multiverse is a representation of the many-worlds in a single geometry,” they say.

They call this new idea the multiverse interpretation of quantum mechanics…

But what this idea lacks is a testable prediction that would help physicists distinguish it experimentally from other theories of the universe. And without this crucial element, the multiverse interpretation of quantum mechanics is little more than philosophy.

That may not worry too many physicists, since few of the other interpretations of quantum mechanics have testable predictions either (that’s why they’re called interpretations).

Again, you’ll have to review, but in Many Worlds the problems of how the splits happen, and of how they appear to require infinite power, do not disappear. And it is still the case that we can only follow one of the Many Worlds at a time: the one we’re in (and recall you cannot split, because you are part intellect and will, and these are not made of splittable stuff). This is why there aren’t and can’t be any testable predictions.

This makes the duo’s idea not “little more” than philosophy, but precisely philosophy. And given the spiritual nature of our makeup, incomplete or wrong philosophy.


© 2017 William M. Briggs
