Today, since I’m very busy, a small-ish review—mere comments, really—of “Origin of probabilities and their application to the multiverse” by Andreas Albrecht and Daniel Phillips. I got this paper via George Shiber, whom many of you should know. The paper is relevant to my book, which is mere days—nay, hours—from being wrapped up. About the book, more very soon.

Probability suffers from an empirical bias, which is natural enough in the sciences, where probability is used to make predictions. Logic, which is a narrow branch of probability, never suffered the same fate, having been developed before empiricism struck. As such, nobody blinks when confronted with non-empirical examples in logic, but people treat the same examples in probability suspiciously.

Anyway, the authors say D are observables (i.e., data) and T some pertinent theory. They say T “always requires a ‘model uncertainty’ (MU) prior [Pr(T)] that provides a personal statement about which model(s) you prefer.” This isn’t so. We’re sick of the example, but let T = “We have a 6-sided object, etc., etc.” Then Pr(see 6 | T) = 1/6, and we don’t need any uncertainty in T. T is a premise, an assumption, or model. Now this experiment can be empirical or imaginative. If empirical, it works as physics.
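A toy sketch of the point (mine, not the authors’): given the premise T of a symmetric n-sided object, the probability of any named face is *deduced* from T, with no separate “model uncertainty” prior anywhere in sight.

```python
from fractions import Fraction

def prob_face(premise_sides: int) -> Fraction:
    """Given the premise T = 'a symmetric object with n sides, one of
    which must show', the probability of any named face follows
    directly from T by the statistical syllogism: 1/n."""
    if premise_sides < 1:
        raise ValueError("premise must supply at least one side")
    return Fraction(1, premise_sides)

# Pr(see 6 | T with 6 sides) = 1/6; T is assumed, not uncertain.
print(prob_face(6))  # -> 1/6
```

The premise does all the work: change T (say, to a 20-sided object) and the deduced probability changes with it.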

Given T, we predict D. If T is all we have, then unless we see something that T said was *impossible*—not unlikely, *impossible*—then nothing will falsify T and so we stick with T. That is, all non-impossible observations are consonant with T. And this means if T is all we have, then T is all we have. (This is why hypothesis testing, frequentist or Bayesian, is wrong.) Anyway, the authors work only with Pr(D|T).

There is no such thing as Pr(T), but there exists Pr(T | some premises or assumptions) = 1. If we have other premises which say “T or T’”, then based on those premises we can judge Pr(T | premises) = 1 – Pr(T’ | premises); and we might even be able to say Pr(T | premises) = some definite number. But not all probability is quantifiable.
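For the quantifiable case only, the arithmetic is this simple (a minimal sketch of my own, assuming the premises say exactly one of T or T’ holds):

```python
from fractions import Fraction

def pr_T_given_premises(pr_T_prime: Fraction) -> Fraction:
    """Premises: 'T or T'', exclusive and exhaustive.
    Then Pr(T | premises) = 1 - Pr(T' | premises)."""
    if not 0 <= pr_T_prime <= 1:
        raise ValueError("a probability must lie in [0, 1]")
    return 1 - pr_T_prime

# If the premises let us judge Pr(T' | premises) = 1/3, then:
print(pr_T_given_premises(Fraction(1, 3)))  # -> 2/3
```

When the premises do not supply a definite number for Pr(T’ | premises), no such function can be written, which is the point: not all probability is quantifiable.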

Authors next bring up the idea of eternal inflation, which is best popularly described in the book *Many Worlds In One* by Alex Vilenkin. Very crudely, the idea is that infinite pockets of universes continuously pop up and go their own way; communication between universes (most say) is not possible. Obviously, quantum mechanical events take place in ours and, it is said, in the other universes, too (faith matters, even in science); which is to say, QM observables are measured. But we don’t know precisely why. Enter probability.

Here—and everywhere!—we must keep separate ontology and epistemology. There are things and our knowledge of things. Bell’s Theorem and other arguments show that our knowledge of (some) things is limited. But these arguments do not prove the ontology doesn’t exist, which is to say, they do not prove, as some claim, that causation no longer functions. Probabilities are measures of uncertainty: they are not drivers of it.

Authors say:

We believe that in every situation where we use “classical” probabilities successfully to describe physical randomness these probabilities could in principle be derived from a wavefunction describing the full physical situation. In this context classical probabilities are just ways to estimate quantum probabilities when calculating them directly is inconvenient. Our extensive experience using classical probabilities in this way (really quantifying our quantum ignorance) cannot be used to justify the use of classical probabilities in situations where quantum probabilities have been clearly shown to be ill-defined and uncomputable. Translating the formal framework from one situation to the other is not an extrapolation but the creation of a brand new conceptual framework that needs to be justified on its own.

This seems on the right tack. Only since probability is epistemological, there is no such thing as “physical randomness”, except in the sense of “measurements we don’t know the values of.” There’s no hope, that I can see, of writing down even the simple wavefunction of you scanning these words with your eyes; therefore, we’ll always be stuck with locally explanative or determinative models, and not *fully* causal ones.

Locally causal models abound. For instance, the authors cite billiards. “This ball caused this one to move” is a locally causal model, and a good one. But those rolling, colliding balls are governed at base by QM processes (this could be strings or whatever), so in principle a wavefunction could be written and we could make good probability predictions which quantified our uncertainty in the balls’ locations. Only difficulty with the authors’ position is this:

This argument that the randomness in collections of molecules in the world around us has a fully quantum origin lies at the core of our case. We expect that all practical applications of probabilities can be traced to this intrinsic randomness in the physical world.

If by “intrinsic randomness” they mean “intrinsic unpredictability”, then I’m right there with them. But if they mean the principle of causation evaporates at some small scale only to return, magically, at some heretofore undefined larger-scale point, then I’m not.

Also, I do not see the need for a classical “notion” of probability as distinguished from a quantum one. All probability is conditional on whatever premises are fed in, which is a universal notion. Scale doesn’t matter. Nor causality. Probability can make use of causal notions, but it doesn’t need them. We can say Pr(drownings | high ice cream sales) is high without any notion that ice cream is causing drownings. We can say Pr(eight ball in the corner pocket | this layout and this much force hitting the cue ball and this angle) is high or low while understanding we do not have a full physical or causal description down to the QM level. If we knew the QM premises about the billiard table, we could add them and get better predictions, while still knowing we do not have a full causal description.
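The ice cream example can be made concrete with a toy joint-frequency table (the counts are invented for illustration): the conditional probability comes out high with no causal premise used anywhere.

```python
from fractions import Fraction

# Hypothetical monthly counts of (ice cream sales level, drowning occurred).
# Both track summer, so they move together without one causing the other.
counts = {
    ("high", "drowning"): 8,
    ("high", "no drowning"): 2,
    ("low", "drowning"): 1,
    ("low", "no drowning"): 9,
}

def pr_drowning_given(sales: str) -> Fraction:
    """Pr(drowning | sales level), computed purely from the stated
    frequency premises; causation is never mentioned or needed."""
    total = counts[(sales, "drowning")] + counts[(sales, "no drowning")]
    return Fraction(counts[(sales, "drowning")], total)

print(pr_drowning_given("high"))  # -> 4/5
print(pr_drowning_given("low"))   # -> 1/10
```

Swap in better premises (season, swimmers at the beach) and the numbers change; the logic of conditioning does not.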

Authors say “that fundamentally classical probabilities have no place in cosmological theories”. About measurement problems, I’ll say only this. All theories of many worlds, multiverses, or whatever, do not explain why this/our localverse, or whatever you want to call it, takes these states. Probability conditioned on suitable premises, some of which are observational and others logical and metaphysical, allows us to quantify our uncertainty in these states, but it never says what is the ultimate cause either.

We ran long, so I skipped commenting on the “randomness” of the digits of pi. I have a discussion in my book.

December 15, 2015 at 9:01 am

The multiverse to me has always been a slippery attempt to avoid the teleological argument for God in light of modern science. It takes as much blind faith to believe in the multiverse as to believe in God; in fact more, if you look into the intriguing stuff on Boltzmann Brains.

December 15, 2015 at 11:01 am

Looking forward to the book news.

I don’t like the use of the word intrinsic here, even if you changed the phrase to “intrinsic unpredictability”. Is unpredictability really intrinsic (essential to the natural world)? That sounds like a pretty major assumption about the limits of knowledge.

December 15, 2015 at 11:44 am

Mark Citadel:

Yup, that’s pretty much it. Personally, I’ve devoted my physics career to finding ways around the ontological argument. Slippery, slippery ways.

December 15, 2015 at 2:53 pm

Publishers withdraw more than 120 gibberish papers

Conference proceedings removed from subscription databases after scientist reveals that they were computer-generated.

See: http://www.nature.com/news/publishers-withdraw-more-than-120-gibberish-papers-1.14763

Science’s Big Scandal

Even legitimate publishers are faking peer review.

http://www.slate.com/articles/health_and_science/science/2015/04/fake_peer_review_scientific_journals_publish_fraudulent_plagiarized_or_nonsense.single.html

The “trick,” if that’s what it needs to be called, is to write in a style that doesn’t appear like something SCIgen would say.

December 15, 2015 at 4:17 pm

Multiverse theories try to preserve local realism in bizarre ways. Why bother? Because humans feel comfortable with the notion of billiard balls banging together. The belief is that the universe was designed to be comprehensible in a way that humans feel comfortable with due to their terrestrial experiences. The underpinnings of the notion are really as silly as that.

December 15, 2015 at 4:54 pm

Here’s a topic to which Briggs’ expertise, should he so choose (and so choose he should), could be applied: the subject of the paper that started it all:

Abstract: “This paper deals with a typical problem of renewable resources described in terms of an optimal control model. The differences in the analysis of the three cases of concave, linear, and convex utility functions are pointed out and optimal solutions are obtained. It is also demonstrated that the size of the discount rate can determine the optimal policy.”

Title: “The Transylvanian Problem of Renewable Resources”

In Plain English: Can vampires as portrayed in media coexist with humans, or, would the introduction of vampires exterminate humanity?

Relevancy: An analysis on how to do something right, involving a nonexistent topic few will get emotional about, can actually teach something. Nitpicking why or how thus & such got it wrong in their analysis doesn’t make much of a point at all … especially when it’s about a topic that arguably doesn’t exist, or even a topic the original analysts concede might not exist. If one is going to quibble about analyses of non-existences, might as well go all out with something having potential to be a screenplay.

December 15, 2015 at 4:56 pm

Transylvanian problem – link:

http://www.slate.com/blogs/atlas_obscura/2015/12/02/using_math_to_calculate_how_long_it_would_take_vampires_to_annihilate_humanity.html

Lots to critique there…and much room for original contributions. Have a go at it!

December 15, 2015 at 7:56 pm

I put discussions of multiverses in the domain of mathematical metaphysics. There is no way, by definition of multiverses, to verify their existence empirically, so they are not subject to scientific tests. The proposal for multiverses is an obvious cop-out, to avoid the strange fit of natural constants and physical relations, the fine-tuning, that has enabled carbon-based life to exist.

In his books Roger Penrose discusses this fine-tuning (in connection with why the Second Law of Thermodynamics holds). He has a cartoon of “God” (in the old style–robe and beard and all the rest) putting a needle into the very, very small “sweet spot” that defines the initial conditions for the universe. But he still doesn’t believe that God made that choice.

December 16, 2015 at 2:00 pm

Pier review: long walk off a short pier.