William M. Briggs

Statistician to the Stars!


The general theory, methods, and philosophy of the Science of Guessing What Is.

Stream: Hottest Yeah Evah

Stream: The Hottest Yeah Evah! Really? Or yet another example of activism masquerading as science?

Assume for a moment, as the press with triumphant glee is reporting, that 2016 was the hottest year evah! Believe the claim for the sake of argument. Swallow the idea, for at least the next minute, that the media and government really do have your best interests at heart and are reporting the truth, the whole truth, and nothing but the truth about the world’s temperature.

How much hotter than previous years was 2016? Bare your wrist and blow a huh on it from about half a foot away. Don’t blow—stay with me here: this is a genuine scientific experiment—but utter a soft huh so that your breath wafts over your wrist gently. Feel that increase in heat? Well, that boost to your skin was much hotter than the increase supposed to have happened to the atmosphere in 2016.

Here’s a better experiment. You are likely reading this article sitting down. Sense the temperature around your face: it might help to think about your cheeks. Now stand up. Take a second mental reading. Feel the difference? That same tenth of a degree or so change, which was probably imperceptible to you, is about the same as the change in temperature scientists say they measured over the entire globe, including over the salty seas, from last year to this.

Yes. Climatologists gathered measurements from buoys at sea, from thousands of thermometers at airports and other locations, from balloons, even, and then took their average—sort of. That number was then declared as the Official Temperature of Earth for 2016.

The “sort of” is important. Because the places and methods of measurement used in 2016 were not exactly the same as those used in 2015; and those used in 2015 were not the same as those used in 2014; and so on. And those used in, for instance, 1914 are completely different from those in 2014. A century ago, mercury-in-glass thermometers were in a different class than the digital complexities in use today. Too, 100 years ago the places of measurement were few in number. Vast areas of the globe went unmeasured. And at places which were the same, well, thermometers out in the woods in 1914 now have cities grown up around them. Even in modern times, thermometers break and are serviced. Buoys corrode. And so on. Things change….

[Don’t miss the exciting conclusion!]

Go there to read the rest.

Update The point is that NASA’s (or NOAA’s) way of calculating uncertainty about the 0.07 degree increase is wrong, as is detailed in the links at Stream (which link back here to the technical articles about parametric versus predictive uncertainty, remembering all probability is conditional, and so forth). The hubris of thinking we can measure the surface temperature of the earth to the hundredth of a degree!

Update See Bob’s comment below. Main story. My take.

Formal Logic And Probability

One of the arguments is that probability does not extend predicate logic, but does extend propositional logic. The concern is that because predicate logic is formal and propositional logic is not, or not to the same extent, probability is therefore of limited use. This is explained (in classical terms) on David Chapman’s site. I don’t have the space here to rebut everything there with which I disagree (there is no such thing as unconditional probability, for one; believing there is accounts for most of what’s wrong): that would take a book. Which I just happen to have written: Uncertainty: The Soul of Modeling, Probability & Statistics. But we can do a few things here.

The biggest weakness of predicate logic is that, pretty and mathematical as it is, any time you want to use or apply an argument in predicate logic to some real world proposition, you need to “lapse” into propositional logic, which is to say plain English (or whatever language you use for understanding). For instance, in predicate logic you can write:

\forall x [Px \to Qx] \wedge Ps \therefore Qs

which is peachy and correct (some might use different symbols) and formal. It’s the formality that makes it mathematical and which allows it to be manipulated algorithmically. But it is also what kills its usefulness in actual applications.

In the equation, P and Q are predicates and x is a variable. The formality means that we can stick any predicate and any variable into the formula and it should work. One predicate (for P) might be “is a man”, and another (for Q) “is mortal”; a reasonable variable is s = “the man Socrates”. Both can be inserted into the formula to produce, finally in English, “All men are mortal and Socrates is a man, therefore Socrates is mortal.”
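For readers who want to see the formal machinery spelled out, here is a minimal sketch of that schema in the proof assistant Lean (my own illustration, not part of the original argument; the names Entity, Man, Mortal, and socrates are mine):

    -- A sketch (mine, not from the post) of ∀x [Px → Qx] ∧ Ps ∴ Qs,
    -- read with P = "is a man", Q = "is mortal", s = Socrates.
    example (Entity : Type) (Man Mortal : Entity → Prop) (socrates : Entity)
        (h : (∀ x, Man x → Mortal x) ∧ Man socrates) : Mortal socrates :=
      h.1 socrates h.2

The proof term is exactly the pipeline the formalists describe: apply the universal premise to socrates, then feed it the particular premise.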

Formalists will say the conclusion is true because of the schema or form of the predicate-logic formula. The symbols—not the words—are purely formal objects which slide through a rigorously constructed pipeline to the conclusion, just like the quadratic formula provides solutions to quadratic equations (keep this example in mind). In propositional logic, plain common sense will say the conclusion is true because the conclusion shares in the essence of the premises. Yet formalists will complain and say that the propositional logic version of the argument amounts to

M \wedge N \therefore R ,

where M is the proposition “All men are mortal”, N = “Socrates is a man”, and R = “Socrates is mortal.” This, they say, isn’t formal, because the propositions in this formula aren’t “about” anything. They’re floating symbols, so, of course, R doesn’t follow from the conjunction of M and N. How could it? One cannot stick in just any old propositions for M, N, and R and have any hope the argument will produce a true conclusion. The schema itself, formalists say, is invalid (yet it produces the odd true argument).

Aristotelian logic, on the other hand, takes the argument as a syllogism and, partly by virtue of its syllogistic form, and partly from the plain understanding of the words and grammar, sees the conclusion as valid. The argument is also considered sound because of the understanding of intension (this is not a misspelling) of the terms.

Why the hunger for formality? Well, that’s what math is all about and, as such, there is nothing wrong with the goal. But to say all logic should be formal is to claim all thought can be quantified or made into mathematics somehow. And that is the goal of many; think of certain forms of artificial intelligence. There is no proof of that claim; there is only the assurance or hope that it can be so.

But there is bad news for formalists. David Stove proved logic is not formal (I’m quoting from my own article, which in turn draws quotations from Stove’s Rationality of Induction; see the original for details).

An argument is formal “if it employs at least one individual variable, or predicate variable, or propositional variable, and places no restriction on the values that that variable can take” (emphasis mine). Stove claims that “few or no such things” can be found.

Here is an example of formality: the rule of transposition. “If p then q” entails “If not-q then not-p” for all p and for all q.

This is formal in the sense that we have the variables p and q for which we can substitute actual instances, but for which there are no restrictions. If Stove is right, then we should be able to find an example of formal transposition that fails.
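As a purely formal rule over the material conditional, transposition is easy to state and prove; here is a minimal sketch in Lean (my own illustration, not Stove’s). What the examples below test is whether the English “if p then q” can be substituted into it without restriction.

    -- Transposition (contraposition) for the material conditional.
    -- Formally valid; the question is whether plain-English conditionals obey it.
    example (p q : Prop) (h : p → q) : ¬q → ¬p :=
      fun hq hp => hq (h hp)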

First a common example that works: let p = “there is fire” and q = “there is oxygen”, then

    “If p then q” == “If there is fire there is oxygen”.

And by transposition, not-q = “there is no oxygen” and not-p = “there is no fire” then

    “If not-q then not-p” == “If there is no oxygen then there is no fire.”

For an example in which formal transposition fails, let p = “Baby cries” and q = “we beat him”, thus

    “If p then q” == “If Baby cries then we beat him”.

But then by transposition, not-q = “We do not beat Baby”, not-p = “he does not cry”, thus

    “If not-q then not-p” == “If we do not beat Baby then he does not cry.”

which is obviously false. (Stove credits Vic Dudman with this example.)

So we have found an instance of formal transposition that fails. Which means logic cannot be “formal” in Stove’s sense. It also means that all theorems that use transposition in their proofs will have instances in which those theorems are false, unless restrictions are placed on their variables. (It’s worse, because transposition is logically equivalent to several other logical rules; we won’t go into that now.)

It is Stove’s contention that every logical form will have an example where it goes bad, as transposition does.

Now, as I said, some form of Aristotelian logic or of something more propositionally informal and fundamental must take place when we assent that the proposition “Socrates is mortal” follows from the other propositions. It is not the schema that makes something true. Schemas have no power! Things are not made true by mathematical or logical form (this “form” is not the same as the Aristotelian “form”, of course: for “form” here, read “formula”). They are caused to be true by something, all right, but a schema has no causal power.

Go back to the quadratic equation example. It, like all mathematical theorems, has a proved formal structure. But it is not purely formal (in Stove’s sense). The quadratic formula has restrictions. You cannot input matrices into it, for example. The pure formality doesn’t exist because of these restrictions.
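To make the point concrete, the formula can be written with its usual restrictions attached (my wording of the conditions, not a quotation):

x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}, \qquad a \neq 0,\ a, b, c\ \mbox{scalars}.

Drop the restrictions, by feeding it matrices, say, and the supposedly purely formal recipe no longer delivers.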

As said above, when applying predicate calculus to a real-world problem, we always must lapse into propositional logic or plain English. This falling back, as it were, always brings with it restrictions, which is why ordinary discussions aren’t purely formal. The real problem lies in attempting to formalize what ultimately cannot be formalized.

Tipler’s Tipsy Parallel Universes of Quantum Mechanics


We’re back on our Edge series of concepts scientists wish more people knew about. Today’s entry is Frank Tipler’s Parallel Universes of Quantum Mechanics. Tipler:

In 1957, a Princeton physics graduate student named Hugh Everett showed that the consistency of quantum mechanics required the existence of an infinity of universes parallel to our universe. That is, there has to be a person identical to you reading this identical article right now in a universe identical to ours. Further, there have to be an infinite number of universes, and thus an infinite number of people identical to you in them.

Most physicists, at least most physicists who apply quantum mechanics to cosmology, accept Everett’s argument. So obvious is Everett’s proof for the existence of these parallel universes, that Steve Hawking once told me that he considered the existence of these parallel universes “trivially true.”

Hawking also thought it trivially true that philosophy is useless, itself a philosophical judgment. So perhaps we should seek out a more eminent authority.

Anyway, Tipler says “Everett showed that the consistency of quantum mechanics required the existence of an infinity of universes parallel to our universe.” Everett showed no such thing. Quantum mechanics does not need an infinite number of duplicate universes, along with another infinite number of different universes, to be consistent. Everett instead produced a mathematical picture the interpretation of which is up for grabs. Don’t forget: QM is a theory of probabilities, and probabilities aren’t real, i.e. they are not physical entities. The reification of probability in QM is a major problem: see more in this book. I am dubious that “most” physicists buy the interpretation that these parallel universes are real entities and not just parameters in an equation, but I’ve done no survey.

The free will question arises because the equations of physics are deterministic. Everything that you do today was determined by the initial state of all the universes at the beginning of time. But the equations of quantum mechanics say that although the future behavior of all the universes are determined exactly, it is also determined that in the various universes, the identical yous will make different choices at each instant, and thus the universes will differentiate over time. Say you are in an ice cream shop, trying to choose between vanilla and strawberry. What is determined is that in one world you will choose vanilla and in another you will choose strawberry. But before the two yous make the choice, you two are exactly identical. The laws of physics assert it makes no sense to say which one of you will choose vanilla and which strawberry. So before the choice is made, which universe you will be in after the choice is unknowable in the sense that it is meaningless to ask.

To me, this analysis shows that we indeed have free will, even though the evolution of the universe is totally deterministic.

This type of thing leads to exasperation, but proof by exasperation doesn’t count in logic, so we need to take it seriously. Accepting the Many Worlds of Everett, here you are, ready to make a choice. There are (it is said), at the moment, an infinite number of yous standing in line at an infinite number of Baskin Robbins (and they with only 31 flavors!). The universes are identical in every way, down to the quark across the vast regions of space. There are also an infinite number of other universes different in an infinite number of ways.

In your universe you choose, as any sensible person would choose, Moose Tracks. An infinite number of other yous also choose Moose Tracks, and separate infinite yous choose the other flavors. Actually, you don’t choose, since it is quantum mechanics determining that set A of you gets Moose Tracks, set B Chocolate Cherry, set C gets Orange Swirl, and so on. All the choices are filled, and all must be filled. The universes, since the non-choice choices were different, are all now on their own paths, evolving differently. Nobody who eats Moose Tracks acts in exactly the same way as somebody who eats Orange Swirl.

Each time a choice is made, an infinity of universes peel off and wend their own ways. How many choices are made? Oh, many, many. Toss a pebble onto the pavement. Quantum mechanics suggests—this is the formula—that the pebble can take infinitely many end positions, all the way from infinitely over there, to infinitely in that direction. That makes another set of infinite universes pop into existence, to each follow the paths decided by where quantum mechanics puts the infinite pebbles.

That’s a lot of infinities! Infinities upon infinities upon infinities, because stuff is happening all over the place. Think of some remote star and the physical and chemical reactions taking place within. Each reaction in each moment requires another infinity of branching universes. You can’t emphasize enough how many infinities this is, since these reactions are happening already across an infinite number of universes. It’s a lot.

But then, what makes quantum mechanics choose this universe as the one in which I opt for Moose Tracks? Sure, QM makes sure all choices are made. But how? And how does it order the choices? What—what exactly—is driving QM to put what where? Which of the infinite universes gets the pebble at X, and which at X – 17? And why?

Ah. We’re right back to the same problem the original, single-universe QM posed, and the reason for the positing of Many Worlds. How does QM actualize potentialities? How does it select specific outcomes? Nobody knows the answer with ordinary, single-universe QM. Indeed, all we can know is we can’t know (thanks to Monsieur Bell). But something is making the choice, even if we don’t and can’t know what it is. The escape to Many Worlds avoids the question, because that theory says all choices are made. Very well: all are made. But how? The theory still does not say, and cannot say. Nothing has been solved.

So we see, even if Tipler’s interpretation of Everett is right, and there is no, there is zero, observational evidence it is, we still haven’t solved the problem we set out to solve. How does QM choose? All we’ve done is multiply infinities faster than democracies increase budget deficits. Which is supposed to make the problem go away.

Before we go, we owe it to Tipler to present his solution to the so-called Problem of Evil—which vexes both atheist and theist theories: to atheists, there can be no such thing as evil (or good), yet try taking an atheist’s wallet; to theists, an Omnipotent God would seem to preclude evil.

Another philosophical problem with ethical implications is the Problem of Evil: Why is there evil in the universe we see? We can imagine a universe in which we experienced nothing bad, so why is this evil-free universe not the universe we actually see? The German philosopher Gottfried Leibniz argued that we are actually in the best of all possible worlds, but this seems unlikely. If Hitler had never taken power in Germany, there would have been no Holocaust. Is it plausible that a universe with Hitler is better than a universe without him? The medieval philosopher Abelard claimed that existence was a good in itself, so that in order to maximize the good in reality, all universes, both those with evil in them and those without evil, have to be actualized. Remarkably, quantum mechanics says that the maximization of good as Abelard suggested is in fact realized.

Is this the solution of the Problem of Evil? I do know that many wonder “why Hitler?” but no analysis considers the fact that—if quantum mechanics is correct—there is a universe out there in which he remained a house painter.

Oh my. If Everett-like universes exist, not only was there a Stalin, ruthless socialist murderer that he was, but there were an infinite number of other, worse Stalins, some that not only killed millions, but who slaughtered billions. And there must have been one who killed everybody, and not just killed everybody, but who tortured them all to death in the worst possible way. And not only must there have been such a blood-soaked Stalin, there must have been an infinite number of Maos who committed worse crimes. And not only must there have been an infinite number of Stalins and Maos, but there must be—there must be—an infinite number of yous who are worse criminals still!

This is the solution to the Problem of Evil? One doubts.

Deeper criticisms

An irreconcilable flaw of infinite “yous” is that our intellects and wills are not bodies, i.e. not made of physical stuff, and therefore not susceptible to physical forces. There is thus no way to split an intellect since each is unique. Even if you can imagine a way to overcome this, it gives rise to continuity problems.

At this moment in time stands you, ready to make a choice. Forget all other universes and concentrate on the one you are in now, poised. QM makes the choices (however many there are, and this could be an infinite number) and splits the universes. Never mind how. But it must make the splits. It’s not a problem (not really) where these different universes go, but where does the energy come from to make the splits? It must be infinite in extent and infinite in ability. Everything happens instantaneously. Must this Infinite Pool exist outside the universes it is creating? Is it God?

Now there must be a you, the same you, that persists through each split, since intellects are not splittable. Each split must be accompanied by the creation of a new intellect attached to the new physical stuff, including the new bodies that resemble you (and how is that accomplished?). But you yourself must persist.

Bayesian Statistics Isn’t What You Think

A Logical Probabilist (note the large forehead) explains that the interocitor has three states.

Back to our Edge series. Sean Carroll says Bayes’s Theorem should be better known. He outlines the theorem in the familiar updating-prior-belief formula. But, as this modified classic article shows, this is not the most important facet of Bayesian theory.

Below we learn all probabilities fit into the schema \Pr(\mbox{Y}|\mbox{X}), where X is the totality of evidence we have for proposition Y. It does not matter how this final number is computed (if indeed it can be): it can sometimes be computed directly, and sometimes by busting X apart into “prior” and “new” information, or sometimes by busting X apart in ways that are convenient for the mechanics of the calculation. That’s all Bayes’s theorem is: a way to ease calculation in some but not all instances. An example is given below. The real innovation—the real magic—comes in understanding all probability is conditional, i.e. that it fits into the schema. As shown in this talk-of-the-town book.

This post is a modified version of one that was restored after The Hacking. All original comments were lost.

Bayesian theory probably isn’t what you think. Most have the idea that it’s all about “prior beliefs” and “updating” probabilities, or perhaps a way of encapsulating “feelings” quantitatively. The real innovation is something much more profound. And really, when it comes down to it, Bayes’s theorem isn’t even necessary for Bayesian theory. Here’s why.

Any probability is denoted by the schematic equation \Pr(\mbox{Y}|\mbox{X}) (all probability is conditional), which is the probability the proposition Y is true given the premise X. X may be compound, complex or simple. Bayes’s theorem looks like this:
\Pr(\mbox{Y}|\mbox{W}\mbox{X}) = \frac{\Pr(\mbox{W}|\mbox{YX})\Pr(\mbox{Y}|\mbox{X})}{\Pr(\mbox{W}|\mbox{X})}.
We start knowing or accepting the premise X, then later assume or learn W, and are able to calculate, or “update”, the probability of Y given this new information WX (read as “W and X are true or assumed true”). Bayes’s theorem is a way to compute \Pr(\mbox{Y}|\mbox{W}\mbox{X}). But it isn’t strictly needed. We could compute \Pr(\mbox{Y}|\mbox{W}\mbox{X}) directly from knowledge of W and X themselves. Sometimes the use of Bayes’s theorem can hinder.

Given X = “This machine must take one of states S1, S2, or S3”, we want the probability Y = “The machine is in state S1.” The deduced answer is 1/3. We then learn W = “The machine is malfunctioning and cannot take state S3”. The probability of Y given W and X is deduced as 1/2, as is trivial to see.

Now let’s find the result by applying Bayes’s theorem, the results of which must match. We know that \Pr(\mbox{W}|\mbox{YX})/\Pr(\mbox{W}|\mbox{X}) = 3/2, because \Pr(\mbox{Y}|\mbox{X}) = 1/3. But it’s difficult at first to tell how this comes about. What exactly is \Pr(\mbox{W}|\mbox{X}), the probability the machine malfunctions such that it cannot take state S3, given only the knowledge that it must take one of S1, S2, or S3? We might argue that if the machine is going to malfunction, then, given the premises we have (X), the disabled state is equally likely to be any of the three, so the probability is 1/3. Then \Pr(\mbox{W}|\mbox{YX}) must equal 1/2, but why? Given we know the machine is in state S1, and that it can take any of the three, the probability that S3 is the malfunctioning state is 1/2, because we know the malfunctioning state cannot be S1, but can be S2 or S3. Using Bayes works, as it must, but in this case it added considerably to the burden of the calculation. In Uncertainty, I have other examples.
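Here is a minimal numerical sketch of that machine example (my own code, not from the post), computing the answer both directly and by Bayes’s theorem; the two routes must agree:

    # Three-state machine example: direct deduction versus Bayes's theorem.
    # (Illustrative sketch only; not code from the post.)
    from fractions import Fraction

    # X: the machine must take one of S1, S2, S3.   Y: the machine is in S1.
    # W: the machine malfunctions and cannot take state S3.
    states = ["S1", "S2", "S3"]

    # Direct route: given W and X, only S1 and S2 remain, each equally likely.
    admissible = [s for s in states if s != "S3"]
    pr_Y_given_WX_direct = Fraction(1, len(admissible))  # 1/2

    # Bayes's route: Pr(Y|WX) = Pr(W|YX) * Pr(Y|X) / Pr(W|X).
    pr_Y_given_X  = Fraction(1, 3)   # S1 is one of three admissible states
    pr_W_given_X  = Fraction(1, 3)   # disabled state equally likely to be any of the three
    pr_W_given_YX = Fraction(1, 2)   # machine is in S1, so the disabled state is S2 or S3
    pr_Y_given_WX_bayes = pr_W_given_YX * pr_Y_given_X / pr_W_given_X

    assert pr_Y_given_WX_direct == pr_Y_given_WX_bayes == Fraction(1, 2)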

Most scientific, which is to say empirical, propositions start with the premise that they are contingent. This knowledge is usually left tacit; it rarely (or never) appears in equations. But it could: we could compute \Pr(\mbox{Y}|\mbox{Y is contingent}), which is even quantifiable (the open interval (0,1)). We then “update” this to \Pr(\mbox{Y}|\mbox{X \& Y is contingent}), which is 1/3 as above. Bayes’s theorem is again not needed.

Of course, there are many instances in which Bayes facilitates. Without this tool we would be more than hard pressed to calculate some probabilities. But the point is the theorem can but doesn’t have to be invoked as a computational aid. The theorem is not the philosophy.

The real innovation in Bayesian philosophy, whether it is recognized or not, came with the idea that any uncertain proposition can and must be assigned a probability, not in how the probabilities are calculated. (This dictum is not always assiduously followed.) This is contrasted with frequentist theory, which assigns probabilities to some unknown propositions while forbidding this assignment in others, and where the choice is ad hoc. Given premises, a Bayesian can and does put a probability on the truth of an hypothesis (which is a proposition); a frequentist cannot—at least not formally. Mistakes and misinterpretations made by users of frequentist theory are legion.

The problem with both philosophies is misdirection, the unreasonable fascination with questions nobody asks, which is to say, the peculiar preoccupation with parameters. About that, another time.


© 2017 William M. Briggs
