William M. Briggs

Statistician to the Stars!


Symmetry, Priors, Logical Probability, Infinities, and Needless Paradoxes

One reason why some reject the notions of logical probability and Bayesian statistics is that it is said that assignments of probability under symmetry generate paradoxes. However, as I will show, this is so only for illegal jumps to hyperspace; that is, excursions to infinity.

Consider this poser by Joseph Bertrand: We have knowledge that a cube has sides less than or equal to 2 cm. Now, according to symmetry, or Keynes’s Principle of Indifference, given this evidence E, the probability the cube has sides less than or equal to 1 cm equals the probability the sides are greater than 1 cm but less than or equal to 2 cm.

However, E also tells us we have a cube, and cubes have volume. If the sides are 2 cm or less, then the volume is 8 cm³ or less. And this is where the trouble starts, for we can invoke symmetry again and say the probability that the cube has volume 4 cm³ or less, given E, should be equal to the probability the cube has volume greater than 4 cm³ but no more than 8 cm³.

But a cube with a volume of 4 cm³ has sides equal to the cube root of 4, or about 1.59 cm. Trouble! Because we now have symmetry telling us that the probability the cube has sides of 1 cm or less is the same as the probability it has sides of 1.59 cm or less: both are supposed to equal one half. Oops.
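The clash is easy to see numerically. A minimal sketch, assuming only the two indifference assignments just described (uniform over side length versus uniform over volume):

```python
# Bertrand-style cube: same evidence E (sides in (0, 2]), two "indifferent" priors.
side_cutoff = 4 ** (1 / 3)     # side length of a 4 cm^3 cube, about 1.59 cm

# Prior 1: uniform over side length on (0, 2]
p_by_length = side_cutoff / 2  # P(volume <= 4 cm^3) comes out about 0.79

# Prior 2: uniform over volume on (0, 8]
p_by_volume = 4 / 8            # P(volume <= 4 cm^3) comes out exactly 0.5

# Same event, same evidence, two different probabilities: the "paradox".
print(p_by_length, p_by_volume)
```

Both assignments look equally “symmetric,” which is precisely the trouble.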

What to do? Well, if you’re like most people, you toss out logical and Bayesian probability. Worse, there are dozens of examples like this. For the fertile mind, generating paradoxes is easy! Really, every time a Bayesian assigns a prior, he runs into this trouble: merely changing the units of measurement is often enough to incur this bizarre inconsistency.

But there is more to this problem than meets the mind. Logicians, philosophers, and statisticians have been too quick to dismiss logical probability on these grounds. Here’s why.

E does not just contain the knowledge that the cube’s sides are 2 cm or less. Implicit in E is the idea that the cube is infinitely divisible. This fact, known but unappreciated, is the cause of all the difficulties.

How many cubes do you know of, in real life, that are infinitely divisible? I say there are none. Even at the submicroscopic level, a cube—an actual, fleshy there-it-is cube[a]—is made of discrete building blocks. These blocks are not infinite in number, nor are they infinitesimally small. They are finite and of definite, non-zero size.

Let’s take one of these real cubes; say, a cube which is made of blocks, each 1 cm on a side. We can now re-state our original problem: we have knowledge that a cube has sides less than or equal to 2 cm. This knowledge—and the knowledge that the real cube is made of a finite number of discrete blocks—forms our new E. Now under symmetry, or via the principle of indifference, the probability that the cube’s sides are 1 cm or less is again equal to the probability that the sides are greater than 1 cm.

What about volume? Well, given E, what are the possibilities? Only two: the volume can be 1 cm³ just in case the length of the cube’s sides is 1 cm, or the volume can be 8 cm³ just in case the length of the cube’s sides is 2 cm.

The volume can take no other values but these two! Under E, we know we have a cube, which means the length of a side cannot be 0, therefore the volume cannot be zero. We also know that the cube is made of discrete blocks of a definite size. Thus, volumes like 4 cm³, or 3 or 7 or whatever, are not just unlikely, they are impossible.

The only two possible volumes are 1 cm³ and 8 cm³. Under symmetry, or the principle of indifference, the probability the volume is 1 cm³ is equal to the probability the volume is 8 cm³. And either of those statements is the same as saying the cube’s length is either 1 cm or 2 cm.

What about transformation of units? No problem here either, because the discrete blocks which make the cube have a fixed, definite length.

What if the cube is made (on a side) from up to N definite blocks? Again, no problem.[b] Lengths are 1, 2, …, N; and the only possible volumes are the cubes of these. Even stronger, any measured quantity or dimension—not just volume—can only be discrete.
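The finite version can be checked mechanically. A minimal sketch, assuming N = 2 unit blocks on a side (any finite N works the same way):

```python
from fractions import Fraction

# A real cube built from discrete unit blocks: with up to N blocks on a side,
# the only possible side lengths are 1, 2, ..., N, each equally probable
# under the principle of indifference.
N = 2
prob_side = {s: Fraction(1, N) for s in range(1, N + 1)}

# Volume is a one-to-one function of side length, so it simply inherits
# the same probabilities; there is no rival "uniform over volume" answer.
prob_volume = {s ** 3: p for s, p in prob_side.items()}

print(prob_side)
print(prob_volume)
```

Re-expressing the problem in different units merely relabels the keys of these dictionaries; the probabilities cannot shift.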

Is assuming what appears to be true—that the universe, or at least our ability to measure it, is discrete and finite—a limitation on the theory of probability? No! N can be as big as you like! Let it grow, grow, GROW! Just don’t let it hit infinity, and we will never have any paradoxes creep into our calculations.

Would it surprise you to learn that this criticism of infinity is old? Nowadays our position is called constructivism or finitism. But let’s recall what our man Leopold Kronecker, he of “product” fame and enemy of Georg Cantor, said: “God made the integers; all else is the work of man.” Amen, brother Kronecker, amen.[c]


[a] The reality of the cube is not essential, not required. Nor is the equality of size of the discrete chunks which make up objects and measurements. What is required is finiteness and discreteness.

[b] In rare cases, we have to keep track of whether N is divisible by 2, and then only to make coherent statements (those times when we want to chop the sides up into parcels and say something about those parcels).

[c] Obviously, there is much, much more to be said on this subject.


Lady Wins Fourth Lottery: What Are the Odds?

I’m in the wild blue yonder today; so here is a distraction. Thanks to reader Jade for suggesting the topic.

Nobody can scratch better than Las Vegas resident Joan Ginther, who has just scraped that little gray fuzz off her fourth winning lottery ticket. Her fourth win!

The two questions on everybody’s mind are: How do I hit Ginther up for a loan? and How do I work her magic? I don’t have a sure answer for the first, other than to say that, she being female, flattery rarely fails; but I can tell you all about the chances of duplicating her performance.

First, her achievements, according to the (as it is sometimes miscalled) Corpus Crispi Caller and Yahoo Buzz:

  • 1993: $5.4 million (paid in yearly installments). Odds: 1 in 15.8 million,
  • 2006: $2 million (lump-sum payoff). Odds: 1 in 1,028,338,
  • 2008: $3 million (lump-sum payoff). Odds: 1 in 909,000,
  • 2010: $10 million (lump-sum payoff). Odds: 1 in 1,200,000.

It’s not clear if these are the pre-government confiscation amounts, or the actual dollars she pocketed; probably the former. Still, even considering the (approximate) 40% federal tax bracket, if the lovely Ginther has been living clean, then she likely has at least several million in the bank.

But since she’s been camped out in Vegas, and she has quite positively evinced a love of gambling, she might not have much left after all. For to win that many lotteries requires her to buy many, many tickets.

Let’s simplify a bit, just to make it easier on ourselves. The probabilities of winning her lotteries were approximately 1 in 15 million, and three of about 1 in a million. I’ll assume the 1 in 15 million was a “bouncing ball” lottery, and that the others were all scratch-off tickets: the kind of gamble doesn’t matter in calculating the odds of winning, but naming it makes it easier to describe. We don’t know, but it’s a good guess that she bought more than one ticket per game.

Take her 2006 win. If she bought just one ticket for that gamble, then she had a 1 in a million chance of winning. If she bought two tickets, then she roughly doubled her chance of winning. If she was like a lot of folks I see lining up at the bodega windows, she might have laid down as many as 100 bets in a few months’ time. Buying that many tickets pushes the chance of winning to 1 in 10,000, a substantial jump.

There are about 13 years (we don’t know the exact dates of her wins) between her jackpot payout of 1993 and her next winning ticket in 2006. Assuming she bought 100 tickets a year—a not uncommon figure; probably on the low side—then she might have racked up 1300 tickets. That gave her just over a 1 in 1000 chance of winning. Pretty good odds! If she bought 200 tickets a year, her odds of winning rise to almost 3 in 1000.

Anyway, she did win in 2006, then she won again in 2008, which, of course, is only two years later. How many tickets did our Joan buy in those two years? We can only guess: but she had a pocketful of money, so, at least as a first approximation, we can imagine she bought another 1300 tickets. That put the odds of winning in 2008 at about 1 in 1000 again.

And the same thing, or something like it, is true for her last win in 2010. That is, she likely had a 1 in 1000 chance of winning the last payout.

The lottery has no memory, by which I mean that winning before does not affect the probability of winning again. Given that and the rule that chances multiply, we can calculate that Ginther had a 1 in a billion (which is 1 in 1000 multiplied by itself three times) chance of winning her last three payouts. If she bought twice as many tickets as we guessed, then she had about a 2 in 100 million chance of winning thrice.

But what about her first win? It’s the same process. We have to make a guess about how many tickets she bought. It’s not impossible to imagine, this being her first win, that she dropped a substantial bundle before seeing her numbers come up. Say she blew six grand: that gave her the odds of 4 in 10,000 of taking home the jackpot.

Altogether, this makes the chances of winning four times anywhere from 9 in 10 trillion to 7 in a trillion, depending on the number of tickets purchased. Even if we assume she bought twice as many tickets as we guessed, this still works out to about 1 in 10 billion.
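The chain of multiplications can be sketched as follows; every ticket count is an invented round number, which is why the final figure swings by an order of magnitude:

```python
# Back-of-envelope odds for four lottery wins; all ticket counts are guesses.
p_jackpot_ticket = 1 / 15_000_000  # rough odds per "bouncing ball" ticket
p_scratch_ticket = 1 / 1_000_000   # rough odds per scratch-off ticket

tickets_before_1993 = 6_000        # guess: about $6,000 worth before the first win
tickets_per_gap = 1_300            # guess: about 100 a year, reused for each gap

p_first = tickets_before_1993 * p_jackpot_ticket  # about 4 in 10,000
p_later = tickets_per_gap * p_scratch_ticket      # about 1.3 in 1,000 per later win

# The lottery has no memory, so the chances multiply:
p_four_wins = p_first * p_later ** 3              # about 9 in 10 trillion
print(p_four_wins)
```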

But that’s just the odds that she, Joan Ginther, wins four times. The odds that somebody wins that many times is much, much higher. As much as 2 in a 1000, if there were 20 million inveterate gamblers like Joan out there. And if there were 100 million—a distinct possibility: remember, we’re talking about many decades of lotteries from which to find four winners—then the chance of at least one Joan Ginther is about 1 in a 100.
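The jump from “she wins” to “somebody wins” is the complement rule. A sketch using the round 1-in-10-billion per-gambler figure from above (both that figure and the gambler counts are rough assumptions):

```python
# Chance that at least one of n independent, Ginther-like gamblers wins four times.
p_one_gambler = 1e-10  # rough per-gambler chance of four wins (a guess)

def p_somebody(n):
    """Complement rule: 1 minus the chance that all n gamblers fail."""
    return 1 - (1 - p_one_gambler) ** n

print(p_somebody(20_000_000))   # about 2 in 1,000
print(p_somebody(100_000_000))  # about 1 in 100
```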

Which suddenly doesn’t seem so small.


Consensus Members Agree To Agree: Breaking Story

The agreement amongst climatologists who agree that mankind will cause devastating climate change is popularly known as “The Consensus.” Those who agree with the consensus are part of the consensus; those who disagree with it are called skeptics. OK so far?

Now, what would you think of a study which examined members of the consensus, and which asked those members, “Do you agree with the consensus?”, and then reported that members of the consensus agree with the consensus as news?

Well, reporters said, “There’s a consensus among consensus members!”

Kirsten Zickfeld of the University of Victoria, and some of her pals, gathered top consensus members and asked them questions about climate change. They then wrote a paper summarizing the consensus’s answers: “Expert judgments about transient climate response to alternative future trajectories of radiative forcing.”

Zickfeld presented experts with three made-up forcing scenarios: a high “The sky is falling! The sky is falling!”, a medium “It’s worse than we thought”, and a low “More funding is needed.” The exact (and dull) specifications can be looked up in the original paper.

All experts agreed that “cloud radiative feedbacks” were the least understood processes, and therefore the largest contributor to climate uncertainty. Not coincidentally, “cloud radiative feedbacks” are the same sources of uncertainty pointed to by many doomsday-climate skeptics, such as Roy Spencer.

The experts did not agree on the importance of other forcings; or, in the learned words of the report, the rankings were “not entirely robust with respect to the procedure used.” In other words, no consensus!

They then “asked experts to make judgments about the probability that different levels of radiative forcing could trigger some ‘basic’ state change in the climate system.” That is, will there be “tipping points”?

The picture shows the probabilities “elicited.” [Figure: Climate Expert Probability for Scenarios] It’s a bit screwy at first glance. It appears to indicate the experts’ guesses of the chances for each of the three scenarios.

So, for example, expert #1 (M. Allen) appears to say that the chance the sky will fall is certain. Yet Allen only claims a 60% chance that it’s worse than we thought; and he gives a mere 20% probability for more funding is needed. I make this to be a total of 180% chance of the climate changing. No wonder these fellows are so nervous!

But that’s not what the graph reports; that is, it does not report the experts’ guesses on the likelihood for each of the three scenarios. Instead, each of the three scenarios was assumed, and only then was each expert asked something like this: “Given the sky will fall, what is the chance that the climate will undergo a tipping point?”

In law, this is called a leading question. In logic, it’s close to a tautology: it only differs from one in the same way the sky falling differs from a climatological tipping point, a distance which is measured with calipers. It is, therefore, a near meaningless question. They should have asked how likely each scenario was; they should not have asked how likely the climate was to change given that the climate changed. The information content of the answers was as low as a New York Times editorial.
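The 180% puzzle above dissolves once the answers are read this way: each number is a probability conditional on an assumed scenario, so the three need not sum to 100%. To get one unconditional chance of a tipping point you would also need probabilities for the scenarios themselves, which the study never elicited; the weights below are invented purely for illustration:

```python
# Expert #1's elicited answers: P(tipping point | scenario), one per scenario.
p_tip_given = {"high": 1.0, "medium": 0.6, "low": 0.2}

# Hypothetical scenario probabilities (not in the study; made up to illustrate).
p_scenario = {"high": 0.2, "medium": 0.5, "low": 0.3}

# Law of total probability: a single number, necessarily no more than 1.
p_tip = sum(p_tip_given[s] * p_scenario[s] for s in p_scenario)
print(p_tip)  # about 0.56, not 180%
```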

However, the all-important abstract—the small blurb which appears at the start of all scientific papers, and the only part of a work which most people read—states, “experts judged the probability that the climate system would undergo, or be irrevocably committed to, a ‘basic state change’ as > 0.5.” Now, even though that entire sentence is factually correct, it is misleading. Reporters, for example, who are, it must be admitted, not always the best representatives of the cognitive elite, misread that statement to mean that there was at least a 50% chance that the sky would fall.

The researchers pushed on. They asked each expert for his best guess of the (surface) temperature increase we can expect under each of the scenarios. Now just recall: the scenarios already say the temperature will increase, just not by how much, although in all cases the amount of radiative forcing is such that the increase could not be negligible.

Lo, they found that when asked if temperatures were going to increase, the experts roughly agreed on the amount of increase. The researchers repeated the process for the quantity known as “climate sensitivity”, a value which says more about our need as humans to summarize complexity with a single number than about its usefulness as a physical measure. Anyway, given the three scenarios, the experts agreed on climate sensitivity, too.

They better had!

If they had not agreed, I would have been deeply concerned. Why? Consider: each of these men believes the same basic theory, they use largely the same data, they run models that contain matching lines of code. They meet regularly to discuss how much they agree, and how to correct, and come to consensus on, the small points where there is disagreement.

Agreement, then, is a given. Agreement is why there is a consensus. Agreement means consensus.

The entire report, with its fictionalized scenarios, merely told us what we already knew. Yet this news was greeted with wide acclaim. Nowadays, this is what is called “good science.”


Nathan’s Hot Dog Eating Contest: Kobayashi Arrested! Records and Statistics

Sonna no uso da! Fuzakeruna yo! (“That’s a lie! You’ve got to be kidding!”) Were those the words of ex-champ Takeru Kobayashi as he brazenly stormed the stage at the ninety-fifth annual Nathan’s Hot Dog eating contest yesterday?

Poor Takeru! Joey “Jaws” Chestnut had just crammed 54 franks down his gullet in 10 minutes and was being awarded his fourth consecutive Mustard-Yellow Belt, when the crazed Kobayashi ran up the dais and went all anime. By which I mean, he stood rock still and expostulated and expostulated and expostulated. The only thing that was missing was his tentacles.

But New York’s Finest were there—thank you, boys!—and, after a brief struggle, fitted Kobayashi out with a set of personalized, Fourth of July, silver bracelets. Off to chokey! But not in the way Takeru had hoped.

Poor Takeru. It didn’t have to end this way.

It was 4 July 2001, at Nathan’s hot dog stand at the corner of Surf and Stillwell, in Coney Island, Brooklyn. I was there with about 150 fans—yes! only 150!—as we watched Kobayashi and his rivals mount the stage. It was the big men that impressed: eating legends like Ed “Cookie” Jarvis (who is now down with a thyroid cancer injury) and Eric “Badlands” Booker. Takeru practically disappeared behind these behemoths.

There was a fog so thick that we couldn’t see the sea, though it was only one block and a boardwalk away. I was able to stand right by the rope placed in front of the stand, there was such a small crowd.

Fans weren’t completely dismissive of Kobayashi. Just the year before, his countryman Kazutoyo Arai had taken the title by eating 25 1/8 dogs in 12 minutes, an impressive but not galvanizing performance. But smart money went with the literally hungry look in Jarvis’s eyes. Surely he or one of the other giants would eat this tiny Nipponese under the table.

When the gun sounded, the slight Kobayashi turned into a machine. He ate so fast that the eye couldn’t follow. Dog followed dog down his throat so fast that he could not have been chewing. Eric “Badlands” Booker glanced over, but he couldn’t push himself to match Kobayashi’s speed. He spent just as much time drinking water as eating buns.

Young Takeru handily won, eating a then-record 50 hot dogs in 12 minutes. There was scattered applause and the crowd dispersed quickly. I walked over to the ocean to check on the fog, and to see if the water was swimmable. I returned to Nathan’s after about ten to fifteen minutes. The contestants were still hanging around the stage.

A Japanese TV station was interviewing Kobayashi. He pulled up his shirt and I was stunned to see a washboard stomach. Where had all those dogs gone!

But what really struck me was that “Badlands” Booker, who was chatting with another competitor, was casually eating another hot dog or two. It’s true that he had only eaten half as many dogs as Kobayashi, but there he was, still dining on Nathan’s dime.

So here is some advice for the bosses at Major League Eating: there is a difference between sprinting and the Marathon. Nathan’s contest is a sprint. Eaters like “Cookie” Jarvis and “Badlands” Booker are natural Marathoners. Contests designed around this concept should be created forthwith. Nobody is going to beat an American at Marathon eating! Why, just walk down the street in Augusta, Georgia and you’ll see what I mean.

By 2007 the conviction was that nobody could oust Kobayashi: he had won six championships in a row. But then came a nobody from California named (beautifully) Joey Chestnut. Inside his intestines burned an intense desire to take back the belt for the Red, White, & Blue, especially on the Day of Days for this great country.

Chestnut slammed Kobayashi; the little guy didn’t know what hit him as Joey sucked down 66 dogs in 12 minutes. Wow! American spirit soared. Just as it would do so a year later when Chestnut again bested Kobayashi; as he did again in 2009, this time eating 68 (sixty-eight!) dogs in only 10 minutes.

Then came 2010. Kobayashi refused to enter the tournament, claiming a “contract dispute.” Nonsense, said the organizers: they ask all contestants to sign a standard release which hadn’t changed from previous years. I speculate that Kobayashi couldn’t stomach competing against—and losing to—Chestnut again. So he invented the story of a “dispute” to save face.

Which might have worked, had he not banzai-ed the stage after watching Chestnut suck down 54 dogs in 10 minutes. He probably thought to himself that he could have done more than 54. But it was near 100 degrees yesterday, which limited Chestnut’s intake.

Oh, poor Takeru! He’s on his way home, via Rikers, on an empty stomach. It didn’t have to be this way.

Historical statistics

The contest began in 1913. Back then, there were only 3.5 minutes allocated. But after the tournament became (locally) popular, it was changed to 10 minutes. This was altered in 1991 to 12 minutes; but worries about human limitations (after Chestnut’s performance) caused the organizers to reduce the time back to 10 minutes in 2008.

Therefore, to compare the numbers from year to year, we must look at the number of hot dogs eaten per minute. That’s what the graph below shows.

[Figure: Nathan's Hot Dog Eating Contest, hot dogs eaten per minute by year]
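The normalization is one division per year. A sketch using just the results quoted in this post:

```python
# (year, hot dogs eaten, minutes allowed), as quoted in the post.
results = [
    (2001, 50, 12),  # Kobayashi's breakout record
    (2007, 66, 12),  # Chestnut takes the belt
    (2009, 68, 10),  # Chestnut under the shortened clock
    (2010, 54, 10),  # Chestnut on a near-100-degree day
]

# Dogs per minute puts the 10- and 12-minute eras on a common scale.
rate = {year: dogs / minutes for year, dogs, minutes in results}
for year, r in sorted(rate.items()):
    print(year, round(r, 2))
```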

The rate of increase through the late 1970s until the mid 1990s was sedate, respectable. But now you can see how Kobayashi took the world aback with his gorgeous gustatory gluttony: just look at that jump in 2001!

We can now see how Chestnut’s fortitude in 2007 traumatized poor Takeru. The rate of increase in dogs-per-minute was too much for the little man to match.

The only thing that appears capable of stopping Chestnut is the weather.


© 2015 William M. Briggs
