David Blackwell, who died two weeks ago, was one of the first mainstream statisticians to “go Bayesian.” And for that and his unique skill in clearly explaining difficult ideas, we owe him plenty.
Blackwell handed in his slide rule at the grand age of 91. A good run!
He worked on cool problems. From his Times obituary, “His fascination with game theory, for example, prompted him to investigate the mathematics of bluffing and to develop a theory on the optimal moment for an advancing duelist to open fire.”
If that isn’t slick—and useful!—I don’t know what is. Of course it’s useful; because it doesn’t have to be two guys facing off with pistols, it can be two tank columns facing off with depleted uranium rounds.
One of the big reasons statisticians started the switch to Bayesian theory, or at least accorded it respect, is that it is aptly suited to decision theory, which Blackwell (with Girshick) explicated neatly in their must-read book Theory of Games and Statistical Decisions. I encourage you to buy this book: you can pick up a copy for as little as three bucks.
A classic decision analysis problem of the sort Blackwell examined is this.
St Petersburg Paradox
The estimable Daniel Bernoulli gave us this problem, one of the first creations of decision theory. You have to pay a certain amount of money to play the following game:
A pot starts out with one dollar. A coin is then tossed. If a head shows, the amount in the pot is doubled and the coin is tossed again; the tossing repeats until a tail shows, at which point the game is over and you win the pot. How much should you pay to play?
Suppose you pay ten bucks and the coin shows a tail on the very first throw. You win the dollar in the pot, but it costs you a bundle. You won’t make any money unless a tail waits until at least the fifth throw, by which point the pot has grown to $16.
The standard solution begins by introducing the idea of expected value. This is usually a misnomer, because the “expected” value is often one that you do not expect, or is even impossible. Its formal definition is this: the sum of every outcome that can happen, each weighted by the probability that it happens.
For example, the expected value of a die roll is:
EV = (1/6)*1 + (1/6)*2 + (1/6)*3 + (1/6)*4 + (1/6)*5 + (1/6)*6 = 3.5,
where 1/6 is the probability of seeing any particular face. This says we “expect” to see 3.5, which is impossible. The dodge we introduce is to turn the die roll into a game that can be played “indefinitely.” Suppose you win one dollar for every spot that shows. Then, for example, if a 5 shows you win $5.
If you were to play the die game “indefinitely” the average amount won per game would converge to 3.5, and seeing an average of 3.5 is certainly possible. For instance, you win $6 on the first roll and $1 on the second, for an average of $3.50 per roll. Strictly, though, the expected value is the limiting average as the number of games played grows toward infinity.
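For the skeptical, here is a minimal simulation sketch (in Python; the roll count and random seed are arbitrary choices of mine, not anything from the theory):

```python
# Simulate the die game: the average winnings per roll should
# settle near the "expected" value of 3.5 as the rolls pile up.
import random

random.seed(1)  # fixed seed so the sketch is reproducible

n_rolls = 100_000
total = sum(random.randint(1, 6) for _ in range(n_rolls))
print(f"Average per roll after {n_rolls:,} rolls: {total / n_rolls:.3f}")
```

Run it and the average lands within a whisker of 3.5.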
We can now apply expected value to the St Petersburg game:
EV = (1/2)*1 + (1/4)*2 + (1/8)*4 + … = infinity.
There’s a 1/2 chance of winning $1, a 1/2 * 1/2 = 1/4 chance of winning $2 (a tail on the second throw), a 1/2 * 1/2 * 1/2 = 1/8 chance of winning $4 (a tail on the third throw), and so on. Every term in the sum equals 1/2, so, as those of you who have had a “pre-calculus” course will quickly see, the sum grows without bound.
Yes, that’s right. The “expected” amount you win is infinite. Therefore, this being true, you should be willing to pay any finite sum to play! If you’re convinced, please email me your credit card number and we’ll have a go.
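If the algebra leaves you cold, a quick simulation makes the same point (a sketch; the function name play_once and the game counts are mine):

```python
# Simulate the St Petersburg game. Unlike the die game, the average
# payout never settles down: it lurches upward, in fits and starts,
# the longer you play.
import random

random.seed(1)

def play_once():
    pot = 1
    while random.random() < 0.5:  # heads: double the pot, toss again
        pot *= 2
    return pot                    # tails: game over, you win the pot

for n_games in (10**2, 10**4, 10**6):
    avg = sum(play_once() for _ in range(n_games)) / n_games
    print(f"{n_games:>9,} games: average payout ${avg:,.2f}")
```

The average creeps upward with the number of games instead of converging, which is the divergence showing itself.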
The classical solution to this “paradox” is to assume that your valuation of money is different from its face value. For example, if you already have a million, adding 10 bucks is trivial. But if you have nothing, then 10 bucks is the difference between eating and going hungry. Thus, the more you have, the less more is worth.
Through calculus, you can use a down-weighted money function to give less value to absurdly high possibilities in the St Petersburg game. So, instead of counting at face value the $2^100 (about 10^30 dollars) you would win if a tail doesn’t show until the 100th toss, an event with probability 1/2^100 ≈ 8 x 10^-31, you say that amount is worth only a vanishingly small fraction of its face value.
Whatever down-weighting function is used (usually some form of log(money)), calculus can supply the result, which is that the expected value becomes finite. The results are usually in the single-dollars range; that is, the calculus typically shows the expected value to be anywhere from $2 to $10, which is the amount you should be willing to pay.
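Here is a sketch of that calculation, assuming log(money) as the down-weighting function (other utility functions give other, but still finite, answers):

```python
# Expected *utility* of the St Petersburg game under log down-weighting.
# A tail on toss k (probability 1/2^k) pays $2^(k-1), valued at log(2^(k-1)).
import math

expected_utility = sum(
    (1 / 2**k) * math.log(2**(k - 1))
    for k in range(1, 200)  # terms past ~60 contribute essentially nothing
)
fair_price = math.exp(expected_utility)  # convert utility back to dollars
print(f"Expected utility: {expected_utility:.4f}")       # = log(2), about 0.6931
print(f"Certainty-equivalent price: ${fair_price:.2f}")  # about $2
```

The sum converges to log(2), and converting that utility back to dollars gives $2, squarely in the single-dollars range just mentioned.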
The real solution is to assume what is true: the amount of money is not infinite! Using only physically realizable finite banks, we know the pot can never exceed some fixed amount.
If that amount is, say, $1 billion, then the number of flips can never exceed 30. The expected value, ignoring down-weighting, of 30 flips is only 30 * $0.50 = $15, since each possible toss contributes fifty cents to the sum. And we can, if we like, even include the down-weighting! (Even $1 trillion gives only a maximum of 40 tosses, with expected value $20!)
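The back-of-envelope arithmetic is easy to check (a sketch; capped_ev is my own illustrative function, and it ignores down-weighting, as the text does):

```python
# Expected value of the St Petersburg game when the bank is finite:
# only tosses whose payout the bank can cover enter the sum, and each
# such toss contributes (1/2^k) * 2^(k-1) = $0.50.
import math

def capped_ev(bank):
    max_tosses = int(math.log2(bank)) + 1  # largest k with 2^(k-1) <= bank
    return sum((1 / 2**k) * 2**(k - 1) for k in range(1, max_tosses + 1))

print(capped_ev(10**9))   # 15.0 -- a $1 billion bank allows 30 tosses
print(capped_ev(10**12))  # 20.0 -- a $1 trillion bank allows 40 tosses
```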
Thus, the St Petersburg “paradox”, like all paradoxes, never was. It was a figment of our creation, a puzzle concocted with premises we knew were false.
More on finitism, or mathematical constructivism, here.
Ahhh! The uses of utility. The classical solution was the first mention of it IIRC. It’s still a foreign concept to a lot of gamblers so I would say the ‘paradox’ lives on.
The problem now shifts to how to develop the utility function. It all hinges on the value of money and goals of the applier. For example, a chance to win $185M on a lottery might be worth only $20 to me but $6000 to someone else. The value money holds does not necessarily follow from the amount one already has.
What you failed to mention: some problems still have this inherent ‘paradox’. This particular one was solved because the assigned utility function could be used to convert an infinitely increasing return on investment into one that could be optimized. That is, the ROI function is now concave. Note that the operative word here is ‘assigned’. It doesn’t necessarily follow that all utility functions will lead to concavity.
A completely efficient market has no optimum. Try as you might, the paradox returns. Even a weakly efficient market has no optimum (particularly if one rejects negative optima) if the potential gains do not exceed expected losses.
—
Side note: an “efficient market” is a market where no position is profitable. IOW: you gets what you pay for. This is sometimes called a ‘fair market’.
A ‘weakly efficient’ market is one where all potential profits are roughly equal.
There are different definitions of these but the differences are mainly concerned with how one determines the efficiency — say by using past prices.
Matt:
It seems to me that there is a connection here to the issue of the use or misuse of the Precautionary Principle.
It would also be interesting to apply Blackwell’s logic as to when (i.e., at what amount of prize money) and how much to bet on one of the big lotteries – given that all potential players will be trying to make the same decision – kind of like duelists. Also, based on the Wikipedia write-up on Powerball, where you buy your ticket is a consideration too.
If you come up with a compelling solution, perhaps we can pool our utility functions?
P.S. I will buy the tickets! ;>)
Powerball is an interesting problem. Not only because of the complications arising from how the pot is divided but also from the mere fact that, past a certain level, the investment modifies the result. The modification is a real problem in all parimutuel markets, and even in the stock market, where price changes may change the perception of future investment value. (This is why Pump and Dump schemes work.)
Applying game theory (“given that all potential players will be trying to make the same decision”) to these problems is a simplifying assumption. You can get into a real tizzy trying to adjust yourself to how others might act. You are effectively assuming that the goals and values of others are the same as yours. You now need to assign a probability that your assumption is correct. If you don’t, you’re likely wasting your time using it in an analysis. At the very least, you would have no reason to have confidence in your answer.
The Precautionary Principle applied to, say, Global Warming, with the wildly differing opinions on its worth, is a good example of the problems arising from using game theory with simplifying assumptions.
Dr. D. Blackwell was an outstanding, admirable winner in the game of life!
Is this supposed to be a paradox?
JH, more or less. 😉
What’s a dollar to a billionaire?
The whole idea though doesn’t address the fact that money ain’t everything even if it beats whatever comes in first place.
So how do irrationals such as pi fit into finitism?
And why am I not taught about this in my statistics courses? There is mention of the problem of binary representation of real numbers, but nobody ever suggested changing the theorems to fit the computer representation.
Does this imply that statistical models in 2-byte precision may be different to those in 4-byte precision? Or is that the benefit of infinity: if it works in infinity it works in any precision (I think the cube example says not).
The more I study statistics, the more I empathise with Rutherford: If your experiment needs statistics, you ought to have done a better experiment.
DAV:
The marginal value of money requires interpersonal comparisons and is inherently problematic if not impossible (Arrow). I have, thankfully, considerably more money than my three kids, but they seem to attach no value to anything less than a $5 bill, i.e., I am constantly picking up pennies, dimes, quarters and $1 bills that they have seemingly discarded.
As for the Precautionary Principle, those using it frequently fail to fully consider the real and more definite opportunity costs associated with preventing one possible bad outcome as opposed to others. Aaron Wildavsky (But Is It True?) and Bjorn Lomborg (Cool It) have this one about right.
Tim:
IMHO, Rutherford got it precisely right and undoubtedly was a closet Bayesian.
JH:
My wife got a chuckle from Blackwell’s comment on the value of the telephone, as stated in Saturday’s WSJ obituary:
He raised eight children, and for many years lived without a telephone. He hated how it interfered with conversation.
“What a rude, impolite instrument that is,” he said in the “Mathematical People” interview.
Tim,
They don’t, except at the limit. And limits are, or at least can be, fine things when used as approximations. Although everything is finite, keeping track of large numbers is difficult, and passing to the limit allows simplifications. But like all simplifications, they can go badly wrong when they are used indiscriminately.
And to Rutherford I say, amen, brother. Statistics is best used when reporting what happened or predicting what might. When it’s used, as it almost always is, to posit the (indirect) truth of unobservable theories and parameters, trouble ensues.
You are mixing models. On one hand you assume a large number of players or plays. On the other hand you compute the payoff for one player playing one hand.
This is exactly what happens to traders who assume expected payoffs from a singular event based on payoffs of a large number of events. von Mises anyone?
Pingback: William M. Briggs, Statistician » The Two-Envelope Problem Solution: Part I
Does it really make any sense to sum mutually exclusive outcomes?
Shouldn’t EV, rather than being equal to (1/2)*1 + (1/4)*2 + (1/8)*4 + … = infinity,
be equal to (1/2)*1 OR (1/4)*2 OR (1/8)*4 OR … = $.50??