Probability Puzzle Paradox: Which Boxes To Take?

We did Newcomb’s Paradox (so called) years ago, and a twist in premises after that. But I thought it would be fun to revisit, because I had some new ideas about less-than-perfect predictors.

The setup is simple. Two boxes sit in front of you, A and B. A has \$1,000 no matter what; B has a million or nothing. A Guesser loads the boxes. You get to pick (and keep) B alone, or both A and B. If Guesser guesses you'll take A+B, hoping for the max money, he puts \$0 in B. If he guesses you'll pick B alone, he puts a million bucks in B.

Which box or boxes do you take? Think about it first.

The paradox is first framed by saying the Guesser is perfect. In the link above, I show all the scenarios lead to you picking box B, since that gives the most money. There is no way to fool a perfect Guesser. Any way you come to the choice A+B is guessed by perfect Guesser, and he leaves B empty. This includes the strategy whereby you employ some kind of “randomizer”, to make your choice unpredictable even to you until you have to make the choice, because Guesser will know you are trying to cheat him and he will leave B empty.

It’s supposed to be a paradox because there are two decision or game theory “optimal” choices, one being maximizing something called expected utility, which says pick B, and the other “strategic dominance”, which says pick A+B because no matter what you get a grand. Wokepedia has a good explanation.

But that only shows you that theoretical “optimums” may have nothing to do with Reality. If you choose the strategy “Gimme the most”, then choosing B is optimal.

There are concerns, some say, about free will. If Guesser can know what you're going to do with certainty, then you don't have free will. This doesn't follow. You know your kid will take a cookie if you leave the cookies on the counter to cool, but that doesn't mean taking one wasn't his free choice. Your prediction is not causative of his taking the cookie.

You can immediately see that conclusion has application in theology: from God's omniscience it does not follow that we lack free will. And indeed, this is the solution advocated by William Lane Craig in his excellent The Only Wise God: The Compatibility of Divine Foreknowledge and Human Freedom, which I encourage you to read.

Notice, too, that we never (well, or rarely) question whether our physical predictive models are causative. We can predict the sun will rise in the east, and do an excellent job, but few now believe because we predict this we are causing it. Though some make similar (confusing) arguments in quantum mechanics.

However, let’s stick closer to fallibility. Robert Nozick was apparently the first philosopher to look into Newcomb, and he (says Wokepedia) wanted to skirt around problems like divine foreknowledge, and so supposed Guesser was not necessarily perfect. But he didn’t say how imperfect, and implied something near to perfection.

Martin Gardner apparently wrote about this decades ago, so I know I risk making an ass out of myself, because if Gardner already did it, there is little sense in trying to outdo him. Nevertheless, I go on glibly because I haven't been able to find Gardner's solution.

As I've said many times, sometimes notation helps and sometimes it hinders. Here it helps. Let Y be Guesser's guess of what you'll decide, A+B or B. Let X be your choice, A+B or B. We'll let R be the rules of the game, which contain all other knowledge we have regarding the setup, including your own choices. We want to build imperfection into Guesser, since if he's perfect, we know the answer (choose B). We have:

$$\Pr(Y = A+B | X = A+B,R) = p_{A+B},$$

$$\Pr(Y = B | X = B,R) = p_{B},$$

where these read the probability Guesser guesses correctly when you pick A+B, and when you pick B, both assuming R. These two chances need not be equal. Since the guess of Guesser drives the money, his guess is equivalent to what’s in the boxes.

Box B has a million either when Guesser guesses correctly you’ll pick B, or when he fails to guess you’ll pick A+B. Box B is empty when he fails to guess you’ll pick B or when he guesses correctly you’ll pick A+B. In other words:

B has a million = "(Y = B & X = A+B) or (Y = B & X = B)"

And that has probability

$$\Pr(Y = B | X = A+B,R) \times\Pr(X=A+B|R) + \Pr(Y = B | X = B,R) \times\Pr(X=B|R), $$

or

$$(1-p_{A+B}) \times\Pr(X=A+B|R) + p_B \times (1-\Pr(X=A+B|R)). $$
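As a quick sketch (the function name and arguments are my own labels, not from any library), that probability can be computed directly:

```python
def prob_b_has_million(p_ab, p_b, pr_pick_ab):
    """Probability box B holds the million.

    p_ab:       chance Guesser guesses right when you pick A+B
    p_b:        chance Guesser guesses right when you pick B alone
    pr_pick_ab: probability you pick A+B (1 or 0 if you simply decide)
    """
    return (1 - p_ab) * pr_pick_ab + p_b * (1 - pr_pick_ab)

# Perfect Guesser, you choose B alone: the million is certain.
print(prob_b_has_million(1.0, 1.0, 0.0))  # 1.0
# Perfect Guesser, you choose A+B: B is empty.
print(prob_b_has_million(1.0, 1.0, 1.0))  # 0.0
```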

If Guesser is perfect, then $p_{A+B} = p_{B} = 1$. If you choose A+B, i.e. $\Pr(X=A+B|R) = 1$ (remembering probability is not causative, but you are!), then the probability B has a million is 0. If again Guesser is perfect and you choose B alone, then $\Pr(X=A+B|R) = 0$ and the probability B has a million is 1.

And indeed, when Guesser is perfect the equation simplifies to $1-\Pr(X=A+B|R) = \Pr(X=B|R)$. So it all works, and shows that even if you pick some mechanism to "randomize", when $p_{A+B} = p_{B} = 1$ the best strategy is $\Pr(X=B|R) = 1$, i.e. pick box B.

We want to know what happens if Guesser is imperfect, and either $p_{A+B} < 1$ or $p_{B} < 1$. Obviously, we want to know if we can beat the million and come home with that extra thousand. These are situations where $\Pr(X=A+B|R) = 1$, i.e. you pick A+B, and Guesser guesses wrong, which he does with probability $1-p_{A+B}$. The extreme case is that Guesser always gets it wrong when you decide A+B, i.e. $p_{A+B} = 0$, so it is certain B has the million.

At what probability $1-p_{A+B}$ is it best for you to pick A+B? Don't forget Guesser might goof the other way, too: he might get it wrong when you pick B, and so leave B empty. Thus, if you pick A+B the chance B has the million (and you take home the extra grand besides) is $1-p_{A+B}$; if you pick B the chance B has the million (and nothing extra) is $p_{B}$. So you'd pick A+B if

$$1-p_{A+B} > p_B$$,

else pick B.
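In code, that rule (a one-liner sketch of my own, not any standard routine) looks like this:

```python
def best_pick(p_ab, p_b):
    """Pick A+B only when Guesser's miss rate on A+B beats
    his hit rate on B alone; otherwise take B alone."""
    return "A+B" if (1 - p_ab) > p_b else "B"

print(best_pick(0.99, 0.99))  # "B": near-perfect Guesser
print(best_pick(0.0, 0.5))    # "A+B": Guesser always wrong on A+B
```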

That ignores the nicety of the extra grand, so you might adopt a game theory "optimal" strategy, like expected value. It doesn't follow that what is optimal in some theory is optimal for you. Speaking for myself, if I could get the million I wouldn't care about the extra thousand. However, for the sake of completeness, and remembering that when you take both boxes the \$1,000 in A is yours whether or not Guesser guesses right, pick A+B if

$$\$1,000 + (1-p_{A+B})\times \$1,000,000 > p_B\times \$1,000,000,$$

else pick B.
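Here is a sketch of the expected-value comparison (my own function names; note that when you take both boxes, the \$1,000 from box A arrives whether or not Guesser guesses right):

```python
def expected_values(p_ab, p_b):
    """Expected payoffs under each pick, in dollars.

    Taking A+B: $1,000 guaranteed, plus the million when
    Guesser misses (probability 1 - p_ab).
    Taking B alone: the million when Guesser hits (probability p_b).
    """
    ev_both = 1_000 + (1 - p_ab) * 1_000_000
    ev_b = p_b * 1_000_000
    return ev_both, ev_b

both, b_alone = expected_values(0.5, 0.5)
print(both, b_alone)  # 501000.0 500000.0: the grand tips it to A+B
```

At exactly even odds the guaranteed grand breaks the tie in favor of A+B, which is why the strict inequality in the simpler rule matters.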

How we learn of Guesser’s prowess is anyone’s guess (great pun!).

Update: In producing a (lousy) video for this, available soon at my YouTube channel, I realized that if $p_{A+B} = p_{B} = p$ then you should only pick A+B if $1 - p > p$, or $p < 1/2$. Which means only take A+B if Guesser is a lousy predictor, and rich and willing to throw away his obviously inherited money.


5 Comments

  1. Rhetocrates

    I’m confused about what’s the difficulty here if the box filler has perfect information. Maybe I misread the problem. Let me restate it in terms that remove any ‘free will’ concerns and still have a ‘perfect guesser’.

    There are two boxes, A and B. These will be filled by a 'Guesser' to use the terms of the post. Box A always has \$1,000. If the Guesser knows you will just take box B, he puts \$1m in box B. If he knows you will take box A + B, he puts \$0 in box B. We ignore the case where he knows you will only take box A because then it doesn't matter what's in box B.

    Before he fills the boxes, you tell him which box or boxes you will pick. You’re trustworthy and honourable, and so bound by what you say here.

    Clearly you tell him you will pick box B only, so you get \$1m.

    This is no different from him obtaining the knowledge in any other way, e.g. overhearing you discussing the problem with your wife before entering the room, having a shrewdly accurate predictive model of your behaviour, whatever. None of this requires removing your agency.

    It’s only in the cases where the ‘Guesser’ does not have perfect knowledge that the question becomes probabilistic – which I think is your point, really. Probability arises only in cases of imperfect information and is not itself an ontological real.

  2. Briggs

    NLR,

    Many thanks!

  3. Rudolph Harrier

    Something I find interesting about this puzzle is that BOTH parties are correct in their strategy, assuming the premises that the guesser is honest and has perfect knowledge of the future. What I mean is this. There are people who take the premises as they are stated, and view the situation (correctly) as one where you can either choose to take both boxes and get \$1,000 or take B alone and get a million. If the guesser is perfect, then he would have known that these people would view the problem in this way, and thus put a million in box B.

    But other people are not content to take the premises as given and try to outsmart the system. Even though we have been told to accept that the guesser has perfect knowledge, and so cannot be outsmarted, they convince themselves that he can't REALLY have known what the choice was going to be, or that they can make a "preliminary" choice as the guesser fills the boxes and then change the choice afterwards to nab more money. So they view the choice in terms of knowing that box A definitely contains money and box B may or may not contain money, and so take both boxes. But the thing is that if the guesser really did have perfect knowledge, then he would have known that they would have been committed to that analysis of the problem, and so would have known that they would take both boxes. Thus for these people box B would be empty, and so taking both boxes really would be the better option.

    Ultimately this is a problem of how much you trust the guesser. If you trust his predictions and his honesty, pick box B. If you don’t trust his predictions and at least know that neither box has poisonous gas or anything harmful in it, pick both. If you trust his predictions but not his honesty, pick at random.

  4. Personally, I’d play the expected utility with a perfect Guesser and strategic dominance with an imperfect Guesser. If we’d be playing this game infinitely long, I’d go with expected utility even with an imperfect Guesser, but with a finite number of rounds I think it’s more important to guarantee a payoff than maximize a theoretically possible – but practically uncertain – outcome.

    > If Guesser can know what you’re going to do with certainty, then you don’t have free will. This doesn’t follow.

    True. Speaking of God specifically, my approach is to point out God is outside of time and therefore equally present in past, present, and *future*. So he can always peek into the future to know what you'll do. Furthermore, there's a model of Reality that starts with positing the Universe runs inside a Turing machine (with infinite memory). The machine itself runs on nothing because no other explanation works (an infinite stack of Turing machines each virtualizing the one above it can't actually perform a computation). Then, the decisions of every free-willed creature are encoded in arrays in the memory of that Turing machine, one for every creature. The arrays are all infinite in size and detail the decisions the creature will make in every possible situation, in every possible history of every possible Universe. The arrays fill themselves upon God's command intelligently but without any constraints – which is the freedom of the free will. Then, once they're full God decides which exact universe and which exact history of that universe will play out. In Biblical language – not an exact match but very close – the arrays are called "hearts", and much of history is so that "what's in the hearts of everyone will become known". 🙂
