A lady wrote Ariely asking for economic party games. Ariely suggested this one:

Give each of your guests a quarter and ask them to predict whether it will land heads or tails, but they should keep that prediction to themselves. Also tell them that a correct forecast gets them a drink, while a wrong one gets them nothing.

Then ask each guest to toss the coin and tell you if their guess was right. If more than half of your guests “predicted correctly,” you’ll know that as a group they are less than honest. For each 1% of “correct predictions” above 50% you can tell that 2% more of the guests are dishonest. (If you get 70% you will know that 40% are dishonest.) Also, observe if the amount of dishonesty increases with more drinking. Mazel tov, and let me know how it turns out!

Let’s see how useful these rules are.

Regular readers have had it pounded into their heads that probability is always conditional: we proceed from fixed evidence and deduce its logical relation to some proposition of interest. The proposition here is some number of individuals guessing correctly on coin flips.

What is our evidence? The standard bit about coins plus what we know about a group of thirsty bored people. Coin evidence: two-sided object, just one side of which is H, the other T, which when flipped shows only one. Given that evidence, the probability of an H is 1/2, etc. That’s also the probability of guessing correctly, assuming *just* the coin evidence.

If there were one party guest, the probability is thus 1/2 she'll guess right. Suppose she claims accuracy: then 100% of the guests claimed accuracy, and we can score the game using Ariely’s rules. Take the percentage of guests who predicted accurately over 50% and multiply that excess by 2. (He gave the example of 70% correct guesses, which is 20 points over 50%, and 2 × 20% = 40% dishonest guests.)

Since 100% of the guests claimed accuracy, our example has 50 points above 50%, thus “you can tell” 2 × 50% = 100% of the guests are cheating. Harsh! You’d toss your invitee out on her ear before she could even take a sip.
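The scoring rule is mechanical enough to put in code. Here is a minimal sketch (the function name is mine, not Ariely’s):

```python
def ariely_dishonesty(claimed_correct_pct):
    """Ariely's party rule: each percentage point of 'correct
    predictions' above 50% implies 2% more dishonest guests."""
    excess = claimed_correct_pct - 50
    return max(0, 2 * excess)

# His own example: 70% claim to be right -> 40% "dishonest".
print(ariely_dishonesty(70))   # 40
# One guest who claims accuracy -> 100% "dishonest".
print(ariely_dishonesty(100))  # 100
```

Note the rule caps out immediately: any claimed-correct percentage of 100 brands the entire party as liars.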

If there were two guests, the probability both honestly shout “Down the hatch!” is 25%. How? Well, both could guess wrong, the first one right with the second wrong, the first wrong with the second right, or both right. 25% chance for the last, as promised. Suppose both were honestly right. We again have 100% correct answers, making another 50 points above 50%. According to Ariely, we can tell 2 × 50%, or 100%, “of the guests are dishonest.” Tough game! Seems we’re inviting people over for the express purpose of calling them liars.
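The four equally likely outcomes for two guests can be enumerated directly (a quick sketch, assuming only the coin evidence, i.e. honest guesses at 1/2 each):

```python
from itertools import product

# Each guest independently guesses right (True) or wrong (False);
# the four outcomes are equally likely on the coin evidence.
outcomes = list(product([True, False], repeat=2))
p_both_right = sum(all(o) for o in outcomes) / len(outcomes)
print(p_both_right)  # 0.25
```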

Now suppose just one guest (of two) claimed he was right. We have 0% over 50%, or 2 × 0% = 0% dishonest guests. But the gentleman who claimed accuracy, or even both guests, easily could have been lying. The second who said she guessed incorrectly might have been a teetotaler wanting to be friendly. Or the second could have guessed incorrectly, and so did the first but he really needed a drink. Who knows?

If you had 10 guests and 6 claimed accuracy, then (with an excess of 10 points) 2 × 10% = 20% of your guests, or two of them, are labeled liars. Yet there is a 21% chance 6 people would guess correctly using just the coin information. Saying there are 2 liars when there is such a high chance of that many correct guesses is pretty brutal.
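That 21% is just the binomial probability of exactly 6 correct guesses in 10 tries at 1/2 each; a quick check (the helper function is mine):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n independent trials,
    each with success probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Chance exactly 6 of 10 guests guess correctly, on coin evidence alone:
print(round(binom_pmf(6, 10), 3))  # 0.205, i.e. roughly 21%
```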

Ariely’s rules, in other words, are fractured.

So let’s think of workable games. I suggest two.

(1) Invite economists to use their favorite theory to make accurate predictions of any kind, three times successively. Those who fail must resign their posts, those who succeed are re-entered into the game and must continue playing until they are booted or they retire.

(2) Have guests be contestants in your own version of Monty Hall. Use cards: two number cards as “empty” doors and an Ace as the prize. Either reward your guests with a drink for (eventually) picking correctly, or punish them with one for picking incorrectly (if you think drinking is a sin).

**Update** In this original version I misspelled, in two different ways (not a record), Ariely’s name. I beg his pardon.

**Update** Mr Ariely was kind enough to respond to me via email, where he said he had in mind a party with a very large number of guests. This was my reply:

Hi Dan,

I supposed that’s what you meant, but it’s still wrong, unfortunately.

If you had 100 guests there’s a 7.8% chance 51 guess correctly (and truthfully). But the rules say 2 × 1% = 2% of the guests, or 2 of them, are certainly lying. Just can’t get there from here.

Worse, the more people there are the more the situation resembles the one with just two guests, where both forecasted incorrectly but where one said he was right. In that case the rules say nobody cheated. But one did.

The more guests there are the easier it is to cheat and not be accused of cheating, too. You just wait until you see how many people said they were right, and as long as this number isn’t going to make 50 or so, you can lie (if you had to) and never be accused.

There’s no fixing the game, either. Suppose all 100 guests said they answered correctly. Suspicious, of course, but since there is a positive chance this could happen, you can’t claim (with certainty) *anybody* lied. All you could do is glare at the group and say, “The chance that all of you are telling the truth is only 10^-30!”

But then some wag will retort, “Rare things happen.” To which there is no reply.
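The 10^-30 figure quoted above is just (1/2)^100, the deduced probability that all 100 guests guessed correctly and reported truthfully; a one-line check:

```python
# Deduced chance all 100 guests truthfully guessed correctly,
# given only the coin evidence:
p_all_correct = 0.5 ** 100
print(p_all_correct)  # about 7.9e-31, on the order of 10**-30
```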

There might be a way to make a logic game of this, but my head is still fuzzy from jet lag and I can’t think of it.

Also, apologies for (originally) misspelling your name!

Matt

Categories: Statistics

“(1) Invite economists to use their favorite theory to make accurate predictions of any kind, three times successively. Those who fail must resign their posts, those who succeed are re-entered into the game and must continue playing until they are booted or they retire.”

Where’s the fun in a game that only goes for one round?…

You’re looking at the wrong economists Briggs.

http://mises.org/daily/6528/Science-is-More-than-Mathematics

In a real time 20 year experiment, Phillip Tetlock’s “Expert Political Judgement” found that (1) experts are not as good as a decent computer program in predicting future world events; (2) the “experts” are expert at devising excuses for why they were wrong. Remember this the next time you are hit with a talking head.

Let me try to give a possible explanation as to how Ariely derives his rule.

Assume the game scenario described in this post.

Let GC (GW) be the event that a drunk guest predicts/guesses the coin-tossing result correctly (wrong). Let CC (CW) be the event that a drunk guest claims that he guessed the coin-tossing result correctly (wrong). We know that P(GC) = 1/2.

Assume that everyone wants a free drink, but not everyone would lie for the sake of a drink.

So, if a guest has predicted correctly, he would not lie and would tell you that he indeed has the correct guess. That is, P(CC|GC)=1.

Let p be the probability of lying, i.e., in this case, the probability a guest guesses wrong yet claims to have a correct guess. That is, p = P(CC|GW).

Those who claim to have correct predictions either indeed have correct guesses or lie about being correct.

Suppose that P(CC) = x (a number larger than 0.5). Then,

x = P(CC|GC)P(GC) + P(CC|GW) P(GW) = 1 * 0.5 + p * 0.5.

Now solve for x. It yields Ariely’s rule.
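Under these assumptions the equation inverts directly: p = 2(x − 0.5), which is exactly the “each 1% above 50% means 2% more liars” rule. A sketch (function name mine):

```python
def implied_lying_prob(x):
    """Invert x = P(CC|GC)*P(GC) + P(CC|GW)*P(GW) = 0.5 + 0.5*p for p,
    where x is the observed fraction of guests claiming a correct guess."""
    return 2 * (x - 0.5)

# Ariely's own example: 70% claim accuracy -> 40% implied liars.
print(round(implied_lying_prob(0.70), 2))  # 0.4
```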

Mr. Briggs,

You have computed the probabilities of observing certain outcomes given the guests are just guessing (not lying), i.e., P(data | the null of not lying). You have used the reasoning behind frequentist hypothesis testing. Ha.

Anyway, if Ariely is a frequentist, he would conclude that he could not offer you any rule based on the result from a guest, i.e., a sample size of one, as the uncertainty is immeasurable… in a frequentist way.

JH,

Oh my, no. No no. No, ma’am. Ariely was the frequentist. I was the Bayesian: I gave the deduced probabilities of actual events. Ariely gave “probabilities” of unrealized events, kinda sorta like p-values. I never gave the probability of anybody cheating or not. Oh dear.

In fact, for homework, I ask you (ALL READERS) the difference in deduced probabilities between:

Pr(H | E_coin & Lying) and Pr(H | E_coin & Truthful)

where E_coin is the evidence laid out in the post. That will be enlightening.

The issue is his claim, “you will know … ”

You gather guests, get them drunk, and then make unsubstantiated claims.

You either claim all are telling the truth when some may be lying (not much of an issue, except loss of credibility in the minds of the liars), or you claim at least someone is a liar, an issue that will offend all those who answered truthfully (which may be everyone at the party) since they are now under suspicion. In this instance, you do not KNOW that someone has lied. You only claim that to be so, based on errant logic.

But, again, the issue is he states he KNOWS something to be true when he does not.

Of course, he IS a behavioral economist, so the game may be a ruse to see what behaviors are exhibited (I doubt this, though).

Briggs,

It really doesn’t matter how many nos you’ve typed. No, you didn’t give the probability of anybody cheating or not. I didn’t claim you did, either. Frequentist hypothesis testing doesn’t need such a probability either.

Why don’t you (1) explain how you derive the probability of 0.21 and write it in terms of a conditional probability (practice what you preach and define your events clearly), and (2) then think about a frequentist hypothesis test of the null hypothesis of no lying, with the observed evidence of “6 out of the 10 guests claimed accuracy.”

Ask Henk Tijms!

I meant to quote the following

What are those events? Why are they sorta like p-values?

Just notice that it should be “Now solve for p” (not x) in my previous comment.

Briggs,

Your solution is equivalent to how one would test a null of a fair coin with the evidence of observing 6 tails out of 10 tosses. The probability of 0.21 is calculated given the assumption of a fair coin. (If you want to calculate the p-value based on its definition, you may, but it’s not necessary in this case.)

Why is 0.21 a high probability? Did you use 0.05 as your threshold?

Now I hope you see why I said that you have employed the reasoning behind frequentist hypothesis testing. Does this qualify you as a frequentist? I don’t care. Yes, there are frequentist and Bayesian methods, but to label a person as frequentist or Bayesian is redundant, imo.

My dear JH,

Look back to the hundreds of times I deduced the probability of a coin flip, etc. Nothing “fair” needed; it is a deduction; frequentism fails absolutely everywhere except when it matches logical probability. And even if it matches, the reasons given for the matchings are wrong.

Clinging to frequentism is like somebody, beholden to “gremlin theory,” saying, “The gremlin caused my car to start,” and then insisting that the accidental correct prediction (the car did start) proves the gremlin theory is true.

If you don’t think frequentism fails, read the previous post and give me the relative frequency result for the Martian example. Don’t skip this homework like you skipped the other one (about lying and truthful response conditioning).

Sure, a 21% chance is high to claim there are exactly 2 cheaters in that scenario. I gave enough examples to show Ariely’s reasoning fails and why. Look again at those, which you failed to comment upon.

**Update** He is saying there is a 100% chance there are 2 liars, but there is a 21% deduced chance of 6 people guessing correctly. Thus, 21% is “high”.

Jet lag time. I’m off until tomorrow.

Ariely never mentioned how to handle the real possibility of the percentage being less than 50% (assuming you have honest people, or liars who don’t want a drink). Is there such a thing as a super-truther?

Brian,

Yes, exactly; that was like the example I gave with two people, both who guessed incorrectly but one who lied.

If the party is large, then all the Liars, and about half the Honest guests, will say they got it Right, i.e. L + 1/2 H ~= R. Letting N=L+H, we find via algebra that L/N ~= 2 ( R/N – 1/2). Party on!
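Steve’s large-party approximation is easy to spot-check by simulation. A sketch, under the lying model his comment assumes (every liar claims “Right”; honest guests report their actual flips; the particular N and L are mine):

```python
import random

random.seed(1)
N = 100_000   # guests at a very large party
L = 30_000    # liars: each always claims "Right"
R = L         # start the "Right" count with the liars' claims
for _ in range(N - L):       # honest guests report their actual flips
    R += random.random() < 0.5

# Steve's approximation: L/N ~= 2(R/N - 1/2)
print(2 * (R / N - 0.5))  # close to the true liar fraction 0.30
```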

Brian, good question. Different assumptions may lead to a different rule.

Briggs,

So let me just ask you to clarify the following.

(1) Do you know how Mr. Ariely derives his rule?

Please note that the interpretation of the probabilities in my explanations doesn’t have to be frequentist. Under the assumptions in my explanations, his rule makes sense.

Mr. Ariely offers a rule and asks his readers to try out his rule and to let him know how it turns out. Does this sound like a man who is 100% sure about his rule and demand an exact number of cheaters?

Are you assuming that he has never run any experiments and has no idea about the difference between theoretical and empirical results? Or do you want to believe that he is so clueless about the difference?

Briggs,

(2) The validity of your argument and reply to Mr. Ariely hinges on your answer to the following question.

I have looked back several times (I don’t exaggerate), and I have not seen how you deduced the probability of a coin flip, or the 21% chance that’s calculated using a binomial probability distribution (Binom).

(2) Let me do part of the work for you. Define X to be the number of guests who claim to have correct predictions among the 10 guests under the conditions of the game. Show me that you can use Binom to model the probability distribution of X. That is, show me all the assumptions required for Binom are met. If you can do so, then I would say you’ve deduced the probability. (1% x 2% = 2%? Really?)

Now, define Y to be the number of correct guesses among the 100 guests. Not wanting to give away the assumptions required for Binom, I would just say, indeed, one can model Y using Binom. However, please note that X and Y are different, and we observe X, not Y.

That’s all the time I have for today!

Is that all he said?

Briggs,

Oh, the probabilities in the “homework” you wrote are not well defined. What is the evidence laid out in this post? Is it “6 out of 10 guests claim to have correct predictions”? What is “H”? What does “truthful” (lying) mean? Does it mean given all 10 guests tell the truth (lie)? Perhaps you can enlighten us as to what they are exactly, and what they are for in trying to estimate the probability p of lying based on the observed evidence of “the number of people who claim to have a correct prediction”. Note that after a guest tosses the quarter, he decides whether to tell you he has a correct prediction. If you know what you are talking about, show it!

JH,

You’ve been hanging around the (wrong kinds of) undergraduates too long. You’re starting to pick up their ways of explaining why their assignments are late.

Briggs,

You are just giving yourself a reason not to answer my question. Why can’t you just answer the question? A teacher usually would not accept late homework, but would still announce the answer.

Again, “if you have it, show it,” one of my sister-in-law’s favorite phrases.

JH,

See the two papers by NIV referenced/linked in the Not all Prob is Quantifiable post. What fun you’ll have! Academic papers by an academic!

Briggs, I don’t really follow all of JH’s criticism, but surely you agree that the probabilities you have deduced are the same ones that would be used as p-values by many (frequentists?) to assess the null hypothesis that there were no liars. Your point is different: simply that it is definitely possible that there were no liars (I note the eschewal of any magic p-value criteria in your reply to Ariely), but the question “What is the probability that x people claim to guess correctly if they are all honest?” is still the same.

In contrast, Ariely answers the question “How many people lied, given the evidence that x claimed to guess correctly?” It’s not clear whether Ariely used JH’s approach (which seems fairly frequentist to me), the even rought approach of Steve, or something else (why not add one to the answer?), but it’s certainly true that he doesn’t acknowledge any uncertainty. (Econometricians are econometricians, it seems.) I suppose it is a matter of style whether to take this as sloppy writing with no mention of probabilities, or as an assertion that there is 0% chance that any other number of people lied (in particular, that everyone is truthful). Either way, it would perhaps be interesting to see what everyone had to say about the probability that the number of liars is as Ariely claims.