Thanks to Bruce Foutch who found the video above. Transitivity is familiar from ordinary numbers. If B > A and C > B and D > C, then D > A. But only if the numbers A, B, C and D behave themselves. They don't always, as the video shows.
What's nice about this demonstration is that it uses the probability ordering and not the expected-value ordering. Hence the "10 gazillion" joke. "Expected" is not exactly a misnomer, but it does have two meanings. The plain-English definition tells you an expected value is a value you're probably going to see sometime or other. The probability definition doesn't match that, or matches only sometimes.
Expected value is purely a mathematical formalism. You multiply the probability of each possible outcome (conditional, since all probability is conditional) by the value of that outcome, and then sum. For an ordinary die, this is 1/6 × 1 + 1/6 × 2 + … + 1/6 × 6, which equals 3.5, a number nobody will ever see on a die, hence you cannot plain-English "expect" it.
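That arithmetic can be sketched in a couple of lines:

```python
from fractions import Fraction

# Expected value of an ordinary six-sided die:
# sum over faces of (probability of face) * (face value).
faces = [1, 2, 3, 4, 5, 6]
ev = sum(Fraction(1, 6) * f for f in faces)
print(ev)  # 7/2, i.e. 3.5 -- a value no single roll can show
```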
It’s good homework to calculate the probability expected value for the dice in the video. It’s better homework to calculate the probabilities B > A and C > B and D > C, and D > A.
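For readers who want to check their homework: the exact faces depend on the video, but the all-3s die and the 6-6-2-2-2-2 die mentioned in the comments below match Efron's classic intransitive set, so here is a sketch under that assumption:

```python
from itertools import product
from fractions import Fraction

# Efron's classic intransitive dice -- an assumption: the video's exact
# faces may differ, but the comments mention the all-3s and 6-6-2-2-2-2 dice.
A = [3, 3, 3, 3, 3, 3]
B = [4, 4, 4, 4, 0, 0]
C = [5, 5, 5, 1, 1, 1]
D = [6, 6, 2, 2, 2, 2]

def p_beats(x, y):
    """Probability a single roll of die x exceeds a single roll of die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

for name, (x, y) in [("B>A", (B, A)), ("C>B", (C, B)),
                     ("D>C", (D, C)), ("A>D", (A, D))]:
    print(name, p_beats(x, y))  # each prints 2/3

# Expected values, for comparison with the probability ordering.
evs = {n: Fraction(sum(d), 6) for n, d in [("A", A), ("B", B), ("C", C), ("D", D)]}
print(evs)
```

Note that the expected values come out 3, 8/3, 3, and 10/3: D has the highest expected value, yet A beats D two-thirds of the time, which is the point of the post.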
It's not that expected values don't have uses, but that they are sometimes put to the wrong use. The intransitive dice example illustrates this. If you're in a game rolling against another player and what counts is winning, then you'll want the probability ordering. If you're in a game and what counts is some score based on the face of the dice, then you might want the expected-value ordering, especially if you have a chance of winning 10 gazillion dollars. If you use the expected-value ordering when what counts is winning, you will in general lose if you pick one die and your opponent is allowed to pick any of the remaining three.
Homework three: can you find a single change to the last die such that it’s now more likely to beat the first die?
There are some technical instances using "estimators" for parameters inside probability models which produce intransitivity and which I won't discuss. As regular readers know, I advocate eschewing parameter estimates altogether and moving to a strictly predictive approach in probability models (see other posts in this class category for why).
Intransitivity shows up a lot when decisions must be made. Take the game rock-paper-scissors. What counts is winning. You can think of it in this sense: each “face” of this “three-sided die” has the same value. Rock beats scissors which beats paper which beats rock. There is no single best object in the trio.
Homework four: what is the probability of one R-P-S die beating another R-P-S die? Given that, why is it that some people are champions of this game?
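For the first part of homework four, treating a single R-P-S throw as a fair three-sided die, the probabilities can be enumerated directly (a sketch):

```python
from itertools import product
from fractions import Fraction

# Which throw beats which: (winner, loser) pairs.
BEATS = {("R", "S"), ("S", "P"), ("P", "R")}
throws = ["R", "P", "S"]

outcomes = list(product(throws, throws))        # 9 equally likely pairs
wins = sum((a, b) in BEATS for a, b in outcomes)
ties = sum(a == b for a, b in outcomes)

print(Fraction(wins, 9))         # 1/3 win, 1/3 lose, 1/3 tie
print(Fraction(wins, 9 - ties))  # 1/2, conditional on the game being decided
```

Against a fair "die," no strategy does better than a coin flip, so championship play must come from reading the human opponent, not the dice.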
R-P-S dice in effect are everywhere, and of course can have more than three sides. Voting provides prime cases. Even simple votes, like where to go to lunch. If you and your workmates are presented choices as comparisons, then you could end up with a suboptimal choice.
It can even lead to indecision. Suppose it’s you alone and you rated restaurants with “weights” the probability of the dice in the video (the weights aren’t necessary; it’s the ordering that counts). Which do you choose? You’d pick B over A, C over B, and D over C. But you’d also pick A over D. So you have to pick A. But then you’d have to pick B, because B is better than A. And so on.
People “break free” of these vicious circles by adding additional decision elements, which have the effect of changing the preference ordering (adding negative elements is possible, too). “Oh, just forget it. C is closest. Let’s go.” Tastiness and price, which might have been the drivers of the ordering before, are jettisoned in favor of distance, which for true distances provides a transitive ordering.
That maneuver is important. Without a change in premises, indecision results. Since a decision was made, the premises must have changed, too.
Voting is too large a topic to handle in one small post, so we'll come back to it. It's far from a simple subject. It can also be a depressing one, as we'll see.
RPS is not a game of chance, played with dice. It is a game of psychology, played with people. How well do you know / can you read your opponent? What can you do to influence their decision making process?
Hence, the "creation" of R-P-S-L-S (Rock-Paper-Scissors-Lizard-Spock).
That was a very good video: it puts a fancy name, Non-Transitive Probabilistic Comparison, on what is fundamentally a kid's game that kids understand, Rock-Paper-Scissors, by putting probabilities on how often rock, paper, or scissors show up and rendering the 'things' (rock, paper, and scissors) quantitatively.
For those needing a refresher:
Rock crushes scissors (rock wins)
Scissors cut paper (scissors win)
Paper covers rock (paper wins)
R > S, S > P, but R NOT > P
One “bottom line”: It IS a simple subject.
Where Briggs concludes, “It’s far from a simple subject” one should be asking why that seems to be the case — at least by grownups who should know better.
Part of the answer is psychology: one can calculate that two players of R-P-S will each win 50% of the time, but in reality results are usually skewed, often highly. There's a comparable set of behavioral factors at work explaining why the slightly different rendition, as presented by Briggs, manifests as a "far from simple subject" (one of those factors is intellectual laziness).
You missed the point, Ken. Note the term Non-Transitive Probabilistic Comparison. RPS is NOT objectively probabilistic. The probability of your opponent’s response depends on your reading of your opponent. More like poker than a game of chance.
Ken
You are also neglecting the number of ties
50% of non-tying games (may result in very few wins by either)
“Anecdotal evidence suggests that players familiar with each other will tie 75-80% of the time”
https://www.youtube.com/watch?v=iapcKVn7DdY
Interesting. The stated implication that "if A cures more people than B … and B more than C…, you cannot necessarily conclude that A is better than C" doesn't seem to be rigorously correct.
Whether a person is cured is 0-or-1 valued: heads or tails, say, with 1 (tails) representing "being cured." It is like tossing a coin or a two-sided die. I am not so sure that it would result in the non-transitivity property using the same definition of "stronger." I don't think so. Will do the calculations later.
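A quick grid search supports this intuition (a sketch, assuming two-valued "dice" that show 1 with success probability p and 0 otherwise):

```python
from itertools import product

# A beats B on a single trial only when A shows 1 and B shows 0,
# so P(A beats B) = pa*(1-pb), and "A stronger than B" means
# pa*(1-pb) > pb*(1-pa), which reduces to pa > pb -- a transitive relation.
def stronger(pa, pb):
    return pa * (1 - pb) > pb * (1 - pa)

# Search a grid of probabilities for an intransitive cycle.
grid = [i / 20 for i in range(21)]
cycles = [(pa, pb, pc) for pa, pb, pc in product(grid, repeat=3)
          if stronger(pa, pb) and stronger(pb, pc) and stronger(pc, pa)]
print(cycles)  # [] -- no intransitive triple exists for two-valued dice
```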
RPS is a simple example of non-transitivity. It might be difficult to construct different types of non-transitive dice, though.
DAV – RE: “The probability of your opponent’s response depends on your reading of your opponent.”
That suggests a kind of “quantum-locking” of the sort in the Dr Who series about the Weeping Angels (see episode, [Don’t] Blink) — malevolent creatures that are frozen as stone only while being observed by another. Kind of like Schrodinger’s cat being alive or dead depending on who’s watching when.
Of course, with humans having "free will," the mere notion that one could 'read' one's opponent in a game and via such 'reading' affect the probability of their opponent's response must necessarily be fanciful — that would suggest the opponent lacked "free will." Except that happens all the time, often enough subconsciously (a phenomenon associated with "bluffing" and reading "tells"), which illustrates how freely manipulable [by others] one's free will can be.
That whimsy aside, I don’t know what you’re addressing.
In a simple game of R-P-S one either wins or loses, treating ties as do-overs [like foul balls in baseball] having no effect on the Win/Loss score or payout. The probability of an objective player showing R, P, or S is one-third, so the probabilities can be calculated for a pairing of such fictional beings. That much is objective.
But in reality the results are highly skewed…due to a “SET of behavioral factors” — “SET” being multiple such factors (I pointed out one of those, a kind of intellectual laziness).
When you assert I missed the point — and base that on another behavioral factor (reading an opponent/bluffing) — we’re left wondering, “what point did I MISS?”
Listing all the behavioral factors that COULD apply necessarily differs from those that actually DO apply in a particular situation.
The point, DAV & JohnB() missed — like the vast majority of people assessing complex situations — is that in different renditions of an activity that may appear identical, in reality different factors have different weightings (different levels of influence) in different circumstances. This is seldom apparent to the casual observer. Simply identifying one factor that will apply in some situation(s), but not all, only applies to that particular combination of circumstances — the error is in extrapolating broadly. And that's what both DAV and JohnB() did above:
– DAV picked a particular behavioral factor (‘reading’ aka bluffing) and clearly reached a conclusion to the effect, ‘aha, here’s an applicable case that negates the 50% win split for an objective baseline, therefore the entire argument made is wrong’
– JohnB() picked a particular behavioral factor (players familiar with each other) and noted most outcomes tend to be ties — another, "aha! the 50% baseline win is revoked"
Again, there are a myriad of behavior factors that affect how any particular game of R-P-S will play out. In effect, each and every game is its own special case, and not necessarily comparable to another.
The recurring human error — and ALL of us do this more often than not — is to pounce on one (usually just one single) factor and evaluate that against some single other factor. This occurs in political debate all the time — extremely complex social situations get split into opposing sides, with each opposing position based on the most meager rationale and degraded into a this-vs-that trade-off. The number of situations and issues that lend themselves to such either/or positions is negligible…but that’s what happens with emotional reasoning — the counterweight to intellectual laziness.
The truly perverse aspect of all this is that very complex problems tend to be over-simplified into simple-minded either/or issues and either/or options for resolution far more than less complex problems having a handful of factors. Human nature has been to address great complexity by oversimplifying it. Peter Drucker observed this trend beginning in the 80s, or earlier, and predicted things would degrade to the point such simple solutions, coupled with short time horizons, would lead elected leadership to react too late to be able to make needed changes (see his, “The New Realities”).
It wasn’t always this way–especially in politics. Read or listen to some of the earliest recorded debates about the social issues of the day (e.g. Teddy Roosevelt vs Woodrow Wilson) and one reads/hears lengthy analyses of a myriad of factors on social issues (both of those made radio speeches lasting 45-60 minutes perhaps on just one overall issue). Fast forward today, with all the information technology puts at our fingertips (literally) and our crop of politicians & media oversimplify comparable issues into an either/or, us-vs-them, soundbite that can fit into a 140 character tweet. As a society, we are applying technology in a way that is dumbing us down.
I don’t know what you’re addressing.
If that wasn’t clear before, it is now.
Tic Tac Toe, anyone?
X on top centre, John B.
>Homework three: can you find a single change to the last die such that it’s now more likely to beat the first die?
Find “a single change”; hmmm…. If that has to mean, “one single number on the (6 6 2 2 2 2) die changes,” I can see how by changing just one number you can win half the time against the “all 3s” die (say with a die with 6 6 4 2 2 2), but not how to ‘beat’ the first die.
But maybe I can fudge and say my new (6 6 4 2 2 2) die is more likely than before to beat the “all 3s” die.
Or maybe I can say that my “single change” is that a 6 always comes up on the (6 6 2 2 2 2) die.
Anybody less dumb care to help?
What is the “probability expected value”?
How does the intransitive dice example illustrate any wrong use of expected values? The dice example defines a criterion for being "stronger" that results in an intransitivity, or non-transitivity, property.
One can define that that A is stronger than B (A>B) if the expected value of A is greater than the one of B. However, in this definition, If A>B and B>C, then C>A. I don’t see anything about the wrong use of expected values in the video. What do I miss?
There are some technical instances using “estimators” for parameters inside probability models which produce intransitivity and which I won’t discuss.
Interesting. There are such instances? References? Any claim, be it in an academic paper or in a report, without supporting evidence is often a claim with an agenda or a purpose behind it. Why the scare quotes?
If A>B and B>C, then A>C.
JohnK, how about cutting the die somehow to make it biased?
Medicine: Given a sample size of 100 patients for each trial:
Drug A cures 70 and kills 20.
Drug B cures 65 and kills 10.
Drug C cures 60 and kills 0.
Which drug would you rather be given?
“Homework three: can you find a single change to the last die such that it’s now more likely to beat the first die?”
Change a 2 to a 6. The 4th die is now more likely to beat the 1st die than it was before.
Alternate: Change the rules so the low die wins.
Alternate: Change the 4th die to a 12 sided die, and fill all the extra faces with the number 6.
Alternate: Fit a lead weight into the 4th die in such a way that it becomes much more likely than not to show a 6 when rolled.
Alternate: Replace any ‘2’ found on the 4th die with a ‘6’. This can be considered a single change, as it results from a single, simple rule, universally applied.
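Assuming the first die shows all 3s and the fourth is 6-6-2-2-2-2 (the faces mentioned elsewhere in these comments), the improvement from changing one 2 to a 6 is easy to check:

```python
from itertools import product
from fractions import Fraction

def p_beats(x, y):
    """Probability a single roll of die x exceeds a single roll of die y."""
    wins = sum(1 for a, b in product(x, y) if a > b)
    return Fraction(wins, len(x) * len(y))

first = [3, 3, 3, 3, 3, 3]
fourth = [6, 6, 2, 2, 2, 2]
changed = [6, 6, 6, 2, 2, 2]   # one 2 changed to a 6

print(p_beats(fourth, first))   # 1/3
print(p_beats(changed, first))  # 1/2 -- more likely than before, though still not a favorite
```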
Much as I despise the clinical trial system, at least in cancer medicine, the cure/kill ratio you suggest isn't quite fair. Maybe more like: some response improvement vs. no response improvement, weighed against toxic effects, quality of life, and cost. It's a horrible model, and there are other, better ways to determine best treatment options.
The medical examples are imagined situations.
Decisions are made all the time without statistics or numbers being involved.
Observation and trial and error is how things work in clinics. Live experimentation. Statistics comes in at the end to tell people what they want to hear. That’s what it looks like to me.
Bright sparks try to say you're doing some kind of mental arithmetic which amounts to the same thing, but those kinds of people have never been inside a consultation room and have never been asked to give a diagnosis or a prognosis in their life. How do they know? Answer? They don't.