Taleb’s Curious Views On Probability — Part II: Skin in the Game

Skin in the game

Read Part I

It is in one sense fortunate that the mathematical, or rather quantitative, roots of probability began with gambling. Routine gambles are easy to understand, and the calculations are not only easy but, as models, have great applicability to actual events. All know the story of how quantitative probability flourished, and flourishes, from these beginnings.

On the other hand, it has been difficult for probability to remember its more robust, fuller, and certainly more supportive roots, which are non-quantitative. That gambles were easily quantifiable and made skillful models produced the false idea that all probability is, or should be, quantitative. And this led to the main error, discussed last time, that probability exists. It also produced a second error, which I won’t examine here (but have at length in Uncertainty): that probability is subjective.

Given the rules of craps—our premises—we can deduce the probability of winning and losing. We can also apply this model to real dice. And the same is true for card games, slot machines, and so on. These models have been found to work well. But even casinos change out worn dice and bent cards knowing the models are no longer as applicable.
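The deduction from premises to probability can be made concrete. Here is a minimal sketch that computes the pass-line winning probability in craps purely from the rules (two fair dice as the assumed premise); no observation of real dice is involved:

```python
from fractions import Fraction

# Ways to roll each total with two fair dice: these counts ARE the premises.
ways = {t: sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == t)
        for t in range(2, 13)}

# Come-out roll: 7 or 11 wins immediately; 2, 3, 12 loses immediately.
p_win = Fraction(ways[7] + ways[11], 36)

# Any other total sets a "point", which wins if re-rolled before a 7.
for point in (4, 5, 6, 8, 9, 10):
    p_set = Fraction(ways[point], 36)
    p_point_before_7 = Fraction(ways[point], ways[point] + ways[7])
    p_win += p_set * p_point_before_7

print(p_win)         # 244/495
print(float(p_win))  # about 0.4929, so the house edge follows from the rules
```

The answer, 244/495, is deduced, not estimated: change the premises (say, a worn die that no longer lands each face with count-proportional frequency) and the deduction no longer applies, which is exactly why casinos retire worn dice.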

These models work well for single gamblers (with assumed fortunes), but they cannot be applied to groups of gamblers, because how much and how long people, plus how many people, gamble cannot be captured by the simple premises. Here I agree with Taleb when he says about groups of gamblers, “Some may lose, some may win, and we can infer at the end of the day what the [casino’s] ‘edge’ is, that is, calculate the returns simply by counting the money left with the people who return.” This observational data is used to infer premises for a model beyond the premises available per game (which are easy).
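Taleb’s counting procedure is itself a trivial calculation. A sketch, with invented end-of-day numbers, of inferring the edge by “counting the money left with the people who return”:

```python
# Hypothetical tallies for a group of gamblers (the figures are made up).
total_wagered = 50_000.0
money_left_with_gamblers = 48_750.0  # counted when the gamblers return

# The inferred edge is the fraction of wagers the casino retained.
observed_edge = (total_wagered - money_left_with_gamblers) / total_wagered
print(f"inferred house edge: {observed_edge:.2%}")  # 2.50%
```

Note the direction of inference: this observational edge becomes a premise for a model of the group, a model which cannot be deduced from the per-game rules alone.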

Taleb continues: “We can thus figure out if the casino is properly pricing the odds.” The odds for each single game are deduced, so that means, at first glance, that the overall odds are also correct. But sometimes it pays for casinos to change single-game odds. If few win at some slot machine, few will use it (after word spreads); likewise, if one pays off well, more will use it. Observed behavior can help slide the single-game deduced odds to entice more gambles. Since behavior is volatile, so will be these models.

I also agree, as does everybody, with Taleb that when a gambler goes bust he must stop playing. For some reason he calls going bust an “uncle point” (crying uncle?). Everybody also knows that when a certain gambler reaches an “uncle point”, other gamblers might still have money. This seems to be something of a revelation to Taleb, though, who calls the models applied to groups of gamblers “ensemble probability” models, and those applied to single gamblers (with known or assumed fortunes) “time probability” models.
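The distinction can be illustrated with a toy simulation (every parameter here, the bankroll, bet size, and win probability, is invented for illustration). A single gambler who hits the “uncle point” stops; the ensemble of gamblers carries on regardless:

```python
import random

random.seed(1)

def play(bankroll, rounds, bet=1, p_win=0.49):
    """One gambler: play until the rounds run out or the bankroll hits
    the 'uncle point' (bust), after which no further play is possible."""
    for _ in range(rounds):
        if bankroll < bet:
            return 0  # ruined: must stop; the remaining rounds never happen
        bankroll += bet if random.random() < p_win else -bet
    return bankroll

# "Ensemble": many gamblers, one session each; some bust, some do not.
finals = [play(bankroll=20, rounds=500) for _ in range(10_000)]
busted = sum(1 for f in finals if f == 0)
print(f"average final bankroll: {sum(finals) / len(finals):.1f}")
print(f"fraction ruined:        {busted / len(finals):.1%}")
```

The ensemble average over gamblers is a perfectly sensible number, yet it describes no individual: each busted gambler’s sequence simply ended, which is the sense in which the two kinds of model answer different questions.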

Taleb then argues, what isn’t a secret, that sometimes people use the wrong model. They’ll use a single-gambler model for a market (group), and a group model for a single gambler. I don’t think this often happens, however, not with stocks, anyway, with so much money involved.

He says, “I effectively organized all my life around the point that sequence matters and the presence of ruin does not allow cost-benefit analyses; but it never hit me that the flaw in decision theory was so deep.”

Well, of course, in the presence of ruin, i.e. if one is ruined, the cost-benefit analysis is not flawed; it is as easy as can be. That the possibility of ruin exists does not conceal a flaw in decision theory, either.

I agree that decision theory has many flaws, but I see them differently. Many formal quantitative methods allow for impossible values (infinities or other large numbers), or they assume probabilities are real or they conflate probability and decision. Probability is not decision.

Taleb is concerned with “tails”, which is to say, large values. Now actual observed large values may or may not be well modeled; often they are not, and then Taleb’s criticism is spot on. For instance, normal distributions are as overused as the word “like” is in ordinary conversation. Other times there are possibilities in decision analysis for “tail” values that can’t be seen, and that’s a flaw with either the probability model or decision criterion (or both).

Somehow Taleb believes people, unless they possess genius, cannot figure probability if they do not have “skin in the game”, his favorite marketing phrase. This is false, as is obvious. People who do not give a rat’s rear about an outcome are less likely to attend to the problem as closely as those who do care, which is clear enough. But having money on the line does not bring the psychic gift of probability awareness. Indeed, gamblers with much “skin in the game” are apt to be the worst estimators.

That’s enough for Part II. I’ll wrap it up in Part III, Ergodicity and all that.

4 Comments

  1. He wants them to behave. He doesn’t care whether or not they can figure probability- he cares whether or not they behave. If they don’t have skin in the game, they can make a huge bet, and get someone else to pay for it. Remember the bailouts?

    Even now there are plenty of sociopaths in various finance jobs who only need to make the deal look good until they get their bonus. Nobody goes back to these guys and takes the money back. Nobody goes to jail.

    You should read his stuff- AND suspend your criticisms until after you’ve properly digested it. Maybe wait a year.

  2. RE: “For some reason he calls going bust an “uncle point” (crying uncle?).”

Kind of; it’s a term of art (sort of) used by traders (stocks, commodities, etc.), where appreciation of probabilities, and other factors, interplay.

    RE: “… Taleb believes people, unless they possess genius, cannot figure probability if they do not have “skin in the game”, …. This is false, as is obvious. People who do not give a rat’s rear about an outcome are less likely to attend to the problem as closely as those who do care, which is clear enough. But having money on the line does not bring the psychic gift of probability awareness. Indeed, gamblers with much “skin in the game” are apt to be the worst estimators.”

    That’s true & false, and false in a profound way, but only sometimes with some people. Having money on the line does not bring probability awareness (true to a point) …however… smart enough people who do put money on the line will, because of the financial risk and desire for returns, explore all the other various factors that might/will impact the outcome — they’re motivated to do more research and to refine their model much much more, in other words.

Teddy Roosevelt alluded to something very similar in his speech containing this oft-quoted portion:

    “It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.”

    Someone going into the arena, be it a fight or a trading floor, etc., will often enough do the most thorough research (develop the best “models”) to win. The greatest determining factor isn’t intelligence, at least beyond some ordinary point, so much as initiative/drive in the mental realm to do the homework on ALL the relevant fronts.

There’s something about thinking (which can actually be measured in part by the brain’s energy consumption) that so many people avoid as much as possible. It’s work, and most people viscerally avoid work. Most wins start in one’s mind, be they on the ball field, trading floor, wherever. Most people seem to confuse talking about something with actually studying that something. Philosophy is notorious for the ‘talking about’ mode without getting into the truly pertinent details; talking about consists of generalities, whereas every situation has situational specifics that invariably violate, or limit in very particular ways, some philosophical generalization(s).

And for those who do think, and who consider the human factors of the humans involved (the source of most execution flaws), there are coaches out there, such as:
    http://www.daytradingpsychology.com/day-trading-coach/buy-low-sell-high/.

The author of that advertising site presents enough info to illustrate how psychological factors impact actual execution (and he offers to sell one remedies). Where Briggs says “gamblers with much ‘skin in the game’ are apt to be the worst estimators”, that is often enough (more often with experienced gamblers) not only false but sometimes extraordinarily false. Enough gamblers with extraordinarily astute methods once in a while break discipline and get caught up in some familiar human foible. There’s a scene in the movie 21 that illustrates this, for example. Peruse the “Causes of Failure” at the above link and you’ll see some examples.

Briggs likes to focus on the mechanics & philosophy of statistics. That’s OK if you want to live in an academic bubble. If you want to apply it and make money in some arena (especially in some markets and/or some gambling), you’ll find that there’s a LOT more to factor in, and that things like ‘p-values’ do not matter because they’re not a relevant consideration: one does not ascribe some merit, rightly or wrongly, to some low p-value because, in the real world of making money (at least where I do), the calculation either never arises at all, or, if it does, you simply have no need to consider it.

  3. Taleb uses that ‘skin in the game’ allusion elsewhere to analyze why neocon interventions are so often so bad. If one decides to intervene in, say, Syria, and things go sideways, the planner seldom loses anything. Oh, we’ve replaced an autocratic thug with ISIS? Well, better luck next time.

    It’s not that those with skin in the game always make good decisions; but that when they don’t, they suffer real consequences.

  4. YOS,

That is, of course, the least of the neocon’s problems. And with this definition of SITG I am in solid agreement. However, that has nothing to do with either probability or decision theory, per se.
