Why Decision Analysis Isn’t Straightforward

Heads I win, Tails you lose.

Vast subject, decision analysis, but we can get a grip on a corner of it if we consider the simplest of problems, the do-don’t do, yes-no, gonna happen-ain’t gonna happen, simple dichotomous choice. Which is to say, there is one of two decisions to make, each resulting in a limited horizon of effects.

Consider the most apolitical decision Yours Truly can think of, the take-it-or-leave-it umbrella problem. Carry it and you’re protected from Mother Nature; forget it and you might get wet. Yet if you take it and it doesn’t rain, you have to carry the ungainly thing around. What to do?

Textbooks say to first calculate the probability of the event, which is here rain. Now the probability of the event is calculated with respect to certain evidence, which for most people will be the weatherman’s report. Most of us would take whatever he says as “good enough”, which isn’t a bad strategy, because meteorologists do a mighty fine job forecasting rain for the day ahead. So adept are they that the issued numerical value of the probability can be trusted; it is reliable and measures up to other technical qualities of goodness which needn’t concern us here.

Next is to figure the costs and rewards of the various actions. Here there are four. Take the umbrella and it rains or not, or leave the umbrella and it rains or not. Something (of course) happens to you in each of these scenarios. None of these happenings is in the least quantifiable—how do you quantify not getting wet because it didn’t rain but you carried an umbrella?—but in order to work decision analysis we must pretend they are, because decision analysis is a quantitative science.

So we must invent a quantification using the idea of “utility”, indexed by “utiles”, i.e. quantum, individualistic units of good and bad. Not getting wet might be a dozen good utiles, while my getting wet is a negative seventeen utiles. On the other hand, the burden of carrying an umbrella scores a negative three utiles. On the other other hand, even if it doesn’t rain I can use the umbrella to intimidate cabs that try to cut me off in intersections (somehow they have no compunction about running over unarmed people but are frightened of plowing into a sharp stick—try it), which might be worth seven good utiles, and might be worth nothing if I meet no cabs. And what is more suave than sporting a Brigg umbrella? Five positive utiles right there no matter what.

Since I get to (and must) make these up, there is no criticizing them. They are what they are—and they may not even be that. Sitting here now, I can’t make up my mind how to specify the complete set of utiles. Yet every day I do manage to carry an umbrella or not, so I must be implementing some kind of decision process. Surely whether or not to carry an umbrella is a trivial decision (except on days I’m wearing my nicest brogues), but even so it is an astonishingly complex task to specify or articulate how I make it.

Utility is “individualistic” because one of my utiles is not equivalent to one of yours, except by accident. Further, even if we could set up a rate of utility exchange between you and me—say we agree one of my utiles equals 3.2 of yours—the relationship is not linear. That is, it is not the case that 10 utiles of mine would equal 32 of yours: it could be anything. There just is no way of knowing. But, like I said, if we want to use decision analysis we have to pretend it is knowable.

Now if we could quantify the four scenarios and the (conditional-on-certain-evidence) probability of the “event” (which necessarily gives us the probability of the “non-event”), decision analysis provides us a formula for what to do. Well, several formulas, actually: maxi-max, mini-max, expected value, etc. There isn’t universal agreement on which formula to pick. All these formulas begin in the same place: with the four scenarios quantified in terms of utile “loss”, which might be negative hence a gain, or utile “gain”, which also might be negative hence a loss. Plus the (conditional) probability for the event. Idea is to set up a table with columns headed “rain” and “no rain” and rows labeled “carry” and “not carry”: each of the intersections, or “cells”, has a utility (loss or gain). It might look like this:

  • Carry—Rain, u1
  • Carry—No rain, u2
  • No carry—Rain, u3
  • No carry—No rain, u4,

where each of the ui’s is a utility (the indexes are arbitrary). Again, I can express these utilities in terms of loss or gain. For example, under Carry—Rain I am awarded 12 utiles but pay 3 for carrying, which is a gain of 9 utiles or a loss of -9 utiles. Of course, it was I who picked just these two considerations; there could have been more, making the utility calculation more involved. In any case, I must end up with a single number for each cell (in simple versions of decision analysis).
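
To fix the bookkeeping, here is a minimal sketch in Python using the invented utile values from above; the names and numbers are mine, and as arbitrary as any:

```python
# Invented utile values, as arbitrary as any; yours will differ.
STAY_DRY = 12   # protected from the rain
SOAKED = -17    # caught in the rain unprotected
BURDEN = -3     # lugging the ungainly thing around, rain or no

# Net utility for each decision-outcome cell (indexes arbitrary).
utility = {
    ("carry", "rain"):       STAY_DRY + BURDEN,  # u1 = 12 - 3 = 9
    ("carry", "no rain"):    BURDEN,             # u2 = -3
    ("no carry", "rain"):    SOAKED,             # u3 = -17
    ("no carry", "no rain"): 0,                  # u4 = 0; nothing happens
}
```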

Now the probability that it Rains is some number (with respect to some evidence); call it p. Of course, p might not be a number, a unique number. Decision analysis requires, however, that it be stated as a unique number. In the “expected value” version of the decision analysis formulas, perhaps the most popular in economics, we calculate the “expected” utility for each cell, which is easy:

  • Carry—Rain, p*u1
  • Carry—No rain, (1-p)*u2
  • No carry—Rain, p*u3
  • No carry—No rain, (1-p)*u4.

If the probability of rain is 50%, and my u1 is 9 (or -9), then the cell gets a 4.5 (or -4.5). And so on for the other cells. Next step sums the “expected” values across the decisions, i.e. the rows. Here that is:

  • Carry, p*u1 + (1-p)*u2
  • No carry, p*u3 + (1-p)*u4.

The result will be some number for each row, stated in terms of either loss or gain in utiles. The “optimal” decision, in one framework, is the one which minimizes the “expected loss” or maximizes the “expected gain.” That’s it. What’s nice about this is its cookbook nature: the whole process is automated, easily programmed into spreadsheets, and understood by bureaucrats. Naturally, its ease is one of its biggest problems.
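
To show just how cookbook it is, here is a minimal sketch of the expected-value recipe in Python, reusing the invented utilities from above; p is whatever unique number you have settled on for the probability of rain:

```python
def expected_utility_decision(utility, p):
    """Return the decision with the largest expected utile gain, plus the row sums.

    utility: dict mapping (decision, outcome) -> net utiles
    p: the (assumed unique) probability of rain
    """
    rows = {
        decision: p * utility[(decision, "rain")]
        + (1 - p) * utility[(decision, "no rain")]
        for decision in ("carry", "no carry")
    }
    return max(rows, key=rows.get), rows

# Invented utilities from the text: u1 = 9, u2 = -3, u3 = -17, u4 = 0.
utility = {
    ("carry", "rain"): 9, ("carry", "no rain"): -3,
    ("no carry", "rain"): -17, ("no carry", "no rain"): 0,
}
best, rows = expected_utility_decision(utility, p=0.5)
print(rows)  # {'carry': 3.0, 'no carry': -8.5}
print(best)  # carry
```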

What does “expected” mean? Not what you think in English. It has to do with the utiles you’ll win or lose if you repeat the decision-outcome pair many times, which might make sense for carrying an umbrella, since you face that choice often, but it has no plain-sense meaning for one-time or few-time decisions, like, say, betting whether Pope Francis will resign his office in 2017.
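
A quick simulation makes the long-run reading concrete (same invented utilities, and assuming rain really does fall with probability p each day): repeat the “carry” decision enough times and the average utile haul settles near the expected value of 3. No such average exists for a bet you make once.

```python
import random

def average_utiles(decision, utility, p, days=100_000, seed=1):
    """Average utiles per day from repeating one decision many days running."""
    rng = random.Random(seed)
    total = 0
    for _ in range(days):
        outcome = "rain" if rng.random() < p else "no rain"
        total += utility[(decision, outcome)]
    return total / days

utility = {
    ("carry", "rain"): 9, ("carry", "no rain"): -3,
    ("no carry", "rain"): -17, ("no carry", "no rain"): 0,
}
print(average_utiles("carry", utility, p=0.5))  # hovers near 3.0
```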

We assumed the numerical probability was reliable, which it is for day-ahead weather forecasts. But what about for that resignation? It’s easy enough to cobble together evidence to compute a probability for this event, but how reliable is the evidence? Nobody knows. There are so many different aspects of evidence one might accept that the probabilities conditional on them could be anywhere from zero to one, which isn’t too helpful.
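
To see how unhelpful that range is, sweep p across the values the rival bodies of evidence might justify and watch the “optimal” decision flip; a sketch with the same invented umbrella utilities:

```python
utility = {
    ("carry", "rain"): 9, ("carry", "no rain"): -3,
    ("no carry", "rain"): -17, ("no carry", "no rain"): 0,
}

def best_decision(p):
    """Expected-value pick for a given probability of the event."""
    def score(d):
        return p * utility[(d, "rain")] + (1 - p) * utility[(d, "no rain")]
    return max(("carry", "no carry"), key=score)

for p in (0.0, 0.05, 0.10, 0.15, 0.50, 1.00):
    print(p, best_decision(p))

# With these numbers the decision flips at p = 3/29, about 0.10; if the
# evidence lets p sit anywhere in [0, 1], the recipe decides nothing.
```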

It was already acknowledged that picking the utilities is difficult to impossible. Probabilities are often non-quantified. And then, even when all is numerical, it could be that ties exist in the calculations, with the result that there is no clear decision.

Yet decisions are still made. Now this means either that our decision-making process is not like any formal analytic method, or that our process is like some analytic, quantified method with implied numerical values, with the acknowledgement that we might not be able to discover these values. But if this latter supposition is true, then there exist implied values for every decision method (mini-max, maxi-min, etc.), even when those methods insist upon different decisions. It is thus more likely that our decisions are rarely like formal methods, and that formal methods only approximate how people actually make decisions.

5 Comments

  1. Gary

    And are our chosen utiles fixed? Probably not if we are reminded of the last time we got caught out in the rain unprotected, or forgot the umbrella someplace when we carried it but it didn’t rain. Or maybe we almost always guess right and have no bad memories of a soaking. Each experience adds evidence to our calculations, both for likelihood and expense of taking an action.

  2. Mark Luhman

    I have a simple solution to the problem; it’s the one I implemented: move to a desert, where it doesn’t rain enough to make a difference. /sarc off Yes, I am enjoying the 80-degree temps here in Mesa today with full sunshine.

  3. Richard A

    You’re just trying to unload your recent shipment of snazzy umbrellas with the hidden sword blade. Which are always utilitarian even if it doesn’t rain and you don’t encounter any muggers. Unless you get mugged in the rain, and then you have to decide between fending off the attack and getting wet.

  4. katzxy

    There may be times, perhaps when a project is being evaluated by a corporate finance department, when the sort of analysis you describe above, looking to maximize utiles, is done. But a little introspection shows that I don’t do that, at least not where I’m consciously aware of it.

    Maybe we need another model.

  5. QL

    This example has another fun property, namely that if you do carry the umbrella (always or usually), you’ll never learn how wet you would have gotten if you hadn’t carried it, preventing you from converging on an appropriately bold choice through trial and error.
