Uses And Abuses Of Decision Analysis: Global Warming Example

Suppose we have a simple decision to make1: implement a government takeover of all energy companies, so as to regulate their carbon budgets with thoroughness, or leave these entities as they are, semi-governmental entities engaged in a perpetual dance with the EPA, among other agencies.

The first course of action is deemed necessary to avert the horrors of global warming. The second option is fine if global warming turns out to be the product of the fervid imaginations of grant-receiving computer modelers.

This is a problem in decision analysis; as such, it is subject to quantification. Or so users of decision analysis say. Let’s see.

As stated, the problem is easy to write. Only one of these situations will occur:

  1. Energy companies socialized & Global warming cannot strike
  2. Energy companies not socialized & Global warming strikes
  3. Energy companies not socialized & Global warming does not strike

In our simplification, there will be costs in socializing energy, whereas we can consider it free to continue the status quo. Global warming cannot strike if companies are socialized, but it might if companies are not socialized. There will be catastrophic costs if global warming strikes. There are no costs if it does not.

We need estimates of costs and of the probability GW strikes. Certain evidence will supply these estimates. Call all this evidence E: E consists in experts’ judgments, actual facts, probable fictions, model outputs, data observations, and so forth. The entire point of this brief post is that E can never be more than a wild guess, and thus even if the costs and probabilities derived from E are deduced without error, the formal quantitative decisions made from them will be more certain than they should be.

Write the probability GW strikes given E as Pr(GW | E). Let the cost of socializing energy be C_se|E, and let the costs of the horrors of GW be C_GW|E, where both subscripts indicate the values were derived from E. Then we can write the outcomes

  1. Pay C_se|E with probability 1
  2. Pay C_GW|E with probability Pr(GW | E)
  3. Pay nothing with probability 1 – Pr(GW | E)

We need only one additional concept, that of “expected value.” This is the cost we’d “expect” to pay under each course of action. Expected value (ignoring its strengths and many, many weaknesses) is easy to calculate: multiply each cost by its probability and sum across the different outcomes. The expected value of doing nothing is:

     C_GW|E x Pr(GW | E) + 0 x [1 – Pr(GW | E)] = C_GW|E x Pr(GW | E),

since there is no cost of not socializing if GW does not strike. The expected value of socializing is

     C_se|E x 1 = C_se|E.

Decision analysis says to take the path with the lower expected cost: here that is (A) C_se|E (socialize) or (B) C_GW|E x Pr(GW | E) (do not socialize), whichever is smaller.
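
To make the arithmetic concrete, here is a minimal sketch of the comparison, with invented placeholder numbers standing in for C_se|E, C_GW|E, and Pr(GW | E). The whole point of what follows is that nobody knows these values, so treat the figures as illustration only.

     # Toy decision-analysis comparison; every number is an invented placeholder.
     def expected_cost_do_nothing(cost_gw, pr_gw):
         """Expected cost of the status quo: C_GW|E x Pr(GW | E) + 0 x (1 - Pr)."""
         return cost_gw * pr_gw + 0 * (1 - pr_gw)

     def decide(cost_socialize, cost_gw, pr_gw):
         """Pick the branch with the lower expected cost."""
         a = cost_socialize                             # (A) socialize: pay C_se|E with probability 1
         b = expected_cost_do_nothing(cost_gw, pr_gw)   # (B) do nothing
         return ("socialize" if a < b else "do not socialize", a, b)

     # Placeholder figures, in (say) trillions of dollars:
     print(decide(cost_socialize=2.0, cost_gw=10.0, pr_gw=0.1))
     # -> ('do not socialize', 2.0, 1.0), since 10 x 0.1 = 1 < 2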

A person advocating socializing will tend to minimize (A), saying the cost of acting is trivial, that socializing might even make money, and not just cost it (that is, C_se|E might be a negative number, indicating negative costs, i.e. profits).

That person may also exaggerate either or both of C_GW|E and Pr(GW | E), since an increase in either increases the expected value.

But there is less leeway with Pr(GW | E): no matter what the status of E, the probability GW strikes lives between 0 and 1. Thus a move from (say) 0.9 to 0.95—saying we are now 95% and no longer 90% sure GW will strike—will change (B), but not by very much.
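
To put a number on it (with an invented C_GW|E of 10, in whatever units you like):

     10 x 0.95 = 9.5 versus 10 x 0.90 = 9.0,

a difference of barely six percent.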

It is thus much better for the advocate (activist?) to monkey with C_GW|E, which can increase without bound (it lives on the real line; hence, it can take any value). It is the most trivial of mathematical problems to find a cost C_GW|E such that

     C_GW|E x Pr(GW | E) > C_se|E.
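
Rearranging gives the tipping point directly: the decision flips as soon as the claimed catastrophe cost exceeds C_se|E / Pr(GW | E). A tiny sketch, again with invented numbers:

     # Smallest claimed C_GW|E at which "socialize" wins the expected-value test,
     # from rearranging C_GW|E x Pr(GW | E) > C_se|E.  Numbers are placeholders.
     def tipping_cost(cost_socialize, pr_gw):
         return cost_socialize / pr_gw

     print(tipping_cost(cost_socialize=2.0, pr_gw=0.10))  # 20.0
     print(tipping_cost(cost_socialize=2.0, pr_gw=0.01))  # 200.0
     # A smaller probability just means a bigger claimed catastrophe is needed,
     # and claims cost nothing to make.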

The costs and the probability are fixed once E is, so what will happen is the advocate will toy with E, adding to it. It’s not just snakes which will thrive when GW strikes (and thus cause an increase in deadly snake bites), but killer bees will also blossom (and thus cause an increase in deadly bee stings). It’s not just corn which will wither on the vine, but wheat, too, and rice, barley, and the current favorite of the foodies, quinoa, all of which will suffer, thus increasing food costs.

The possibilities are limited only by the imagination, and nothing stokes our fantasy engines more than the end of the world. C_GW|E can absolutely always be made as large as you like.

Understand, even if Pr(GW | E) is small, say as little as 10%, the advocate can still increase C_GW|E at will. You will find him doing just that each time the estimate for Pr(GW | E) is lowered.

But the worst is not yet. For in reality, nobody can say with any kind of certainty what Pr(GW | E), C_GW|E, and C_se|E are. Their true values are “I don’t know”, “Could be anything”, and “Beats me.” Thus when it comes to calculating the expected value we really have the decision equation

     “Could be anything” x “I don’t know” > “Beats me”.

To those who look at this strange result and say, “But that means we don’t know what to do!” I reply, “Yes, that’s right. What’s your point?”
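
One way to see how empty the formalism is: sweep each unknown over a wide, frankly invented range and watch whether the comparison even settles on an answer. A crude sketch:

     # Sweep invented ranges for the three unknowns and check whether the sign of
     # C_GW|E x Pr(GW | E) - C_se|E is stable.  The ranges are placeholders, not estimates.
     import itertools

     pr_gw_range = [0.05, 0.5, 0.95]      # "I don't know"
     cost_gw_range = [0.5, 10.0, 500.0]   # "Could be anything"
     cost_se_range = [0.2, 2.0, 20.0]     # "Beats me"

     decisions = {
         "socialize" if c_gw * p > c_se else "do not socialize"
         for p, c_gw, c_se in itertools.product(pr_gw_range, cost_gw_range, cost_se_range)
     }
     print(decisions)  # both answers appear; the "analysis" decides nothing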

I’m available to speak on this (and many other) topics. See the Contact Page.

Update Typo (what? me have a typo?) has been fixed in last equation. Thanks to Paul Mullen for bringing it to my attention.

Update The last equation is just as unsolvable if we substitute Pr(GW | E) = 0.99. Failing to understand that is what drives climatologists to their excesses. More on this later.

——————————————————————————

1There are a myriad of ways to complicate this decision, all tending (you will agree after reading) to cause certainty to decrease.

18 Comments

  1. Rich

    Doesn’t this lead to the professional statistician’s dilemma? There’s no point paying a man to say, “We don’t know” when we didn’t know before we asked him. So the pressure is on to say, “We know with 90% confidence” and take the cash and run.

  2. Briggs

    Rich,

    No. I’m perfectly happy to be paid to officially tell somebody “I have no idea.”

  3. Chinahand

    Prof Briggs – do you really have no idea, given a certain emissions scenario, what global temperatures could be in 50 years’ time? No idea at all … at all?

    Not within any range? Sure, you could say the range swamps out any change, but don’t you think you could integrate the uncertainty at all, to get some idea of probability?

    What about a long-term average – the temperature averaged over, say, 1983–2013 compared to the temperature averaged over 2020–2050 for various emissions scenarios – business as usual, a 50% drop in carbon intensity, etc.?

    Would such an exercise really be total junk in junk out?

    Is something like this:

    http://www.met.reading.ac.uk/~ed/bloguploads/hiatus.gif

    a total red herring?

    Are you totally sanguine about a rate of change of this order of magnitude, or able to totally dismiss it as nothing more than myth making?

    To me, that level of dismissal would require a total dismissal of the science. Maybe you are more knowledgeable than me – but that seems quite a leap.

    Are there any other areas of science with similar levels of uncertainty which you also reject entirely? What makes Climatology such an outlier?

    What is your estimate of the climate sensitivity? Mine’s between 2 and 5 degrees for a doubling of CO2 (and the other factors which such a doubling would entail). Is yours really -infinity to +infinity?

  4. MattS

    Briggs,

    “As stated, the problem is easy to write. Only one of these situations will occur:

    Energy companies socialized & Global warming cannot strike
    Energy companies not socialized & Global warming strikes
    Energy companies not socialized & Global warming does not strike”

    This is not true. There is no guarantee, even if Global Warming is a real threat, that “Energy companies socialized” will do anything to prevent it.

    Energy companies socialized & Global warming strikes anyway is a real possibility.

  5. William Sears

    Briggs: I just love that last inequality! I must find a way to use it. Also, I dislike footnotes.

    Chinahand: quote “Maybe you are more knowledgeable than me – but that seems quite a leap.” This is very funny as well! I must find a way to use it, too.

  6. Rob

    I’ve emailed you regarding questions related to this post, i.e., how does a statistician “fuzzy” questions? Your posts are doing an admirable job of convincing this lay reader that there is little value in gathering data, using statistical thinking to make decisions, etc.

    As to Rich’s point and your response, you may be willing to take my money to say “I don’t know” but I’m not willing to pay it.

  7. Rob

    Ugh. Editing error. Should be “…how does a statistician answer “fuzzy” questions?”

  8. MattS

    Rob,

    Once you’ve offered Mr. Briggs or any other expert a contract to provide you with advice you don’t get to not pay just because you don’t like the answer.

  9. JH

    Mr. Briggs,

    Call all this evidence E: E consists in experts’ judgments, actual facts, probable fictions, model outputs, data observations, and so forth. The entire point of this brief post is that E can never be more than a wild guess,…

    Thus when it comes to calculating the expected value we really have the decision equation
    “Could be anything” x “I don’t know” > “Beats me”.

    How can experts’ judgments, actual facts and data observation be wild guesses?

    Based on what you say here, we should not sort through the large quantities of data produced by DNA sequencing either, because nothing is certain. I imagine the area would probably become stagnant.

    “Uncertain” doesn’t mean “could be anything” or “I don’t know.”

    Why do you continue espousing rhetoric that seemingly demotes Statistics and science?

    You are too certain that others are over-confident about their conclusions. James Hansen has repeatedly emphasized the uncertainty in climate models in his book (Storms of My Grandchildren), but well, let’s trash him… and the science… due to his choice of being an advocate.

  10. JH

    Mr. Briggs,

    Bayesianism was the main methodology in the 19th century, and frequentism in the 20th? Which will dominate in the 21st? The arrival of high-speed computation (again, one reason why statistics is not all probability) has been making Bayesian analysis more viable. There is no better time to promote Bayesian analysis. And if I want to promote Bayesian analysis via blogging, instead of trashing statistical significance, I’d illustrate with real-life examples, e.g., whether/how the FDA should approve an NDA using Bayesian analysis.

  11. Chinahand

    William Sears – oops, my punctuation is terrible there. The “quite a leap” is the dismissal of the science, not being more knowledgeable than me. The first is a big deal, the second a triviality.

  12. Ray

    I’m still waiting for the so-called climate modelers to demonstrate that they can foretell the future and control the climate. Until then AGW is just hypothetical. Last time I checked, the people at the Climate Research Unit admitted there hasn’t been any warming in the past decade.

  13. anona

    @JH

    “How can experts’ judgments, actual facts and data observation be wild guesses?”

    Well, the experts have had their predictions tested in the last decade. And the results have not been very kind. Given their track record of predictive failures, it’s easy to conclude that the experts are just making wild guesses.

  14. Doug M

    When we multiply unlikely by enormous, we have an enormous sensitivity to errors in our estimate of unlikely.

  15. Briggs

    Rob,

    Well, send me your address and I’ll send you a refund.

    Obviously statisticians can be useful—you saw the recent posts on gun control and abortion attitudes? But anybody who tells you that they know just how much money it will cost to, say, socialize energy is, if not actually insane, then the recipient of too much praise in his life. Why, it’s like somebody telling you authoritatively it will save X dollars to implement the socialization of medicine.

    Matters that complicated cannot be forecast with precision.

  16. Milton Hathaway

    http://www.met.reading.ac.uk/~ed/bloguploads/hiatus.gif

    Hah, this plot really cracked me up. It reminded me of a problem I worked on shortly after I got out of college. Data storage was limited and expensive in those days, so instead of creating large arrays of correction data, we’d fit equations and splines and transforms and such that could be described with a small number of coefficients. After some satisfying successes, I got really full of myself and decided that since interpolation had worked so well, it was time to apply the same methods to extrapolation. It was a miserable failure and a very humbling experience. Without actual measured data acting to bound the results as with interpolation, the extrapolated data typically shot off wildly in some unexpected direction, as if it were happy to be finally free of the constraints of reality.

  17. Rob

    Those who replied (Matt, Dr. Briggs) using the interpretation that I would retain a statistician and then, should he say “I don’t know”, not pay him, misinterpret me. My point is that I am being convinced that hiring a statistician in the first place is highly likely to provide no useful information. Thus “unwilling to pay for it” means “unwilling to sign the retainer agreement.”

  18. Ye Olde Statistician

    The conundrum is this:
    1. With seven variables you can fit any finite set of data, as long as you can play with the coefficients.
    2. Ockham’s Razor: Don’t have too many factors in your models because you won’t be able to understand your models. (This, btw, is what he actually wrote, updated to modern terminology.)

    When there are too many factors, you can’t understand the model.
    When you reduce the number of factors, the model is too crude to match reality.
    (Ockham pointed out that reality could be as complex as God wished; it was the model that had to be simple. For epistemic reasons, not ontological!)
    Once a good fit has been obtained with seven factors, there will be no “room” (by definition); any 8th factor will lack “explanatory power.”
    Once new data have accumulated, the model will no longer fit and must be redone with new coefficients.
    Of course, the factors in the model are not the measured variables, but constructs derived from principal component analyses and such-like foo-foo. So the meaning of the model is one or two steps removed from empirical reality anyway.

    “All models are wrong. Some are useful.” — George E. P. Box
