There Is No Difference Between A Forecast, A Scenario, or A Projection

This is a tad incoherent, but the gist is here. I had the opportunity of submitting an abstract to the AGU fall meeting, and had only a couple of hours in which to do it. This is the, eh, plain-English rendering of that abstract. Stand by for more news.

People trying to escape the implication of a bad forecast often claim their forecast wasn’t a forecast but a projection or scenario. The implication is that a bad forecast means a (possibly beloved) theory is no good. Therefore, if the forecast wasn’t a forecast, but a projection or scenario, the theory can still be admired (or funded).

This won’t do. Forecasts are scenarios are projections. And bad forecast-scenario-projections mean bad theories.

These misunderstandings arise not only in making predictions, and in classifying which future-statements count as predictions, but also in deciding under which circumstances predictions must be verified. There is general recognition that good models produce good forecasts, but bad forecasts can’t be waved away by calling them projections or scenarios.

Now the finer points. For a start, the remarks below are general and apply to any data not yet seen, but for ease I illustrate with predictions of future events.

All forecasts are conditional on two things: a theory/model and a guess about what the future holds. Neither need be quantified or even rigorously defined, of course, but since scientists are keen on quantification, models usually have numbers attached to them.

Imagine the simplest model, which is a function of the past data, of time, and some set of premises which specify the model form (say, an ARMA process). This model can make a forecast. It will be conditional on the theory—which is the past data and model-form premises—and on a guess about what the future holds—which here is just that the future will come at us in discrete time points, t+1, t+2, etc.
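
To make this concrete, here is a minimal sketch in Python using statsmodels (the data are simulated, and the ARMA(1,1) choice is just one possible model-form premise): the forecast is conditional on the theory, i.e. the past data plus the model-form premise, and on the guess that the future arrives at t+1, t+2.

```python
# A minimal sketch: a forecast conditional on (1) model-form premises,
# here an ARMA(1,1), and (2) the guess that the future arrives at the
# discrete time points t+1 and t+2. Data are invented for illustration.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(42)
past_data = np.empty(100)
past_data[0] = 0.0
for t in range(1, 100):  # a stand-in AR(1) series for "the past data"
    past_data[t] = 0.6 * past_data[t - 1] + rng.normal()

# The "theory": the past data plus the ARMA(1,1) model-form premise.
model = ARIMA(past_data, order=(1, 0, 1)).fit()

# The "guess about the future": the next two discrete time points.
print(model.forecast(steps=2))  # predictions for t+1 and t+2
```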

Suppose the forecast is for time points t+17 and t+18. Here you stand at t+3, well short of t+17, but you still want to “verify” the forecast. Well, you can’t. The guess of the future has not obtained, therefore the forecast hasn’t, in effect, really been made. It is null. It is impossible to discuss the quality of the theory: it may be good or bad, and we can’t know which.

Nothing changes if we add to the model other propositions of interest. Suppose we augment our simple time series model with “x” variables, propositions which, for the sake of argument, say something about matters probative of the thing forecasted. Now if the forecast does not change in any way regardless of the state of the “x” propositions, then these items are irrelevant to the theory. Irrelevant items shouldn’t even be part of a theory, but these days, in this heyday of the politicization of science, anything goes.

For illustration, add another component to “x”, say, the price of oil exceeding some level. Point is this. If the guess of the future is t+1 and t+2, and here you stand at t+3, you have met the time criteria, but still have to check whether the price of oil exceeds the stated level. If it does, you can check the validity of the forecast; if not, not.
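
The bookkeeping can be sketched mechanically. In this hypothetical snippet (the times, prices, and threshold are all invented), a forecast is eligible for verification only when every condition it was issued under has obtained.

```python
# Hypothetical sketch: a forecast conditioned on "x" premises may be
# verified only once every premise has obtained. Values are invented.
def verifiable(current_time, forecast_times, oil_price, assumed_min_price):
    """True only if the forecast time points have passed AND the
    oil-price premise the forecast was conditioned on obtained."""
    times_obtained = current_time >= max(forecast_times)
    price_obtained = oil_price > assumed_min_price
    return times_obtained and price_obtained

# Standing at t+3 with a forecast issued for t+1 and t+2:
print(verifiable(3, [1, 2], oil_price=95.0, assumed_min_price=100.0))
# False: the time criteria are met, but the oil-price premise did not
# obtain, so the forecast is null and says nothing about the theory.
```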

Nothing changes if we add other “x” variables, or turn the model into physics instead of statistics. Get it? There is no difference between physics and statistics models in terms of forecasts. (Most models are mixtures of both anyway.) If the guessed future conditions obtain, then the forecast may, even must, be evaluated (the technical term is “verified”).

A difficulty arises with the word scenario, which is “overloaded”. It can mean the guess of what the future holds (the time points plus the price of oil, i.e. the “x”), in which case it is not a forecast but part of one; or it can mean the forecast itself. This ambiguity is why only “forecast” or “prediction” should be used.

It’s time t+3 and the price of oil did not exceed the stated level, so the forecast is null. But, since the price of oil is probative, we could make a new (after-the-fact) forecast assuming the appropriate price. In this way, we can still verify the model.

If we’re using that model for making decisions, particularly in government, we must verify it. We must input the “scenarios”, i.e. the “x”s that obtained, and then recompute the forecast. If the forecast has no skill, the model must be acknowledged as unworthy, to be abandoned or overhauled.
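
“Skill” has a standard quantitative reading: the model must beat a cheap reference forecast. A minimal sketch with made-up numbers, taking persistence as one common choice of reference:

```python
# Sketch of forecast skill: mean squared error of the model versus a
# naive persistence reference ("tomorrow equals today"). Skill > 0
# means the model beat the reference; skill <= 0 marks it unworthy.
import numpy as np

observed    = np.array([10.2, 10.8, 11.1, 10.9, 11.5])  # invented
model_fcst  = np.array([10.0, 10.5, 11.4, 11.0, 11.2])  # invented
persistence = np.array([10.0, 10.2, 10.8, 11.1, 10.9])  # lagged obs

def mse(forecast):
    return np.mean((forecast - observed) ** 2)

skill = 1.0 - mse(model_fcst) / mse(persistence)
print(f"skill score: {skill:.2f}")  # positive here: skill over persistence
```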

Update: Reading is a difficult art, rarely mastered. Many, after reading a title, feel they have assimilated all the material thereunder. Strange. Many others gloss. More than a few shot right past where I said scenarios were sometimes the “x” and sometimes the forecasts themselves.

But it sure is nice to have an opinion, isn’t it?

36 Comments

  1. Sheri

    The claim of “climate change is not about models” has become quite popular on skeptical blogs, at least for a while. I spent much time explaining that there are only two ways to predict the future: get a reliable psychic or use a model. All data can tell us is what has happened and what is happening. To go beyond that, one has to use a model. You can’t “fix” the problem of failing models by denying that you’re using one. I was never certain whether these people really believed models weren’t involved or were just repeating what someone told them to say. I suppose when people attack your beliefs, and your beliefs are based on faith because you don’t understand what is being said, you just go with whatever seems most logical. If the models fail, say it’s not about models. That way global warming is still there no matter what happens with those pesky computer projections.

  2. Gary

    Some might argue that the issue is semantics, with precision varying along the forecast-projection-scenario spectrum, so these really aren’t the same thing and some leeway is allowed for uncertainty. Of course, they’re trying to have it both ways with weasel-wording and shades of gray.

    Is the purpose of your abstract to establish a logically rigorous definition of prediction so that theory can be evaluated against a standard?

  3. Johan

    The intended output of scenario analysis is a fairly limited set of “structurally different but coherent” narratives or storylines of “futures” that are each “plausible”, “internally consistent” and “somewhat useful for decision-making”, with the explicit aim of revealing available choices and their potential impacts. The term “scenario” in this sense was introduced by Herman Kahn in the 1950s in connection with studies conducted by the Rand Corporation. Shell became a pioneer in the field of corporate scenario planning.

    Scenario building is just one of many “management (science) tools” helping decision-makers to consider the range of plausible futures, to articulate preferred visions of the future (in case of “normative scenarios”), to stimulate creativity, to break from the preoccupation with present and short-term problems, and to use the knowledge and understanding acquired during the scenario development process to anticipate the context in which they have to act.

    One may use simulation modelling to “quantify” scenario analysis, but this is most definitely not a must. In fact, the large number of factors involved and the associated high uncertainties may very well be the main reasons why scenario analysis emerged as a “foresight” method in the first place. In all cases, one should never assign probabilities to scenarios (e.g. most or least likely scenario), because that defies the whole purpose of scenario building, namely illuminating different (possible) futures and allowing decision-makers to become proactive.

    I therefore do not agree that scenarios equal forecasts or predictions. To quote Ludwig Lachmann: “… the future is unknowable, though not unimaginable …”. But imagination is not prediction. There is no guarantee that decision-makers can imagine “correctly”.

  4. Ye Olde Statisician

    I had always figured a “scenario” to be a set of conditions assumed for the sake of a forecast. The additional x’s you mention. For example, if little green men from Alpha Centauri corner the wheat market for export, then… the model predicts such-and-such. They often come in sets: scenario A, scenario B, and so on, and act as an unquantifiable categorical variable. That is, the model does not try to predict whether A, B, or C will come about, but asks what the model would say if each of them does.

  5. Johan

    “Scenario” is a fuzzy concept that is used and misused, with various shades of meaning (Mietzner D. & Reger G., Scenario-Approaches: History, Differences, Advantages and Disadvantages, in: Proceedings of the EU-US Scientific Seminar: New Technology Foresight, Forecasting & Assessment Methods, Seville, Spain, 3–14 May 2004).

    A good primer on “scenario analysis” is Kosow H. & Gaszner R., Methods of Future and Scenario Analysis: Overview, Assessment, and Selection Criteria, Deutsches Institut für Entwicklungspolitik, Bonn, 2008. http://www.die-gdi.de/uploads/media/Studies_39.2008.pdf

    To paraphrase the authors:

    With regard to differences in the generalized definition of scenarios, one aspect stands out: the distinction between scenarios and prognoses. The concept “scenario” is often used in contradistinction to the concept of “prognosis” and that of “prognostics”, with all its negative connotations. Prognoses are statements about future developments which may be expected. In contrast to prophecies, these statements are supported by a basis of knowledge, as in the statistical extrapolation of present and past trends. Some authors explicitly exclude prognoses, i.e. predictions based on the expected “extension” of present-day developments into the future, from the concept of a scenario. They emphasize that it is precisely the nature of scenarios not to offer prognoses but rather, in essence, to take into account the possibility of several alternative futures. On the other hand, concepts like “prognosis”, “outlook”, “forecast”, “prognostics” and “trend extrapolation” are often equated with scenario approaches in the areas of market research and consultation. It must also be recognized that classical techniques of prognosis, along with traditional forecasting techniques, have made their way into scenario methods and are enhanced, although not completely replaced, by the latter. They can well be said to represent a partial aspect of scenario approaches.

    Fair warning though, Kosow studied social and political sciences, Gaszner is a psychologist. You are now entering the realm of the reviled soft “social sciences”.

  6. Bob Mrotek

    I tend to think of “scenario” in very simple terms that are clearly understood by most people, like:
    Ballpark figure
    Winging it
    Pulling a number out of your butt
    Supposing
    Just for the sake of argument
    Is it bigger than a breadbox?
    Run it up the flagpole
    Guesstimate
    Throw out a number
    All things being equal
    & yada yada yada…

  7. Johan

    @Bob Mrotek

    You forgot “gut feelings”, and they are part of the so-called “Intuitive Logics”, only one of three complex creative-narrative techniques.

    BTW, Intuitive Logics was developed by the Stanford Research Institute (SRI), Global Business Networks and Shell.
    Never underestimate your opponents 🙂

  8. Johan,

    Intuitive logics seems to me like some kind of “back to the future approach” 🙂

  9. Doug M

    When Nate Silver says “The Republicans have a 63% chance of taking the Senate”, what does that mean to you?

    In a million simultaneous worlds, the Republicans win in 63% of them and the Democrats win in 37%?

    Is it saying, if you were to establish a betting line, this is the line I recommend that you set?

    I have a model, that I know has some degree of error. My estimate of the error of the model suggests…

    If the Democrats hold the Senate, he can say, “I only gave the Republicans a 63% chance of taking the Senate.” It is more squirrelly than making a straight-out prediction and accepting that he is accurate 63% of the time.
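
    One operational reading: score such probability statements over many events, e.g. with the Brier score. A sketch with invented numbers:

    ```python
    # Sketch: the Brier score grades probability forecasts like "63%
    # chance the Republicans take the Senate" across many events.
    # Lower is better; the data here are invented for illustration.
    import numpy as np

    prob_forecasts = np.array([0.63, 0.80, 0.20, 0.55])
    outcomes       = np.array([1,    1,    0,    0])  # 1 = event occurred

    print(f"Brier score: {np.mean((prob_forecasts - outcomes) ** 2):.3f}")
    ```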

  10. IPCC’s explanation is rather peculiar.

    Projections of future climate change are not like weather forecasts. It is not possible to make deterministic, definitive predictions of how climate will evolve over the next century and beyond as it is with short-term weather forecasts. It is not even possible to make projections of the frequency of occurrence of all possible outcomes in the way that it might be possible with a calibrated probabilistic medium-range weather forecast. Projections of climate change are uncertain, first because they are dependent primarily on scenarios of future anthropogenic and natural forcings that are uncertain, second because of incomplete understanding and imprecise models of the climate system and finally because of the existence of internal climate variability. The term climate projection tacitly implies these uncertainties and dependencies. Nevertheless, as greenhouse gas (GHG) concentrations continue to rise, we expect to see future changes to the climate system that are greater than those already observed and attributed to human activities. It is possible to understand future climate change using models and to use models to characterize outcomes and uncertainties under specific assumptions about future forcing scenarios.

    http://www.climatechange2013.org/images/report/WG1AR5_Chapter12_FINAL.pdf

  11. Curious George

    There may be important legal distinctions. I saw somewhere that a projection cannot be falsified (probably under American law), whereas if you are brave enough to call it a forecast… you may be shown wrong. No wonder climate “science” always walks the safe path.

    You have omitted one synonym: in IPCC jargon, hypotheses are called “conclusions”, and they are somehow assigned a “statistical” probability. I guess it is really a consensual probability.

  12. Betapug

    Presentation is everything, and I wonder about the rhetorical power that computer-generated graphics lend to the credibility of the dubious.
    The Marcott “We’re Screwed: 11,000 Years’ Worth of Climate Data Prove It” paper of last year, with its accompanying chart whose y-axis range was constricted to let the “temperature anomaly” trace project above the box, along with interview quotes about temperature “going through the roof”, impressed the hell out of millions.
    The note buried in the paper that the final 200 years were “not robust” carried no weight at all.
    http://www.theatlantic.com/technology/archive/2013/03/were-screwed-11-000-years-worth-of-climate-data-prove-it/273870/

  13. JH

    In statistics, a forecast is a prediction of the future based on the past data. Predictions can be made without a time component. For example, the unknown sale price of a property can be predicted using its size, the number of bathrooms and other variables.
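
    For instance, a minimal sketch of a prediction with no time component (data invented):

    ```python
    # Sketch: predict a sale price from size and bathroom count by
    # ordinary least squares. No "future" is involved. Data invented.
    import numpy as np

    # columns: intercept, size (sq ft), bathrooms
    X = np.array([[1, 1500, 2], [1, 2000, 3], [1, 1200, 1], [1, 1800, 2]])
    y = np.array([300_000, 420_000, 240_000, 350_000])  # observed prices

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit the model
    new_house = np.array([1, 1600, 2])
    print(f"predicted price: {new_house @ beta:,.0f}")
    ```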

    Forecasts are predictions but not all predictions are forecasts.

    In demographic techniques, a projection (scenario) may be calculated assuming that a particular set of assumptions holds true. Therefore, unlike a forecast relying mainly on past data, it might never be proven right or wrong by future events. Statisticians may view calculations of future population based on past data as forecasts, but the Census Bureau may see them as projections.

    Also, one may say that backcasting is forecasting in reverse time, but some people would rather call it a projection in this case.

    So, forecasts are projections but not all projections are forecasts.

  14. JH

    Briggs,

    I am not sure what kind of talk is appropriate for the AGU meeting. The differences and similarities among prediction, forecasting, projection, and scenario are taught in a graduate GIS course here. Materials taught in a classroom are usually not acceptable for a professional statistical conference.

  15. Ray

    Forecasting is very difficult, especially if it’s about the future.

    Niels Bohr

  16. James

    I would distinguish ‘scenario’ from the other two in the following way. A ‘scenario’, in my usage, means a setting of certain exogenous variables (or noise variables, if you like). For example, if I’m doing economic analysis of an airplane design, I may create different ‘scenarios’ of fuel price and consumer demand curves. Within each scenario the math/physics/etc. stays the same. The only things changing are some inputs.

    When the IPCC talks about ‘scenarios’, they usually (in my limited experience) talk about different CO2 emission curves, etc. These are things that shouldn’t change the physics or math in the models. It’s just a new context/starting point. Scenarios can each have their own prediction or forecast (which I think are basically the same).

    Stated differently, forecasts/predictions are conditioned on scenarios.
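
    A sketch of that usage (the model form and numbers are invented): one fixed model, several scenarios, one forecast per scenario.

    ```python
    # Sketch: the model stays fixed; only the scenario (exogenous
    # input) changes, yielding one forecast per scenario. Toy numbers.
    def demand_forecast(fuel_price: float, base_demand: float = 100.0) -> float:
        """Fixed toy model: demand falls linearly with fuel price."""
        return base_demand - 0.5 * fuel_price

    scenarios = {"low fuel": 40.0, "medium fuel": 80.0, "high fuel": 120.0}
    for name, price in scenarios.items():
        print(f"{name}: forecast demand = {demand_forecast(price):.1f}")
    ```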

  17. Tom Scharf

    The models do have a couple of get-out-of-jail-free cards.

    1. Volcanoes.
    2. Other forcing inputs that were not anticipated and are materially different than expected.

    #1 is just a special case of #2, but probably the most likely anticipated issue.

    So if we had a mega Mt. Pinatubo go off, the models would not be expected to have anticipated this.

    They also cannot reproduce the timing or strength of ENSO events, so the assumption is that these will average out over time. They seem to latch onto this one as a “feature” instead of a “bug”. Whether the inability to predict ENSO events suggests a further inability to predict the climate in total is debatable.

    The different RCP emissions scenarios are their attempt to deal with problem #2, allowing predictions based on different emission-forcing inputs.

    What should be considered a valid exercise is for them to re-run the models later using observed forcings to remove this question. Whether this is a trustworthy exercise given the political nature of climate science is another debate.

    As I recall they have already done this and the pause was still not reproduced.

    In any event you get points for being accurate and you lose points for the opposite. At what point the model is insufficiently accurate to be used for policy input is a judgment call, and as we see there are differing opinions on this.

    What would be fair is for the modelers to establish these guidelines at release of model and not make it up as they go along. If they say it takes 30 years to invalidate a model’s performance, I say it also takes 30 years to validate it. Call me when you have the results.

  18. Briggs:

    We disagree. I don’t know the cause.

    As the words are commonly used in climatology, “prediction” refers to a proposition regarding the outcome of an event. A “projection” is a function that maps the time to the global temperature; it is a kind of response function. As a function is a different kind of thing than a proposition, the two ideas should be linguistically separated. One of the costs to climatology of failing to fastidiously separate them is that climatologists do not notice when events do not underlie their models. A consequence is that the models used by policy makers in attempting to regulate the climate convey no information to those policy makers about the outcomes of their policy decisions. There is no possibility of regulating the climate, but policy makers persist in attempting it because they are unaware that the events must exist if they are to have information about the outcomes of their decisions.

  19. Sheri

    Betapug: Yes, Marcott did specifically say the results were not statistically robust. It is unclear whether the publishing journal chose to omit that data or what. The original paper (his master’s thesis, I think) was very clear. My guess is that it could be spun to what was wanted and who really cared about accuracy? It’s good to see someone who actually finds the hidden parts that are left out of many write-ups.

  20. Ken:

    In the first 60 seconds of Prof. Feynman’s lecture he states that “if it disagrees with experiment it’s wrong.” This description leaves out the detail of how disagreement is determined.

  21. Sander van der Wal

    @Terry Oldberg

    Theories will have some kind of inherent accuracy built in. If your solar-eclipse theory says it predicts eclipses to the second, then a difference of a minute makes it obviously wrong. If the difference is a tenth of a second, then it is not yet wrong. If the differences over time grow beyond 1 second, it is wrong. If the differences tend to stay around 2 seconds for years, it is wrong, but good enough.
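
    A sketch of that rule (tolerance and errors invented):

    ```python
    # Sketch: judge a theory against its own stated accuracy. A miss
    # within tolerance is "not yet wrong"; a modest overshoot may be
    # "wrong, but good enough". All values are invented.
    def verdict(errors_sec, tolerance_sec):
        worst = max(abs(e) for e in errors_sec)
        if worst <= tolerance_sec:
            return "not yet wrong"
        if worst <= 2 * tolerance_sec:
            return "wrong, but good enough"
        return "wrong"

    print(verdict([0.1, -0.4, 0.7], tolerance_sec=1.0))  # not yet wrong
    print(verdict([60.0], tolerance_sec=1.0))            # wrong
    ```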

  22. Briggs

    JH,

    Unfortunately, after you’re done teaching them, somebody has to come later and teach it the right way.

    For instance, correcting fundamental errors like “Forecasts are predictions but not all predictions are forecasts.” Poor students!

  23. Rich

    If a projection is conditional on a scenario and the scenario actually occurs, doesn’t the projection become a forecast? “England will win the World Cup when hell freezes over” is, surely, a prediction if hell, in fact, freezes over?

  24. JH

    Briggs,

    Fundamental error? Hot air doesn’t rise in my world. It takes no brain power to make such a claim. Have a banana first!

    Snarky comments aside, how about some explanations or a counterexample to back up your claim?

  25. Sheri

    @Terry: It seems that when dealing with probability and outcomes, falling within the error bars is sufficient for “agreement”. It’s a pretty liberal standard and, depending on how the error bars were generated, can be quite useless in reality.

    As Sander says, the theories have built-in accuracy measures. Since these can vary from theory to theory and researcher to researcher, even from one time to another it seems, it is very important to read what the criteria are. Then you can decide if the measure of “agreement” makes sense and is high enough that you can believe the outcome was more or less proof of the theory.

  26. Sheri and Sander:

    The way I like to state it, for falsifiability there has to be a specified set of events and these events have to have a set of specified “outcomes.” In statistical terms, these outcomes form the “sample space.” In a sequence of coin flips, the sample space contains “heads” and “tails”, for example.

    If the model is slated for use in controlling a system, the set of “conditions” has to be specified. A pairing of a condition with an outcome provides a partial description of a type of event. For example, “cloudy” for the condition and “rain in the next 24 hours” for the outcome provides a partial description of an event.

    In global warming climatology, there is only one item on test: the Earth. For independence, the events have to occupy differing time slots. The various time slots should cover the time line with the result that the set of time slots is a partition of the time line. It follows that the end time for one event equals the start time for the subsequent event.

    In a “prediction,” the condition of an event is observed and the outcome is inferred. The various outcomes have probabilities of being true plus uncertainties on the limiting relative frequencies. In testing the model, the proposition is tested that the probability and uncertainty values are accurately stated by the model. This testing is performed on observed events, that is, events in which the conditions and outcomes have been observed. The observed events cannot have been used in the construction of the model.

    In global warming climatology, this structure did not exist until recently, making the conjectures of the models non-falsifiable and unscientific. To make “projection” and “prediction” synonyms is to cover up the fact that there was not a truly scientific basis for the policy decisions that were made. For example, there was not a scientific basis for the EPA’s “endangerment” finding.

    I find that most bloggers are thoroughly confused about this situation. This confusion arises from the common practice of using “predict” and “project” as synonyms, I think.
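
    That structure can be sketched mechanically (all names and values below are invented): pair conditions with outcomes over non-overlapping time slots, then compare the model’s stated probability with the observed relative frequency.

    ```python
    # Sketch: events are (condition, outcome) pairs over a partition of
    # the time line; the model's stated probability is tested against
    # the observed relative frequency. Values invented for illustration.
    events = [
        ("cloudy", "rain"), ("cloudy", "no rain"), ("cloudy", "rain"),
        ("clear", "no rain"), ("cloudy", "rain"),
    ]
    stated_p = 0.70  # the model's claimed P(rain | cloudy)

    cloudy_outcomes = [o for c, o in events if c == "cloudy"]
    observed = cloudy_outcomes.count("rain") / len(cloudy_outcomes)
    print(f"stated {stated_p:.2f} vs observed {observed:.2f}")
    ```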

  27. Alan McIntire

    I agree with “Ye Old Statistician” that a “scenario” is more of a “what if” than an actual prediction. I’m currently reading a biography of Eisenhower, and the US military in the 1930s had several scenarios: War Plan Orange to defeat Japan and War Plan Black to fight Germany, which DID turn out to be applied, but there were also Plan Red for war with England, Plan Green for war with Mexico, etc., which fortunately the US never had to implement.

  28. Ken

    There’s a lot of nitpicky (word definitions) hairsplitting going on…

    …but nobody here is grasping a fundamental issue —

    — just because a model predicts what experiment and/or actual observations observe does NOT mean the model is validated.

    All that means is the model is not proven false.

    It might be true, and it might not.

  29. Tim Hammond

    Seems to me this is missing the point.

    A theory must say something about the future – call it whatever you like.

    If the whatever is wrong, i.e. the future does not behave in the way the theory says it will, the theory is wrong.

    If your theory cannot make robust enough whatevers that can be tested against what happens in the future, your theory is useless and can be ignored.

  30. Sheri

    Tim: I don’t know that a theory has to say something about the future. A theory can tell you what will happen in a given situation. In a sense, that is the future, but theories about gravity, the big bang, etc. don’t predict the future. Gravity explains why things fall—and why they fall the same way each time. The big bang tries to explain what happened in the past. Even though the big bang and evolution cannot really be tested, they can only be discarded if a better theory comes along. Theories can give you an explanation of a process that can then be applied to like situations.

  31. Ken:

    In the usual terminology, a model that has been “validated” is one in which the conclusions that are reached by it have been tested against observational data without being falsified by the evidence. A theory is “scientific” if and only if it has been validated.

  32. Tim Hammond:

    Your description of a scientific theory is accurate but incomplete. Disambiguation is in order in view of the disastrous consequences for mankind when a pseudoscience is mistaken by many for a science. Among the terms needing disambiguation in the language of global warming climatology are “prediction” aka “forecast” and “projection.” The headline that says “There is no difference between a forecast, a scenario or a projection” runs counter to this need.

  33. Joshua

    If Briggs were to examine his own biases, he wouldn’t write blog posts like this one (projection).

    Briggs will not start to examine his own biases (prediction).
