# Climate Model Uncertainty: Part I

Would you check the results of a model with another model? Before you answer, be sure you know what the question is.

A model—whether it is physical, statistical, mathematical, or some combination—is an algorithmic device designed to make predictions about some observable thing. You want to know today the price of tomorrow’s Dow Jones Industrial Index? There are models for that; usually statistical models.

You want today to know whether it will rain in Detroit tomorrow so that you can decide whether to plant your crops in the old lots that used to contain houses? There’s a model for that; a physical-statistical weather model called MOS (model output statistics; see Part II).

Now, how would you, assuming you are not an expert in these matters, check the accuracy of your model? Would you (a) compare the model’s predictions with what actually happened, or (b) produce another model and check the results of the first model against the predictions of the second?

The right answer is (a), of course, but the problem is that there are two ways to interpret “what actually happened.” You probably thought it meant “what happened in the future.” Now, it is the great shame in the field of statistics—both in the dismal way it is taught and the worse way it is practiced by most—that (a) is nearly always interpreted to mean “what happened in the past.”

Nearly all—the exceptions to this are rarer than sober Paul Krugman columns—statistical models, and many physical models, are checked against the data that was used to fit, or create them. Since it is an elementary theorem that any model may be made to fit perfectly—not just closely, perfectly—to any set of historical data, to claim that your model is good because it fits old data well is a hollow boast.
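To see just how cheap a perfect fit to old data is, here is a toy sketch (mine, not anyone’s actual climate model): give a model as many free parameters as there are data points and it will reproduce pure noise exactly, while its forecast remains worthless.

```python
import numpy as np

# Ten "historical" observations that are pure noise -- no signal at all.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 10)   # ten equally spaced time points
y = rng.normal(size=10)          # the "data"

# A model with as many free parameters as data points (here, a degree-9
# polynomial) reproduces the historical record exactly.
coeffs = np.polyfit(x, y, deg=9)
fit = np.polyval(coeffs, x)
max_resid = np.max(np.abs(fit - y))
print(max_resid)   # essentially zero: a "perfect" fit to noise

# Yet one step beyond the record, the "forecast" is meaningless.
print(np.polyval(coeffs, 1.2))
```

The perfect fit tells you nothing about the polynomial’s skill at time 1.2; the data were noise, so no skill is possible.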

This is the reason for the great overconfidence of experts who build and use models. And don’t think it doesn’t matter, because it does. The people in charge of us frequently make decisions and set policy based on these models. We are at the mercy of bad statistics.

## Weather and Climate Models

But it’s not all bad. It is to the great glory of meteorological models that they are usually—in practice, I mean—checked against what happened in the future. Weather models have the advantage of a constant stream of model predictions and future observations. Discrepancies between the two are noted quickly and used in tweaking the models so that they perform better in the future.

Anybody who cares to look will discover that the performance of meteorological models has improved dramatically over the last thirty years. Of course, people’s expectations of accuracy have also increased, so that the level of grousing about weathermen has remained constant. Human nature.

Climate models are in a different category. So far, all they can boast about is how well they fit the data used to build them, which we have just seen is no great shakes. This being true, those who use climate model output should be humble, they should be cautious, even timid about their prognostications. And that’s just what we see in practice, right?

Actually, it’s still worse, because climate modelers—and in their development stages, weather modelers—answer (b) to that question above. They check their models against the output of other models. How could this be?

## The Analysis

Climate/weather models take current observations as input and produce forecasts of future observables as output. But these physical models cannot take observations raw, like statistical models can. They must first process those observations so that they fit into the model environment. This assimilation is called an analysis. Analysis is a model itself.

Climate/weather models are run on grid-like structures, but observations come irregularly: we do not have equally spaced observations over the surface of the Earth and through the atmosphere. To operate, the observations have to be placed on the model grid. The analysis, then, is a sort of interpolation that does this. This is not a detriment; it is a necessary step to get these models to run.
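As a toy illustration of what an analysis does (a deliberately crude stand-in; real analysis schemes are far more sophisticated), here is scattered-to-grid interpolation by inverse-distance weighting, with all station locations and values invented:

```python
import numpy as np

def analysis(obs_xy, obs_vals, grid_xy, power=2.0):
    """Interpolate irregular observations onto model grid points by
    inverse-distance weighting: a toy stand-in for a real analysis."""
    # Distance from every grid point to every observation site.
    d = np.linalg.norm(grid_xy[:, None, :] - obs_xy[None, :, :], axis=2)
    d = np.maximum(d, 1e-9)          # avoid division by zero at a station
    w = 1.0 / d**power
    return (w @ obs_vals) / w.sum(axis=1)

# Three irregular "stations"...
obs_xy = np.array([[0.1, 0.2], [0.8, 0.3], [0.4, 0.9]])
obs_vals = np.array([15.0, 18.0, 12.0])

# ...placed onto a regular 3x3 model grid.
gx, gy = np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 3))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
field = analysis(obs_xy, obs_vals, grid_xy)
print(field.reshape(3, 3))
```

Note that every gridded value is a weighted average of the observations: the model never sees the raw measurements, only this smoothed reconstruction of them.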

Once the analysis is complete, the model is integrated forward in time to produce a forecast. OK so far? Because it’s about to get tricky. At that future point—the time of the forecast—come new observations. Ideally, the climate/weather model’s output would be checked against these actual observations, at only the irregularly spaced sites where they are taken. These observations are the truth, the whole truth, and the only truth.

But that’s not what happens. Instead, these new observations are read into the model in a new analysis cycle. This interpolates these new observations to the model grid. Then the old model integration is checked against this new analysis.

Thus, the model’s accuracy is checked with another model.
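A toy numerical sketch of the distinction (all numbers invented: a sine wave for “nature,” a constant bias for the model error): score the same forecast once against the raw station observations and once against an analysis of those observations, and the two verification scores disagree.

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 11)                           # 1-D "model grid"
stations = np.array([0.07, 0.23, 0.41, 0.58, 0.76, 0.93])  # irregular sites

def truth(x):
    """What nature actually does (a made-up field)."""
    return np.sin(2.0 * np.pi * x)

obs = truth(stations)            # the new observations
forecast = truth(grid) + 0.3     # a biased model forecast on the grid

# (a) verify against the raw observations, at the station locations
fc_at_stations = np.interp(stations, grid, forecast)
err_obs = np.sqrt(np.mean((fc_at_stations - obs) ** 2))

# (b) verify against an "analysis": observations interpolated to the grid
analysis = np.interp(grid, stations, obs)
err_analysis = np.sqrt(np.mean((forecast - analysis) ** 2))

print(err_obs, err_analysis)   # the two scores disagree
```

The score against the analysis mixes the model’s error with the analysis scheme’s own interpolation error, which is the point of the complaint above.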

In Part II: MOS and measurement error

1. I find it bizarre that you continually misrepresent what climate models do. They do not ‘match the data used to build them’ in the sense that you mean (presumably the global mean temperature anomalies) since those data are *not* used to build them. What is used are the climatologies (the decadal+ averages of observables along with their statistical properties), not the year by year variations of the weather. They do not “take current observations as input” in the sense of weather forecast models. Transient runs are initiallised with pre-industrial conditions and are free-running after that, driven only by external (and independently derived) changes in atmospheric composition are surface features.

Try actually downloading a climate model and see for yourself how they work. Or, you know, try reading a paper. Even your description of weather forecast procedures is wrong.

2. Briggs says:

Hi Gav,

Oh, you don’t find it bizarre. Anyway, let’s not quibble.

Your “climatologies (the decadal+ averages of observables along with their statistical properties)” are observations. An average of an observable is still observable; that is, it is an observation. Your “initiallised with pre-industrial conditions” are observations, albeit observations with a healthy dose of measurement error (more uncertainty!). And your “external (and independently derived) changes in atmospheric composition [of] surface features” sure sound like observations to me.

All those observations are certainly used to build the models—in my sense, which is this: those observations are used to inform the modeling process. Parameterizations, tweaks, and so forth are done with respect to those observations. You surely aren’t claiming that historical observations are not used to tune climate models in any way?

Anyway, neither here nor there. The larger claim is that assessing the accuracy of models—climate or weather—can only be done by comparing integrations (forecasts—or, if you like the euphemism, “scenarios”) with actual observations. And not with analyses. You must agree with that.

I say “euphemism” with regard to scenarios, incidentally, because, of course, a scenario is nothing but a conditional forecast. We can talk later about that.

3. Hi Briggs,

I believe this recent post (and its links to the discussion on Steve Easterbrook’s blog) on parameterization, calibration, and validation of weather/climate models is relevant to the issue of climate model uncertainty you bring up here.

BTW, I disagree with Gavin that inputs into the climate models are “the climatologies”. I think the distinction between input and output needs to be kept clear.

George

4. Briggs says:

Hey Gav,

It just occurred to me that we can shorten this entire discussion by jumping right to the gist.

How do you know—by what specific measures—that our best climate models are accurate?

5. Frank says:

Matt,

Back in my youth I took a graduate-level forecasting course (ARIMA). The one point that was continually stressed was to hold out at least 20% of the existing data prior to estimating each model’s parameters, so as to be able to compare the predictions of competing models on the basis of how well they stacked up against the hold-out sample. No offense to Gavin (above), but every representation I’ve seen of GCM results seems to focus on how well the models fit ALL of the available data. This implies in my mind that they are continually re-estimated, or “tuned”, as new observations become available. (For now, let’s not get into the issue that they are probably tuned to inaccurate / corrupted surface data records!)
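Frank’s hold-out procedure is easy to sketch (a toy example with an invented noisy series, not an ARIMA model): fit on the first 80% of the record, score on the final 20%, and watch the overfit model win in-sample while losing out-of-sample.

```python
import numpy as np

# Synthetic "temperature" series: a weak trend plus noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 100)
y = 0.5 * t + rng.normal(scale=1.0, size=100)

# Hold out the final 20% of the record *before* estimating anything.
n_train = 80
results = {}
for deg in (1, 8):   # a simple model vs. an overfit one
    c = np.polyfit(t[:n_train], y[:n_train], deg)
    in_rmse = np.sqrt(np.mean((np.polyval(c, t[:n_train]) - y[:n_train]) ** 2))
    out_rmse = np.sqrt(np.mean((np.polyval(c, t[n_train:]) - y[n_train:]) ** 2))
    results[deg] = (in_rmse, out_rmse)
    print(deg, in_rmse, out_rmse)
```

The degree-8 model always beats the line in-sample (more parameters cannot raise a least-squares error), but its extrapolation onto the held-out fifth of the record is far worse, which is exactly what the hold-out sample is there to expose.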

The point here is that “tuning”, as you have continually pointed out, is a trivial exercise. And in the case of GCMs, if the assumption going in is that CO2 done it, then all tuning (i.e., re-estimating) does is drive the model’s feedback and sensitivity parameters to the extent necessary to arrive at the observed (i.e., known) outputs. In short, if Gavin wants to buttress support for the GCMs, he needs to demonstrate that the models were estimated using data from one period and that the output predicts hold-out data that was not available to the model during estimation.

PS – What I’m really curious to see done is to have Gavin turn over his parameterized model to an independent group, who would then re-run it with a constant CO2 input of 280 ppm…

6. dearieme says:

It would be a useful test, would it not, to tweak the models on, say, the 1910 to 2009 data and then get them to “predict” the 1850 – 1909 ‘climates’, which could then be compared with the observations. How do they do on such tests?

7. Briggs says:

Frank,

My heart soared like a hawk when I read your second paragraph.

George,

8. Doug M says:

Suppose I built a model of a physical system. The inputs to my model would include the equations of Newton, Galileo, et al. Some variables, such as friction, I may not be able to measure in my lab. So I leave that as an open dial on my model, and I tease that dial to fit my observations. Ideally, I would test my model on new data. But if new data are impossible to gather, and I have high confidence that my equations are correct, and I think I have accounted for all of my variables, can I publish my results?

In my former life, I traded mortgage-backed securities. There are two big unknowns in the world of MBS—if you will be repaid, and when you will be repaid. The government guarantees the majority of the market, so the “if” was a less relevant question than the “when.” Wall St. produces models that forecast a mortgagor’s propensity to pay the mortgage each month for the next 30 years. Factors in the models include the rate, the size, the age, LTV, the location of the house, the environment in which the loan was issued, the path of interest rates since issue, and a multitude of potential interest rates, now and into the future. About once a year, each bank tweaks its model, without really addressing how the 2009 prepayment experience has any bearing on how mortgagors will pay in 2011. At least, in the case of MBS, your maximum loss is limited to the size of your initial investment.

9. Briggs–
Gavin has a point about the initialization. Really, he does.

Modelers in engineering do similar things when running models for steady-state processes. (Say, heating a big tub of goo from the bottom of a tank.) You write a model that you think correctly parameterizes all phenomena at steady state. The model breaks the volume of the tank into a bunch of grid cells and applies the equations we think govern each cell.

When running the code, you can initialize the temperature in the tank one of two ways:

1) Set all temperatures to a constant value. Maybe you’ll pick room temperature and say the velocity is zero everywhere in the tank. You know this is a bad guess.
2) Guess the correct temperature and velocity in every grid cell. You could guess this based on someone’s measurements. You expect this is a better guess.

Now, given your guess, you run your code. To the extent that your first guess does not match the answer your model gives at steady state, heat will travel from one grid cell to another by convection or conduction.

Now, you expect that when you set the temperature to room temperature and the velocity to zero, your first iteration will cause temperature to rise some places (especially near the heat source). On your second iteration, you might see the warm goo try to rise. You keep iterating until the answer after iteration “N” matches the answer after iteration “N+1” within some criterion.

Oddly, if the observations you used were right but your model is wrong, heat and mass will flow. So the “right” answer will start to deviate from the observations. You must permit this to happen: you let the model continue to run, iterating until the answer after iteration “N” matches the answer after iteration “N+1” within some criterion.

In principle, if the solution is not very sensitive to initial conditions, your model should give you the same answer under both conditions. However, the second method should converge more rapidly, reducing the amount of computer time required to obtain a solution.
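Lucia’s two-guess procedure can be sketched with a toy one-dimensional conduction problem (my simplification, not her tank of goo; the dimensions and temperatures are invented): both initializations relax to the same steady state, but the better first guess should get there in fewer iterations.

```python
import numpy as np

def steady_state(T0, tol=1e-10, max_iter=200_000):
    """Jacobi-iterate 1-D steady heat conduction with fixed end
    temperatures until successive iterates agree within tol."""
    T = T0.copy()
    for n in range(max_iter):
        T_new = T.copy()
        T_new[1:-1] = 0.5 * (T[:-2] + T[2:])   # interior cells relax
        if np.max(np.abs(T_new - T)) < tol:
            return T_new, n
        T = T_new
    return T, max_iter

N = 21
hot, cold = 90.0, 20.0

# Guess 1: everything at "room temperature" -- a bad guess.
T_a = np.full(N, 20.0)
T_a[0], T_a[-1] = hot, cold

# Guess 2: someone's measurements -- close to the true linear profile.
T_b = np.linspace(hot, cold, N) + 1.0
T_b[0], T_b[-1] = hot, cold

sol_a, n_a = steady_state(T_a)
sol_b, n_b = steady_state(T_b)
print(n_a, n_b)   # the better first guess converges in fewer iterations
```

Both runs land on the same linear temperature profile; only the iteration count differs, which is Lucia’s point about initialization.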

Mind you: there are physical systems that can have several different meta-stable steady-state solutions. These might give different answers if you started with different initial conditions. Problems that are very sensitive to initial conditions would fall in the category of very interesting ones. In an industrial application, engineers would likely consider the multiple possible steady states a problem and would likely want to design the system to eliminate this sensitivity. Luckily, with many, many systems, multiple steady states cannot occur.

But the general notion suggested by Gavin—where models are initialized with data and then spun up—mostly works, has analogs in engineering, and mostly is not particularly sensitive to initial conditions. Using observations to set initial conditions is a potential problem, but as a practical matter it is probably not important.

That said: there are more practical problems associated with modelers being able to peek at the 20th century data when creating their hindcast.

There are possibilities that the modelers can tweak their “boundary conditions” and “forcings”, within the range constrained by already-observed data, in a way that permits better agreement between transients and data.

For example, if modeler A creates a model (say model E) and modeler B wants to get the best possible match to 20th century data with that model, modeler B could do some sensitivity tests varying aerosols, solar, CO2, etc. They would fully explore the behavior of model E. (Why, they might even report this in the literature as interesting in and of itself. 🙂 ) Then, knowing more or less how the model responds to forcings, modeler B can pick higher or lower aerosols within the large range supported by the data. Say high aerosols are better for their model: use that; cite that paper instead of the paper with the lower forcings. They can pick from several solar forcing estimates for the 20th century, etc. They can’t force perfect agreement, but they can certainly improve it relative to someone who had to predict the 20th century climate without knowing what it looked like before running the transients.

Oh, and for what it’s worth, the modeler might do this all more or less unconsciously without thinking they are tuning. They can protest the tuning is not done in the same way that econometricians tune their models. Quite right.

They might argue that, assuming the parameterizations in “model E” were right, the best match points toward the specific aerosol/solar etc. forcings being right. Of course, the alternative is that the model parameterizations are biased and the good agreement will vanish the moment the modeler tries to predict future unknown data. The fact is that that combination of parameterizations and forcings was selected with knowledge of the observations that would ultimately be used to test the simulations. So there is some feedback – though not as direct as fitting in a linear regression.

10. dearie me:
It would be a useful test, would it not, to tweak the models on, say, the 1910 to 2009 data and then get them to “predict” the 1850 – 1909 ‘climates’, which could then be compared with the observations. How do they do on such tests?
I’m not sure what you are suggesting. But given the class of models GCMs are, this would involve running them backwards. You can’t do that, because the second law of thermo does not permit you to run these sorts of physical models backwards in time.

For example: Suppose you come home and discover a drinking glass full of room-temperature water sitting on the counter. (More precisely, indistinguishable from room temperature within measurement precision.)

Given that ultimate condition, and believing the room was at a constant temperature for the previous 5 hours, can you run a model to discover the temperature of the water 5 hours ago? If you told an engineer that the water had been 90 F at 8 am and the room temperature was 70 F, they could do a pretty good job estimating the temperature as a function of time going forward. But there are an infinite number of initial conditions that would result in the water temperature being indistinguishable from room temperature when you arrive in the kitchen at 1 pm.
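Lucia’s glass of water can be put in code (Newton’s law of cooling, with a made-up cooling constant): run forward, wildly different 8 am temperatures become indistinguishable from room temperature by 1 pm, which is exactly why the problem cannot be inverted.

```python
import math

def water_temp(T0, T_room=70.0, k=1.5, hours=5.0):
    """Newton's law of cooling: T(t) = T_room + (T0 - T_room) * exp(-k * t).
    The cooling constant k is invented for illustration."""
    return T_room + (T0 - T_room) * math.exp(-k * hours)

# Wildly different 8 am temperatures...
initials = [35.0, 90.0, 150.0, 210.0]
finals = [water_temp(T0) for T0 in initials]
print(finals)
# ...all land within a tenth of a degree of room temperature by 1 pm,
# so the 1 pm reading cannot tell you which history produced it.
```

Forward, the map from initial temperature to final temperature is easy; backward, all those histories collapse into one indistinguishable reading, so no model can recover which one occurred.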

Gavin can’t run Model E backwards. No one can.

11. Steve E says:

Lucia,

In your reply to dearie me, I understand your explanation and agree that you can’t run the model backwards. Can’t you, however, develop your model parameterizations using the 1910 to 2009 data set and then run the model (understanding that the base conditions, e.g. CO2 concentration, will need to be adjusted), feeding it data starting in 1850 and making forecasts you can test against the actual results later in the data set?

12. Briggs says:

Lucia,

Thank you. I agree, of course, with all that you say; and as I told Gavin, I wasn’t intending to do more than sketch a modeling process.

I think we both agree that, given whatever details of assimilation, the focus should ever be on the correctness of the forward integrations, the actual forecasts. Secondarily, we can talk about what theories drive the models—if the models’ eventual predictions are skillful.

Incidentally, these kinds of models are sensitive to initial conditions. This is exploited—well, acknowledged and lived with—in the operational sense with ensemble forecasting.
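A toy illustration of why ensembles are used (a chaotic logistic map standing in for the atmosphere, which is of course a cartoon): perturb the initial condition at the eighth decimal place and the members agree at short range but spread over the whole attractor at long range.

```python
import numpy as np

def logistic_run(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x), elementwise."""
    x = np.array(x, dtype=float)
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

# An "ensemble": the same model started from twenty slightly perturbed
# analyses of the same initial state.
rng = np.random.default_rng(2)
ensemble0 = 0.3 + rng.normal(scale=1e-8, size=20)

near = logistic_run(ensemble0, 5)    # short range: members still agree
far = logistic_run(ensemble0, 50)    # long range: members have spread out
print(near.std(), far.std())
```

Operational ensembles exploit exactly this: the spread among members is itself a (rough) measure of how much the sensitivity to initial conditions has destroyed the forecast.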

13. PaulW says:

Here is a chart comparing aerosols (direct and indirect) forcing to the GHG forcing as reflected in the GISS Model E simulations.

It was not until 1970 that the GHG forcing became greater than the aerosol forcing. In the 1890s, the aerosols’ negative forcing was −170% of the GHG forcing.

http://img101.imageshack.us/img101/6042/modeleaerosols.gif

You could call the aerosols a fudge factor, but it is more that they are fudged, period. Direct and indirect aerosol forcing.

http://img83.imageshack.us/img83/7408/modeleforcing.gif

14. dearieme says:

Lucia, your examples happen to involve models or situations where the long-run solution is a steady state. But surely climate is nothing like that? I’ve written lots of models that I could run either way in time: appeals to the Second Law wouldn’t matter a button for them. The cases where I couldn’t have run them either way typically involved some sort of mathematical stunt to avoid a computational difficulty—a shock, or near-shock, say—that precludes running them backwards. In other words the problem has not been a function of the physics but of the imperfect craft of mathematical modelling. Had I realised in advance that there might be a virtue in running them backwards, I’d have found some other remedy for the computational difficulties. Be that as it may, Steve’s point gets around the difficulty anyway.

15. Ray says:

I’ve always wondered why there are twenty-some-odd climate models in existence. If these modelers knew what they were doing, there should be only one climate model, and its output should agree with the measurements.

16. Doug M says:

Some models only work backwards.

An accident investigator can look at the results and determine what the initial conditions must have been. But no one could have looked at the initial conditions and said that an accident would be likely.

17. Frank says:

As Lucia said – “The fact is, that combination of parameterizations and forcings were selected with knowledge of the observations that would ultimately be used to test the simulations”.

Whether or not these selections were made “unconsciously” is irrelevant at this point – the “tuning” horse is out of the barn and no withholding of previously observed data or hindcasting can validate or falsify the models. We could, of course, wait on new observations, but this would take many years (decades?) before a statistically meaningful test could be run. (Assuming the modelers can be restrained from re-tuning their models in the interim!) Logically, then, the only way to validate the models is to test their specification, namely the forcing due to CO2.

If the modelers have this specification right, then running the models using pre-industrial levels of CO2 should result in a slight cooling consistent with the “consensus” estimate of warming due to emissions during the industrial age. If not, that is if the warming is due to forcings that are excluded or not correctly specified in the models, then running the models should result in a pronounced cooling due to the incorrect CO2-related feedbacks and sensitivities built into the models by tuning.

I’d like to see this simple test done, but assume Hell will freeze over before Gavin et al turn control of their supercomputers over to independent outsiders.

18. Leonard Weinstein says:

Climate models predict what would happen in reasonably long-term climate trends at times beyond those from which they were developed. They are one-way in temporal output, but they certainly can have the initial and boundary conditions of a previous time (if these can be accurately determined) used to develop them; then we can see if the present climate correctly results (though this is valid only if the most recent data were not used to help develop them). If all known data are used to develop the code, the only way it can be verified is to wait and see if the predicted trend occurs at times beyond those used to develop it. So far the time beyond the development period is too short to judge the models, but they seem to be heading the wrong way. Since all of the models seem to make similar assumptions, such as constant relative humidity, and ignore many possible forcings, such as sunspot effects on cloud formation, they may agree with each other all they want, but all may be bad models.

19. TomVonk says:

Incidentally, these kinds of models are sensitive to initial conditions. This is exploited—well, acknowledged and lived with—in the operational sense with ensemble forecasting.
.
Absolutely correct, William.
I remember somebody (Dan Hughes?) having quoted G. Schmidt that the climate models present POSITIVE Lyapunov coefficients.
I don’t know if the quote is accurate, but if it is, it only shows a misunderstanding of chaotic systems.
In any case the resolution of the climate models is too coarse to estimate Lyapunov coefficients.
On top of that, this estimation works well only for purely temporal systems (e.g., space-independent ones); spatio-temporal climate models couldn’t do the estimation even if they had the right resolution, which they have not and never will have.
So even if it is virtually certain that there are positive Lyapunov coefficients for the climate equations, as nobody can accurately write them down we can’t know what they are.
And even if we had the equations, it’s clear that no climate model would be able to make this computation.
.
Anyway …
What we do know is that the “weather” is highly sensitive to initial conditions.
Climate models, dealing with EXACTLY the same physical laws as those that govern the weather, must exhibit the same sensitivity.
Averaging (spatially or temporally) changes nothing because, as (hopefully) everybody knows, an average of a chaotic variable is a chaotic variable.
The only thing averaging does is eliminate the high-frequency components, but the low-frequency components are still chaotic, albeit, trivially, at larger time and space scales.
There simply is NO mathematical transformation, be it an average or anything else, that would magically turn a chaotic variable into a non-chaotic variable.
.
So it is precisely because the atmosphere-hydrosphere-cryosphere system is highly sensitive to initial conditions at the fundamental level of the physical laws that it stays so after any and all mathematical transformations, whether they “smooth”, “average”, “rotate”, or whatever.
That’s also why what you say is correct – the only way to simulate such a system is to systematically feed it with updated numbers to make sure that it stays on track.
That’s exactly what weather models do.
.
Once it is demonstrated that, due to its properties, the system can’t be deterministically forecast beyond a very short horizon, there comes the question whether there is SOME invariant probability distribution function that would at least allow one to estimate the probabilities of future dynamical states of the system.
For spatio-temporal chaotic systems the answer to this question is generally no.
All this has really nothing to do with steady states or small deviations from some mythical “equilibriums”.
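Tom’s claim that averaging does not un-chaos a chaotic variable is easy to demonstrate with a toy map (a sketch, not a climate model; the map and window length are invented): smooth two trajectories from nearly identical starts with a 50-step moving average, and the smoothed series still diverge.

```python
import numpy as np

def smoothed_trajectory(x0, steps, window):
    """Logistic-map trajectory followed by a moving average."""
    xs = np.empty(steps)
    x = x0
    for i in range(steps):
        x = 4.0 * x * (1.0 - x)   # the chaotic logistic map
        xs[i] = x
    kernel = np.ones(window) / window
    return np.convolve(xs, kernel, mode="valid")

a = smoothed_trajectory(0.3, 2000, window=50)
b = smoothed_trajectory(0.3 + 1e-12, 2000, window=50)

# Even after 50-step averaging, two nearly identical starts diverge:
print(np.max(np.abs(a - b)))
```

The averaging removes the fast wiggles, but once the underlying trajectories decorrelate, the averages decorrelate too: sensitivity to initial conditions survives the smoothing.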

20. Pat Moffitt says:

The MAGIC models used to “prove” acid rain are the best example of how model predictions go horribly wrong when only the physical processes that fit the agenda are allowed to be incorporated into a model. The acid rain models assumed the primary introduction of acidity to surface waters was from SO2 emissions. The acid rain models said that if we controlled SO2 we would see pH improvements in 20 years. We haven’t, because the primary source of acidity was never acid rain but natural soil-derived organic acids. What is particularly sad is that some scientists now claim the failure of pH recovery in surface waters is the result of climate-change-accelerated production of organic acids. (At least they are admitting to the role of organic acids, but too late for the scientists whose careers were destroyed for questioning the acid model assumptions.)

Acid rain was the template for climate change, especially in naming a crisis after a natural phenomenon. All rain is acid—always has been, always will be; climate changes—always has, always will. How could any scientist, when pushed, say acid rain did not exist? (“Natural” rain is approximately pH 5.5.) However, an admission as to the “reality” of acid rain was taken to support an agenda far beyond the scientific reality of the acid rain threat.

21. TomVonk says:

P.S
I think that one should also eliminate, once and for all, analogies with aerodynamics.
Those analogies are so deeply misguided that they are not even wrong .
People say: “You see, planes fly even if the flow is chaotic. So the models have an ability to provide useful predictions.”
Well, to get something useful from a chaotic system model one has to DRAMATICALLY restrict the phase space domain.
That happens either by looking at very short times (that’s what weather forecasting does), at very small spaces (that’s what aerodynamics does), or at very small ranges in dynamical parameters (that’s what CFD does).
But the climate?
It must take the biggest space available (the whole Earth), look at huge times (centuries), and work with the full natural range of the dynamical parameters.
It’s the worst possible situation, where no simplifications can take place and the system has to be treated as a fully chaotic system.
Clearly nothing to do whatsoever with the flow of air around the few meters of a plane wing.

Data-free models built with no observations are common. They have no utility except possibly as thought experiments.

Gavin and Lucia seem to think that climate models are (a) thought experiments, or (b) models that use minimal actual observations, which we just don’t like to talk about.

Either way, the disconnect between the models and reality (the observations) is deep and wide. So be it. I accept that. The models have no predictive power vis a vis reality, since reality is not important to the modelers. They have some other purpose in mind. They even eschew the word “prediction”.

Great. Except that some powerful politicians and moneyed interests are using the hypothetical models to impose draconian regulations on society in the mistaken notion that the climate models have actual predictive power. But the modelers insist that they don’t, at least they said so above. (I would be remiss not to point out that climate modelers are not so blithe and forthcoming about the unreality of their models when testifying before Congress).

Why then don’t the modelers speak up, and tell the politicians that their models have been completely misinterpreted, that prediction is not what the models do, that any and all model outcomes are hypothetical, and that imposing penalties on society on the basis of hypothetical models is wrong?

Because if they don’t make that clear, the modelers will someday be dragged kicking and screaming through the streets, as an oppressed society blames THEM, not the power-hungry, money-grubbing politicians.

The very idea that scientists are somehow apart from society, as well as reality, is naive, disingenuous, and dangerous to all concerned. Some modelers apparently want it both ways: their models are just thought experiments, and their models predict actual dire futures. Not smart. Blame will be laid and is being laid. The future for such dissembling modelers is not bright, regardless of the future of the climate.

23. AusieDan says:

Gavin – take a look at the IPCC 2007 report.
It is clear from the main temperature charts, with their error bands reducing with time, that the models used were tuned to the period 1976–2000.

Your models should be restricted to data from 1880 to 1975 and specifically tuned to the period from 1943 to 1975.
Then use them to predict 1976 to 2010.
That’s 35 years, which is long enough to avoid the “that’s not climate, only short-term weather fluctuations” song so often used to fend off criticism.

I bet you won’t show us the results.
Good luck and keep your overcoat handy.

24. Briggs says:

Tom Vonk,

Right on, brother. The other distinction with airfoil modeling is that nobody—not a soul—would build a wing solely on the basis of a theoretical model, fix the wing to a plane, and then strap themselves in and take off with a load of passengers.

It has to be shown to work in practice first.

25. Noblesse Oblige says:

“What I’m really curious to see done is to have Gavin turn over his parameterized model to an independent group, who would then re-run it with a constant CO2 input of 280 ppm…”

We know what would happen. Without exogenous forcing (e.g., volcanoes) not much of anything.

26. Steve E says:

Briggs,

“…nobody—not a soul—would build a wing solely on the basis of a theoretical model, fix the wing to a plane, and then strap themselves in and take off with a load of passengers.”

Except maybe the Wright brothers, minus the passengers of course. 😉 And all those other crazy flight pioneers who jumped off barns, cliffs, etc.

I really enjoyed and agreed with this post. I’m in financial services and see far too many models that have been developed and then tested “in sample,” and then reported as having incredible predictive powers. The slicker “snake-oilers” will tell you they used “Monte Carlo” simulations, which is supposed to prove infallibility. Unfortunately, like all modelling techniques, Monte Carlo can only test within the parameters it is fed. Thus the models couldn’t predict the most recent market meltdown any better than they predicted the market meltdown in 1987.

It would seem that in climate science the number of variables, and our limited understanding of them, creates a system even more chaotic than the worldwide financial system. I just don’t see how they can get it right. Climate science today seems to be dwelling in the extreme tails of a bell distribution instead of within the mean +/- (albeit very generously given) a full quartile.

Keep up the good work! It’s very much appreciated.

P.S. Look forward to Part II!

27. John R T says:

Pat Moffitt: at 4:19 pm
My 80s USAR enlistment duties took me to Heidelberg each year. Among the data I reviewed were descriptions and warnings re European “forest death.” My civilian employer, a newsprint mill, offered education benefits for study related to my work. I chose to pursue independent study, at my alma mater – VCU, on acid rain.
My career was not ruined. My GPA, however, plummeted: only years later did I realize that my failure to find a connection WAS my finding.
Thank you, Briggs et al. Keep up the great work-

28. Sean Inglis says:

Apologies for the noob question, but having digested William’s (or do I say “Briggs”?) first post and believing I understand the point, I’d like to know more about the process used to build the anomaly grid in the first place.

I’ve tried JFG’ing it, but I’m struggling to get to something clueful. Any links would be appreciated.

29. Why do I think climate models are useful? Because they demonstrate skill and have made predictions that have turned out to be valid.

Some examples:
1) Hansen et al (1992) predicted the impact of Pinatubo aerosols on temperatures accurately before they happened. Subsequent research has shown that this would not have been the case if the water vapour feedbacks or ocean heat uptake had been very wrong. Matches of radiation anomalies, WV, stratospheric temperatures and wind pattern changes to observations are all excellent (Hansen et al, 2007, Shindell et al 2004).
2) Rind and Peteet (1985) showed that an estimate of ice age ocean temperatures was inconsistent with a suite of land based proxy information using an atmospheric GCM. They were right.
3) Hansen et al (2005) and Domingues et al (2009) showed that predicted ocean heat content anomalies matched the current best estimates. Note that they didn’t match the ‘best’ estimate at the time the models were run. Models instead predicted that that estimate was flawed (AchutaRao et al) (i.e. the models were *not* tuned to get the ‘right’ answer).
4) Hansen et al (1988) shows substantial skill over a ‘no change’ forecast for global mean temperatures in the 1984-2009 period.
5) Model climate sensitivities which are emergent properties of the underlying processes (and are not easily tunable) match constraints derived from paleo-climate (Annan and Hargreaves (2006), Kohler et al 2010)
6) Models designed for late 20th Century conditions reproduce large parts of the orbitally driven climate changes in the mid Holocene (increased rainfall in the Sahara/Sahel, increased summer time warming etc).
7) Models designed for the late 20th Century show good matches first time out for the climate impacts of a slow down in the North Atlantic ocean circulation 8200 years ago driven by the final collapse of Lake Agassiz (LeGrande et al, 2006).
8) Coupled Model estimates of the Last Glacial Maximum climate are all in the right ballpark (given of course that boundary conditions are not known precisely). This would not be the case if climate sensitivity was either negligible or extremely high.

I could go on, but none of these things would have occurred if climate modelling was fundamentally impossible. Many of them involved challenges to what was thought of as the ‘best’ observations at the time, many of them are true ‘out of sample’ tests.

If all we were doing was constructing best fits to current observations and updating them as new information came in, none of this would work. The science is done when we have a conflict between the models, the observations, and occasionally how the comparison is being done. The MSU data are a good example: the MSU2 record did not seem to match model estimates of mid-troposphere temperatures, but then people realised that you had to account for the tail of the weighting in the stratosphere, where it is cooling. MSU-LT initially showed cooling, in contradiction with model estimates suggesting it could not be that different from the surface records (mainly ocean temperatures). It turns out there were a number of problems in the MSU analysis, etc.

As I suggested above, please read the papers that discuss the building and testing of models – it is nothing like the cartoon you suggest – i.e Schmidt et al (2006).

And although this might not answer every precise issue raised here, this FAQ covers a lot of it: http://www.realclimate.org/index.php/archives/2008/11/faq-on-climate-models/

Then download the code and look for yourself for the hidden subroutines that you seem to think must be reading in observed temperatures and fixing the physics accordingly (good luck with that one): http://www.giss.nasa.gov/tools/modelE/

Try running it yourself and see how quickly the initial conditions of the ocean are forgotten (about 100 years for the upper ocean and atmosphere, 500 years or so for the deeper ocean drifts), or how quickly weather patterns diverge if you change a single digit at the start (a much easier test for the home modeller).

Try downloading some output: http://data.giss.nasa.gov/modelE/ and seeing how you can get a similar climate signal regardless of the weather for the response to a volcano or the long term trend in greenhouse gases.

Try and learn something instead of just making up stories to convince yourself that the models aren’t worth bothering with.

30. Rich says:

A while ago I downloaded the Hadcrut3 gridded dataset. It was “anomalies” and I wondered how to get the actual values. There’s another dataset, “Absolute”, which gives, for each gridcell in a month, the values subtracted to produce anomalies. I got even more confused. The full dataset has missing data for something like 40% of the gridcells, but the absolute set has a value for every gridcell every month. I found out a little later that the cells of the absolute dataset were derived from the observations by running a climate model that matched the available data and using the model’s values for the remaining empty cells.

So all of the anomaly data incorporates model data. This is then averaged over the Northern and Southern hemispheres and then these are averaged again to give the “Global Average Temperature Anomaly”. Which is then presented as observational data. Which is compared to model outputs.

There’s no escape from models.
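
In toy form (all numbers invented), the averaging pipeline described above looks like:

```python
# Toy sketch of the averaging pipeline: gridcell anomalies (with missing
# cells) -> hemispheric means -> "global average temperature anomaly".
# All numbers are invented.
def hemisphere_mean(anomalies):
    """Average the non-missing gridcell anomalies in one hemisphere."""
    present = [a for a in anomalies if a is not None]
    return sum(present) / len(present)

north = [0.9, None, 0.4, 0.7]   # None marks a gridcell with no observations
south = [0.2, 0.3, None, None]

global_anomaly = (hemisphere_mean(north) + hemisphere_mean(south)) / 2
print(round(global_anomaly, 3))
```

The point is that every `None` has to be filled or skipped somehow, and the filling itself leans on a model.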

31. I don’t have a science background and so the vast majority of the content contained in the above 30 comments is well over my head.

There is something I’m quite equipped to note – and understand – however. And that’s that the comments from Gavin don’t just discuss the subject matter. They veer off into nastiness, condescension, and accusation.

from Gavin comment #1: “Try actually downloading a climate model and see for yourself how they work. Or, you know, try reading a paper.”

Gavin comment #2: “Try and learn something instead of just making up stories to convince yourself that the models aren’t worth bothering with.”

Earth to Gavin: Talking down to people as if they’re four-year-olds hasn’t worked for climate scientists so far. Why not try a little courtesy and respect?

32. brent says:

Climate Change 2007: The Physical Science Basis
Summary for Policymakers

by Vincent Gray

My disillusionment with the whole process began very early. The very first Report was dominated by an attempt to persuade us of the value of computer models. Climate data on the supposed warming were largely confined to the end of the Report, presumably to draw attention from their lack of confirmation of the models. This was concealed by claiming that the size of this warming was “broadly consistent” with the models.
I could claim my first success, that after my comment, all subsequent IPCC Science Reports have placed the climate data at the beginning. But that was not the only cause for suspicion. In the first draft of the IPCC WGI 1995 Report there was a Chapter headed “Validation of Climate Models”. I commented that this word was inappropriate, as no model had ever been “validated”, and there seemed to be no attempt to do so. They agreed, and not only changed the word in the title to “evaluation”, but they did so no less than fifty times throughout the next draft. They have rigidly kept to this practice ever since.
“Validation”, as understood by computer engineers, involves an elaborate testing procedure on real data which goes beyond mere simulation of past datasets, but must include successful prediction of future behaviour to an acceptable level of accuracy. Without this process no computer model would ever be acceptable for future prediction. The absence of any form of validation still applies today to all computer models of the climate, and the IPCC wriggle out of it by outlawing the use of the word “prediction” from all its publications. It should be emphasised that the IPCC do not make “predictions”, but provide only “projections”. It is the politicians and the activists who convert these, wrongly, into “predictions”, not the scientists. An unfortunate result of this deficiency is that without a validation process there cannot be any scientific or practical measure of accuracy. There is therefore no justified claim for the reliability of any of the “projections”.
They have tried to draw attention from the undoubted fact that the models have not been shown capable of making predictions by seeking the “opinion” (or “guess”) of a panel of “experts”, all of whom have a financial stake in the outcome, and apply to these guesses levels of “likelihood” which have even been given a spurious numerical value. If the experts were employees of oil or coal companies, and their opinions were undesired, there would be an outcry. As the “experts” are employees or recipients of funding from governments promoting the notion of greenhouse warming, criticism is not heard.

The 2007 Summary for Policymakers
The current document is really a Summary BY Policymakers, since it has been agreed line-by-line by government representatives.

http://www.pensee-unique.fr/GrayCritique.pdf

33. brent says:

Comment on the Nature Weblog By Kevin Trenberth Entitled Predictions of climate

This is remarkable since the following statements are made

1. IN FACT THERE ARE NO PREDICTIONS BY IPCC AT ALL. AND THERE NEVER HAVE BEEN.
2. None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate.
3. Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors. I postulate that regional climate change is impossible to deal with properly unless the models are initialized.
http://pielkeclimatesci.wordpress.com/2007/06/18/comment-on-the-nature-weblog-by-kevin-trenberth-entitled-predictions-of-climate/

34. Briggs says:

Gav,

Most of your points are good ones, and many combative skeptics should take them to heart. But before I get to your points, and since I have your attention, have you seen this post? I wanted to have your opinion of those who say that criticism is “treasonous”, “unpatriotic”, “immoral”, and so forth. Is that OK with you? You’re free to say that criticism is misguided or even wrong: but do you say it is criminal, as your boss implies?

Now, I have never, not once, said that models “aren’t worth bothering with.” Some skeptics do, and they should not. That they do is sometimes because of ignorance, but sometimes due to the heat of battle. People are apt to ratchet up the accusations when the going gets rough; they’ll leave the regret for later.

I have never said, for example, that climate modelers insert “hidden subroutines” into their code, nor anything else equally foolish. I have instead said that the vast majority of climate scientists are doing a reasonable job. (You can look that up.)

My one major criticism is overconfidence, or over-certainty. I have long said that I might be wrong in my criticisms. I don’t believe I am, of course, but I acknowledge the possibility. Confirmation bias is a distinct possibility; indeed, I think it is operative here.

Your points show that climate models do a respectable job of mimicking certain climatic signals. Very well; I have never disputed that either. In my “cartoon” way, I say that that means models “fit historical data.” This is a necessary but not sufficient step to prove that the theory behind the models is valid.

And now we are back to the point we were chatting about last month. We agreed—more or less—that climate models have not yet demonstrated independent skill, where I use that word in its technical sense: they haven’t been able to beat persistence on independent data, for example. They haven’t had the time to, as you rightly pointed out.
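
To fix ideas, here is skill in toy form (all numbers invented; persistence is the “tomorrow equals today” forecast):

```python
# Toy sketch of forecast "skill" against persistence (all numbers invented):
# the model has skill only if its error beats the "tomorrow equals today"
# reference forecast on the same verification points.
import numpy as np

obs = np.array([14.1, 14.4, 14.3, 14.7, 14.6, 14.9])    # observations
model = np.array([14.0, 14.2, 14.5, 14.4, 14.8, 14.7])  # model forecasts

persistence = obs[:-1]                   # forecast for t+1 is the value at t
mse_model = np.mean((model[1:] - obs[1:]) ** 2)
mse_persist = np.mean((persistence - obs[1:]) ** 2)

skill = 1 - mse_model / mse_persist      # > 0 means skill over persistence
print(skill)
```

A model that cannot push that score above zero on independent data has not yet demonstrated skill, no matter how well it fits the past.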

The crux is that the theory driving the models might be wrong, and that other explanatory theories exist. This is where I say confirmation bias can enter. There is certainly no reason to suspect that climate modelers are immune to this disease. And given all the politics, money flow, religious and heated feelings on the subject, it’s more than just plausible.

Lastly, the point I’m making here, and in particular Part II, is that a certain method of verification is guaranteed to produce overconfidence in those models’ efficacy. This applies to meteorological as well as climatological models.

35. Bernie says:

Matt:
Are you saying that “modelers” have to call their shots (predictions) on a consistent basis?

It also strikes me that models can be useful, i.e., make accurate predictions, but be seriously and knowingly incomplete and, therefore, necessarily project greater certainty than can exist.

36. Steve E says:

Briggs,

Your point about overconfidence is well made. Massimo Piattelli-Palmarini in his book Inevitable Illusions: How Mistakes of Reason Rule Our Minds, John Wiley & Sons, Inc. 1994, summarizes the classic work of Fischhoff, Slovic, & Lichtenstein from their 1977 Journal of Experimental Psychology paper.

Piatelli-Palmarini says, “This study, as well as others, shows that the discrepancy between correctness of response and overconfidence increases as the respondent is more knowledgeable…the level of accuracy increases, yes, but the level of overconfidence increases to a far greater degree. …this over-confidence is at its greatest in our own area of expertise–in short, just where it can do the most damage.”

37. Frank says:

I very much appreciate Gavin Schmidt’s willingness to address concerns on this forum. If Gavin (or others) would be so kind, I have a few questions related to the FAQ referenced in his prior post:

Re. Do models have global warming built in?

“If left to run on their own, the models will oscillate around a long-term mean that is the same regardless of what the initial conditions were”.

- Is this oscillation due to different initial conditions (as implied by the FAQ section addressing “wiggles” in the output) or is there inherent variability (chaos) in the model?
- If there is inherent variability in the model, how many runs are typically required for a simulation?
- What is the current mean global temperature as obtained by the “base mode” model?

Re. What is tuning?

“With these approximations and empirical formulae, there is often a tunable parameter or two that can be varied in order to improve the match to whatever observations exist.”

and

“Surprisingly, there are very few of these (maybe a half dozen) that are used in adjusting the models to match the data.”

– Are these tuned parameters scalar (i.e., global) values or are they vectors and/or arrays (i.e., temporal / spatially differentiated)?

Kind Regards,

Frank

38. RichieRich says:

Briggs

Your points show that climate models do a respectable job of mimicking certain climatic signals. Very well; I have never disputed that either. In my “cartoon” way, I say that that means models “fit historical data.” This is a necessary but not sufficient step to prove that the theory behind the models is valid.

However, several of Gavin’s examples – 1, 3 and 4 – are not examples of fitting historical data but of predicting aspects of future climate. And, as I understand you, this is exactly what is required for validation of a model. Is this – admittedly limited – successful prediction enough to constitute validation? Be interested to hear your thoughts.

39. brent says:

The Carl Wunsch Complaint
http://climateaudit.org/2008/07/22/the-carl-wunsch-complaint/
http://climateaudit.org/?s=ofcom

Global Warming & Greentech: Why global warming is unlikely to be a safe area for investment by Richard Lindzen, April 14, 2009

Another colleague, Carl Wunsch, professionally calls into question virtually all alarmist claims concerning sea level, ocean temperature and ocean modeling, but assiduously avoids association with skeptics; if nothing else, he has a major oceanographic program to worry about. Moreover, his politics are clearly liberal. Perhaps the most interesting example is Wally Broecker, whose work clearly shows that sudden climate change occurs without anthropogenic influence, and is a property of cold rather than warm climates. However, he staunchly beats the drums for alarm and is richly rewarded for doing so.
http://www.ecoworld.com/global-warming/global-warming-greentech.html

A Climate of Belief
The claim that anthropogenic CO2 is responsible for the current warming of Earth climate is scientifically insupportable because climate models are unreliable
by Patrick Frank

Acknowledgments
The author thanks Prof. Carl Wunsch, Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, etc… for reviewing a prior version of this manuscript
http://tinyurl.com/635bf8

The Present (Circulation) Is the Key to Understanding the Past (Circulation)
Carl Wunsch
Cambridge UK
Leverhulme Meeting
March 2008
One uses GCMs for the modern world in two distinguishable ways:
(1) Run them forward from some initial state in an “extrapolation” mode.
(2) Use them to interpolate data over finite time intervals.
All time-stepped models in extrapolation mode exhibit error growth. The only exceptions are kinematically and dynamically trivial (the stopped-clock analogy).
[snip]
It is urgent to come to some understanding of error growth in GCMs in extrapolation mode. Mere assertion that feedbacks or compensating errors occur is not science. Determining error growth is difficult, but hardly impossible.

When I saw the original release of TGGWS, I was impressed with the openness of Carl Wunsch in discussing the modeling issues.
I had no problem in relating to what he was saying, particularly because I have a modeling background myself. I’m an old retired petroleum downstreamer who spent most of his career with a heavy involvement in optimization. I had already figured out for myself that the GCM’s would not be capable of prediction, and that the IPCC “projections” were no more than prophecy.
What Wunsch was saying in essence was that we cannot even inch forward on learning anything (in a science that is in its infancy) without big complicated models. (I agree with this.)

Politically Wunsch was embarrassed after the original TGGWS was released because of the seeming association with sceptics.
I think Steve McIntyre did an excellent job summarizing the various OFCOM complaints, including Wunsch’s.

Subsequently, I was certainly intrigued, when Pat Frank’s paper came out to note in the acknowledgments that he thanked none other than Carl Wunsch for reviewing an earlier draft of his paper.

Later I dredged up Carl Wunsch’s Leverhulme meeting link above, where Carl Wunsch criticizes colleagues regarding the very issue that was the central tenet of Pat Franks paper.
He even tells his colleagues that they aren’t doing “science”, and it’s clear that the climate establishment hasn’t a clue, or even tackled the question of propagation of error in their “non-predictions” (what IPCC calls projections).

So, for public consumption, Carl Wunsch doesn’t want to be seen to be offside with the “politically correct agenda”. However, he’s pretty critical when it’s supposedly just amongst “colleagues”.

cheers
brent

40. Bernie says:

Buy low, sell high: I have a model that predicts future share prices of companies in 5 distinct industries. It has predicted the last peaks and lows for each of these industries. My model is based on a deep understanding of the underlying dynamics and interactions among all known key economic drivers, including interest rates, Government spending, raw material prices, etc.

41. Some curious points. Dr. Schmidt says:

4) Hansen et al (1988) shows substantial skill over a ‘no change’ forecast for global mean temperatures in the 1984-2009 period.

So a 1988 model is used to hindcast to 1984, and forecast 20 years forward. Yet we have been told that GCM models are only skillful in 30+ year forecasts (scenarios) and not used for hindcasting. Which is it, short range or long range, hindcast or forecast? And did the 1988 GCM correctly project the 1995 to 2009 lack of warming that has been the subject of much debate in recent months?

6) Models designed for late 20th Century conditions reproduce large parts of the orbitally driven climate changes in the mid Holocene (increased rainfall in the Sahara/Sahel, increased summer time warming etc).

Wait a second. I thought the proxies used in the IPCC reports denied the existence of earlier climatic optimums, esp. the Medieval Warm Period. And that GCM’s used atmospheric GHG’s as model forcings, not insolation. Proxies for GHG’s in pre-measurement eras are inaccurate and imprecise. That measurement error is not accounted for in the GCM’s. Instead, assumptions are based on unwarranted levels of certainty about paleo conditions.

7) Models designed for the late 20th Century show good matches first time out for the climate impacts of a slow down in the North Atlantic ocean circulation 8200 years ago driven by the final collapse of Lake Agassiz (LeGrande et al, 2006).

That statement is too generous. First, much of the “match” is based on the aforementioned highly uncertain paleo proxy record. Second, other paleoclimatic events, such as the Younger Dryas, are poorly understood and poorly measured, even by the paucity of proxies that do exist. To say that GCM’s model paleoclimatic events that are not well known is giving credit where none has been established.

42. Briggs says:

All, Brent,

Everybody should click through and read Pat Frank’s article in Skeptic. For convenience, here’s the link again: http://tinyurl.com/635bf8

Frank makes the point about model versus actual uncertainty—precision versus accuracy—better than I ever could.

Thanks to Bernie for insisting I read this, and to brent for reminding us of it.

43. Bernie says:

I know Pat Frank frequents some other sites but if someone knows more, it would be very useful. Amongst other things, the guy can write.

44. Briggs–

Incidentally, these kinds of models are sensitive to initial conditions. This is exploited (well, acknowledged and lived with) in the operational sense with ensemble forecasting.

I know the weather models are. Climate models use long “spin ups” that, in principle, should reduce or eliminate the sensitivity to initial conditions. (I couldn’t say if it’s eliminated. We’d have to grill Gavin on how many variations of ‘spin up’ are done for each model.)
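
Gavin’s “change a single digit” test can be illustrated with a textbook chaotic toy, the logistic map (a sketch only, not a climate model):

```python
# Toy demonstration of sensitivity to initial conditions using the logistic
# map, a textbook chaotic system (not any actual climate code).
def trajectory(x0, steps, r=3.9):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000, 50)
b = trajectory(0.400001, 50)   # "change a single digit"

# The runs agree early, then wander apart to entirely different values.
early = abs(a[2] - b[2])
late = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
print(early, late)
```

The same qualitative behaviour—early agreement, later divergence—is what ensembles and spin-ups are meant to cope with.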

SteveF–

Can’t you, however, develop your model parameterizations using the 1910 to 2009 data set and then run the model (understanding that the base conditions e.g. CO2 concentration will need to be adjusted)

Not really, because it’s not done that way. I’m going to make up a wildly oversimplified example that might make Gavin cringe. Suppose you were a modeler back in 1980, and you were creating a climate model, something you could run on your Radio Shack whatever. You recognize that you are going to transport water vapor around. You also recognize that if air rises, the temperature drops, and the water condenses into droplets.

You need to “parameterize” this – that is, create a codable formula that predicts when droplets form based on other variables also computed in the code. So, you decide to take data today. You make some observations during a 3-week campaign in Illinois, send balloons up, and based on your data, you conclude that clouds form when the relative humidity hits about 80% ± 10%. You get a paper and discover some guy did a similar experiment in outer slobbobia, and got 70% ± 15%. Someone else got 65% ± 5%. Or so everyone claims. (Maybe you find an obscure group who has an entirely different theory and thinks the level of dust in the air matters – but you think that’s too complicated.)

So, what do you pick for your model? You have a big range you can justify. But no matter what you pick, you didn’t pick it based on 1910 to 2009 data. You picked it based on these field experiments measuring relative humidity and detecting clouds using weather balloons.

Now, in principle, you could pick your critical relative humidity and all other parameterizations, then run your model and tweak them until you match the temperature series for 1910-2009 as best as possible. But, in practice, this is much too computationally intensive, and, for the most part, you can’t do that. (You can do other things – but not quite that.)
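
To make the cartoon even more concrete, here is roughly what such a parameterization amounts to in code (a sketch with invented numbers, not any real scheme):

```python
# A cartoon "parameterization": one tunable critical relative humidity,
# chosen from field estimates (65%, 70%, or 80% were all defensible in the
# story above). All numbers are invented.
def cloud_cover(rh_profile, rh_crit):
    """Fraction of model layers that form cloud, given a critical RH."""
    cloudy = [rh >= rh_crit for rh in rh_profile]
    return sum(cloudy) / len(cloudy)

rh_profile = [0.55, 0.62, 0.68, 0.74, 0.81, 0.88]  # invented column humidities

for rh_crit in (0.65, 0.70, 0.80):
    print(rh_crit, cloud_cover(rh_profile, rh_crit))
```

Any value in the experimentally justified range is defensible, but each choice gives a different simulated cloudiness, and none of the choices came from fitting the 1910-2009 temperature record.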

Dearieme.

Lucia, your examples happen to involve models or situations where the long run solution is a steady state. But surely climate is nothing like that? I’ve written lots of models that I could run either way in time: appeals to the Second Law wouldn’t matter a button for them.

Well, climate is nothing like that. But, to some extent, climate model runs are like that. The modelers apply the forcings they think make sense for some year far back in time – say 1850. They freeze those forcings, and run year after year after year. Then, after some number of years (say 300), they start varying forcings as they think forcings varied during the 20th century. So, in climate model runs, the models do start from a quasi-steady state.

MikeD

Gavin and Lucia seem to think that climate models are a) thought experiments, or b) use minimal actual observations but we just don’t like to talk about them.

I don’t think climate models are thought experiments. I was merely pointing out that the way Briggs described the influence of the data describing initial conditions wasn’t quite right. It appears he agrees.

SteveE

Except maybe the Wright brothers, minus the passengers of course. And all those other crazy flight pioneers who jumped off barns, cliffs, etc.

The Wright brothers built windtunnels to test theoretical models for predicting lift!

Gavin

Why do I think climate models are useful?

I see you are answering a question no one asked. I seem to recollect you’ve answered this precise unasked question before. Clearly, it must be the right question, because you ask it of yourself and answer it so often! 🙂

Mysteriously, you did not answer Briggs’ question, which was:

How do you know, by what specific measures, that our best climate models are accurate?

I wonder if you ever will decide to not substitute “useful” for “accurate”. Well, you do prefer rebutting strawmen, don’t you?

Try and learn something instead of just making up stories to convince yourself that the models aren’t worth bothering with.

It’s always nice to see you wrote a post that not only opens with a strawman, but closes with one! Bravo, two snaps and around the world! (Briggs does not appear to have claimed “convinc(ing) yourself that the models aren’t worth bothering with”. Or are you arguing with someone else?)

45. Steve E says:

Bernie and Briggs,

Thanks for the heads up. Great piece (h/t Bernie); the guy can write.

It occurs to me, though, that we’re talking past each other. Gavin defends important work, though, as Frank points out, it’s not work that we can use to make meaningful policy decisions today. Climate scientists need to continue to improve their modelling, which is only going to take time and better data.

The villain in this piece is the IPCC. It has polarized the argument because it has framed the argument with incomplete science (which isn’t fair to scientists, because much of it is opinion–based on the policy guides–not science). Climate scientists have got caught up in the hype and the spotlight and are falling into the overconfidence trap because they’re being cheered on by the IPCC and the NGOs, and they’re being fed with taxpayer dollars.

Pat Frank provides the perfect perspective. He gives a straightforward analysis of what the models do say and what they don’t say. There’s so much we don’t know. Creating policy based on what we think we know could be more catastrophic to real, living people than the IPCC’s worst-case scenario would be to future generations.

46. Steve E says:

Lucia,

Thank you for your response; I’ve seen how busy you are in the blogosphere, and it’s appreciated. I’m not arguing with your logic. It just seems to me that you’re creating a narrower and narrower margin of error that isn’t supported by observation. When you look at Pat Frank’s projected error (http://tinyurl.com/635bf8), based on your logic the error bars at the end of the century are ridiculous. I’m in business, and I could never commit a capital expenditure, let alone an operational expense, to something like that. When I suggested the out-of-sample test (believe it or not) I was trying to find a way to believe that the models are more accurate than they are.

I can understand Gavin’s need to build strawmen (bless you for calling him on it). He’s convinced himself that the value of his work is dependent on a scenario he didn’t create (IPCC did) and is playing out the prisoner’s dilemma.

The modelling is important work that has made huge strides in the past 20 years. It’s just not robust enough to hog-tie the world’s economy, nor to ensure that the third world never has a chance to make it to the first.

Cheers!

47. Steve E says:

Lucia,

“The Wright brothers built windtunnels to test theoretical models for predicting lift!”

I stand corrected. It’s still pretty f***ing crazy to do what they did ;-). As Briggs said, they sure weren’t going to load up the folks at Kitty Hawk for a quick spin around the town based on those windtunnel tests.

48. Bernie says:

Steve E:
The plaudits really go to Brent. IMO, Pat Frank writes like Montford and argues with McIntyre’s precision. That’s a tough combo.

49. Steve E says:

Apologies to Brent. Thanks for the link; best piece I’ve read on models.

50. Lance says:

Thanks to Bernie and Briggs for the tip about the Pat Frank article.

I have made the point about error propagation many times in arguments with people who insist that GCMs are “accurate”. They seem impervious to the actual meaning of the word accurate and use it interchangeably with “precise”.

In my physics undergraduate work if one were to submit an experimental write-up that had error estimates an order of magnitude greater than the claimed precision of the measured phenomenon it might produce laughter as a response (along with a poor grade).

If the student then insisted that the entire world population should make sweeping changes to their lifestyles based on these results, the student would have been quietly escorted to the psychiatric ward of the university hospital.
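
The propagation point in cartoon form (invented numbers; a simple root-sum-square rule, which is an assumption for illustration, not Pat Frank’s actual analysis):

```python
# Cartoon error propagation (invented numbers): if each yearly step carries
# an independent uncertainty, a root-sum-square rule grows the envelope like
# sqrt(n), which can swamp a small projected trend. This rule is an
# assumption for illustration, not Pat Frank's actual analysis.
import math

trend_per_year = 0.02    # invented projected trend, degrees C per year
step_uncertainty = 0.3   # invented per-step uncertainty, degrees C

for years in (1, 10, 100):
    signal = trend_per_year * years
    envelope = step_uncertainty * math.sqrt(years)
    print(years, round(signal, 2), round(envelope, 2))
```

Reporting the signal without the envelope is exactly the precision-for-accuracy swap complained about above.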

51. RichieRich says:

Lucia, Briggs

Lucia rightly calls Gavin on refuting straw-man arguments. However, it strikes me that Briggs’ characterization of Gavin’s argument is also something of a straw man. Briggs responds to Gavin:

Your points show that climate models do a respectable job of mimicking certain climatic signals. Very well; I have never disputed that either. In my “cartoon” way, I say that that means models “fit historical data.” This is a necessary but not sufficient step to prove that the theory behind the models is valid.

However, as I pointed out, three of Gavin’s examples (1, 3 and 4) are not examples of fitting historical data but of predicting aspects of future climate. Almost everyone posting here seems to have ignored this. Now maybe the predictions aren’t sufficiently numerous or of sufficiently high quality to constitute validation: I’m sure there are many here who can comment knowledgeably on this. But, nevertheless, it does appear that predictions have been made!

52. Briggs says:

RichieRich,

No, not ignored. For example, Gav’s (1) is a prediction, and an accurate one. Nobody disputes it. It says that volcanic ash can lower surface temperature temporarily. I accept that, of course. I do not say, and have never said, that climate models won’t occasionally make accurate predictions of this or that.

But the big question is one of skill, which we talked about here. Can climate models consistently beat, say, persistence “models”? Can they best non-positive-CO2-feedback models? (We don’t know on that one, since these have not really been built.)

Think of it this way: you might tell me, accurately, the aspects of the physics of a dice roll. “Increase the spin by this much and that happens” and so forth. And you might even predict the outcome of a die roll from time to time. But can you use your physical model to beat the “model” of saying “each side will come up roughly equally often”? If not, then your physical model, despite its occasional successes and undoubtedly true portions of theory, is flawed in some way, and we would be better off using the uniform-guess “model.”
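To make the die-roll skill comparison concrete, here is a toy numerical sketch (my own illustration, with made-up numbers, not anything from the thread): score a deliberately overconfident “physical” die model against the naive uniform “each side equally often” model using the Brier score, where lower is better.

```python
import random

random.seed(0)

def brier(forecast, outcome):
    # Multiclass Brier score: squared differences between the forecast
    # probabilities and the 0/1 indicator of the observed outcome.
    return sum((forecast[i] - (1.0 if i == outcome else 0.0)) ** 2
               for i in range(6))

uniform = [1 / 6] * 6  # the naive "each side equally often" model
# A flawed "physical" model, overconfident that side 0 comes up half the time:
overconfident = [0.5, 0.1, 0.1, 0.1, 0.1, 0.1]

rolls = [random.randrange(6) for _ in range(10000)]  # a fair die

u_score = sum(brier(uniform, r) for r in rolls) / len(rolls)
p_score = sum(brier(overconfident, r) for r in rolls) / len(rolls)

print(u_score, p_score)  # lower is better; the uniform model wins on a fair die
```

The overconfident model will still “predict the outcome from time to time,” yet it loses on average to the uniform guess, which is the sense of skill at issue.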

53. RichieRich

However, as I pointed out, three of Gavin’s examples (1, 3 and 4) are not examples of fitting historical data but of predicting aspects of future climate.

Let’s look at these in the context of the “strawman”. Recall that Briggs asked about accuracy, but Gavin knocked that down to “useful” and gave a list that only supports the conclusion that the models are not totally worthless. I agree they are not totally worthless.

1) Hansen et al (1992) predicted the impact of Pinatubo aerosols on temperatures accurately before they happened.

Ben Franklin suggested volcanic aerosols reduce temperatures. Hansen had quantitative data from El Chichon, Agung etc. If he tuned to those, why wouldn’t he get the right dip for this specific thing? Yes, models make decent qualitative predictions: if you reduce forcing things cool, if you increase it things warm. How does this tell us the models are accurate for their main use, predicting future climate? That is not dominated by the volcanic forcing effect.

4: Ahhhh! The tyranny of low expectations. All Gavin says is Hansen’s model predicted that there would be warming and there was warming. This says nothing about accuracy. It says they got the sign right. This is no better than what we could predict by creating a very simple radiative-convective model, with a gross estimate of the heat capacity of the ocean. Does Gavin’s example say the model is “accurate”? Or just “not useless”?

This leaves

3) Hansen et al (2005) and Domingues et al (2009) showed that predicted ocean heat content anomalies matched the current best estimates. Note that they didn’t match the ‘best’ estimate at the time the models were run. Models instead predicted that that estimate was flawed (AchutaRao et al). (i.e. the models were *not* tuned to get the ‘right’ answer).

On a list of 10 (count them, 10) bullet points, Gavin lists the only one that is worth even beginning to think supports accuracy. I’m pretty sure Domingues compared to a subset of models though… (I’ll have to check.) I don’t know what Hansen showed.

So, 7 out of 10 of Gavin’s examples are not forecasts.
Of the 3 that could remotely be called forecasts, the first two only counter the strawman: they suggest the models are not completely worthless. That is: they are no worse than simple radiative-convective models. They can beat a blindfolded man predicting climate using a coin flip.
The final one is the only one that might support a claim of accuracy.

But Gavin has filled the comment with drek. Is the goal to waste people’s time? Distract them? And, assuming Domingues (2007) looked at all the models, not a subset, how much weight should we give to Gavin being able to show the models forecast one thing well? Out of all the things they are supposed to forecast?

I’m sure in Gavin’s mind, the 10 points sound like a lot. But with respect to Briggs’ actual question, only one was remotely responsive!

54. Bernie says:

Lucia:
That is what I would call a TKO.

55. SteveE–
Looking quickly, I’d say that one problem with Frank’s analysis is that it appears he might not understand that, from the point of view of climate modelers, getting the absolute value of cloudiness right is unimportant. What their models need to compute is the anomaly in cloudiness over time. Frank’s application of the 1st law of thermo is a bit off owing to this issue, and the result is that what he computes doesn’t relate very well to the uncertainty in climate model projections.

I might change my mind if I looked at it further, but it appears that way to me right now.

Mind you, the fact that modelers get the absolute level of cloudiness wrong is a factor that should reduce our confidence in their ability to predict the evolution of anything, even if that anything is expressed as an anomaly. But Frank is making a bolder claim about uncertainty intervals, and I’m not sure he is correct.

56. Bernie–
The case for model accuracy is worse if we examine the graph describing Domingues at RC, which is here.

The image is at :
http://www.realclimate.org/images/ohc_models_domingues.jpg

The text below the image states:

(Note that the 3-year smoothed observations are being compared to annual data from the models, the lines have been cut off at 1999, and everything is an anomaly relative to 1961). In particular, the long term (post 1970) observational trends are now a better match to the models, and the response to volcanoes is seen clearly in both. The recent trends are a little lower than reported previously, but are still within the envelope of the model ensemble. One interesting discrepancy is noted however: the models have a slight tendency to mix down the heat more evenly than in the observations.

Note:
1. Using an anomaly relative to 1961 means all models and observations are pinned to match in 1961. Even if the models are totally wrong, it takes time for accumulated errors to show deviations. Bear that in mind when looking at differences.

2. The reader is challenged by the need to compare a 3-year observational average to annual data. Nevertheless, it is clear that at least some models (the beige-diamond model in particular) show practically no effect due to volcanic eruptions, while the smoothed observations appear to show these effects. This argues against the accuracy of that model. Moreover, that model predicts twice the warming displayed in the observations.

3. Even using the anomaly method, the models are all over the place. The “peach circle” model simulation shows cooling.
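Point 1 above is easy to demonstrate with a toy calculation (my own, with made-up numbers): pinning two diverging series to a common baseline year forces them to agree there, so even a badly biased model only shows its error as time accumulates.

```python
# Synthetic "observations" and a "model" with a 5-unit absolute bias and
# double the observed trend (arbitrary units, purely illustrative).
obs = [0.01 * t for t in range(50)]
model = [5.0 + 0.02 * t for t in range(50)]

def anomaly(series, base_index=0):
    # Subtract the value at the baseline year, as in "anomaly relative to 1961".
    return [x - series[base_index] for x in series]

obs_a, model_a = anomaly(obs), anomaly(model)

print(model[0] - obs[0])        # absolute error at the baseline year: 5.0
print(model_a[0] - obs_a[0])    # anomaly error at the baseline year: 0.0
print(model_a[-1] - obs_a[-1])  # anomaly error only accumulates with time: ~0.49
```

The baselining makes the large absolute bias invisible, and the trend error needs decades to grow into a visible deviation, which is the sense in which anomaly plots flatter the models.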

The agreement Gavin suggests exists is due to the very lax standard that the data do not fall outside an enormous range from a bunch of models that are all supposed to predict the same thing.

It’s difficult to consider this sort of “agreement” as the sort that demonstrates the models are accurate.

However, since Gavin has defined Briggs’ question down to showing they are useful, and the definition of “useful” seems to be doing better than we could if we based our forecasts on a blind man flipping a coin, then maybe Gavin has shown the models are useful.

57. Steve E says:

Lucia,

“I’ve looked at clouds from both sides now…”;-)

Still, I’m not sure that Frank’s basic contention is incorrect: that climate models as they currently exist are too unreliable to project out a hundred years. I’m not prepared to bet based on the blind man’s forecasts.

58. A few points. “Are models accurate?” is a question that presupposes a binary response, yes or no. Given the complexity of the issues, the tremendous variation in the metrics one could use, and the fact that they will never exactly match some observations, this just isn’t an appropriate question. Is a global mean temperature anomaly forecast within 0.1 deg C accurate? It depends entirely on the context. So forgive me if I don’t answer questions that are ill-posed.

A much better question is whether the models have skill over some naive estimate. And there the answer is yes. It is clear for the Hansen et al (1988) projections compared to the alternative “no change” forecast (which was what was being touted at the time) (work it out). It is clear from the Pinatubo example where the quantitative forecast was indeed within 0.1 deg C of the obs (and I am aware of no other quantitative estimate made on the basis of anything). It is clear in the 8.2 kyr event case, the mid-Holocene etc.

I also have to take issue with the idea that predictions of things that were not known or used in the construction of a model somehow don’t count unless the statement is specifically about the future. This is simply not true. If a model predicts that if you do a certain analysis of the data (which has not been done beforehand) you will find a certain result, and indeed you find that result, this is a successful out-of-sample prediction regardless of whether that analysis is of data that are 1000 years old, 100 years old or will not exist until next year.

Briggs, what is a “non-positive-CO2 feedback” model? This doesn’t even make sense. Feedbacks are due to processes in the models and real world that are independent of the forcings. They occur regardless of whether the planet is being driven by solar changes or volcanoes or greenhouse gases. The only thing that would be a “CO2 feedback” would be the change of the natural carbon cycle as a function of a change in climate, which we know from the ice core record is positive: more CO2 in the air as the planet warms. If you mean a model that doesn’t have any positive feedbacks at all, well, there is a reason that doesn’t exist: it has nothing to do with the real world. Ice-albedo feedbacks are positive, real and observed; water vapour feedbacks are positive, real and observed. People really have tried to get cloud feedbacks to be negative and large enough to counteract all of this (GISS even worked with Lindzen back in the day to try out all his ideas) but none of them have succeeded. Even more important, a climate system with net negative feedback does not match any of the constraints from the paleo-climate record. You simply cannot reconcile a negligibly sensitive climate with the pattern of the ice ages, Pliocene warmth or earlier climate changes. If anything, they indicate long-term sensitivities that are higher than the “Charney sensitivities” we are discussing here (Lunt et al, 2010).

And finally, I have to laugh when you bring up Pat Frank’s Skeptic article. That really was a terrible piece of work. Perhaps you would care to examine his statements on the second page of his supplementary material, where he apparently thinks that a logarithmic function has a finite limit at 0 (and indeed, that log(0) is equal to 1). And even you should think it odd that a linear model that fits a more complex model over a short interval is somehow sufficient to predict how that more complex model will behave in other circumstances, or what its error characteristics are. Something that is trivially shown to be nonsense. See this comment and the thread preceding for more detail than you could ever want.

59. Gavin

A few points. “Are models accurate?” is a question that presupposes a binary response yes or no.

This is silly. This question no more requires a binary response than your substitution of “Are models useful?”
If you wish to firm up the question and provide a more nuanced answer you are free to elaborate and explain how accurate they are at projecting.

A much better question is whether the models have skill over some naive estimate. And there the answer is yes. It is clear for the Hansen et al (1988) projections compared to the alternative “no change” forecast

Yes. The models have skill relative to a blind man flipping a coin. So do much simpler radiative convective models, linear extrapolation of the previous 10 years or any number of other models.

It is clear from the Pinatubo example where the quantitative forecast was indeed within 0.1 deg C of the obs (and I am aware of no other quantitative estimate made on the basis of anything).

How is this even remotely impressive? It’s not as if you are telling us it was within 0.1 C of a 20 C drop. The drop at its highest was 0.73 C. It’s not as if people who might have made estimates based on simple scaling from Agung, Fuego or El Chichon were going to get those into the peer-reviewed literature. Those would never have passed the screen of being considered interesting predictions.

It is clear in the 8.2 kyr event case, the mid-Holocene etc.

What are you trying to communicate with this? What’s clear? The eruption of Pinatubo?

I also have to take issue with the idea that predictions of things that were not known or used in the construction of a model somehow don’t count unless the statement is specifically about the future. This is simply not true. If a model predicts that if you do a certain analysis of the data (which has not been done beforehand) you will find a certain result, and indeed you find that result, this is a successful out-of-sample prediction regardless of whether that analysis is of data that are 1000 years old, 100 years old or will not exist until next year.

What you do not seem to understand is that anything used to drive the simulation might have been influenced by knowledge of the quantity you are trying to predict, so you do not have a pure comparison. With respect to comparing model simulations of 20th century surface temperatures: knowledge of the properties of your model (gained from sensitivity experiments), knowledge of the general effect of variations in aerosol loadings (also learned from sensitivity tests), and knowledge of the observed surface temperatures can all combine to influence a modeler’s specific choice of aerosol loadings (within the range supported by the data), yielding better agreement than if the choice had been made with no knowledge of the surface temperatures.

I would not go so far as Briggs seems to and say the comparisons don’t count, but they certainly count less. I would go so far as to suggest that they count quite a bit less than a modeler who gets pleasure from seeing his models do well might wish to believe they count.

What makes the comparisons count even less is that you get to progressively “improve” the models, and many of the choices in the improvements are driven by noting points where the models deviate from existing data, and incorporating changes you believe will remedy that. While this is a necessary part of science and entirely rational, it does make the later comparisons “count” much less than you, Gavin, might wish to believe they count.

Bernie

Given the uncertainty as to the complex role of clouds, how can the anomaly be sufficient?

Sufficient for what purpose?

Sufficiency, and utility can only be gauged if we describe the purpose.

If you want to know total cloud cover, the anomaly is not sufficient. But, skimming, it looks like he builds an argument about heat accumulation over time by noting that the absolute value of cloud cover is wrong. That’s mistaken, because even if the absolute cloud cover is wrong, the error sits in the baseline temperature (which, admittedly, could be wrong). In the anomaly process, the earth responds to increases in forcing relative to the baseline. So, in some sense, the temperature anomalies will rise improperly if the change in cloud cover is wrong, but will be less sensitive to problems with the absolute level.
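A minimal zero-dimensional sketch of this point (my own toy, not any GCM, with made-up parameter values): in an anomaly formulation the temperature anomaly is driven by the forcing increment above the baseline, so the (possibly wrong) absolute baseline state never appears in the equation.

```python
# Toy energy-balance anomaly model: dT/dt = (dF - lam * T) / C, where
# T is the temperature anomaly, dF the forcing increment above baseline,
# lam a feedback parameter and C a heat capacity (all arbitrary units).
def anomaly_response(dF, lam=1.2, C=8.0, dt=0.1, steps=1000):
    T = 0.0  # anomaly relative to the baseline climate
    for _ in range(steps):
        T += dt * (dF - lam * T) / C
    return T

# The equilibrium anomaly is dF / lam, independent of the absolute
# baseline temperature (which could be off due to a cloud-cover bias).
print(round(anomaly_response(3.7), 2))  # ~3.08
```

An error in absolute cloud cover shifts the baseline state; only errors in the *change* of cloudiness (i.e., in lam or dF here) corrupt the anomaly, which is why lucia argues Frank’s propagation overstates the problem.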

I think there are reasons to believe that models are not particularly good for forecasting the evolution of climate. But Frank’s article seems to have a sufficient number of difficulties that I don’t want to spend a lot of time on it.

60. Briggs says:

Gav,

Hey, sorry. Was away from the computer all day. Duty called.

I’ll have to go through your reply later; but I went through it quickly.

I notice you have forgotten to tell us whether you think the sorts of criticism engaged in here are “treasonous”, “unpatriotic”, “crimes against humanity” and so forth. Again: is that OK with you?

61. Frank says:

Gavin says: “If you mean a model that doesn’t have any positive feedbacks at all, well there is a reason that doesn’t exist: it has nothing to do with the real world. Ice-albedo feedbacks are positive, real and observed, water vapor (sic) feedbacks are positive, real and observed…. Even more important, a climate system with net negative feedback does not match any of the constraints from the paleo-climate record. You simply cannot reconcile a negligibly sensitive climate with the pattern of the ice ages, Pliocene warmth or earlier climate changes.”

Doesn’t visual inspection of the paleo-climate record suggest that negative feedbacks limited the extent of the glacial/interglacial cooling/warming periods, respectively?

62. Criticism is none of those things of course. Though, given that I am not a US citizen, my opinion of what is treasonous or unpatriotic in your country is moot. Neither am I a lawyer so deciding what is and is not a ‘crime against humanity’ is not really my domain (though I note that there is a clear legal difference between causing harm inadvertently and continuing to cause harm once you have been notified of the problem, as the tobacco and asbestos industries have found to their cost).

However, the implication of your question is that there is no ethically or legally questionable behaviour going on under the guise of ‘criticism’. I beg to differ.

Deliberately spreading misinformation is immoral and, in many circumstances, illegal (lying about the efficacy of a patent medical procedure for instance). Lying in the service of a political goal is immoral (though depressingly common). Funding people to lie about scientists and the science they have presented is immoral. Smearing scientists as criminal by insinuation using a powerful position in the Senate is not only an abuse of power, it is strangely reminiscent of a previous period of “un-american” practices. Accusing whole US Govt agencies and their employees of fraud and improper ‘data manipulation’ on the basis of no evidence is, at the very least, not the behaviour of honorable men.

Now I’m sure you indulge in none of those things, so I’m a little puzzled as to why you want to ally yourself with those who do. Asking a legitimate question about how well climate models perform cannot be equated with flinging false accusations or indulging in other morally questionable behaviour. That a senator or energy company exec may agree with your opinion on climate models (if they give it any thought at all) does not make you culpable for their conduct otherwise.

So now that I’ve answered your question, if it’s ok with you, perhaps you could tell us what you think of these accusations of ‘fraud’, ‘misconduct’ and ‘criminal’ behaviour that are being directed at climate scientists in general (and more specifically) and whether you think that is ok or beyond criticism in some way? And if it is not beyond criticism, perhaps you could help us out in letting us know what words can be safely applied to this behaviour. One wouldn’t want to cause unnecessary offence.

63. Frank says:

Gavin / Matt,

Looks like we’ve arrived at the ‘West Side Story’ stage of the thread. Can we please redirect to the models?

64. Steven Mosher says:

gavin help me understand this

In the second type of calculation, the so-called ‘inverse’ calculations, the magnitude of uncertain parameters in the forward model (including the forcing that is applied) is varied in order to provide a best fit to the observational record. In general, the greater the degree of a priori uncertainty in the parameters of the model, the more the model is allowed to adjust. Probabilistic posterior estimates for model parameters and uncertain forcings are obtained by comparing the agreement between simulations and observations, and taking into account prior uncertainties (including those in observations; see Sections 9.2.1.2, 9.6 and Supplementary Material, Appendix 9.B).

65. Steve E says:

Gavin,

Is Inhofe over-the-top? Without question! Is there criminal intent by scientists? No way! But, like you, I’m also not an American. This is American process, as wild west as that may appear. It seems to get to the truth more often than any other jurisdiction.

Read Peiser’s submission to Parliament (which I’m sure you have)… What was Jones playing at? I can rule out criminal intent, but what the F*** was going on? I can understand a professional, and all that implies, questioning how a “commoner’s” assertions may create doubt, especially when they are well thought out, well presented and fit the facts (observations).

Humble transparency! The rest of us aren’t as stupid as we may appear!

Briggs, I’m sorry that this is OT with your post, but Gavin opened this question in his response to you.

66. Briggs says:

Gav,

I think treason is pretty much the same in most places, including the States. I’m not a lawyer, either, but I know treason’s bad and I know what happens to traitors. So you’ll forgive me for not accepting yours as a real answer. Would you, like Jim Hansen and that Times guy, call for criminal punishment for critics?

As for saying that climatologists like you are committing “fraud” and so forth, I have answered. You can look and see that I have, many many times, said that to use those words is wrong. You even know that I defended the “climategate” emailers. I even did so once on your blog (search for my name in the comments right after the story broke; I wrote about this as well on my blog). This hasn’t put me in solid with the die hard skeptics. I’m not invited to their parties, either.

I especially said words like “fraud” and “misconduct” should not be used, and I did so in the post when I called upon gentlemen like yourself to repudiate Hansen’s and others’ apocalyptic language.

Deliberately spreading misinformation—except, e.g., in times of war; which this is not—is immoral. As is lying.

My criticisms are neither deliberate misinformation, nor lies, deliberate or otherwise. Like I have long admitted, they might be wrong. I don’t think they are, naturally; but I can be talked out of them if convinced.

My long experience with the overconfidence of forecasters and other assorted experts leads me to believe that I won’t be convinced easily.

67. Steve E says:

Lucia

“…I really don’t know clouds…at all”

It seems awfully presumptuous to suggest that a modeller’s point of view is ground zero. Frank’s base position seems to carry at least as much weight as any modeller’s. To say he doesn’t see their point of view suggests that the modellers and their anomaly-based view is the starting point for all debate.

You may still be right, but I’d like to see your argument for dismissing Frank. Why? For a layman like me his argument is lucid and appears to make sense.

68. Steve E says:

Lucia,

I think this is where I am:

“I would not go so far as Briggs seems to and say the comparisons don’t count, but they certainly count less. I would go so far as to suggest that they count quite a bit less than a modeler who gets pleasure from seeing his models do well might wish to believe they count.
What makes the comparisons count even less is that you get to progressively “improve” the models, and many of the choices in the improvements are driven by noting points where the models deviate from existing data, and incorporating changes you believe will remedy that. While this is a necessary part of science and entirely rational, it does make the later comparisons “count” much less than you, Gavin, might wish to believe they count.”

But where does that leave us? To me it at least reinforces Frank’s conclusion that models as they currently exist are inadequate to project (predict?) out to 2100. Otherwise we have to concur with Gavin, whose 100-year conclusions are based on changing points where the models deviate from the existing data.

To me Gavin isn’t wrong in his model development; in fact, I think that is what you have to do to improve accuracy and precision. But both are wanting on a century time scale.

69. Bernie says:

It seems to me that there is nothing wrong per se in the incremental improvement of models based upon testing them against observations. What else is a modeller to do? Clearly the conundrum is at what point, and how, one argues for the accuracy of one’s models while still incrementally improving them.

70. brent says:

Steve E, Bernie,

Sadly one has to pay very close attention to nuance and semantics.

The AGW true believers have two basic teams.

The “A” team is the GCMs

The “B” team are the “Hokey Team” Studies.

All the rest is armwaving.

As noted in the Vincent Gray note I posted,
http://www.pensee-unique.fr/GrayCritique.pdf
the IPCC agreed with him that the “A” team, that is the GCMs, had never been validated in the sense that he means. Instead of admitting that GCMs are “incapable of prediction” (i.e., Briggs’ accuracy issue), the IPCC outlaws the word “prediction” (relative to the GCMs).

Briggs tried to get at this essential central issue by pointedly asking about “accuracy”, and Lucia followed up by pointing out how this was evaded in favor of drek (I like the characterization, Lucia).

It’s never been established that the GCMs have predictive power, i.e., the accuracy issue, but the advocates don’t want the sheeple to understand this.

They loudly tout their storyline (which is exactly what it is, storytelling), and when they do deign to talk about “predictions”, what they cite, if it can be considered in a limited sense as “predictions” at all, are trivial issues (relative to the larger question, as noted by Lucia).

One of the most amusing exchanges I recall was Pielke Jr trying to delve into what “observations” would ever lead the advocates to admit that the GCMs were falsified
(note that Pielke Jr does not necessarily clue in to the particular sense in which the word “prediction” is a sensitive issue for the advocates. He tends to use the term more loosely)

The Consistent-With Game: On Climate Models and the Scientific Method
http://tinyurl.com/22rhrg

Global Cooling Consistent With Global Warming
http://tinyurl.com/4thuxx

Pielke Jr concludes that there is nothing that would lead the advocates to admit that their precious GCMs are falsified. (The God hypothesis. All evil flows from CO2 🙂 )

Of course there actually is no need to falsify the GCMs. They are false, because they’ve never been established to have “predictive power”, i.e., accuracy, in the first place 🙂

Sometimes the advocacy crowd slips up and a bit of truth is revealed as per following links. Read closely

Real Climate’s Agreement That The IPCC Multi-Decadal Projections Are Actually Sensitivity Model Runs
http://tinyurl.com/yzpjg3y
Comment on the Nature Weblog By Kevin Trenberth Entitled Predictions of climate
http://tinyurl.com/ycunacr

It’s not necessary to get hung up on whether Pat Frank’s particular calculations are appropriate. What is more important is that he describes certain principles very well.
The actual point I wanted to make with an earlier post was that none other than Carl Wunsch agrees in principle with the central tenet of Pat Frank’s paper: that there is a need to understand propagation of error through the GCMs, and that it is not understood now.

Wunsch even tells his colleagues that what they are doing “isn’t science”

Once again

The Present (Circulation) Is the Key to Understanding The Past (Circulation)
Carl Wunsch
Cambridge UK
Leverhulme Meeting
March 2008
One uses GCMs for the modern world in two distinguishable ways:
(1) Run them forward from some initial state in an “extrapolation” mode.
(2) Use them to interpolate data over finite time intervals.
All time-stepped models in extrapolation mode exhibit error growth. The only exceptions are kinematically and dynamically trivial (the stopped-clock analogy).
Snip
It is urgent to come to some understanding of error growth in GCMs in extrapolation mode. Mere assertion that feedbacks or compensating errors occur is not science. Determining error growth is difficult, but hardly impossible.
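Wunsch’s point about error growth in time-stepped models in extrapolation mode can be illustrated with the simplest chaotic toy available (the logistic map at r=4; my own sketch, nothing to do with any actual GCM): two runs differing by 1e-10 in the initial state diverge to order-one differences within a few dozen steps.

```python
# Logistic map, a chaotic one-variable time-stepped "model".
def run(x0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = run(0.2)
b = run(0.2 + 1e-10)  # same "model", initial state perturbed by 1e-10

errors = [abs(x - y) for x, y in zip(a, b)]
print(errors[0], max(errors))  # the error grows from ~1e-10 to order one
```

The growth rate of such errors is exactly what Wunsch says needs to be determined for GCMs, rather than asserted away.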

The issue is not whether the big complex models are useful. I agree, as do others, that if we are going to even inch forward in understanding in a field that is in its infancy, we have to use such tools.

I’m going to close this post with a few other tidbits:

Pielke Jr

This means that from a practical standpoint climate models are of no practical use beyond providing some intellectual authority in the promotional battle over global climate policy. I am sure that some model somewhere has foretold how the next 20 years will evolve (and please ask me in 20 years which one!). And if none get it right, it won’t mean that any were actually wrong. If there is no future over the next few decades that models rule out, then anything is possible. And of course, no one needed a model to know that.

Climate Change And The Death of Science

climate change models are a form of “seduction”…advocates of the models…recruit possible supporters, and then keep them on board when the inadequacy of the models becomes apparent. This is what is understood as “seduction”; but it should be observed that the process may well be directed even more to the modelers themselves, to maintain their own sense of worth in the face of disillusioning experience.
…but if they are not predictors, then what on earth are they? The models can be rescued only by being explained as having a metaphorical function, designed to teach us about ourselves and our perspectives under the guise of describing or predicting the future states of the planet…A general recognition of models as metaphors will not come easily. As metaphors, computer models are too subtle…for easy detection. And those who created them may well have been prevented…from being aware of their essential character.

The Social Simulation of the Public Perception of Weather Events and their Effect upon the Development of Belief in Anthropogenic Climate Change
http://www.tyndall.ac.uk/sites/default/files/wp58.pdf

Just as the GCMs are “incapable of prediction”, and that’s the alarmists’ “A” team, I think most people on this site would be well aware of the issues with the alarmists’ “B” team, the “Hokey Team”, from following several sources, notably ClimateAudit.

The AGW “A” Team are the GCMs.
The AGW “B” Team are the “Hockey Team” studies.

The importance to the alarmists is that both are “metaphors”, as Post-Normal Scientists like Mike Hulme explain

http://www.americanthinker.com/2009/07/what_climate_change_can_do_for.html

Lindzen has a good insight into Hulme’s PNS at link below

cheers
brent

P.S. My thanks to our host, Dr Briggs. This is one of the really useful blogs that help promote discussion and understanding, and I regret that I haven’t followed it as closely as its merit warrants.

PPS. Excellent thread with a lot of good contributions. Special thanks to Tom Vonk for some insights

71. To say he doesn’t see their point of view suggests that the modellers and their anomaly-based view is the starting point for all debate.

No. That’s not what I’m saying. I’m saying that, giving it a quick look, it appears to me that Frank may be making a mistake in implementing the first law of thermo, because he doesn’t understand that excess heat accumulates based on the increment of excess heat above the steady-state value. This is true in all engineering problems, not just models.

It may be that I am wrong about his error, but I don’t think so.

You may still be right, but Iâ€™d like to see your argument for dismissing Frank. Why? For a layman like me his argument is lucid and appears to make sense.

Dismissing? I don’t utterly dismiss him.

If all he did was note that GCMs did not get the absolute cloudiness correct, that this matters in some way, and stopped there, I would admit that fact decreases our confidence in GCMs’ ability to correctly capture the physics of climate, relative to models whose simulations better predicted the planet’s absolute cloudiness. After all, if they are off 10% on the average cloudiness integrated over the entire planet, why should we be confident they can detect changes in the level of cloudiness? Or that, even if cloudiness is mostly unaffected, they get the sensitivity correct?

But that’s a much less serious deficiency than Pat Frank goes on to discuss.

The argument developed in the paragraph just prior to Figure 4 dramatically overstates the severity of the problem because, in any thermo problem, whether in a GCM or a hand calc, being off 10% on the clouds affects your prediction of the quasi-steady problem. That is: all other things being equal, getting cloudiness wrong should result in the wrong temperature for the earth at steady state. Other than that, we don’t know the consequences. But whatever they are, they aren’t what Pat Frank suggests.

Suppose the model is off 10% on any forcing, whether anthropogenic or natural. Now, freeze the model forcings at this wrong level. Now, run the model without varying the forcing, starting with some guess about the ‘earth’s’ surface temperature.

Do you think that, according to the models, the earth’s temperature will rise and rise, and rise and rise (or fall, fall, fall) forever as in Pat Frank’s Figure 4? This wouldn’t (and doesn’t) happen in any remotely reasonable model that applies the 1st law of thermo and permits heat loss to increase as the surface temperature increases. Such a model will reach a quasi-steady state, where, for the most part, the earth’s surface no longer accumulates heat.

That model steady state might get the earth’s surface temperature wrong, but it’s a steady state. If forcings are off by 10%, I can’t be sure how far that steady state will differ from the true earth temperature, but it’s sure as heck not 120C after 20 years. It’s not going to be 120C after an infinite number of years. In reality, the computed annual average earth’s surface temperature will rise to an asymptotic state. (It will be noisy, but still, noisy around some basic level.)
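This steady-state point can be illustrated with a toy zero-dimensional energy balance, a sketch of my own and not any GCM's actual code. All constants here (heat capacity, absorbed flux, time step) are illustrative. Because outgoing radiation grows as T^4, a biased forcing shifts the equilibrium temperature instead of producing unbounded drift.

```python
# Toy zero-dimensional energy balance (illustrative constants only):
# absorbed solar power in, Stefan-Boltzmann radiation out.  Heat loss
# rises with T, so a biased forcing shifts the steady state rather
# than making temperature run away without bound.
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_ABS = 240.0     # absorbed solar flux, W m^-2 (rough global mean)
C = 1.0e8         # effective heat capacity, J m^-2 K^-1 (illustrative)
DT = 86400.0      # time step: one day, in seconds

def run(forcing_bias=0.0, years=200, t0=250.0):
    """Step dT/dt = (S_ABS + bias - SIGMA*T^4)/C to quasi-steady state."""
    t = t0
    for _ in range(int(years * 365)):
        t += DT * (S_ABS + forcing_bias - SIGMA * t**4) / C
    return t

t_true = run(0.0)             # unbiased steady state, ~255 K
t_biased = run(0.1 * S_ABS)   # forcing off by 10%, ~261 K
# Both runs settle near a fixed temperature; the bias shifts the
# level by a few kelvin but does not produce endless warming.
```

The bias changes *where* the model equilibrates, not *whether* it equilibrates, which is the distinction lucia is drawing.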

The fact is, the reasoning leading to Pat Frank’s Figure 4 is simply wrong. Noting that, I don’t plan to spend any more time discovering other things that might also be wrong.

72. I for one did try Gavin’s suggestion and started reading the “2006_Schmidt_etal_1.pdf” document on model E’s construction. I got about 1/3 of the way through it before my head started to hurt too much.

On page 7 of the PDF (JOC page 159) I noticed some discussion of numerical instability in the dynamical solutions and how they were corrected. I can understand that there might be problems at the polar cusp of a spherical coordinate system used for the simulation, but what bothers me more is how these problems are corrected. A diffusion method is discussed, but is there a physical or numerical basis for it? We also see that there is “velocity filtering” on the whole solution surface to remove “two-gridpoint noise.” Near the end of the page there is another parameterized correction for gravity wave instabilities, with some handy tunable constants (if there is physics involved, why is tuning needed?).

Problems with numerical stability at the poles create some concern for me, because the poles tend to show more warming than the lower latitudes and should require more accuracy to get right.

With so many valid scientific “tricks” applied to get the model to work correctly, I find it very hard to believe it is useful for any purpose, let alone prediction.

73. Pat Frank says:

Lucia, you wrote,

[L] >>… it appears to me that Frank may be making a mistake in implementing the first law of thermo because he doesn’t understand that excess heat accumulates based on the increment of excess heat above the steady state value. This is true in all engineering problems, not just models.<<

[L] >>The argument developed in the paragraph just prior to figure 4 dramatically overstates the severity of the problem because, in any thermo problem, whether in a GCM or a hand calc, being off 10% on the clouds affects your prediction of the quasi-steady problem. That is: all other things being equal, getting cloudiness wrong should result in the wrong temperature for the earth at steady state. Other than that, we don’t know the consequences. But whatever they are, they aren’t what Pat Frank suggests.<<

Lucia, what you described is not what I suggested. Please look at the second paragraph under Figure 3, where Figure 4 is discussed. This paragraph says, “In terms of the actual behavior of Earth climate, this uncertainty does not mean the GCMs are predicting that the climate may possibly be 100 degrees warmer or cooler by 2100. It means that the limits of resolution of the GCMs—their pixel size—is huge compared to what they are trying to project. In each new projection year of a century-scale calculation, the growing uncertainty in the climate impact of clouds alone makes the view of a GCM become progressively fuzzier.”

The wedge around the mean line in Figure 4 represents uncertainties, not heat excursions. I followed the very straightforward rules for propagating uncertainties, such as found here, for example. A centennial temperature anomaly trend just involves summing the set of individual annual anomalies. If each temperature anomaly has an invariable +/-10% error, the uncertainty propagates through the trend as the running sum of the individual errors.

The climate mean temperature can stay within reasonable bounds, and it would still be true that the uncertainty in the calculated temperature would increase with every calculational time step. Eventually, the uncertainty would increase to the point that the calculated temperature would cease to have any scientific meaning, which translates as having no predictive value. That’s what Figure 4 is meant to show: uncertainty, not mean. Non-predictability happens very quickly with GCMs. The “perfect model” test by Matthew Collins (Skeptic reference 28) shows that the HadCM3 became incoherent vs. the test climate within 1 year. Demetris Koutsoyiannis has shown the same lack of predictive value in GCMs, using real comparisons.
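As an illustration of the two standard propagation rules at stake here (a sketch of my own, not the calculation behind Figure 4): a fixed per-step uncertainty grows as the square root of the number of steps when the errors are independent, and linearly when the error is fully systematic. The per-year value of 0.5 K below is purely illustrative.

```python
# How a fixed per-step uncertainty accumulates over an n-step,
# step-wise projection, under the two textbook propagation rules.
import math

def propagated_uncertainty(sigma_step, n_steps, systematic=False):
    """Total uncertainty after n_steps, each contributing sigma_step."""
    if systematic:
        return sigma_step * n_steps            # linear running sum
    return sigma_step * math.sqrt(n_steps)     # root-sum-square

sigma = 0.5  # illustrative per-year temperature uncertainty, kelvin
print(propagated_uncertainty(sigma, 100))        # 5.0  (independent errors)
print(propagated_uncertainty(sigma, 100, True))  # 50.0 (systematic error)
```

Which rule applies depends on whether each year's cloud error is treated as an independent draw or as one persistent bias, and that choice drives how fast the Figure 4 wedge opens.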

[L] >>Suppose the model is off 10% on any forcing, whether anthropogenic or natural. Now, freeze the model forcings at this wrong level. Now, run the model without varying the forcing, starting with some guess about the ‘earth’s’ surface temperature.

[L] >>Do you think that according to the models, the earth’s temperature will rise and rise, and rise and rise (or fall, fall, fall) forever as in Pat Frank’s figure 4? This wouldn’t (and doesn’t) happen in any remotely reasonable model that applies the 1st law of thermo and permits heat loss to increase as the surface temperature increases. Such a model will reach a quasi-steady state, where, for the most part, the earth’s surface no longer accumulates heat.<<

[L] >>That model steady state might get the earth’s surface temperature wrong, but it’s a steady state. If forcings are off by 10%, I can’t be sure how far that steady state will differ from the true earth temperature, but it’s sure as heck not 120C after 20 years.<<

It has surprised me how many people misread Figure 4 in this way. I tried very hard to be clear that I was representing uncertainties, not temperature excursions. Sorry that I led you astray, somehow, Lucia.

74. Pat Frank says:

I see, by the way, that Gavin has derogated my skeptic article, using false arguments. No surprise there. Please do examine Figure 1 and page 2 of my article SI. Nowhere will you find me claiming that log(0) = 1, or that logs have a finite limit at zero. There is an implicit assumption in that figure, which is that the forcing of CO2 is negligible at 1 ppm, and this forcing can be safely equated to 0 ppm CO2. Regrettably, I didn’t make that assumption clear and Gavin has opportunistically beat his alleged discovery drum ever since our debate on RC.

If you read the thread succeeding, as well as “the thread preceding” as Gavin coyly suggested, you’ll find post 461, part 3b of which explicitly shows that 1 ppm CO2 has zero greenhouse forcing. Post 461 refuted all that remained of Gavin’s substantive charges, apparently leading him to fixate onto an irrelevant argument about log(0), which seemed to have played well with the RC gallery.

75. Pat Frank says:

I need to make a correction to my post above, where I wrote that, “A centennial temperature anomaly trend just involves summing the set of individual annual anomalies.” What I should have written is that the anomaly trend involves a running set of individual annual anomalies.

The reason error propagates as it does is because GCMs calculate future climates in a step-wise way through time. The cloud error means that every climate will have an average (+/-)2.7 Wm^-2 uncertainty in forcing, which produces an uncertainty in each mean temperature. In every step, the average cloud error in that step produces a new temperature uncertainty, which must be compounded with the uncertainty of the input prior mean temperature. The forward-propagating uncertainty increases in each calculated temperature, and that increasing uncertainty trend remains with the anomalies when the baseline temperature is subtracted (actually, the uncertainties would have to be convolved with the uncertainty in the baseline temperature, too).

In short, each climatological temperature mean has an average uncertainty due to cloud error, and that temperature plus its uncertainty is used as input for the time-wise calculation of the next mean temperature. The uncertainty in each prior mean temperature must be propagated forward into the uncertainty in each new mean temperature.
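The step-wise carry-forward described above can be sketched as a loop. This is my own reading of the mechanism, not published GCM code, and it assumes each step's error is independent and combined in quadrature; the 0.5 K per-step value is hypothetical.

```python
# Carry uncertainty forward step by step: each step's uncertainty is
# convolved (in quadrature) with the uncertainty inherited from the
# prior step, so the running total grows and never shrinks.
import math

def step_uncertainties(sigma_step, n_steps):
    """Return the running uncertainty after each of n_steps."""
    sigma, history = 0.0, []
    for _ in range(n_steps):
        sigma = math.sqrt(sigma**2 + sigma_step**2)  # fold in prior term
        history.append(sigma)
    return history

u = step_uncertainties(0.5, 100)
# The uncertainty envelope widens every step, even though nothing in
# this loop says anything about the mean temperature itself.
```

The point the loop makes concrete is the one in the comment: a bounded mean and an ever-widening uncertainty envelope are compatible, because the envelope tracks inherited error, not heat.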

There are different ways of calculating uncertainty, of course, and since no one has ever published an uncertainty propagation through a GCM climate projection, we don’t know what the structure of the uncertainty looks like in a projected climate. So, I just chose to propagate the uncertainty in the most straightforward way, given the algebra of my equation and the validating correspondence between the calculated line and the outputs of the GCMs, shown in Figure 2.

76. Bernie says:

Pat:
Many thanks for clarifying the issues. You may want to drop Lucia a line at her website and reference her back to these comments.

77. John L Norris says:

gavin,

For entertainment value I read some modelE code here:

Some code even has your name on it.

There is a module “param” and there are 19 files titled INIT:

INITS, INIT_CLD, INIT_DECOMP, … , INIT_VEGETATION.

That looks like an awful lot of parameters. I would have guessed that, by tweaking the input parameters, I could get about any climate output I want. Perhaps some of these inputs are certain, but I bet most are at least arguable. Is your experience running the model with different parameters contrary to that? That is to say, do you think I couldn’t find a somewhat reasonable set of parameters that showed a flat-line or cooling climate?

78. MikeC says:

Younger Dryas? I hope climate models can’t predict ET impacts.

79. Pat Frank says:

Bernie, thanks, I’ve dropped Lucia a line and a link.