Thanks to reader Roger Cohen for bringing this to my attention.
Atmospheric scientist Judith Curry recently ran a series of blog posts entitled, “Overconfidence in IPCC’s detection and attribution.” In Part III of that series, she set out a public appeal:
I am no expert in logic. My only formal exposure was a course in freshman logic nearly 40 years ago…I can understand most Bayesian arguments that I’ve encountered, although I’ve never attempted to make one on my own…My point is that I think there are some glaring logical errors in the IPCC’s detection and attribution argument, that it doesn’t take an expert in logic to identify. I look forward to the input from logicians, bayesians, and lawyers in terms of assessing the IPCC’s argument.
I flatter myself that I am an expert in the matter of how evidence supports belief; further, I am happy to offer my assistance. Let me first answer a potential criticism that others might raise after reading your appeal. Some argue that the term “Bayesian,” especially in statistical contexts, is synonymous with “better.” But “Bayesian” is not synonymous with “good results” or “correct modeling”; following a Bayesian procedure does not guarantee validity. Thus, even though you in your criticism of the IPCC, and the IPCC in its self-assessments, use Bayesian procedures, both of you can be wrong in your conclusions. I’m guessing you knew that.
One of your questions involved a potential circular argument used by the IPCC:
The most serious circularity enters into the determination of the forcing data. Given the large uncertainties in forcings and model inadequacies (including a factor of 2 difference in CO2 sensitivity), how is it that each model does a credible job of tracking the 20th century global surface temperature anomalies…? This agreement is accomplished through each modeling group selecting the forcing data set that produces the best agreement with observations, along with model kludges that include adjusting the aerosol forcing to produce good agreement with the surface temperature observations. If a model’s sensitivity is high, it is likely to require greater aerosol forcing to counter the greenhouse warming, and vice versa for a low model sensitivity…Any climate models that uses inverse modeling to determine any aspect of the forcing substantially weakens the attribution argument owing to the introduction of circular reasoning.
A succinct way to state the fallacy is this: A climate modeler assumes the hypothesis that increasing atmospheric CO2 greatly increases atmospheric surface temperature, perhaps through some feedback mechanism. The scientist builds a model that contains a routine which increases (modeled) atmospheric surface temperature when atmospheric CO2 is increased.
He then runs the model using both low and high levels of CO2, and compares the surface temperatures produced by the model under both scenarios. If the temperatures are higher under high levels of CO2, then he writes a press release which says, “Increasing CO2 greatly increases atmospheric surface temperatures.”
As you guessed, this argument is circular. This is not to say the CO2 hypothesis is wrong: in fact, the conclusion that “Increasing CO2 greatly increases atmospheric surface temperatures” is certain given the premise that “Increasing CO2 greatly increases atmospheric surface temperatures.” And this hypothesis might still be true given other evidence (a.k.a. premises).
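To make the circularity concrete, here is a deliberately silly toy in Python. Nothing in it is a real climate model; the “sensitivity” number and the logarithmic form are invented purely for illustration:

```python
import math

def toy_climate_model(co2_ppm, sensitivity=3.0):
    """Toy model: surface temperature anomaly (C) as a function of CO2.

    The sensitivity (warming per doubling of CO2) is written directly
    into the model -- assuming it is the circular step.
    """
    baseline_ppm = 280.0
    return sensitivity * math.log2(co2_ppm / baseline_ppm)

low = toy_climate_model(280.0)   # "pre-industrial" CO2
high = toy_climate_model(560.0)  # doubled CO2

print(f"low CO2 anomaly:  {low:+.2f} C")   # +0.00 C
print(f"high CO2 anomaly: {high:+.2f} C")  # +3.00 C
# The model warms when CO2 rises -- but only because we programmed it to.
# The "conclusion" merely restates the premise.
```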
Of course, real models are more complicated, but only slightly. As you say, one goal is to have a model “track” historic temperature. No climate model does this exactly—part of the broad agreement between models is because each model is not an independent creation—but models can be “tuned” so that they crudely mimic previously observed data while still containing the CO2 subroutine. This tuning is accomplished just as you say: in an ad hoc fashion.
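Your aerosol point can be sketched the same way. In this invented example (the observed-warming figure and CO2 numbers are placeholders, not data), whatever sensitivity a model starts with, an offsetting aerosol forcing is solved for so that the hindcast fits:

```python
import math

observed_warming = 0.8                 # C over the 20th century (illustrative)
doublings = math.log2(370.0 / 280.0)   # CO2 rise over the period, in doublings

for sensitivity in (1.5, 3.0, 4.5):    # C per doubling of CO2
    greenhouse_warming = sensitivity * doublings
    # "Tune": solve for the aerosol cooling that makes the hindcast fit.
    # (A negative value means a low-sensitivity model needs extra warming.)
    aerosol_cooling = greenhouse_warming - observed_warming
    print(f"sensitivity {sensitivity:.1f} -> aerosol cooling {aerosol_cooling:+.2f} C")
```

Every sensitivity “tracks” the observed record equally well once its offsetting knob is set, which is why agreement with history cannot tell us which sensitivity is right.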
A model that tracks historic data is not a reason to believe the CO2 hypothesis. There are an infinite number of hypotheses that might account for the historic observations: the CO2 hypothesis (coupled with other hypotheses about how the atmosphere works) is just one of these.
We can whittle down the set of explanatory hypotheses by asking how each of them fit in with true or highly likely non-climate hypotheses, such as the theories of thermodynamics, etc. Indeed, the CO2 hypothesis is consonant with some of these theories. Thus far, this is the only evidence we have for the CO2 hypothesis.
The true test of climate models, hence of the CO2 hypothesis, will be in how well they predict data not yet seen. If they do that more skillfully than other, parsimonious climate models—like “persistence, or something like it”—then we would have non-circular evidence that the CO2 hypothesis is true. We do not (yet?) have this evidence.
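For readers who have not met skill scores, here is a minimal sketch of the out-of-sample comparison I mean, run on simulated numbers (no claim about real temperatures). A model has skill only if it beats a cheap reference forecast such as persistence:

```python
import random

random.seed(1)
# Simulated annual anomalies: a slow drift plus noise.
obs, t = [], 0.0
for year in range(50):
    t += 0.01 + random.gauss(0, 0.1)
    obs.append(t)

def mse(forecasts, actuals):
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

persistence = obs[:-1]                 # forecast: next year equals this year
model = [x + 0.01 for x in obs[:-1]]   # stand-in "model": persistence plus drift
actuals = obs[1:]

skill = 1 - mse(model, actuals) / mse(persistence, actuals)
print(f"skill vs persistence: {skill:+.3f}  (positive = beats persistence)")
```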
Later, I’ll try to talk about the other matters you mentioned.
Lubos Motl the other day talked about the stupidity of claiming that a climate model could, ever, be run in time reversal to, say, predict the climate of 1950.
This is because of the 1st law of thermodynamics, entropy.
The argument is so obvious, so glaring, that I even think “how come I didn’t think of this?” The law basically “states” that predicting the past is qualitatively different from predicting the future: the two cannot be done in the same way, and perhaps the former is even completely impossible.
Imagine a gedankenexperiment. In this experiment we have 4 glasses of water. One of them is very, very hot, while all the others are relatively cold. We can predict that in the future, if you mix these waters, they will all have the same temperature. This prediction is not only possible, but very precise. Now imagine the other way around. You have 4 glasses of water all at the same temperature, and you know that “something happened”. But what exactly? Was one glass colder than all the others? Were 2 colder and the other 2 warmer? There are infinite possibilities, and you are left stranded.
This distinction of past and future is in the very definition of the arrow of time, and is what defines our own sense of time. In the second case, the information of what happened is “lost”, and that’s one definition of entropy right there: entropy has just increased and information has been lost. So you cannot do “predictions” for the past.
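If you prefer code to glasses, here is a tiny Python sketch of the same point, with invented temperatures:

```python
def mix(temps):
    """Idealized mixing: equal masses equilibrate to the mean temperature."""
    mean = sum(temps) / len(temps)
    return [mean] * len(temps)

past_a = [90.0, 20.0, 20.0, 20.0]  # one very hot glass, three cold
past_b = [55.0, 55.0, 10.0, 30.0]  # a completely different history

print(mix(past_a))  # [37.5, 37.5, 37.5, 37.5]
print(mix(past_b))  # [37.5, 37.5, 37.5, 37.5]
# Forward prediction is deterministic, but the map is many-to-one:
# from the mixed state alone you cannot recover which past produced it.
```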
As you see, the problem here is potentially fatal to all models that purport to “predict” the climate of 1950.
Now my question is, how much of a problem is this really?
Seems to me the way climate modeling (& “research”) is done is much like how those old-time philosophers assessed the number of a horse’s teeth…right down to how they objected to getting actual facts:
In the year of our Lord 1432, there arose a grievous quarrel among the brethren over the number of teeth in the mouth of a horse. For thirteen days the disputation raged without ceasing. All the ancient books and chronicles were fetched out, and wonderful and ponderous erudition such as was never before heard of in this region was made manifest. At the beginning of the fourteenth day, a youthful friar of goodly bearing asked his learned superiors for permission to add a word, and straightway, to the wonderment of the disputants, whose deep wisdom he sore vexed, he beseeched them to unbend in a manner coarse and unheard-of and to look in the open mouth of a horse and find answer to their questionings. At this, their dignity being grievously hurt, they waxed exceeding wroth; and, joining in a mighty uproar, they flew upon him and smote him, hip and thigh, and cast him out forthwith. For, said they, surely Satan hath tempted this bold neophyte to declare unholy and unheard-of ways of finding truth, contrary to all the teachings of the fathers. After many days more of grievous strife, the dove of peace sat on the assembly, and they as one man declaring the problem to be an everlasting mystery because of a grievous dearth of historical and theological evidence thereof, so ordered the same writ down.
http://bedejournal.blogspot.com/2008/02/francis-bacon-and-horses-teeth.html
http://answers.google.com/answers/threadview?id=713157
To be precise, Luis Dias, the first law of thermodynamics states: Energy can neither be created nor destroyed, only transformed.
Your intellect is quite superior to mine if you are able to say that this “obviously” leads to the fallacy that you point out. Perhaps you meant the 2nd law, which basically states that the universe cannot get less chaotic. Even that is a stretch to me.
Also, I have never heard entropy defined as “information lost”.
Adam, I surely deserved that, what a monumental typo!!!
The information about past states is lost due to the incessant increase in entropy. This degeneration is obvious if you, for instance, define “information” as a difference between two states, e.g. a bit (1, 0). If entropy is at a maximum, all is thermalized, and no information exists (all is equally irrelevant).
But my example was simple enough for one to understand. Let me simplify it for you even further. Imagine the classic example of two boxes, one with a red gas, one with a blue gas. If you open them up to one another, you do know what will (most likely) happen: they will become one single purple gas. But if someone else enters the room after this event, he will not know what has just happened. He cannot predict the past. He will not know whether the first box contained the red or the blue gas; he will not know whether they were purple to begin with.
Retrodiction is different from prediction. Past =/= future.
The most unsettling passage of the Curry/Scientific American article:
“‘Not to say that the IPCC science was wrong, but I no longer felt obligated in substituting the IPCC for my own personal judgment,’ she said in a recent interview posted on the Collide-a-Scape climate blog.”
Statistical methods are the least of the problems if scientists have an “obligation” to anything other than the science.
“Lubos Motl the other day talked about the stupidity of claiming that a climate model could, ever, be run in time reversal to, say, predict the climate of 1950.”
Does anyone try this? Why? I agree it is doomed to failure. Don’t researchers start e.g. with a 1950 state and run time forward from there? Isn’t that what is meant by “tracking historic data”?
Luis,
IMHO, your first example is better than the gas example but only because there is no apparent reason to believe that the glass of water has a memory of its temperature just prior to mixing. Your red and blue gases may have memory of their prior states – depending on the source and persistence of their redness and blueness.
This is an interesting line of thought, since temperature proxies surely assume that there is some memory of past temperature that is distinct from the other climate and environmental factors that are as commingled in the proxy material as your red and blue gases, not to mention your glasses of water.
The abuses are not piling up in just climate. A Canadian Commission authorized in 2009 began hearings this week to determine the cause(s) of the lower-than-predicted returns of sockeye salmon to the Fraser River. The salmon, not having read the model, returned in 2010 with 24 million more fish than predicted, the largest run since 1913.
One would assume that all testifying before the Commission must keep a straight face while doing so.
So, Global Warming shares some deep insights with homeopathy? That would be greatly ironic, to see campaigners against homeopathy like PZ and Plait defending what would turn out to be a homeopathic science… of course I’m merely fantasizing…
Yes, this is one of my doubts as well. I’m somewhat lazy, so I haven’t checked it out, that’s why I’m asking…
A couple of comments:
1. Models run backwards won’t work because dissipative processes conform to the arrow of time; that is, the equations are not invariant under a change in the sign of time. (A minimal numerical sketch of this point follows after the multiple choice below.)
2. My favorite Bayesian question in this business is: “Given that outside agencies have found errors and poor organizational practices embedded in the IPCC (dare we say ‘corrupt’ practices), what is the probability that they have got the science assessment right?” Multiple choice:
A. Organizational malfeasance does not affect the science appraisal
B. Zero
C. “Very likely that most of the” assessment is due to human bias.
D. “Very likely that most of the” assessment is right
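On point 1, a minimal numerical sketch (invented initial data, the simplest possible scheme): explicit finite differences on the 1-D heat equation smooth the profile going forward, but flipping the sign of the time step amplifies every wiggle until the solution blows up.

```python
def diffuse(u, dt, steps, k=1.0, dx=1.0):
    """Explicit Euler for du/dt = k * d2u/dx2 with fixed endpoints."""
    u = list(u)
    for _ in range(steps):
        new = u[:]
        for i in range(1, len(u) - 1):
            new[i] = u[i] + k * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
        u = new
    return u

bump = [0.0] * 5 + [1.0] + [0.0] * 5   # a localized hot spot

forward = diffuse(bump, dt=+0.2, steps=50)
backward = diffuse(bump, dt=-0.2, steps=50)  # "time reversed"

print("forward spread :", max(forward) - min(forward))    # small: smoothed away
print("backward spread:", max(backward) - min(backward))  # astronomical: blew up
```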
@Luis –
I always thought “greatly ironic” and “PZ” kind of went hand-in-hand.
Stevebrookline:
That is exactly what I am assuming too.
What Lubos meant, I think, was that if you translate the article back to what the model had done, it amounted to running the model backwards.
If the climate modelers know what they are doing, why do they have over 20 different models?
Back in 2006 the BBC, in conjunction with Oxford Uni, tried an experiment using a computer model that used the spare processing power of anyone who downloaded it to predict future climate. They initially had to withdraw it because they’d got the sulphur dioxide (I think) effects wrong and the models were sliding way off. It was a bit of a blooper, because I don’t think anywhere near as many people signed up for version 2.
The site isn’t a bad primer for understanding climate models… and how crap they are. But they’re much better now… Aren’t they?
The model starts at least in 1920 (didn’t download it so I’m going on the graphics) and runs up to 2080. But they started the graphs in 1960.
http://www.bbc.co.uk/sn/climateexperiment/theexperiment/abouttheexperiment.shtml
If you look at the results for the UK
http://www.bbc.co.uk/sn/climateexperiment/theresult/abouttheresults.shtml
The values are already well off the mark. We’re now at or below zero anomaly and the last two years were towards the lower end of the model. Still I expect that’s just natural variation. Ha, ha. /sarc.
http://www.metoffice.gov.uk/climatechange/science/monitoring/hadcet.html
I think one of the issues with models is the lack of clarification of whether a model is being TUNED or whether it is being developed to more accurately model reality.
As Ms. Curry, yourself, and others who would appear to know have stated, when you start with an assumption and TUNE the model to match the past without ever reconsidering the central assumption, you are indulging in circular reasoning.
When you research and attempt to improve the model by testing ALL assumptions you may be practicing science.
http://climateprediction.net/content/modelling-climate
More detailed information on modelling from the team behind the BBC thing. They also have an interesting list of ongoing experiments including one testing MWP models.
Ray,
Go to your room.
It turns out that the uncertainty for most of the GCMs is on the order of ±1 degree C per year, therefore any forecast or prediction using these models is mostly worthless. It is no wonder the models cannot successfully predict anything. The crime here is that the “scientists” proclaiming doom via AGW know that the models are rife with error, yet they ignore the uncertainty and proclaim impending disaster.
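Taking that ±1 degree C per year figure at face value (I have not audited any GCM; this is back-of-envelope only), a quick Monte Carlo shows why such an error bar swamps the signal:

```python
import random, statistics

random.seed(42)
horizon = 100          # years ahead
trend = 0.02           # assumed "true" warming, C per year (illustrative)
sigma_per_year = 1.0   # the claimed per-year model error, taken at face value

finals = []
for _ in range(2000):  # Monte Carlo runs
    noise = sum(random.gauss(0, sigma_per_year) for _ in range(horizon))
    finals.append(trend * horizon + noise)

print(f"signal after {horizon} years: {trend * horizon:.1f} C")
print(f"spread (std) of forecasts : {statistics.stdev(finals):.1f} C")
# Under these assumptions the noise (~10 C) dwarfs the signal (~2 C).
```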
Thanks, look forward to future dialogue on this!
Luis Dias says (26 October 2010 at 7:19 am): “Lubos Motl the other day talked about the stupidity of claiming that a climate model could, ever, be run in time reversal to, say, predict the climate of 1950.” Several points:
(i) The gedankenexperiment with 4 glasses of water is a lousy guide: what it says is that starting off with a system in equilibrium, you can’t uniquely model how it got there. True, but that’s not an intrinsic problem in climate modelling: the atmosphere/ocean/etc are never at equilibrium, are they?
(ii) If it’s meant to be a universal statement, it’s false. I’ve written lots of models that run “backwards” perfectly well. Some have run backwards better than they’ve run forwards. It all depends on the content of the model, including the boundary conditions.
(iii) If it’s meant to be specific to climate models, then the answer is the one given above in the comment thread; “running the model backwards” can reasonably be interpreted as a shorthand to mean “run it forwards from (say) 1900 to 1950, or 1100 to 1900, and compare with the observations”.
(iv) Sometimes the difficulty of running a model backwards stems from an artefact of that particular model. For example, the modellers might have tried to ameliorate a severe, shock-like, near step-change, by sticking a rather unphysical dispersion term into the model. That might be a decent pragmatic repair job for integrating in the time dimension, but might also bugger things up in the opposite direction. (I mention this because it once taught me not to assume that Global Warmers were likely to argue in good faith.)
When I discovered what ‘parameterisation’ was I realised the models were little different from an ‘etch-a-sketch’. The knob twiddling keeps the line going in the direction you want.
dearieme,
Thanks for your comments. Of course, my experiment is an extreme case, but it is useful to point out that there is information loss in thermodynamic cases. It doesn’t matter if the system is “in equilibrium” or not, since any information you have regarding the system at, say, 1950 will be completely irrelevant to “predicting” the state of the system at, say, 1940.
It is not meant as a universal statement; I made it clear that I don’t know the “impact” of this small personal epiphany of mine. Moreover, if we make mechanical models instead of thermodynamical ones (planetary orbit models, for instance), this problem disappears altogether. But it seems to me very difficult to argue that this isn’t a problem for thermodynamical models.
…can reasonably be interpreted as a shorthand to mean “run it forwards from (say) 1900 to 1950, or 1100 to 1900, and compare with the observations
Yes, it can. Still, Lubos was referring to a scientific model used to “predict” the climate of Tanzania of 1950 or so, and it was amazing because it predicted it very well (!)… so even if that’s the case for the “majority” of such uses, there are clearly people using models in insanely wrong ways. How many? Dunno.
I’m more with Luis Dias on this.
First, numerical methods are generally optimized on a history of past steps to smooth out the higher derivatives going forward. Simple first order methods quickly lose accuracy, thus the reliance on further past information to keep them on track. You just can’t run time backwards without reformulating the method.
Secondly, these climate models do calculations on energy flows between cells, just like the heat flows between the glasses of water mentioned. Once the energy is mixed from several cells into another, there is no way to really know where it came from. Entropy does matter. You can’t go backwards without guessing which cell to send the energy into. There is no predictive factor that will tell you this like there is going forwards.
Luis:
The models are not run backwards in time. Rather, they are started at some period in the past, run forward in time, and compared to known historical data. This is what is referred to as “hindcasting”. There are many potential issues and problems with hindcasting of the climate models, but violating the 2nd Law’s “arrow of time” is not one of them.
Hindcasting is standard methodology for all sorts of models, not just climate models. Before you go to the time and expense of making predictions about the future, waiting for that future period to arrive, and then doing the comparison, it makes great sense to compare model predictions against already collected data.
Of course, you must have high enough quality data to make valid comparisons, and the process must be done “without peeking” to get an honest evaluation. Refinements of the model need to be done based on underlying principles rather than just in attempt to “wiggle match” the data.
Proper hindcasting of a model is widely considered to be a “necessary, but not sufficient condition” of the model’s validity. Stock market investment timing models are notorious for great hindcasting and zero predictive capability.
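A minimal sketch of that “without peeking” discipline, on made-up data: calibrate a simple trend on the early half only, then score the untouched later half. Refitting on the test period would be exactly the “wiggle matching” to avoid.

```python
import random

random.seed(7)
years = list(range(1900, 2001))
series = [0.005 * (y - 1900) + random.gauss(0, 0.15) for y in years]  # fake data

train = [(y, v) for y, v in zip(years, series) if y <= 1950]
test = [(y, v) for y, v in zip(years, series) if y > 1950]

# Least-squares linear trend fitted to the training period ONLY.
n = len(train)
sx = sum(y for y, _ in train); sy = sum(v for _, v in train)
sxx = sum(y * y for y, _ in train); sxy = sum(y * v for y, v in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

# Score on the held-out years without touching the fit.
mse = sum((slope * y + intercept - v) ** 2 for y, v in test) / len(test)
print(f"trend fit on 1900-1950     : {slope * 100:.2f} C/century")
print(f"out-of-sample MSE, 1951-2000: {mse:.4f}")
```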
Curt,
The models have too many tunable parameterizations. Give a scientist something to tune and the knob will be turned. They can’t help it. I know how these things go because I was asked to do something similar by my professor for a paper a long time ago. It was fully disclosed so wasn’t fraud like “hide the decline” and was even approved by a famous physicist before we published it.
I also understand there are boundary conditions imposed in the models to keep them from blowing up and going into physically impossible ranges. Once a model works for a hindcast, there is no reason it should work when the internal states are forced out of the range of values in the hindcast because the boundary conditions and parameterizations are tuned only for that range. Increase the CO2 level and you are out of the working range.
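That last point can be sketched generically (nothing here is a real climate model; the “truth” and the fit are both invented): a flexible model tuned to one range of inputs can do anything at all outside it.

```python
import math

def truth(x):
    return math.sin(x)              # stand-in for the real system

xs = [i * 0.25 for i in range(13)]  # calibration inputs: x in [0, 3] only
ys = [truth(x) for x in xs]

# "Tune" a degree-5 polynomial by least squares via the normal equations
# (the Gram matrix is symmetric positive definite, so plain Gaussian
# elimination without pivoting is fine at this small size).
deg = 5
M = [[sum(x ** (i + j) for x in xs) for j in range(deg + 1)] for i in range(deg + 1)]
b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(deg + 1)]
for i in range(deg + 1):
    for j in range(i + 1, deg + 1):
        f = M[j][i] / M[i][i]
        M[j] = [a - f * c for a, c in zip(M[j], M[i])]
        b[j] -= f * b[i]
coef = [0.0] * (deg + 1)
for i in range(deg, -1, -1):
    coef[i] = (b[i] - sum(M[i][j] * coef[j] for j in range(i + 1, deg + 1))) / M[i][i]

def fitted(x):
    return sum(c * x ** k for k, c in enumerate(coef))

print("inside tuned range,  x=1.5:", round(truth(1.5), 3), "vs", round(fitted(1.5), 3))
print("outside tuned range, x=6.0:", round(truth(6.0), 3), "vs", round(fitted(6.0), 3))
```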
James,
I’m well aware of the problems with the models such as excess parameterization. Kiehl’s 2007 paper showing how the various models on which the IPCC relies all get the “same answer” with vastly different parameter values is a good overview of that particular issue. I was simply pointing out to Luis that running backwards in time for hindcasts was not one of the many problems with the models.