The rebuttal to the criticism of our original peer-reviewed climate model paper “Why models run hot” has been published in Science Bulletin. It is also peer-reviewed, and therefore it must be correct. Keeping it simple: the value of an irreducibly simple climate model. It’s free, so download and read.
Lord Monckton prepared a press release that we’re going to use for the British press—and here at my place. There is another, more sedate, version that will also circulate. Take it away, Christopher!
Four skeptical researchers’ new Chinese Academy paper devastatingly refutes climate campaigners’ attempt to rebut their simple model
In January 2015, a paper by four climate researchers published in the prestigious Science Bulletin of the Chinese Academy of Sciences was downloaded more than 30,000 times from the website at scibull.com. By a factor of 10 it is the most-read paper in the journal’s 60-year archive.
The paper presented a simple climate model that anyone with a pocket calculator can use to make more reliable estimates of future manmade global warming than the highly complex, billion-dollar general-circulation models previously used by governments and weather bureaux worldwide.
The irreducibly simple climate model not only showed there would be less than 1 °C of global warming this century, rather than the 2–6 °C the “official” models are predicting: it revealed why they were wrong.
By April, climate campaigners had published a paper that aimed to rebut the simple model, saying the skeptical researchers had not checked it against measured changes in temperature over the past century or more.
Now the skeptics are back with a fresh Science Bulletin paper. Keeping it simple: the value of an irreducibly simple climate model, by Christopher Monckton of Brenchley, Dr Willie Soon of the Harvard-Smithsonian Center for Astrophysics, Dr David Legates, geography professor at the University of Delaware, and Dr Matt Briggs, Statistician to the Stars (download the paper from www.scibull.com), explains that the simple model had not been tested against past temperature change because it was designed from scratch using basic physical principles.
Unlike the complex climate models, each of which uses as much power as a small town when it is running, the new, “green” model — which can be run on a solar-powered calculator — had not been repeatedly regressed (i.e., tweaked after the event) till it fitted past data.
Lord Monckton, the inventor of the new model and lead author of the paper, said: “Every time one tampers with a model to force it to fit past data, one departs from true physics. All other models were fudged till they fit the past — but then they could not predict the future. They exaggerated.
“We took the more scientific approach of using physics, not curve-fitting. But when the climate campaigners demanded that we should verify our model’s skill by ‘hindcasts’, we ran four tests of our model — one against predictions by the UN’s climate panel in 1990 and three against recent data. All four times, our model accurately hindcast real-world warming.
“On the first of our four test runs of our model (left), the 1990 forecast by the Intergovernmental Panel was a very long way further from reality than our simple model’s spot-on central estimate.”
Dr Willie Soon was subjected to a well-funded and centrally-coordinated campaign of libels to the effect that he had not disclosed that a utility company had paid him to contribute to the skeptical researchers’ January paper. Inferentially, the aim was to divert attention from the paper’s findings that climate alarm was based on a series of elementary mistakes at the heart of the complex models. In fact, all four co-authors had written the January paper and the new paper on their own time and their own dime.
Dr Soon said: “What matters to campaigners is the campaign, but what matters to scientists is the science. In 85 years’ time our little model’s prediction of just 0.9 °C global warming between now and 2100 will probably be a lot closer to observed reality than the campaigners’ prediction of 4 °C warming.”
Dr Matt Briggs said: “The climate campaigners’ attempted rebuttal of our original paper was littered with commonplace scientific errors. Here are just a few of those errors:
- The campaigners cherry-picked one scenario instead of many, to try to show the large models were better than our simple one. Even then, the large models were barely better.
- They implied we should tweak our model till it fitted past data. We used physics instead.
- They said we should check our model against real-world warming. We have. It works.
- They criticized our simple model but should have criticized the far less reliable complex models.
- They complained that our simple model had left out ‘many physical processes’. Of course it did: it was simple. Its skill lies in rejecting the unnecessary, retaining only the essential processes.
- They assumed that future warming rates can be reliably deduced from past warming rates. Yet there are grave measurement, coverage and bias uncertainties, particularly in pre-1979 data.
- They assumed that natural and manmade climate influences can be distinguished. They cannot.
- They said we should not have used a single pulse of manmade forcing. But most models do that.
- They said our model had not been “validated” when their own test showed it worked well.
- They said our model had not been “validated” when they merely disagreed with our parameters.
- They said we should not project past temperature trends forward. We did no such thing.
- They used root-mean-squared-error statistics, but RMSE statistics are a poor validation tool.
- They incorrectly referred to the closed-loop feedback gain as the “system gain”, but it is the open-loop gain that is the system gain.
- They inaccurately described our grounds for finding temperature feedbacks net-negative.
- They assumed that 810,000 years was a period much the same as 55 million years. It is not.
- They said we had misrepresented a paper we had cited, but their quotation from that paper omitted a vital phrase that confirmed our interpretation of that paper’s results.
- They said net-negative feedbacks would not have allowed ice ages to end. Yet the paper they cited gave two non-feedback reasons for sudden major global temperature change.
- They said temperature buoys had found a ‘net heating’ of half a Watt per square meter in the oceans: but Watts per square meter do not measure “heating”: they measure heat flow.
- They implied the “heating” of the oceans was significant, but over the entire 11-year run of reliable ocean temperature data the warming rate is equivalent to only 1 °C every 430 years.
- They said the complex models had correctly predicted warming since 1998, but since January 1997 there has been no global warming at all. Not one of the models had predicted that.
- They praised the complex models, but did not state that the models’ central warming prediction in 1990 has proved to be almost three times the observed warming in the 25 years since then.
- They failed to explain how a substantial reduction in feedback values in response to an unchanged forcing might lead, as they implied it did, to an unchanged climate sensitivity.
Professor David Legates said: “As we say in our new paper, the complex general-circulation models now face a crisis of credibility. It is perplexing that, as the models’ predictions prove ever more exaggerated, their creators express ever greater confidence in them. It is time for a rethink. Our model shows there is no manmade climate problem, and — so far — it is proving to be correct, which is more than can be said for the billion-dollar brains operated by the profiteers of doom.”
So, when will Monckton finally let go of his “luke-warm” pretensions?
The data show that there is no correlation, much less causation, between CO2 in the atmosphere and warming in the atmosphere (or oceans, for that matter).
This excellent work is destroying the entire myth of “greenhouse gasses.”
Recall Edward A. Murphy’s saying (the original Murphy’s law): “if there are two or more ways to do something and one of those ways can result in a catastrophe, then someone will do it.” Complex model advocates forecasting climate doom from human carbon emissions is the catastrophe. Your simple model proves Murphy’s law. Truth through simplicity! It’s a beautiful thing.
Briggs, you mentioned your paper is peer-reviewed. I’m not sure that really counts for anything these days. Read S. Fred Singer’s critique of peer-reviewing:
Peer Review is not what it’s cracked up to be:
http://www.americanthinker.com/articles/2015/08/peer_review_is_not_what_its_cracked_up_to_be.html
If you want to keep the rhetorical ring of Trotsky v. Stalin, you will need to adopt names that end in -ist. “Campaigners” is muffled and dull. Other than that, great paper.
Have you compared your simplified model to Dr. Spencer’s 1D model at all?
Oh dear. The Standard Model is not proper physics then, because some parameters have to be measured. Nor the Big Bang theory, because the Hubble constant doesn’t follow automatically; you have to measure that one too. And it has varied wildly over the last half century.
Sander, what are you talking about? The gravitational constant G has to be measured, Planck’s constant h has to be measured, etc., but THOSE are fundamental constants, unlike parameters in a climate model… There is a difference. If you don’t see it, I’ll try to explain in subsequent comments in more dog-and-pony-show jargon.
@Bob Kurland
Our host did not make that distinction.
The model may be simple but I found the paper anything but. A worked example would be nice…
Monckton et al.’s paper is an embarrassment to skeptics and ammunition for alarmists who characterize us as know-nothings. It is a farrago of ambiguities, misrepresentations, bad math, and bad physics.
Now, the authors could instead have satisfied themselves with saying only that they think the climate system’s closed-loop gain most likely falls between 0.21 K per W/m^2 and 0.35 K per W/m^2, with the median of that gain’s probability-density function at 0.26 K per W/m^2, and that for the purpose of determining multi-year trends it is harmless to treat that system as memoryless (i.e., to take the “transience fraction” as unity for such a coarse time resolution). Robert G. Brown has repeatedly said something similar without any objection from me. They could even have said they got their closed-loop-gain range by taking the IPCC open-loop-gain value of 0.31 K per W/m^2 and assuming a loop-gain range of -0.5 to +0.1 with a median at its average, -0.2.
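The closed-loop-gain figures in that alternative presentation can be checked in a few lines. This sketch assumes the 1/3.2 figure is the unrounded form of the 0.31 K per W/m^2 open-loop gain quoted above, and applies the standard feedback relation (closed-loop gain = open-loop gain divided by one minus the loop gain):

```python
# Check that the quoted closed-loop-gain range follows from the stated
# open-loop gain and assumed loop-gain range.
g0 = 1 / 3.2                     # open-loop gain, K per W/m^2 (0.31 rounded)
loop_gains = (-0.5, -0.2, +0.1)  # assumed lower bound, median, upper bound

closed = [round(g0 / (1 - f), 2) for f in loop_gains]
print(closed)  # [0.21, 0.26, 0.35] -- the range and median quoted above
```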
As thus presented, the model would indeed have been simple: you get the temperature change by simply multiplying the forcing change by the closed-loop gain. And the authors would have been candid in admitting that they’d just pulled the loop-gain values out of the air. That would have been fine to the extent that the resultant model works. (If you apply its central, 0.26 K per W/m^2 estimate to the 1.51 W/m^2 forcing change resulting from the RCP CO2-equivalent values for the 63 years preceding 2014, then, in contrast to the impression one may take from the Fig. 6 they apparently put in their press release, you actually get 0.06 K/decade rather than the 0.12 K/decade exhibited by HadCRUT4 for that period, but, hey, we’re just talking about models here, so what’s a hundred-percent difference among friends?)
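For what it’s worth, that parenthetical arithmetic is easy to reproduce. A minimal sketch using only the numbers quoted above:

```python
# Reproducing the parenthetical check: central closed-loop gain times the
# stated 63-year forcing change, expressed as a per-decade trend.
gain = 0.26      # K per W/m^2, central closed-loop-gain estimate
dF = 1.51        # W/m^2, RCP CO2-equivalent forcing change over the 63 years to 2014
decades = 6.3    # 63 years

trend = gain * dF / decades   # K per decade
print(round(trend, 2))        # 0.06, versus the 0.12 exhibited by HadCRUT4
```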
That’s not what they did in their paper, though. The authors instead purported to derive the memorylessness from the Gerard Roe paper, which says no such thing. They gave the impression that their conclusions were based on their Equation (1), whereas anyone reasonably well versed in feedback theory would recognize that equation to be like saying you can get the product of two numbers by taking their sum. And they got their loop-gain values by chanting “thermostasis” and “process engineers’ design limit.” Dr. Briggs says, “They implied we should tweak our model till it fitted past data. We used physics instead.” That’s not physics; it’s mumbo-jumbo.
The authors tout their Equation (1) as “irreducibly simple.” To the extent that it is used to obtain equilibrium values or results for memoryless systems, however, it is merely a more-cumbersome way of expressing the closed-loop-gain equation, which has been in use since before the authors were born. To the extent that it is used, as it was in their Table 6, to compute transient responses implied by the “evolutionary curves” of the Roe paper, it produces results for the RCP 2.6 scenario that are less than a third of what they should be.
In short, the paper is dreadful.
Ultimately a model’s value will be determined by how well it forecasts, not on what individual assumptions in the model one person prefers over another. Nearly all the climate models are dreadful. It has become apparent that climate models will always be dreadful until they are able to accurately predict temperature on shorter time scales, something none of them can do. And it’s not even certain they will ever have such an ability.
“Ultimately a model’s value will be determined by how well it forecasts, not on what individual assumptions in the model one person prefers over another. ”
True enough, and the multiplier (which is all they really have: a value they multiply total forcing change by to get temperature change) they proposed may end up being very close to working for this century as a whole. But that’s not what got them published. It was the errant nonsense they claimed was the basis for that multiplier.
Dr. Briggs says it was based on physics. I say it wasn’t.
Joe, if you read the papers, you’ll see they’re visibly based on physics. You say they’re not. Observation wins.
Now, what Monckton et al have done is constructed a simple physical model; they’ve shown it’s a better fit than the complex models. In general, more parsimonious models are preferred, and models that better fit all observations are preferred.
Neither statement means this model is more “right” in some sense, but when the existing models are proving unpredictive, and are being preserved by means of occult heat in the oceans and pretty arbitrary adjustments to actual ocean observations, it’s time to think maybe, just maybe, a new model is worth looking at.
“Joe, if you read the papers, you’ll see they’re visibly based on physics. You say they’re not. Observation wins.”
Not “visibly”; ostensibly. What they’re really based on is cargo-cult physics: it looks like physics unless you understand physics.
Suppose many people have said pigs can’t fly but I say I’ve come up with a new proof of the proposition. It’s simple, irrefutable logic: Mammals can’t fly, but pigs are mammals, so pigs can’t fly. It sounds just like logic, just like science. But it’s, well, hogwash; the premise is wrong. You may agree that pigs can’t fly, but my proof doesn’t add to the evidence for that proposition.
You can believe, as I do, that several investigators have made compelling cases for the proposition that the sensitivity of global average temperature to atmospheric CO2 concentration is low. Is it as low as Monckton et al. say? Maybe. Personally, I’m inclined to think it’s actually lower. Is the response as quick as they say? No, but on the time scales their paper deals with that fact may not matter. In short, I’m inclined to agree that the warming over the rest of the century will be significantly less than the IPCC says.
But nothing in their paper gives any critical thinker further evidence for drawing such a conclusion. Contrary to what they say, their temperature projection is not based on physics—or at least not on correct physics. In essence, they just pulled a sensitivity number out of a hat but wrapped it in enough misdirection that most people don’t realize it.
Now, their paper is too target-rich an environment to permit me to list all of its errors. When it comes to the temperature projection on which their claim of skill is based, though, most of those errors are irrelevant. That’s because, if you read the paper critically, you see that their projection is based on almost nothing the paper says.
What is the model they use for their projection? It’s not apparent behind all the logorrhea, but it’s actually nothing more than this:
ΔT = G · ΔF,

where ΔF and ΔT are the respective anomalies in total (CO2 plus other) forcing and global average temperature, and G is 0.26 kelvins per watt per square meter. (I use simpler symbols here than the ones in the equation they use for their “irreducibly simple” model.)
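Whatever the notation, the projection model is just a multiplier on the forcing change. As an illustration, here is what it gives for a CO2 doubling. Note that the 5.35 · ln(C/C0) forcing expression is the standard Myhre/IPCC formula, which I am supplying for the example; it is not drawn from Monckton et al.’s paper:

```python
import math

# Illustrative application of the multiplier model to a CO2 doubling.
# The 5.35 * ln(C/C0) forcing formula is the standard Myhre/IPCC
# expression -- an assumption of this sketch, not taken from the paper.
gain = 0.26                    # K per W/m^2, their central closed-loop gain
dF_2xCO2 = 5.35 * math.log(2)  # ~3.71 W/m^2 forcing from doubled CO2

dT = gain * dF_2xCO2
print(round(dT, 2))  # 0.96 K -- of the same order as the press release's "0.9 C" figure
```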
Where did that gain value come from? Well, despite the authors’ contention that what they are wont to call the “Bode feedback-amplification equation” is “the wrong equation” for climate work, they got the value by applying that equation to the IPCC value of 1 / (3.2 watts per square meter per kelvin) for the climate system’s open-loop gain and to the average, −0.2, of the two loop-gain values −0.5 and +0.1. And where, in turn, did those values come from?
In essence, the authors just made them up.
The following passage (in which the loop gain appears under the authors’ own symbol) is their sole justification for picking them:

“A plausible upper bound to [the loop gain] may be found by recalling that absolute surface temperature has varied by only 1 percent or 3 K either side of the 810,000-year mean [40, 41]. This robust thermostasis [42, 43], notwithstanding Milankovich and other forcings, suggests the absence of strongly net-positive temperature feedbacks acting on the climate. In Fig. 5, a regime of temperature stability is represented by [a loop gain] ≤ +0.1, the maximum value allowed by process engineers designing electronic circuits intended not to oscillate under any operating conditions. Thus, assuming [a loop gain] ≥ −0.5. . . .”
Despite being asked repeatedly to justify that upper, “process engineers’” value, lead author Christopher Monckton has been unable to do so, and it’s clear that the authors simply assumed the lower value. Any other values would have fit just as well into the above passage’s (pseudo-) logic. In other words, there was no physics there; they just pulled numbers out of a hat.
Even so, that model’s performance is hardly stellar. The observed 63-year temperature trend they exhibit in their Fig. 6 is 0.11 K/decade, and they say their model projects 0.09 K/decade. That’s pretty close, right? Still, the most-recent IPCC value they show, 0.13 K/decade, is not so shabby: it’s 0.02 K/decade above observations, whereas Monckton et al.’s projection is 0.02 K/decade below. Monckton et al.’s model is no better, but it’s no worse, either.
What Monckton et al. don’t tell you, though, is that to get that close to the observed value they used a forcing trend for the projection that’s half again the forcing trend that was experienced over those sixty-three years. In other words, the projected trend should have been half again the observed trend: it should have been 0.17 K/decade, or right between the two most-recent IPCC projections of 0.13 K/decade and 0.26 K/decade—and 0.08 K/decade above Monckton et al.’s projection.
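The “half again” correction is simple arithmetic; a sketch using the trend figures quoted above:

```python
# The "half again" correction: scale the observed trend by 1.5 to get
# what the model's projection implies under the forcing trend it used.
observed = 0.11    # K/decade, HadCRUT4 trend from their Fig. 6
projected = 0.09   # K/decade, the model's stated projection

corrected = observed * 1.5
print(round(corrected, 2))  # 0.17 K/decade, well above the 0.09 projected
```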
Of course, the HadCRUT4 numbers have been so massaged that the actual temperature trend over the last sixty-three years was likely lower than the dataset says, and, as I say, I doubt the IPCC projections. But the fact remains that, however much other investigators’ results may support a low projection, Monckton et al.’s paper adds nothing to the evidence for it.
And, as I said, that paper was just dreadful.