This is an editorial that I sent out to various places.
I am one of the scientists who attended the recent Heartland Climate Conference in Manhattan, where I live. It is my belief that the strident and frequent claims of catastrophes caused by man-made global warming are stated with a degree of confidence not warranted by the data.
Although it is a logical fallacy to rebut an argument by pointing to its funding, let me say first that I have never accepted any money (except my graduate student tuition) for the work I have done in statistical meteorology and climatology. Incidentally, it isn’t because I wouldn’t, it’s just that nobody’s ever offered. I also did not take the one-thousand-dollar honorarium from Heartland for speaking at this conference.
At the conference, I presented the same original research that I recently gave at the American Meteorological Society conference in New Orleans. I serve on the Probability and Statistics Committee of the AMS. This work is based on a paper I wrote, about to appear in the Journal of Climate, which shows that tropical storms and hurricanes have not increased in number or intensity since we have had reliable satellite measurements. I also find that the crude statistical methods others have previously used to analyze hurricanes have given misleading results.
It is trivially true that man, like every other organism, influences his environment, and hence his climate. The only questions are how much, whether the change is harmful, and whether the harm can be mitigated. It is indisputable that mankind causes climate change, even harmful change. But most of this change is local and due mainly to land use modifications. For example, replacing a forest with crop land creates different heat exchange characteristics in the boundary layer. These differences are easily measurable: cooler nighttime temperatures over crop land are one easy example.
It is important to recognize that some changes to our climate are beneficial. That converted crop land, for example, feeds people, which most would agree is a benefit. Diverted and dammed rivers provide water.
We also know with near certainty that carbon dioxide has been increasing since the late 1950s. We are less certain, though nearly sure, that it has been increasing since about 1900. Before that date, we are even less certain of the global average amount. The reason is that before 1959 there were no consistent direct atmospheric measurements, so we must estimate the values from proxies. Converting proxies to estimates requires statistical modeling. Part of every statistical model is, or should be, a quantification of the uncertainty of its estimates. This uncertainty is known to those who convert the proxies, but it is nearly always forgotten by those who use the estimates as input to climate or economic models.
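To make the point concrete, here is a minimal sketch in Python of how a proxy is calibrated against direct measurements; every number in it is invented for illustration. The prediction interval printed at the end is the uncertainty that travels, or should travel, with every proxy-derived estimate.

```python
import numpy as np
from scipy import stats

# Entirely synthetic data: pretend 'proxy' is some indirect record and
# 'co2' is the atmospheric concentration we wish we could measure directly.
rng = np.random.default_rng(1)
n = 30
proxy = rng.uniform(0, 10, n)
co2 = 280 + 5 * proxy + rng.normal(0, 8, n)   # assumed true relation plus noise

# Fit the calibration model co2 = a + b * proxy by least squares.
X = np.column_stack([np.ones(n), proxy])
beta, *_ = np.linalg.lstsq(X, co2, rcond=None)
resid = co2 - X @ beta
s2 = resid @ resid / (n - 2)                  # residual variance

# Predict CO2 for a new proxy value, with a 95% prediction interval,
# the honest statement of uncertainty for a single reconstructed value.
x0 = np.array([1.0, 7.5])
pred = x0 @ beta
se = np.sqrt(s2 * (1 + x0 @ np.linalg.inv(X.T @ X) @ x0))
t = stats.t.ppf(0.975, n - 2)
print(f"estimate {pred:.1f} ppm, 95% interval "
      f"[{pred - t * se:.1f}, {pred + t * se:.1f}] ppm")
```

Anyone who then feeds only the point estimate into a climate or economic model has quietly thrown that interval away.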
It is absolutely clear that mankind is responsible for a portion of the carbon dioxide increase. What most people (not climatologists, but others) do not know is that this portion is only a fraction of the increase. The rest of the increase is due to other causes. These causes are “not fully understood,” a phrase you have often seen, and one that means we are not certain.
Temperatures have been directly measured for a little over a century. The number of locations at which temperature is taken has gradually increased, reaching something like full coverage only in the last thirty to forty years. It is certain that at many individual stations mankind has caused changes in the measured temperature: warming, through the urban heat island effect, and cooling, through land use changes.
Joining these disparate measurements to form an estimate of global average temperature, while controlling for the changes and increases in locations and for the changes known to be due to the urban heat island and other land use effects, again requires statistical modeling, and very difficult and uncertain statistical modeling at that. The resulting estimate should be presented with its error bounds, though it never is. These error bounds are currently larger than any projected increases in temperature, which makes it difficult or impossible to verify climate model output.
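A toy calculation shows what it means for the error bound to swallow the trend; the station count, measurement error, and projected trend below are all invented for illustration.

```python
import numpy as np

# Invented numbers throughout: this only illustrates how the error bound
# on an estimated global mean can exceed a small projected trend.
rng = np.random.default_rng(2)
n_stations = 20                  # sparse early-network coverage
station_err = 0.7                # deg C: siting, instruments, adjustments
anomalies = rng.normal(0.0, station_err, n_stations)

global_mean = anomalies.mean()
se = anomalies.std(ddof=1) / np.sqrt(n_stations)   # naive standard error
half_width = 1.96 * se                             # ~95% bound

projected_decadal_trend = 0.2    # deg C per decade, a typical headline figure
print(f"estimated mean {global_mean:+.2f} C, 95% half-width {half_width:.2f} C")
print(f"projected decadal change {projected_decadal_trend:.2f} C")
# Correlated stations and shared adjustment errors would widen the honest
# bound beyond this independent-errors calculation.
```

When the bound is wider than the change you are looking for, the change cannot be verified from the data.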
Surprisingly, climate models are not certain. We have deduced, and therefore know, the fundamental equations of motion, but there is some uncertainty in how to solve them inside a computer. We are also fairly sure of the physics of heat and radiative transfer, but there is large uncertainty in how best to represent that physics in computer code, because climate models describe processes at very large scales while heat physics takes place at the microscopic level. So the physics is parameterized, which increases the uncertainty in the climate model forecast.
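A small invented example shows what parameterization means. Suppose the true physics says emission goes as the fourth power of temperature, as in radiative transfer, but the model only sees the grid-average temperature; a correction factor must then be tuned, and it is only right for the regime on which it was tuned.

```python
import numpy as np

# Toy parameterization: the real flux averages T**4 over the fine scale,
# but a coarse model can only compute with the grid-average temperature.
rng = np.random.default_rng(3)

def true_flux(T_fine):
    return np.mean(T_fine ** 4)          # average of the fine-scale physics

def grid_flux(T_mean, c):
    return c * T_mean ** 4               # the parameterized stand-in

# "Tune" c in one climate regime (small sub-grid variability)...
T_calm = 288 + rng.normal(0, 2, 100_000)
c = true_flux(T_calm) / T_calm.mean() ** 4

# ...then apply it in another regime (large sub-grid variability).
T_stormy = 288 + rng.normal(0, 10, 100_000)
err = grid_flux(T_stormy.mean(), c) / true_flux(T_stormy) - 1
print(f"tuned c = {c:.6f}, relative flux error in the new regime: {err:+.3%}")
```

The error is small here because the invented numbers are mild; the point is that it exists at all, and that nobody knows its true size in a real model.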
All climate models undergo a “tuning” process, whereby the parameterizations and other parts of the computer code are tweaked so that the model better fits the past observed data. This necessary step always increases the uncertainty we have in predicting independent data, which is data that has not been used in any way to fit or tune the models. And it is a fact, and therefore certain, that climate models have so far over-forecast independent data, meaning they have said temperatures would be higher than have actually occurred.
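The effect is easy to demonstrate with synthetic data: the more heavily a model is tuned to the past record, the better it fits that record, while its skill on what comes after does not improve to match.

```python
import numpy as np

# Synthetic demonstration of tuning versus independent data: polynomial
# "models" of increasing flexibility are fit to the first part of a noisy
# record, then asked to predict the part they never saw.
rng = np.random.default_rng(4)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

train, test = slice(0, 25), slice(25, 40)    # past record vs what came after
for degree in (1, 3, 6, 9):
    coeffs = np.polyfit(x[train], y[train], degree)
    fit = np.polyval(coeffs, x)
    in_err = np.sqrt(np.mean((fit[train] - y[train]) ** 2))
    out_err = np.sqrt(np.mean((fit[test] - y[test]) ** 2))
    print(f"degree {degree}: in-sample RMSE {in_err:.2f}, "
          f"out-of-sample RMSE {out_err:.2f}")
```

The in-sample error falls with every extra knob; the out-of-sample error, the only one that matters for a forecast, eventually grows.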
Lastly, there is the abundance of secondary research that uses climate model output as fixed input. This is the work that shows global warming causes every possible ill. I have never met one of these studies that quantified the uncertainty that comes from assuming the climate models are error free. This means that their conclusions are vastly overstated.
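Here is a sketch of the mistake, with an invented “impact” function and an invented model spread: treating the model’s output as exact yields a single confident number, while carrying the model’s own uncertainty through yields a very wide range.

```python
import numpy as np

# A downstream study that treats a modeled warming of 3.0 C as exact,
# versus one that propagates the model's spread. The impact function
# and all numbers are hypothetical.
rng = np.random.default_rng(5)

def impact(dT):
    return 10.0 * dT ** 2            # hypothetical damage index

fixed_dT = 3.0                                 # model output taken as error free
uncertain_dT = rng.normal(3.0, 1.0, 100_000)   # same output with its spread

propagated = impact(uncertain_dT)
lo, hi = np.percentile(propagated, [2.5, 97.5])
print(f"model output treated as fixed: impact = {impact(fixed_dT):.0f}")
print(f"model uncertainty propagated: 95% range {lo:.0f} to {hi:.0f}")
```

The point estimate looks precise; the honest range shows how much of that precision was borrowed, not earned.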
Too many people are too confident about too many things. That was the simple message of the Heartland conference, and one that I hope sinks in.