As is well known by now, a passel of climatologists in the 1970s, including such personalities as Stephen “It’s OK to Exaggerate To Get People To Believe” Schneider, tried to get the world excited about the possibility, and the dire consequences, of global cooling.
From the 1940s to near the end of the 1970s, the global mean temperature did indeed trend downwards. Starting from this data, and from the argument that any change in climate is bad, and that anything bad must be somebody's fault, Schneider and others began to warn that an ice age was imminent, and that it was mainly our fault.
This global cooling was said to have two main causes: orbital forcing and an increase in particulate matter (aerosols) in the atmosphere. Orbital forcing, a fancy term for changes in the earth's distance and orientation to the sun, and the consequent alterations in the amount of solar energy we receive, was, as I hope is plain, nobody's fault, and because of that it excited very little interest.
But the second cause had some meat behind it; because, do you see, aerosols can be made by people. Drive your car, refine some oil, smelt some iron, even breathe, and you are adding aerosols to the atmosphere. Some of these particles, if they diffuse to the right part of the atmosphere, will reflect direct sunshine back into space, depriving us of its beneficial warming effects. Other aerosols will gather water around them and form clouds, which both reflect incoming radiation and capture outgoing radiation; clouds both cool and warm, and the overall effect was largely unknown. Aerosols don't hang around in the air forever: since they are heavy, over time they fall or wash out. It is also hard to do much to reduce the man-made aerosol burden of the atmosphere, beyond the obvious and easy things, like installing cleaner smokestacks.
Pause during the 1980s when nothing much happened to the climate.
Then, starting in the 1990s, the Earth's temperature noticeably began to increase. So back to the old argument: any change is bad, and it's somebody's fault. The main culprit everybody now knows: increasing carbon dioxide, a gas which (fairly inefficiently, actually) captures outgoing radiation, leading to warming. Both CO2 and warming also tend to increase plant production (making a greener world), but never mind that. Aerosols are still in the game, but now are cast as mitigators: the sunlight they reflect helps to cool things off (the overall effect of clouds is still unknown).
Changes in orbital forcing still need reckoning, but these were and are largely ignored. These orbital changes, and their inevitability, form one of the two main differences in perception between cooling and warming.
For both global cooling and global warming, we were able to find a way to perceive the change as our fault: by ascribing it either to man-made increases in aerosols or to man-made increases in CO2. But back in the cooling days, we also had an unchangeable circumstance in the form of the Milankovitch cycle (the earth's orbit) and other obscure physics, about which nobody could do anything. Because of that, more people were resigned to their fate, so to speak, and thus more of them ignored the scientists.
Overall, then, in the late 1970s it was hard to get people very excited about mankind's effect on climate, though there was a consensus (a now-favorite word) that some kind of global cooling was coming our way. There just wasn't enough substance to hold the media's and the public's attention.
So how did global warming become so well known? If there had been only reports of man-made increases in CO2, as there had been of man-made increases in aerosols, I doubt the world would have taken much notice of global warming. But there was one other difference between the two theories that, I think, accounts for the heightened importance of global warming.
That difference is models.
Computational power in the 1970s was, of course, trivial compared to that of today. Complex global climate models back then were no more sophisticated than (actually, even less sophisticated than) the algorithms in the digital wristwatches of today. In short, intricate computer models, and, much more importantly, reliance on and trust in these models, were an impossibility then. It is not so now.
Predictions of global cooling, then, relied more on observations of actual cooling and on gross, on-paper approximations to the physics of the atmosphere. Today, predictions of global warming rely almost exclusively on the output of models. Computational power has certainly increased, and by orders of magnitude. Have the models themselves also improved?
Yes, but not as much as you would think, or hope. More of the physics is now known, but it is impossible (see this review) to completely insert this new physical understanding into the models. Many of the models' "subroutines" are based on parameterizations, which are educated, but not infallible, guesses of how certain processes (like clouds) work. Other model components, such as how the atmosphere interacts with the ocean, land, and outer space, are crude approximations (the latter, outer space, is related to orbital forcing, and is usually ignored). Other parts of the models are based on nothing more than assumptions of how the physics works in certain situations.
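To make "parameterization" concrete, here is a minimal sketch, in Python, of the kind of educated guess involved. Everything in it (the function name, the threshold, the linear ramp) is invented for illustration; real cloud schemes are far more elaborate, but the principle, replacing unresolved physics with a plausible rule, is the same.

```python
# A toy parameterization: instead of resolving cloud physics directly,
# estimate the cloud cover of a model grid cell from its relative
# humidity alone. The 0.8 threshold and the linear ramp are guesses,
# chosen here purely for illustration.

def cloud_fraction(relative_humidity: float, rh_threshold: float = 0.8) -> float:
    """Guess the fraction of a grid cell covered by cloud."""
    if relative_humidity <= rh_threshold:
        return 0.0  # assume no cloud forms below the threshold
    # Above the threshold, ramp linearly up to total overcast at saturation.
    return min(1.0, (relative_humidity - rh_threshold) / (1.0 - rh_threshold))

print(cloud_fraction(0.75))  # 0.0
print(cloud_fraction(0.90))  # 0.5
```

Note the point: the subroutine is not wrong, exactly, but neither is it physics; it is a guess standing in for physics.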
Too, we don't have accurate measurements of all locations, levels, and components (like temperature, moisture, wind speed, and so on) of the atmosphere, land, ocean (and outer space). The number of measurements we do have is minuscule compared to the actual size of the earth. So it is hard to reconcile the output of models with the actual state of the earth, though, of course, this is a necessary step. The process whereby the models are adjusted so that they conform both to the actual measurements and to the scientists' expectations is called tuning. All models undergo extensive tuning until the majority of their users are satisfied with the output.
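What tuning amounts to can be shown in caricature. The following sketch, with an invented one-parameter model and invented observations, just turns a knob until the output best matches the data; real tuning involves dozens of knobs and, as said above, the expectations of the tuners as well as the measurements.

```python
# A cartoon of model tuning: adjust a free parameter until the model's
# output best matches the few observations we have. The model, the data,
# and the parameter are all invented for illustration.

def toy_model(sensitivity: float, forcing: float) -> float:
    """Predicted temperature change for a given forcing."""
    return sensitivity * forcing

# Sparse (forcing, observed change) pairs standing in for real measurements.
observations = [(1.0, 0.4), (2.0, 0.9), (3.0, 1.2)]

def total_error(sensitivity: float) -> float:
    return sum((toy_model(sensitivity, f) - obs) ** 2 for f, obs in observations)

# Crude tuning loop: try candidate values and keep the one with least error.
tuned = min((s / 100 for s in range(1, 101)), key=total_error)
print(f"tuned sensitivity: {tuned:.2f}")
```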
The models are sophisticated; they rely on very difficult mathematics and require years of training in physics and computer science to understand and implement, so model tuning is an art. People devote entire careers to just tweaking these models. Books and journal articles regularly appear suggesting changes or offering new interpretations. There is an entire culture built around these models. It is also safe to say that no one person has a complete understanding of the models. This is why organizations like the IPCC must cobble together hundreds of scientists in order to summarize what these models are saying.
We have now arrived at the second difference in perception. To believe in global cooling, we had to believe in the individual scientists who propounded the theory; we had to have trust in their capabilities, their ethics, their motivations. There were not so many climate scientists back then, and the theories they touted were complex and difficult to explain to the public, so we basically had to take their word that what they predicted would come to pass.
In global warming, we no longer have to believe in individual scientists; we can instead choose to believe in their collaborative models. Computer models, I should say. Because to say a model was done on a computer gives it a certain lustre and mystery, which in turn makes it difficult to question its results. A lustre and mystery that are undeserved, however, because computers, I unfortunately have to emphasize, do nothing more than what they are told to do by people. We still have to trust in scientists' capabilities, motivations, and so on, only now this trust is once removed, and it becomes more a trust in technology.
That trust is, in 2007, nearly complete. But it is a trust that is not deserved. It is true that technology has done marvels in other areas of our lives, and is to be trusted in those areas. It is not true, however, that the climate models upon which scientists base their projections should be trusted. The models have not yet proven their capabilities; which is another way of saying that they do not yet make accurate predictions of actual observed conditions. Not even one percent of the effort devoted to working on these models is set aside for measuring how well they actually perform. The evaluation and verification studies that have been done so far imply that the amount of uncertainty in the output of these models is vastly underestimated.
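For what such verification even looks like, here is a minimal sketch with invented numbers: compare a model's predictions against observations and against a trivial baseline. A model only has skill if it beats the baseline.

```python
# A minimal verification check: does the model predict better than a
# trivial baseline (here, the mean of the observations, i.e. climatology)?
# All numbers are invented for illustration.

predictions = [0.3, 0.5, 0.2, 0.6, 0.4]
observed = [0.1, 0.6, 0.4, 0.2, 0.5]
baseline = [sum(observed) / len(observed)] * len(observed)

def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Skill score: 1 is perfect; at or below 0 means no better than the baseline.
skill = 1 - mse(predictions, observed) / mse(baseline, observed)
print(f"skill score: {skill:.2f}")
```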
But again, never mind, because there is also the sheer number of people making cataclysmic claims. Most of these people are not, of course, climatologists. They are instead people who use the results of climate models as input to their own models of economics, ecology, agriculture, sociology, and on and on. Aynsley Kellow, Professor of Government at the University of Tasmania,
…describes one paper published in the journal Nature in January 2004 that "warned of the loss of thousands of species with a relatively small warming over the next century. But just how virtual was this science is apparent when we consider that the estimates of species loss depended upon a mathematical model linking species and area; modelled changes in the distributions of areas of habitat depended in turn upon the results of climate models tuned to reflect climate changes as a result of increasing greenhouse gases; these in turn were driven by scenarios of what [such] emissions might look like over the next century, driven in turn by economic models." (Source: http://www.smh.com.au/articles/2007/12/21/1198175338154.html)
Model builds upon model, which builds upon other models, all of which make approximations and assumptions and so on. This is not the kind of work in which to put your beliefs. It is not the kind of work on which to write treaties or raise taxes. Yet it has become so.
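To see why chained models are so treacherous, consider this sketch: four made-up stages, patterned loosely after the chain Kellow describes, each adding a modest (here, plus or minus 20%) uncertainty of its own. Run the chain many times and the spread in the final answer is anything but modest. Every function, constant, and error range below is invented for illustration.

```python
# A sketch of model-upon-model: each stage takes the previous stage's
# output as input and contributes its own error.
import random

def economic_model(activity):          # emissions from economic activity
    return activity * random.uniform(0.8, 1.2)

def climate_model(emissions):          # warming from emissions
    return 0.5 * emissions * random.uniform(0.8, 1.2)

def habitat_model(warming):            # habitat area lost from warming
    return 10.0 * warming * random.uniform(0.8, 1.2)

def species_model(area_lost):          # species lost from area lost
    return 100.0 * area_lost ** 0.25 * random.uniform(0.8, 1.2)

# Propagate the chain many times to see the compounded spread.
results = [species_model(habitat_model(climate_model(economic_model(1.0))))
           for _ in range(10_000)]
print(f"predicted species loss ranges from {min(results):.0f} to {max(results):.0f}")
```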
(For an amusing article showing how some are trying to spin the old predictions about imminent cooling such that they actually were predictions of global warming, see this Wikipedia article. Note: this article, like all on that site, is subject to change.)