Reader Marvin was kind enough to pass on Wednesday’s Pajama’s Media article What is—and what isn’t—evidence of global warming to a “highly educated” person (those are not scare quotes, but Marvin’s words).
And that person was sweet enough to respond, in writing. I post that person’s (who is unknown to me) comments below, in block quotes; the bolds are in the original. I answer, point by point.
My remote interlocutor discusses only one argument of my article, the skill of the climate models. I will have much more to say later about the main thrust of my article, which is why most people believe in man-made global warming.
Climate does change on times scales both long and short.
We now have a better appreciation of where and when climate has changed in the past and what its effects may have been on human history.
There has been global warming over the last 150 years concurrent with an increase in atmospheric CO2 unprecedented in the last few millennia.
We are better equipped to detect and assess climate changes currently underway and their impacts, and to plan for future climate change.
There has not only been warming over the last 150 years. In some periods of time the temperature has gone up, in others down. Further, the certainty we have in the actual temperature measurements has increased in time. It has now reached the point where we can be roughly certain, but not in all areas—yet. Before 1900 or so, we are a lot less sure, and can only state changes to temperatures in a crude way, and only at a few locations. The global mean temperature before that date has a large plus/minus bound.
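To give a feel for why the plus/minus grows as we go back in time, here is a toy calculation (the station counts and per-station errors are invented, and station errors are heroically assumed independent):

```python
# A sketch (hypothetical numbers) of why the pre-1900 global mean carries a much
# larger plus/minus bound: fewer stations and larger per-station measurement error
# mean a larger standard error of the mean.
import math

def global_mean_se(n_stations, station_error):
    """Naive standard error of an equally weighted mean of independent stations."""
    return station_error / math.sqrt(n_stations)

# Invented figures, not actual station counts or errors:
print(global_mean_se(n_stations=3000, station_error=0.5))  # roughly 0.009 C, modern era
print(global_mean_se(n_stations=50, station_error=1.0))    # roughly 0.14 C, 19th century
```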
Given that, I will agree with my interlocutor that some sources would point to temperatures roughly increasing a small amount over the last 150 years. However, many sources (proxies) also point to other eras which were much warmer than the present.
CO2 has increased, true. Some of that increase is inarguably man-made. Some of it is natural and the result of the world growing warmer. I would shy away from the word “unprecedented”, especially as many studies point to CO2 and temperature roughly tracking each other. Before industrialization, it was likely the case that temperature increases preceded CO2 increases.
I agree that we can and should plan for climate change. Importantly, I have yet to see anybody attempt to define an “ideal” climate. Lastly, as I always say, it is a trivial fact that humans influence the climate; it is only a question of how much, etc.
What is at issue is whether the rapid increase in atmospheric CO2 and other greenhouse gases is capable, to first order, of inducing or accelerating global warming. To this end a number of climate research centers have been developing computer models that might show what the response of climate to increasing CO2 would be; their consensus is that the increasing CO2 is responsible for most of the global warming over the last century, and they predict substantially more global warming over the next century based on various scenarios regarding future anthropogenic greenhouse gas emissions. The article that you sent, and other critiques of these models, have made several points:
The models do not predict periods of global temperature stability during the last century.
The models fail to incorporate some physical processes and feedbacks, on relevant spatial and temporal scales, that could noticeably affect their predictions.
Regarding the second of these critiques we can safely say that continued research will provide new and better ways to model climate, like any other scientific endeavor. In any case, we try to use what we think we know today — unless it can be refuted or demonstrated to have uncertainty that renders it not useful. Some of the earlier refutations centered around inadequate physics have been progressively addressed as the models become more sophisticated, for example with regard to coupling multi-layer ocean circulation in a dynamic way, effects of soil moisture, ice caps and snow cover, solar variation, volcanic eruptions, etc.
It is rational to believe that “continued research will provide new and better ways to model climate” and that model physics will improve. But I do not agree with the suggestion that we use a partial, error-prone theory to make decisions simply because it is all we have. We do not just have the AGW theory—there are rivals.
There are many competing theories for why the climate takes the temperature values it does. AGW is one of these. So is one we can call the “Dude, whatever” theory (DWT), which predicts the climate will be just what it was last year with a little added plus/minus. AGW predicts that next year will be warmer than this year, and that the year after that will be warmer than next year, and so on, all with a little plus/minus.
Any climate observation is consistent with both our theories (and many others). DWT has the advantage of being very simple. A sophisticated theory (like AGW) should be able to beat it silly in any kind of forecasting contest.
Well, AGW predictions do not beat DWT predictions; and in fact DWT beats AGW. So what’s the better theory?
DWT also beats the Nothing Ever Changes (NEC) theory, which says (among other things) that the mean global temp will always be a constant (with no plus/minus). NEC theory is the simplest we can think of, DWT is next in complexity, and AGW is the most complex. There are other theories in between the complexity of DWT and AGW that, I think, would beat DWT in making predictions.
Therefore, I claim that it is DWT that should be used in decision making until such a point that the AGW models can spank it.
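To make the contest concrete, here is a toy sketch in Python (the temperature series and its wiggles are invented, so whichever forecast wins below says nothing about the real climate; it only shows how such a contest would be scored):

```python
# A toy forecasting contest: score the "Nothing Ever Changes" (NEC) constant
# forecast, the "Dude, whatever" (DWT) persistence forecast, and a simple
# linear-trend stand-in, using mean squared error of one-year-ahead forecasts
# on a made-up anomaly series.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2010)
temps = 0.006 * (years - years[0]) + rng.normal(0, 0.15, years.size)  # invented data

actual = temps[1:]                                # the years being forecast
nec = np.full(actual.size, temps[:50].mean())     # NEC: one fixed constant
dwt = temps[:-1]                                  # DWT: next year = this year
slope, intercept = np.polyfit(years[:50], temps[:50], 1)
trend = intercept + slope * years[1:]             # trend fit on the first 50 years only

for name, fcst in [("NEC", nec), ("DWT", dwt), ("trend", trend)]:
    print(name, "MSE:", round(float(np.mean((actual - fcst) ** 2)), 4))
```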
The article’s critique is more related to evaluation of model predictions. To quote: “But model fitting old data is not direct evidence that the theory behind the model is true. Many alternate models can fit that data equally well. It is a necessary requirement for any model, were it true, to fit the data, but because it happens to is not a proof that the model is valid. For a model to be believable it must make skillful predictions of independent data. It must, that is, make accurate forecasts of the future.” This is an eminently reasonable point but there are some nuances that seem to be ignored, for example that the predictive ability of models may be scale dependent. In particular, detection of climate change especially on regional spatial scales may require multi-decadal future observation to overcome the short-scale inter-annual variability. In other words, a truly post-model test might need to wait a few generations. Given the back-casting skill of the climate models, and their predictions of the effects of future CO2 emissions growth on climate, what is the prudent course of action?
Absolutely, the current climate models might make skillful forecasts for certain regions and scales and not for others. And I agree that a true “post-model test might need to wait a few generations.” I am willing to wait and believe it is a requirement that we do so.
This is because I do not agree that the “back-casting skill of the climate models” inspires us to trust them (my interlocutor uses the word “skill” in its normal, plain English sense, and I use it in its abnormal, technical one). Another way to state his position is that we should trust the climate models because they fit past data well.
But many models would fit that past data well, including DWT. And there are still other rival models which predict, via physical arguments of orbital changes and solar forcing, that we are headed back to our next ice age. Those models could also fit the past data well.
It is true only that the AGW model is the one that has been worked on most assiduously. Orders and orders of magnitude more money and man hours have been devoted to its study. Further, its predicted temperature increases are only a small percentage of the overall observed variability of temperatures—not modeled temps, but their actual, error-prone observations; errors which increase our uncertainty in the AGW predictions. Really, we are talking about very small, even barely noticeable changes. So small that it would be rational to devote energy to rival climate theories.
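Here is a quick sketch of the “many models fit equally well” point, with made-up data: a straight line and a cubic both fit the “past” about as well, yet they part company the moment you extrapolate.

```python
# A sketch: two rival "theories" fit the same invented past record about equally
# well in-sample, but disagree badly out of sample.
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(50.0)
y = 0.01 * x + rng.normal(0, 0.2, x.size)   # hypothetical "past" record

line = np.poly1d(np.polyfit(x, y, 1))
cubic = np.poly1d(np.polyfit(x, y, 3))      # an over-flexible rival

for name, model in [("line", line), ("cubic", cubic)]:
    mse = float(np.mean((y - model(x)) ** 2))
    print(name, "in-sample MSE:", round(mse, 4),
          " forecast at x=80:", round(float(model(80)), 2))
```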
Note: I am talking about climate model predictions, and not forecasts of what might happen given the climate takes a certain state. Those predictions tend to be apocalyptic, or silly, or both; like the increase in prostitutes in the Philippines.
There still remains the question of the back-casting skill of the climate models. First, the models have been tuned to agree with some large-scale trends, although recent models rely less on tuning. Second, they do not reproduce some multi-year global-scale warming pauses. Third, models disagree on regional-scale climate even for spatial scales on the order of 1000 km. And there are other issues of model fit to data, for example with regard to the ice caps. But, by and large, they are pretty good at getting right the geographic variation of climate in the 20th century. The general spatial patterns of temperature and precipitation are reasonably well reproduced, as well as their year-to-year variability; for example, the major rainforests and deserts are where they should be. The models have also been used to see what the climate of the 20th century might have looked like if there were no increases in atmospheric CO2. Comparisons of this kind are sometimes used to detect ‘signatures’ of CO2-induced climate change by comparison with observed climate changes during the same period, for example by a close examination of daily minimum and maximum temperatures and seasonal variations.
Even though models circa 2009 are tuned less than those circa 1984, they all still rely heavily on tuning, which includes the process of “analysis.” This is where the raw observations are statistically manipulated so that they fit into model space. This is not an unusual process—meteorological models make use of these tricks—but it is a form of tuning (I use the word “trick” like we normally do, as a technique).
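For flavor, here is a scalar caricature of an analysis step (a Kalman-style update; real analysis systems work over whole model grids with correlated errors, and every number below is invented). The model background is nudged toward the raw observation by an amount fixed by assumed error variances; it is precisely those assumptions that make it a form of tuning.

```python
# A minimal sketch of the flavor of "analysis": blend a model background value
# with a raw observation, weighted by their assumed error variances (the scalar
# form of an optimal-interpolation / Kalman update). Numbers are invented.
def analysis_update(background, observation, bg_var, obs_var):
    gain = bg_var / (bg_var + obs_var)        # weight given to the observation
    value = background + gain * (observation - background)
    var = (1.0 - gain) * bg_var               # analysis error variance shrinks
    return value, var

# Hypothetical values: model says 14.2 C, station reads 13.6 C.
print(analysis_update(background=14.2, observation=13.6, bg_var=0.4, obs_var=0.1))
```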
I’ll agree that the models can be used in the sense of making sure the deserts are where they should be and so forth. I would not suggest that our models can’t say anything useful about the physics of the atmosphere; clearly they can. The signatures of CO2 are the point in question, however.
Let’s recall that these models were not built in isolation: they were designed with CO2 in mind, and tuned to those physics. Would it not be interesting to develop a suite of GCMs without this explicit call to CO2 forcing? To, that is, attempt to discover other physics that could account for the observed temperatures?
The models will not reproduce the details of the historical record because ‘natural’ variability will obscure trends on short time scales. But the failure of the climate models to reproduce multi-year pauses in global warming is still perplexing to some modelers. I am not so concerned because I feel that the unmodeled short-scale natural variability would not preclude such patterns even over a decade or two — but I haven’t done any analysis to support this belief. In short, my belief is that the models provide a good enough representation of the consequences of increasing atmospheric greenhouse gases.
It is true that the physical processes that the models were unable to represent, and that caused those “trends on short time scales” (something caused them), might be unimportant in the long run. AGW theory is consistent with this. In fact, any observation of the climate is consistent with AGW. No matter how much cooling we see, it still might get hotter in the future.
Meanwhile, another interpretation simply states that the models have—so far—failed to make correct independent predictions because those physical processes that the models were unable to represent, and that caused those “trends on short time scales”, are extremely important in the long run.
Summing up
Really, the only point at which I and my interlocutor disagree is in how much to trust the models. I am not yet ready to trust them to the extent that they be used to create rules and regulations that are mandated by force of law. I am willing, as I assume my questioner is, to pay, in the form of taxes, for more research on climate—to, that is, continue to do what we have been doing. I would not support new taxes.
Freudian Slip, “I am my interlocutor” 🙂
whatAboutBob,
I am at war with myself!
Thanks for your swift reply. I copied to the statistics prof. what I consider your main points. As your summary stated, the essential point is that you do not trust the AGW models. The great physicist Freeman Dyson publicly stated long ago and repeatedly that he too does not trust them. Intelligent people should trust them even less as a result of the climategate scandal.
Excerpts from the last excerpted paragraph
‘The models will not reproduce the details of the historical record ….the failure of the climate models…. still perplexing. …….. I haven’t done any analysis to support this belief…..my belief is that the models provide a good enough representation of the consequences of increasing atmospheric greenhouse gases.’
‘Then a miracle happens’ seems to have been left out.
Or am I being too harsh?
TH. Once we assume “a miracle happens” do we bow towards East Anglia? Or Nashville, TN? Quick, we need to know.
Who needs evidence? Penn State announced that they were investigating Mann. In the release announcing the investigation, they claim that the committees formed by Congress concluded in 2006 that his results were sound?!
If universities can just issue bald-faced lies like this, what’s the point of evidence? Isn’t the whole concept of evidence an archaic relic? Logic has already been banned from contemporary political debate (see e.g. destroying billions in personal property enriches the nation, a high murder rate means doctors should be controlled by the government, insuring an extra 30 million people will reduce costs, extensive taxation and regulation will create jobs (net), massive pork spending two years from now will create jobs today, ……), so what’s the point of evidence anyway?
Alice’s encounter with a looking glass world was nothing compared to the bizarro world of BO-zo we live in today.
Matt, you need to consider long-term forecasts if you’re going to compare DWT with AGW in blind tests – whether in the past or in the future. DWT is not useful, precisely because it can’t do long-term forecasts. On the other hand, it’s pretty good for day-to-day amateur meteorology: if it’s raining in the morning, I don’t get my hopes up that I’ll be taking the kids to the park in the afternoon.
I don’t want to teach my grandmother to suck eggs, but in assessing whether a model reproduces historical data, you should be inputting the starting conditions before a long span of data, then checking how well the models match the data. For this, DWT will give the same result as NEC, but with bigger error bars for later years.
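A toy sketch of that point (reading DWT as a random walk, next year equals this year plus independent noise; the numbers are invented): the central forecast stays put while the plus/minus widens with lead time.

```python
# Under a random-walk reading of DWT, the forecast launched from a fixed start
# year keeps the same central value (as NEC does) but its uncertainty grows
# like the square root of the lead time. All numbers are hypothetical.
import math

sigma = 0.15          # year-to-year "plus/minus" (degrees C), invented
start_value = 13.8    # global mean at the start year, invented

for lead in (1, 10, 50, 100):
    dwt_sd = sigma * math.sqrt(lead)   # random-walk spread grows like sqrt(lead)
    print(f"lead {lead:3d} yr: forecast {start_value} +/- {1.96 * dwt_sd:.2f} C")
```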
I’d suspect that DWT underperforms here compared to AGW, especially if the AGW model was tuned/selected/“engineered” to fit the data in the first place. I doubt there’s any region of the historical temperature record that hasn’t had any input into the AGW models, and rightly so – but as you’ve said before, this specifically weakens the evidence that can be drawn from how well they replicate historical data.
Like you, though, accuracy on past results doesn’t excite me much. The real test has to be future results.
On the other hand, failure to explain past results should clearly detract from our confidence in a model, whether we’re talking about climate models, proxy data, … Any case where your made-up numbers don’t match the real numbers, where available, indicates that you’re not liable to do a good job of guessing what the numbers should be where the real numbers are not available.
The warming from approx 1918 to 1945 is statistically indistinguishable from that from 1979 to the present. I have not found any agreement as to why this is so. Can the models explain it? Until they do, I remain skeptical. Also, so much for the recent warming being unprecedented; it hardly stands out over the last 1000 years.
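Here is a sketch of what that comparison amounts to in practice, using made-up series (and, as the reply below notes, “indistinguishable” only means something relative to an assumed model, here a straight line plus independent noise):

```python
# Fit OLS trends to two periods of an invented anomaly series and compare slopes
# with their standard errors. The data are not real observations.
import numpy as np

def trend_and_se(years, temps):
    """OLS slope and its standard error under a line-plus-independent-noise model."""
    slope, intercept = np.polyfit(years, temps, 1)
    resid = temps - (intercept + slope * years)
    se = np.sqrt(resid.var(ddof=2) / np.sum((years - years.mean()) ** 2))
    return slope, se

rng = np.random.default_rng(2)
y1 = np.arange(1918, 1946.0)
y2 = np.arange(1979, 2009.0)
t1 = 0.015 * (y1 - y1[0]) + rng.normal(0, 0.1, y1.size)   # invented
t2 = 0.016 * (y2 - y2[0]) + rng.normal(0, 0.1, y2.size)   # invented

for label, (s, se) in [("1918-1945", trend_and_se(y1, t1)),
                       ("1979-2008", trend_and_se(y2, t2))]:
    print(f"{label}: {10 * s:.3f} +/- {10 * 1.96 * se:.3f} C per decade")
```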
I am not at all impressed with the back-testing ability of climate change models. Back-casting “skill” is a minimum requirement for successful predictive models but it is no guarantee of success as there are an infinitude of models that will back-cast as well as or better than state of the art models. Painful real-world experience has taught me this!
Given the old saw that most American school kids can’t find Paris on a map, I guess it’s progress that the GCMs can correctly locate major rain forests and deserts, although I’d argue that such feats are not independent of the “physics” programmed into the models. I really wish we had climate modelers with the courage / common sense to tune up one of their creations to fit what they think the climate record was over the past, say 100 years, and then re-run that model keeping CO2 constant at 280 ppm, or whatever it supposedly was before we started using fossil fuels.
These models will never predict anything. We still don’t know the exact physics that controls the climate process to rule out all missing factors. Just recently it was noticed that aerosols can block sunlight and promote cooling. If this is so, then cutting CO2 emissions will reduce the aerosols and increase warming on the short scale. How do we trust and use the model to optimize this effect for the least damage when we start reducing emissions? What about cloud feedback? What will be the next bit of physics we forgot to model? Also, the weather models don’t work beyond about 2 weeks for the same reason the AGW and other models won’t for longer periods: chaos is involved.
When you get into modeling and predictions from models, especially over long periods, you get into the dark corners and alleys of physics. We can do it fairly well with rockets, but even now our deep space probes are showing that current Newtonian gravity with Einstein’s corrections isn’t quite right and that space beyond our solar region may have different physics or some new unknown force. Between the fact that we really don’t fully understand the physics involved and that chaos causes the model to deviate from reality over relatively short periods of time, I sometimes wonder why we even attempt modeling. Sometimes the simple equation on the napkin explains it well enough for what we need. The trouble is finding someone smart enough to get it right.
Rocket science is simple compared to these climate models. Not only do you have the gas and thermal physics of the atmosphere (the easy rocket science), but you also have chemical processes and the life factor, that is: our actions and the biosphere’s reactions to our actions. The chemical and biological factors are the hardest to get correct. They interact in strange force and feedback loops that defy modeling. For example, China and other industrial nations massively seed clouds and force rain from one area to another. Can we predict how our models will react to active climate controls like this? While we have faith in rocket science to accurately predict space probe positions at least well enough over years of time, do we have faith in the other science involving chemical reactions and life forces to predict correctly using similar methods? I’m only an amateur physicist and can’t make the argument that the models are a bunch of manure, but I feel that we are missing the point: AGW and CO2 emissions are really a political question and we should not rely on science for all the answers.
George,
Persistence forecasts—which are the result of DWT—can certainly be used long term. They probably won’t have skill compared to some climate model, but it’s perfectly usual to use persistence in forecasts.
Steve,
Well… I don’t love the term “statistically indistinguishable” at all, as it has no meaning without respect to a model. And that, of course, is the very point at issue.
Tesla,
You look a lot younger than I would have guessed.
Frank,
Correct. Land masses are built in, so if the models can’t find them, it’s time to worry.
James Gibbons,
Chinese claim to have perfected cloud seeding. Am skeptical, given the history of these attempts.
“Chinese claim to have perfected cloud seeding. Am skeptical, given the history of these attempts.”
I quite agree. There are many other ideas on planet modification running around such as seeding the oceans with iron to create algae blooms to sink CO2. Wonder if that would work? (I doubt it, although it might kill some fish.)
It also appears the LA Times thinks the science shouldn’t be used either:
http://www.latimes.com/news/nation-and-world/la-fg-climate-hacker22-2009nov22,0,913036.story
Funny how when the model is in question they are willing to throw it away.
I just don’t understand this blind reliance on models. Even the best ones, such as those used to design airplanes, must be validated by building and flying a test plane. The same thing is happening in the nuclear bomb area. We are now starting to use pure models for new bomb designs without any test verification because the political fallout would be too great.
A scientist friend of mine who is agnostic about AGW wrote this to me in response to our discussion of whether to trust the AGW models:
” Well, the oceans have “oscillations” like the PDO [Pacific Decadal Oscillation] and El Nino, whose beginnings are not predictable, so the models can’t include them. So, the “long enough” time [for being convinced by the models] has to be long enough that those effects wash out…. Here’s where the argument works like Pascal’s gamble [http://www.iep.utm.edu/pasc-wag/]: If the models are right, we can’t afford to wait for their verification! ”
How would you argue against his last sentence?
Here’s how I replied:
That statement requires much more argument from you to convince me. Please provide some discussion of that, taking into account the large economic costs of reducing carbon emissions globally (and only a global effort would make a significant difference), especially in less developed nations. Take also into account that the models may turn out to be right about an influence of greenhouse gases on global warming, but may not be reliable in predicting the severity of that warming – as Freeman Dyson explained in some detail, global warming could be overall positive for the world if it does not get too extreme. Take also into account that climategate and other investigations by skeptics have revealed that the data for AGW modeling is not reliable.
It seems to come down to making a plausible estimate of risk vs. reward, given the costs.
This may not be entirely relevant, but I’m reminded of the hysteria over potential technological disaster as Y2K approached; it turned out to be a relatively harmless event, and the preparation for it was not that costly.
What is the name of that fairy tale about the sky falling? Chicken Little?
Marvin,
Ask your pal if he has decided to become a Christian because of Pascal’s argument.
Here is a link to a layman’s story on the latest NASA model for aerosols. The NOx radicals have more cooling power than CO2 or CH4. In other words, we need to cut CO2 and NOx more, so that the city heat islands cool down instead of warming up in the short term from NOx reduction. This also implies that the cities are already being cooled by the pollution.
http://www.sciencenews.org/view/generic/id/48940/title/Aerosols_cloud_the_climate_picture
“The newly revised NASA model only begins to address the complexities of atmospheric chemistry, Shindell says. It doesn’t, for example, consider how pollutants such as ozone and acid rain suppress the uptake of carbon dioxide by trees and other plants.”
Wonder if acid rain has anything to do with the CRU emails about divergence of the tree ring data after 1950 in the NH only? Is this the fix for divergence?
He’s basically saying all the current AGW models are not accurate without a lot more work.
Having initiated my own investigation of so-called “global warming” in my own region, being New Zealand, and having downloaded nearly all the long-term thermometer temperature series (more than 100 of them, some going back to the 1860’s), and having compiled a good number graphically, I can say that the temperature record down here is remarkably trendless (i.e. little or no warming at most sites that are not inside or immediately adjacent to a city). This is in complete contrast to James Hansen’s GIStemp and to New Zealand’s “National Institute of Water and Atmospheric Research” (NIWA). The raw data come direct from the NIWA database. So my preliminary opinion is that the climate in NZ, and likely over most of the Southern Ocean, is not currently warming. This contravenes what the computer models suggest should have been happening.
This project is ongoing and likely will be so over the next few months. Nevertheless, I am already harbouring a suspicion that temperature indices generated from rural sites in the database will prove to be trendless from about 1950 onward and that this will be able to be verified independently. This is the case for the first index, which contains South Island sites above an elevation of 150 m (max of 20 such sites reporting in any one year). Next cab off the rank is South Island sites under 150 m elevation (will be many more sites in this index). Most of the candidate non-urban component series do not exhibit a significant post 1950 trend.
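A sketch of the sort of index-building described above (the station names, counts, and values are invented placeholders, not the actual NIWA data):

```python
# Build a simple regional index from several rural station series and check its
# post-1950 trend. Everything here is hypothetical, including the station data.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1950, 2009)
# Pretend annual means (degrees C) from a handful of rural stations:
stations = {f"site_{i}": 11.0 + rng.normal(0, 0.4, years.size) for i in range(5)}

# Convert each station to anomalies so differing base climates don't matter,
# then average the anomalies into one regional index.
anoms = np.array([t - t.mean() for t in stations.values()])
index = anoms.mean(axis=0)

slope, intercept = np.polyfit(years, index, 1)
print(f"index trend: {10 * slope:.3f} C per decade")
```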
So, I am like Briggs. I think it’s too soon to cripple the global economy on the basis of computer models that can’t account confidently for the effects of clouds and aerosols.
I suspect that full and independent due diligence of instrumental temperature records on a Country by Country basis would reduce the 20th Century warming trend to about 1/3 of what certain “trusted” groups (CRU, GISS, NOAA) have been reporting.
This “we haven’t got time for the models to be verified” thing reminds me of a statement by the purveyor of a quack remedy. He said, in essence, “it would be morally wrong to withhold this until testing and verification is done because it can help people now”. Missing the point that you don’t know it’s helping anybody till the testing and verification is done. Likewise the models. If they aren’t verified then you don’t know that there’s anything that needs to be urgently done. Why does the “precautionary principle” always lead to action instead of cautious inaction?
But here’s the thing. We KNOW the climate modelers have cooked the data then discarded the originals. We KNOW the models are a mish-mash of arbitrary and capricious code without inferential foundation. We KNOW a massive effort has been made to undermine critics. We KNOW that governments have responded to the phony crisis with Enron-style programs designed to raise taxes, steal like there’s no tomorrow, and inflate the cost of everything.
We don’t need a model to speculate about that stuff — the details from the depths of the CAGW conspiracy may not be fully known, to the public, yet, but the basic framework of decades of science fraud and deceit on the part of climate modelers is established fact.
Is it possible that the phony models built by lying cheat modelers could be right? Yes, just as a blind pig occasionally finds an acorn, or a broken clock is right twice a day. But for all intents and purposes, the models are GIGO crap.
The interlocutor is willing to cripple the world’s economies, cause suffering and pain beyond anything Mankind has yet perpetrated upon itself, and invite the umbrage of the entire human race down upon his anonymous person — based on his acceptance of known lies and fraud.
I’m with Briggs in that I am unwilling to sit back while people with remarkable similarities to totalitarian space lizards eat me, my friends, and neighbors. I DO endorse the Precautionary Principle — we need to take reasonable precautions against known frauds, hucksters, global saboteurs, and other sociopaths who hatch vicious and deceitful anti-human conspiracies.
I’m not sure the precautionary principle is a helpful idea, even in principle. Most in the CAGW camp who state that they are relying on the precautionary principle simply don’t understand it. There is even a very popular video on YouTube that makes the explicit argument that we don’t need to know the underlying science or settle the debate, because the precautionary principle mandates that we should act to reduce GHG’s in any event. However, what is misunderstood is that the very application of the precautionary principle requires that we make some conclusion in advance about the possible underlying outcomes.
Specifically, many CAGW advocates seem to think that the possible outcomes (broadly speaking) are (i) catastrophic natural disasters and untold suffering, versus (ii) a relatively small up-front pain in the form of additional taxes (or if they are really putting on a positive spin, no negative at all, as it is couched in terms of “creating new green jobs” or “stimulating” the economy). Given this view of the possible outcomes, one might indeed come to the conclusion that we should act quickly to reduce GHG’s under the “precautionary principle.”
However, many climate realists would argue that the possible outcomes are (i) minor nuisance warming, or potentially even beneficial warming, versus (ii) massive tax increases and significant wealth distribution to traders and brokers (or if they are really putting on a negative spin, crippling economic results). Given this view of the possible outcomes, the “precautionary principle” would dictate that we should definitely not act to reduce GHG’s. (Some realists would argue, as does Briggs, for further study, but this call for additional research is simply the application of a further risk/benefit analysis ($ spent on further research, versus expected benefits of the research).)
The bottom line is that it is impossible to apply the precautionary principle in any logical fashion without first grappling with the thorny and challenging questions of what the potential outcomes are and what the real costs and benefits are of specific approaches.
I’m just talking so far about a simple view of the ultimate outcomes (which is the way I have typically seen CAGW advocates use the precautionary principle argument), and I am ignoring the fact that the probability of any particular outcome would also have to be taken into account in determining what action the precautionary principle dictates.
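A toy illustration of that point (every number invented, and mitigation assumed to avert the catastrophe entirely): the “precautionary” recommendation flips with the assumed probabilities and costs, so the principle by itself settles nothing.

```python
# Expected-cost caricature of the precautionary argument. Assumes acting costs
# only the mitigation bill and fully prevents the catastrophe; waiting risks the
# full catastrophe cost with the stated probability. All numbers are invented.
def expected_cost(p_catastrophe, cost_catastrophe, cost_mitigation):
    return {"act now": cost_mitigation,
            "wait": p_catastrophe * cost_catastrophe}

# One set of assumptions favours acting...
print(expected_cost(p_catastrophe=0.5, cost_catastrophe=100, cost_mitigation=10))
# ...another favours waiting; the principle alone decides nothing.
print(expected_cost(p_catastrophe=0.01, cost_catastrophe=100, cost_mitigation=10))
```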
The fact is that even with this wonderful “precautionary principle” we are still right back to where we started in the first place: we need to determine, on a substantive, scientific basis, what the potential outcomes are and what can and should, if anything, be done about it.
I suspect that the basic data that was lost by CRU actually still exists.
This data will be held by the various national climate monitoring agencies (or their descendants) from which CRU originally obtained the data. So it will be possible to recreate almost all of the “non-value-added” global dataset (this is a real pain in the butt, however it needs to be done). But it would be best if this work was carried out by a non-politicised group completely independent of the IPCC, CRU, NOAA and GISS.
The compilation of the temperature data should be carried out by a group that does not have any role in gridding the data and making temperature graphs/indices for propaganda purposes. The data needs to be 100% available for cross-checking and use by any other independent person or organisation. This will help to end the arguments and advance climate science.
Matt, you forgot to mention another great thing about DWT is one has so little accumulated data to throw away each year, thus freeing up time and space. You are an organizational genius!
Further articles of interest:
1. Much of the raw historical temperature data used by the University of East Anglia’s Climate Research Unit was deleted some time ago, leaving only the processed temperature record—processed according to very dubious methods which now cannot be checked or confirmed because no one thought it was important to keep the raw data.
http://www.timesonline.co.uk/tol/news/environment/article6936328.ece
2. See also the discussion by Christopher Booker about how central the CRU and the other Climategate figures are to the propagation of the global warming hysteria.
http://www.telegraph.co.uk/comment/columnists/christopherbooker/6679082/Climate-change-this-is-the-worst-scientific-scandal-of-our-generation.html
3. What is also dangerous about those e-mails is the combination of corruption and dishonesty with self-righteousness. Melanie Phillips describes Climategate as the product of “the totalitarian personality” which seeks to “airbrush out of the record” any facts or persons who challenge its pre-conceived ideological conclusions.
http://www.spectator.co.uk/melaniephillips/5565331/green-totalitarianism.thtml
Have you heard of “Post-Normal Science?”
Mike Hulme, Professor of Climate Change at UEA, and the founding director of the Tyndall Centre for Climate Change Research, made the really remarkable admission in 2007 that AGW theory could not be supported by the ‘normal’ rules of scientific inquiry. He wrote:
The danger of a ‘normal’ reading of science is that it assumes science can first find truth, then speak truth to power, and that truth-based policy will then follow… Self-evidently dangerous climate change will not emerge from a normal scientific process of truth-seeking, although science will gain some insights into the question if it recognises the socially contingent dimensions of a post-normal science.
Global warming, he claimed, was an example of ‘post-normal science’ which did not seek to establish the truth through evidence. Instead, truth had to be traded for influence. In areas of uncertainty, scientists had to present their beliefs instead as a basis for policy.
http://www.guardian.co.uk/society/2007/mar/14/scienceofclimatechange.climatechange
It was an admission that, in the name of science, scientific reason had been junked altogether to promote mere ideological conviction. That is the real message of ‘Climategate’.
– Melanie Phillips
http://www.spectator.co.uk/melaniephillips/5582321/postnormal-science.thtml
Marvin,
Not until recently, when JJD put me onto it. An excellent review he sent is at this site. I’m still digesting, but I want to write about this.