# Homogenization of temperature series: Part I

Be sure to see: Part I, Part II, Part III, Part IV, Part V

Introduction

Time to get technical.

First, surf over to Willis Eschenbach’s gorgeous piece of statistical detective work on how GHCN “homogenized” temperature data for Darwin, Australia. Pay particular attention to his Figures 7 & 8. Take your time reading his piece: it is essential.

There is vast confusion on data homogenization procedures. This article attempts to make these subjects clearer. I pay particular attention to the goals of homogenization, its pitfalls, and most especially the resulting uncertainties. The uncertainty we have in our eventual estimates of temperature is grossly underestimated. I will come to the, by now, non-shocking conclusion that too many people are too certain about too many things.

My experience has been that anything over 800 words doesn’t get read. There’s a lot of meat here, and it can’t all be squeezed into one 800-word sausage skin. So I have linked the sausage into a multi-day post with the hope that more people will get through it.

Homogenization goals

After reading Eschenbach, you now understand that, at a surrounding location—and usually not a point—there exists, through time, temperature data from different sources. At a loosely determined geographical spot over time, the data instrumentation might have changed, the locations of instruments could be different, there could be more than one source of data, or there could be other changes. The main point is that there are lots of pieces of data that some desire to stitch together to make one whole.

Why?

I mean that seriously. Why stitch the data together when it is perfectly useful if it is kept separate? By stitching, you introduce error, and if you aren’t careful to carry that error forward, the end result will be that you are far too certain of yourself. And that condition—unwarranted certainty—is where we find ourselves today.

Let’s first fix an exact location on Earth. Suppose this to be the precise center of Darwin, Australia: we’d note the specific latitude and longitude to be sure we are at just one spot. Also suppose we want to know the daily average temperature for that spot (calculated by averaging the 24 hourly values), which we use to calculate the average yearly temperature (the mean of those 365.25 daily values), which we want to track through time. All set?
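The bookkeeping just described can be sketched in a few lines. This is purely illustrative; all numbers below are invented:

```python
import statistics

def daily_average(hourly_temps):
    """Mean of the 24 hourly values for one day."""
    assert len(hourly_temps) == 24
    return statistics.mean(hourly_temps)

def annual_average(daily_means):
    """Mean of the daily averages across the year."""
    return statistics.mean(daily_means)

one_day = [25.0] * 12 + [31.0] * 12   # fake day: cool half, warm half
year = [daily_average(one_day)] * 365 # fake year: the same day repeated
print(annual_average(year))           # 28.0
```

Nothing deep is happening here: two nested means. The point is that every later step in this series of posts operates on the output of exactly this chain.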

Scenario 1: fixed spot, urban growth

The most difficult scenario first: our thermometer is located at our precise spot and never moves, nor does it change characteristics (always the same, say, mercury bulb), and it always works (its measurement error is trivial and ignorable). But the spot itself changes because of urban growth. Whereas once the thermometer was in an open field, later a pub opens adjacent to it, and then comes a parking lot, and then a whole city around the pub.

In this case, we would have an unbroken series of temperature measurements that would probably—probably!—show an increase starting at the time the pub construction began. Should we “correct” or “homogenize” that series to account for the possible urban heat island effect?

No.

At least, not if our goal was to determine the real average temperature at our spot. Our thermometer works fine, so the temperatures it measures are the temperatures that are experienced. Our series is the actual, genuine, God-love-you temperature at that spot. There is, therefore, nothing to correct. When you walk outside the pub to relieve yourself, you might be bathed in warmer air because you are in a city than if you were in an open field, but you aren’t in an open field, you are where you are and you must experience the actual temperature of where you live. Do I make myself clear? Good. Memorize this.

Scenario 2: fixed spot, longing for the fields

But what if our goal was to estimate what the temperature would have been if no city existed; that is, if we want to guess the temperature as if our thermometer was still in an open field? Strange goal, but one shared by many. They want to know the influence of humans on the temperature of the long-lost field—while simultaneously ignoring the influence of the humans in the new city. That is, they want to know how humans living anywhere but the spot’s city might have influenced the temperature of the long-lost field.

It’s not that this new goal is not quantifiable—it is; we can always compute probabilities for counterfactuals like this—but its meaning is more nuanced and difficult to grasp than our old goal. It would not do for us to forget these nuances.

One way to guess would be to go to the nearest field to our spot and measure the temperature there, while also measuring it at our spot. We could use our nearby field as a direct substitute for our spot. That is, we just relabel the nearby field as our spot. Is this cheating? Yes, unless you attach the uncertainty of this switcheroo to the newly labeled temperature. Because the nearby field is not our spot, there will be some error in using it as a replacement: that error should always accompany the resulting temperature data.
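A minimal sketch of carrying the switcheroo’s error along: if the field’s reading stands in for the spot, the substitution uncertainty must travel with it. The spread of the field-minus-spot difference below is an invented assumption:

```python
# Assumed (invented) std. dev. of the field-minus-spot difference, in C.
FIELD_SPOT_SIGMA = 0.8

def relabel(field_temp, measurement_sigma=0.1):
    """Return the substituted temperature with its total uncertainty.

    Independent errors add in quadrature: the thermometer's own small
    error plus the error of using the field as a stand-in for the spot.
    """
    total_sigma = (measurement_sigma**2 + FIELD_SPOT_SIGMA**2) ** 0.5
    return field_temp, total_sigma

temp, sigma = relabel(29.5)
print(f"{temp:.1f} C +/- {sigma:.2f} C")
```

The temperature number itself is unchanged by the relabeling; what changes is how much you are allowed to trust it.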

Or we could use the nearby field’s data as input to a statistical model. That model also takes as input our spot’s readings. To be clear: the nearby field and the spot’s readings are fed into a correction model that spits out an unverifiable, counterfactual guess of what the temperature would be if there were no city in our spot.
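As a hedged illustration only, here is about the simplest conceivable flavor of such a correction model: treat the mean spot-minus-field difference as the city’s contribution and subtract it. Real homogenization models are far more elaborate, and every number here is invented:

```python
import statistics

field = [28.0, 29.0, 30.0, 31.0, 32.0]  # nearby open-field readings
spot  = [29.1, 30.0, 31.1, 31.9, 33.0]  # our spot's (city-warmed) readings

# Estimated city effect: average excess of the spot over the field.
city_effect = statistics.mean(s - f for s, f in zip(spot, field))

def counterfactual(spot_temp):
    """Guess of the spot's temperature had there been no city.

    The output is unverifiable: no thermometer can ever check it.
    """
    return spot_temp - city_effect

print(round(counterfactual(33.0), 2))
```

Note what the model cannot do: it cannot tell you how wrong the guess is unless the uncertainty in `city_effect` is carried forward too, which is exactly the step the post argues is routinely skipped.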

Tomorrow

Counterfactuals, a definition of what error and uncertainty mean, an intermission on globally averaged temperature calculations, and Scenario 3: different spots, fixed flora and fauna.

Be sure to see: Part I, Part II, Part III, Part IV, Part V

#### Homogenization of temperature series: Part I — 50 Comments

1. I agree with Briggs and would say the following:

1. Don’t splice stations together. Treat them as individual stations.
2. Don’t apply corrections to a station using nearby stations. Such corrections will be dependent upon the order in which they are done.
3. Take photographs of all the stations. Otherwise you have no idea what is being measured.
4. Do some historical research and get a better idea of what past stations looked like.
5. If the location is bad (e.g., on a roof, in a parking lot, in a city, at an airport etc.) toss the data. It doesn’t give us any useful information.
6. The only way the tossed data could possibly get back into the analysis is by a detailed study of the thermal distortions caused by the poor siting.
7. Use an equal area gridding to get hemispheric and global means.
8. Use a one year normal rather than 30 year normal to get anomalies. You will get many more reconstructions and get a better idea of the errors in the trends.
9. Try eliminating the short term stations and using only the ones with longest records.
10. If the remaining stations don’t give a hemispheric mean close to 288 K, then the hemispheric mean can’t be calculated reliably and this should be admitted. Prior to 1957, you will find there are too few stations to get a hemispheric mean in the Southern Hemisphere because there are no measurements in Antarctica.

I probably have forgotten something, but that is a start.
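Point 7 in the list above (equal-area gridding) amounts, on a regular latitude-longitude grid, to weighting each grid box by the cosine of its latitude, since boxes shrink toward the poles. A toy sketch with invented box temperatures:

```python
import math

def area_weighted_mean(values_by_lat):
    """Area-true mean over (latitude_deg, temperature) grid-box pairs."""
    weights = [math.cos(math.radians(lat)) for lat, _ in values_by_lat]
    total = sum(w * t for w, (_, t) in zip(weights, values_by_lat))
    return total / sum(weights)

# Invented boxes: warm equator, cold high latitude.
boxes = [(0.0, 300.0), (60.0, 270.0)]
print(round(area_weighted_mean(boxes), 1))  # 290.0, not the naive 285.0
```

A naive unweighted mean of these two boxes would be 285.0; the cosine weighting pulls the answer toward the (physically larger) equatorial box.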

2. Oh well, at least your hypothetical field stays put. Most of the earth’s surface is water, and it moves.

3. Can’t wait for the next parts. I wonder why you chose a pub rather than say a church in your example? You must have liked the Crocodile Dundee ambience. ;>)

4. Doug:
It is hard to disagree with your suggestions for improving the temperature record, but I am really interested in Matt’s take on the temperature records we are currently dealing with, as exemplified by Willis’ piece on Darwin, and the conditions under which homogenization of data is reasonable or not. What should happen if we got all the actual raw data plus all available metadata? Do we need to homogenize anything at all? How do we, as you note, estimate the error or the uncertainty of our measures?
I know it is a different process, but the parallel issue for me is around smoothing.
It seems to me that some of these homogenizing techniques were introduced to compensate for a lack of efficient computing power and fully specified models, and simply to provide particular types of visual summaries of the data, i.e., line graphs. These do not now seem to me to be particularly compelling justifications for homogenization.

5. Matt:
Yeah, but one was also a Post Office. Though in Australia who knows what other services it provided. ;>)

6. Bernie,
It is time to think outside the box.
Also ocean water temperatures are so poorly measured that I would not use them at all.

7. Well as us experimental scientists say: Garbage in, garbage out. The whole idea of “curing”, “fixing” and “interpolating” all these old, bad, spotty data strikes me as totally wacky. Can’t be done – waste of time. Could only be dreamed up by people with no hands-on science experience.

8. Thank you thank you thank you.

For a long time at CA I’ve been questioning the uncertainty that homogenizing brings to a temperature data set, not to mention the infilling of missing data. Each of these “corrections” to the data carries with it an error, yet when it comes to constructing final error values the researchers pretend that the corrections are errorless, and they imagine that filling in missing data just gets them the added bonus of more “n” in their error calculations. Looking forward to more pieces.
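The point about infilling inflating “n” can be demonstrated in a few lines: append invented points at the sample mean and the computed standard error shrinks even though no information was added. All data below are fabricated:

```python
import statistics

real = [14.2, 15.1, 13.8, 14.9, 14.5]        # invented observations
filled = real + [statistics.mean(real)] * 5  # 5 infilled copies of the mean

def std_error(xs):
    """Naive standard error of the mean, treating every point as real."""
    return statistics.stdev(xs) / len(xs) ** 0.5

# The infilled series claims a smaller standard error from the same information.
assert std_error(filled) < std_error(real)
print(round(std_error(real), 3), round(std_error(filled), 3))
</antml>```

The honest treatment is the reverse: each infilled value should arrive with extra uncertainty attached, widening the error bars, not narrowing them.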

9. Mr. Hoyt,

Regarding suggestion #3: much work to this end has already been done, at least in the US. Go to (I’m not sure what the policy here is about linking so I’ll just spell out the address): surface stations dot org.

-Matt B.

10. Adjusting the data to remove the heating from man-made intrusions & changes in siting & so forth is reasonable – provided the adjustments are based on a reasonable reference. In this case that might be an independent measure of temperatures across the region under different weather conditions, to correlate with the conditions prevailing when the original readings were taken. With such a reference anyone could reproduce the conversions made.

Likewise, variations in what are considered reasonable adjustments could be compared to reveal a rough sensitivity to the technique (what we in management loosely refer to as ‘error bars’ around the reference trend line). This would give a rough margin of error, with the truth/reality falling somewhere therein.

This is very common in many areas. Even carpentry. Just for fun compare a couple of standard metal tape measures that exceed 25 feet — say a new one & an old well-used one. It’s common to find they differ by around an inch, sometimes more. That’s good to know BEFORE cutting boards for a large project (say your new deck) based on someone else’s plans (perhaps literally drawn on an envelope) that they developed using a different tape measure. In such cases it’s a matter of course to annotate drawings to account for the correct measure.

Digression aside, given that most changes are due to urban heating effects (“urban heat island” – UHI) one would expect the adjusted measures to indicate decreases from the raw measures.

That clearly didn’t happen with the “Darwin Zero” data. The “removal” of the UHI effect resulted in further additions to temperature. Curious indeed.

11. This world is really cool
It’s really not so hot
Let the data tell the story
It’s really all you’ve got

12. Matt B.
I am aware of surfacestations.org and was one of their volunteers. The US is nearly done. It is time to do the rest of the world.

13. I recall Anthony’s first experiments with acrylic-painted, whitewashed and bare weathered Stevenson screens – he found a small but measurable effect. The possible effect of screen condition is one of the variables that could be considered, just like a new coating of asphalt on the nearby parking lot. At a certain point one cannot keep track of and adjust for all these variables, so what do you do? I suspect you have to expand the error bars – but how do you combine all the sources of error that are known to exist but may not be documented?

14. Bernie says:

Matt:
Yeah, but one was also a Post Office. Though in Australia who knows what other services it provided. ;>)

The Post Office in question took a direct hit from a bomb when the Japanese bombed Darwin in 1942, killing the Postmaster, his wife and a number of employees.

While I lived in Darwin much later (1956 through early 1975 with some years elsewhere) I seriously doubt that they provided any other services than the dispatch of letters and parcels and the delivery of the same to local residents.

15. Think about the word homogenized. Who would imagine that’s a good thing to do to raw data? And isn’t the point of taking an average to “boil it down” to a simpler number? So why the added step of homogenization? In the example above of the city growing up around the pub and temperatures subsequently improving, it seems like you have a wonderful conclusion right there. Now you want to take the same data and prove that increased and improved temperatures are due to something else also? That seems like asking a lot.

Here’s a sweet little piece of research.
I guess there’s not much chance of getting that past the peer review process, though. One does wonder if peer review might benefit from a little more blessed innocence.

16. It may turn out that we do not have enough homogeneous data to get hemispheric and global means. It may only be possible to calculate regional variations.

17. If they hadn’t been in such a stupid rush to ‘save the world’, then twenty five years ago, when this whole kerfuffle was getting started, they would have installed a network of climate stations in every continent and every type of terrain specifically designed to measure climate change, with a concomitant array of buoys to measure the climate of the oceans.

This would not only have given us a baseline measurement with which to test any models that we developed, but also a solid reference on which to judge the accuracy of the existing weather stations, and even a good baseline for proxy measurements.

But that would be science. It might also be hard, and give less opportunity for playing with super-duper computers and travelling to conferences with loads of big shot politicians, rock stars and Hollywood actors.

18. Seems like there’s a third scenario.

Rather than adjust the temperature to what it would be if the city weren’t there, you should adjust it to what you would have gotten if you had instrumented that region then averaged over it.

Unlike your scenario #2 this is physically realizable.

19. Richard:
I apologize if I caused any offense. From Willis’ thread at WUWT I did know that the PO had been bombed. I should have been more thoughtful. I spent a wonderful couple of weeks near Port Douglas, Queensland a few years ago. While there I learned that WWII and the terrible hardships it created are still very real to many Australians – and not just the ones who served. I had dinner with someone who did the (in)famous trek across New Guinea. I have a great admiration for Aussies.

20. Carrick,

Right. That’s part of the next post, under counterfactuals. Quick point: it’s no different logically. In either case, we’re asking to guess a value of temperature given something that wasn’t true.

21. Temperature sensors for long-term measurements should be encased in a cubic metre of concrete, to insulate them from high frequency changes (e.g. cars being parked nearby), and always sited on a large area of asphalt, e.g. a car park if possible. It might be easier to maintain the local conditions if they start out man-made.

22. When I first read about homogenisation, I must admit that my first reaction was not “golly, that opens the door to a lot of crookedness”, it was “golly, how surreally stupid”. I’m not quite sure how to get across to non-scientists just how extraordinarily dud these Climate Scientists are.

23. As we learn more and more of the political but unscientific methodology used in AGW, I’m reminded of a bawdy story concerning a mouse and an elephant. Aptly equating AGWers with the story’s mouse simply makes the tale funnier. Looking forward to part II. Excellent links from several commenters, btw.

by Erik Larsen via Small Dead Animals – and too funny not to share

BTW – I’m happy to wade through more than 800 words here anytime…

Man doth use the petro flame, but
Sun and wind are not the same!
They power honest, fair, and true;
(Though kilojoules produced are few),
Our evil fuels we must eschew

It’s been foretold that all will die,
And oceans rise, because the sky
is filled by fruits of our endeavor;
From our lives we have to sever
Carbon, or doom earth forever.

Do not question!! “Yes we can!”;
Carbon we will forthwith ban,
Science has outlived its span; for
Climate change was made by Mann

25. A simple alternative to all this homogenization nonsense: to obtain a global average temperature, weight each thermometer’s temperature readings according to how representative it is of the earth’s surface. A simple framework for deciding “representativeness” might look like this: all ocean temperatures are given a weight of 7. All non-urban land temperatures are given a weight of 3. All urban land temperatures are given a weight of zero (maybe 0.1 at a stretch).
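The weighting scheme proposed above (ocean 7, non-urban land 3, urban 0) would look something like this; the station temperatures are invented for illustration:

```python
# Weights exactly as the commenter proposes them.
WEIGHTS = {"ocean": 7.0, "rural": 3.0, "urban": 0.0}

readings = [
    ("ocean", 290.0),
    ("rural", 288.0),
    ("urban", 294.0),  # weight zero: contributes nothing to the mean
]

total_w = sum(WEIGHTS[kind] for kind, _ in readings)
global_mean = sum(WEIGHTS[kind] * t for kind, t in readings) / total_w
print(round(global_mean, 2))  # 289.4
```

With the urban weight at zero, the warm city reading drops out entirely and the result is pulled toward the heavily weighted ocean value.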

By the way, given that the aim is supposedly to obtain a global average surface temperature, why are ocean measurements taken in the water?

26. The complexity emerges not from just calculating the global average temperature – which is in itself complex – but from the need to calculate the trends in global average temperatures. The resulting implicit model of temperature measurement is much more complex.

27. Professor Briggs,
I was hoping to learn something (without paying for it) from this topic.
Background.
I only got involved with this global warming stuff because I am looking for practical methods to detect data corruption that can be taught to a computer. Specifically, I am having no luck at all in creating a robust algorithm that can discern an outlier event (keeper) from data corruption (toss out).
Request.
So I am hoping that you will address the Darwin temperature data from a viewpoint of determining if you can find one or more signals in the data. If you also want to address the subject of measuring a planet’s temperature and how to do it – fine.
Fallback position. As possible payment I would be willing to edit the first N chapters of your book using Microsoft Word with track changes enabled. Still too many typos.
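For what it’s worth, one standard starting point for the screening problem the commenter describes is a median/MAD filter: it flags far-out points robustly, but it cannot by itself distinguish a genuine extreme (keeper) from corruption (toss out). The data and threshold below are invented:

```python
import statistics

def mad_outliers(xs, threshold=5.0):
    """Indices of points more than `threshold` robust sigmas from the median.

    Uses the median absolute deviation (MAD); 1.4826 scales MAD to be
    comparable to a standard deviation for normally distributed data.
    """
    med = statistics.median(xs)
    mad = statistics.median(abs(x - med) for x in xs)
    if mad == 0:
        return []
    return [i for i, x in enumerate(xs)
            if abs(x - med) / (1.4826 * mad) > threshold]

print(mad_outliers([10.1, 10.3, 9.9, 10.0, 10.2, 55.0]))  # [5]
```

Deciding whether a flagged point is a real event or corruption needs outside information (metadata, neighboring series, instrument logs); no purely statistical rule settles it.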

28. Last night’s temp (degrees C) & RH at my place (Darwin) – this is why we long for those Elysian fields of cooling breezes!!

10/08:00am 30.7 73
10/07:30am 30.3 76
10/07:00am 30.1 76
10/06:47am 30.0 77
10/06:30am 29.8 78
10/06:00am 29.9 75
10/05:30am 29.9 75
10/05:00am 29.9 74
10/04:30am 30.0 74
10/04:00am 30.0 74
10/03:30am 29.8 75
10/03:00am 29.9 75
10/02:30am 30.1 74
10/02:00am 30.3 72
10/01:30am 30.1 73
10/01:00am 30.4 70
10/12:30am 30.2 75
10/12:00am 30.5 70

29. Matt

Slow down, calm down, smoke a cigar!! I fear you are having far too much fun in this new life of yours.

Regards

30. Scotty:
I am unilaterally going to apply the TDH adjustment of -5C. Now don’t you feel better?

Now if you could engage your mystic powers to cover the Daly Waters Pub (one of Willis E’s discussed sites) then the 20 people who live there would be very grateful.

On a serious note, Daly Waters (inland, low rainfall) is in a different climatic regime to Darwin (coastal, high wet season rainfall) and should not be used to adjust Darwin temps.

32. Matt, the difference is you can actually densely instrument e.g. an urban environment and average over it. That is “realizable” even if it hasn’t been done. You can construct and test a physics-based empirical model to relate the single-sensor temporally averaged measurement to the spatially and temporally averaged temperature field value. In Scenario 2, you’d have to remove the urban environment. While physically possible, it’s not empirically practicable.

The averaging over the urban environment is not only very doable, I’d bet it’s already been done, at least for some cases.

The problem with a single sensor is it completely under-samples the temperature field spatially. Much of that is fixed if you average over 24 hours (due to advection of the air across the sensor, you are effectively averaging over a distance of 24 hours × V, typically a 70-100 mile averaging distance). But that doesn’t fix biases introduced by inhomogeneities in the environment, heat sources and sinks, and so forth.
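The back-of-envelope arithmetic behind the quoted 70-100 mile figure, assuming (as an illustration) light mean winds of 3-4 mph:

```python
def averaging_distance_miles(wind_speed_mph, hours=24.0):
    """Distance of air advected past a fixed sensor over the averaging window."""
    return wind_speed_mph * hours

# Assumed light mean winds bracket the figure quoted in the comment.
for v in (3.0, 4.0):
    print(f"V = {v} mph -> {averaging_distance_miles(v):.0f} miles")
```

So a 24-hour average at 3-4 mph mean wind sweeps roughly 72-96 miles of air past the sensor, consistent with the comment’s 70-100 mile range.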

I’d bet that an approach is possible where one could enter the buildings, locations of heat sources/sinks in the vicinity of the sensor and recover something approaching the spatially/temporally averaged temperature over e.g. that 100-mile region of coverage. The heat equation is a dispersion equation, so you don’t generally have to worry about objects that are very far away unduly influencing your single-point measurement.

This case can be modeled and corrected for. In the end you’d have what’s called a “transfer function” (usually a complex-valued function that depends on frequency) that relates the single-point measurement to the averaged urban environment.

The case where you remove the city…well, that one isn’t a sensible approach, though I believe that is what GISS tries to do with their urban corrections.

33. You’re welcome. Of course that adjustment doesn’t work here in Massachusetts. I need a TDC adjustment of about +10C. Perhaps we can arrange a kind of teleconnection or arbitrage.
As for Daly Waters, let Willis know via the relevant thread at WUWT, I am sure it will merit a mention.
When I visited Queensland I found it astonishing how dramatically the climate changed in about 30 miles as you move inland from the coast through the tropical forest to the hinterland. There is, I assume, a pretty dramatic rain shadow effect. So what explains the change at Daly Waters?

34. For all you do-it-yourselfers, I recommend “Measurement Error Models” by Wayne A. Fuller, 1987, John Wiley & Sons. It will blow your mind.

Error aggregates. A little error in the early measurements inflates and expands as time goes on. Not to mention predicting the future. The error inflates exponentially in all directions. The mean is meaningless. We are left with a Cone of Uncertainty that approaches infinity.
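If the successive corrections carry independent errors, the best case is that their uncertainties combine in quadrature; correlated errors compound faster still. A sketch with invented sigmas:

```python
# Assumed (invented) std. devs. of four successive corrections, in C.
adjustment_sigmas = [0.3, 0.2, 0.4, 0.25]

# Quadrature sum: the floor for the combined uncertainty if errors are
# independent; correlation only makes this larger.
combined = sum(s ** 2 for s in adjustment_sigmas) ** 0.5

assert combined > max(adjustment_sigmas)  # never smaller than the worst piece
print(round(combined, 3))
```

Every correction you apply can only widen this number; no amount of adjusting shrinks it back below the largest single contribution.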

However, there is no error in manufactured data. No reality, either, but there is no measurement error in numbers you just make up.

If somebody deliberately selects the data points with outcome aforethought, and plugs that manufactured data into a model, and the model says in a few short years the seas will boil, all that has to be taken with a grain of salt.

35. Good one Bernie. I will see if the weather gods can siphon some hot air in your direction (maybe this is it?).

Distance from the coast is the governing factor. Precipitation declines linearly in a southeasterly direction in the Top End (colloquial name) of the Northern Territory. Put another way, rainfall increases linearly as you move northwest of, say, Daly Waters.

36. Mike D,
It’s a good book. I will further recommend another one: Measurement Error in Nonlinear Models: A Modern Perspective by Raymond J. Carroll, David Ruppert, Leonard A. Stefanski and Ciprian Crainiceanu. D. Ruppert was Mr. Briggs’ Ph.D. dissertation advisor.

—-
I would like to applaud Mr. Eschenbach’s efforts in exploring the data and raising several good points in the post.

Here are some thoughts I have after reading Section 6 (Homogeneity) in the paper referenced in his post. I won’t bore you with a summary of the homogenization method. However, I would like to say the reasoning behind each step of the method is actually interesting.

Nope, the uncertainty in the estimated reference series is not discussed in the paper.

The motivation of the homogenization is to detect the inhomogeneities/discontinuities (DC), such as a shift in the mean, caused by non-climate influences (locations and types of instruments), and subsequently correct them. It seems reasonable to me if one wants to study only the variations in climate (i.e., a homogeneous climate time series).

It’s stated in the paper that the historical data were adjusted to make the series homogeneous with present-day observations so that new data points can be easily added. However, judging from Figures 7 and 8 in Mr. Eschenbach’s post, the opposite seems to have been done. According to what I have understood about the homogenization, the adjusted and observed series should coincide for the last few observed time periods. Hmmm?

The reference series is assumed to be homogeneous and representative of the climate in the region. The validity of the test for DC hinges on this assumption. The reference series is produced with techniques that could minimize the potential DC in the reference series, in a way, to establish a “robust” (against DC) first difference series. I would think that the researchers were aware of problems in the selection of neighboring station data and the assumption of homogeneous references in practice. The paper is dated 1997. I imagine that the issues have been further studied.
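The detection idea discussed above, a non-climatic step showing up in the candidate-minus-reference difference series, can be caricatured in a few lines. The series and the breakpoint search below are invented for illustration only; the real method in the paper is considerably more involved:

```python
import statistics

candidate = [0.1, 0.0, 0.2, 0.1, 1.1, 1.2, 1.0, 1.1]  # shift after index 3
reference = [0.1, 0.1, 0.1, 0.1, 0.1, 0.2, 0.1, 0.1]  # assumed homogeneous

# Climate signal common to both series cancels in the difference;
# a station move or instrument change survives as a step.
diff = [c - r for c, r in zip(candidate, reference)]

def best_breakpoint(d):
    """Index splitting d into the two halves with the largest mean shift."""
    return max(range(1, len(d)),
               key=lambda i: abs(statistics.mean(d[:i]) - statistics.mean(d[i:])))

print(best_breakpoint(diff))  # 4
```

Notice the load-bearing assumption: if the reference is itself inhomogeneous, the “detected” step may belong to the reference, not the candidate, which is exactly the worry raised in the comment.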

37. I’d bet that an approach is possible where one could enter the buildings, locations of heat sources/sinks in the vicinity of the sensor and recover something approaching the spatially/temporally averaged temperature over e.g. that 100-mile region of coverage. The heat equation is a dispersion equation, so you don’t generally have to worry about objects that are very far away unduly influencing your single-point measurement.
.
Only for conduction, Carrick, only for conduction.
I am taking a radically different approach and during my free time, just for fun (I don’t get the millions of Jones), I work on the following.
1) The spatio-temporal dynamical variables like temperature, velocity and pressure obey a wave equation. This is obvious from the form of the Navier-Stokes equations.
2) If I put the wave equation in the form of Schrödinger’s equation (it’s what comes naturally to me :)) the spatial variations are given by a Laplacian. Please note that this is so far only semi-quantitative musing.
3) I can of course express the wave in an eigenvector basis and there are many interesting results about eigenvectors of a Laplacian and spatially averaged Laplacians. Technically it is VERY analogous to looking at spatial autocorrelations in a statistical model.
4) Therefore the spatial autocorrelations in this model would be strong. They would have the same “shape” because they would all be given by eigenvectors of a Laplacian, but a very variable amplitude. From that would follow that:
a) There is no way one could construct statistically significant regional averages from one (or a few) arbitrary points.
b) The spatial autocorrelations are very strongly variable from region to region (this is intuitively obvious; e.g. trade winds)
.
William:
Excellent beginning, thanks!
I like that: “Our thermometer works fine, so the temperatures it measures are the temperatures that are experienced. Our series is the actual, genuine, God-love-you temperature at that spot. There is, therefore, nothing to correct.”
.
You will probably mention that later, but I’d like to raise a particular point already.
All of those averages are constructed from daily averages. And the daily average is, in the majority of cases, (Max+Min)/2.
In another life I once had to work on a problem which was related to defining a certain amount of energy. This amount depended, among other things, on the outside temperatures.
So I took the local daily averages and tried to find an analytical expression for the energy.
As I had to integrate the temperatures, I discovered (25 years ago!) that it is hard to integrate a function for which you have values at only 2 points, and on top of that I didn’t know when those values occurred.
Indeed, at what times do the Max and Min happen?
So I tried to estimate at least the sensitivity by fitting all kinds of functions, and you know that an infinity of functions will go through 2 points.
The results were HORRIBLE!
.
So as “global warming” is about energy and not about temperatures, I submit that the errors induced by “homogenizing” temperatures are vastly amplified when one goes from temperatures to energies.
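The (Max+Min)/2 point is easy to demonstrate: for an asymmetric (but perfectly plausible) daily temperature profile, the min-max midpoint and the true 24-hour mean disagree substantially. The profile below is invented:

```python
import math

# Invented asymmetric day: sharp mid-afternoon peak, long cool night.
temps = [10.0 + 15.0 * math.exp(-((h - 15) ** 2) / 8.0) for h in range(24)]

true_mean = sum(temps) / len(temps)          # integral-style 24-hour mean
min_max_mean = (min(temps) + max(temps)) / 2 # the conventional (Max+Min)/2

# The two estimates of "the" daily average disagree by several degrees here.
print(round(min_max_mean - true_mean, 2))
```

Since (Max+Min)/2 matches the true mean only for symmetric profiles, and since energy depends nonlinearly on temperature, any bias here propagates (and can amplify) in energy calculations, which is the commenter’s point.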

38. Creationists and AGW enthusiasts share a common trait – magical thinking that ignores the length of time this planet has been revolving around the sun.

For creationists – they want to limit the history of the Earth to the last 10,000 years, so it will fit in a pre-determined viewpoint of creationism, and they try to fit every fact or data set into a world view that supports the world only being 10,000 years old.

For AGWers, they want to boil the history of the world into an even shorter time frame – the last 100 to 200 years – and then only use raw data as an entry point into a “model” that adjusts the raw data to fit into a world view that shows industrialization, through production of CO2, has led to an undeniable warming trend.

Both eschew a longer time span in favor of a human-centric view of time, our own importance, and our impacts.

Anyone hazard a guess on the most common type of rock on the planet today?

ICE (covers about 10% of the surface of the planet – and yes Ice is categorized as a rock).

We are in an ICE AGE.

People surprisingly know or remember very little about the history of Ice Ages on the Planet Earth. Maybe because we normally teach geology to students in 7th and 8th grades – the normal time frame when everything but the brain is growing in students.

The earth has been in an ice age for the last 2 million years. This is known as the Pleistocene. The Holocene (the last 10,000 years) is a fake geology accounting invention created to appease the Creationist and AGW types.

The Pleistocene will continue for many more millions of years, because the major driving force for an ice age is that a major land mass occupies a polar region, thus disrupting the ocean thermal belts that convey heat from the tropics to the polar regions.

Thus ICE will most likely continue to be the most common rock on the Planet Earth for millions of years, until Antarctica moves off its position at the South Pole.

For every 1 million years of an ICE AGE, there are on average 10 glacial and interglacial epochs, due to PLANETARY and SOLAR cycles, otherwise known as the Milankovitch Cycle.

Which means that for every 1 million years Antarctica continues to occupy the South Pole position, there will be about 10 glacial epochs where ice sheets blanket most of North America, Europe and Asia. Care to hazard a guess at what impacts those will have on human civilization?

The ice sheets develop maximum extent of coverage when there is a solar minimum in radiation hitting the polar latitudes, and the ice sheets develop minimum extent of coverage when there is a solar maximum in radiation hitting the polar latitudes.

The change in solar radiation, due to changes in the Planet Earth’s rotation and orbit around the Sun, is the heart beat or engine that drives the Ice Ages between Glacial and Inter-Glacial epochs.

This is not complex. This is not new. This is 7th grade science. Maybe we can teach it to 5th graders, as they seem to be pretty smart before getting their evolutionary induced bath of growth hormones.

The geologic boundary time between the Paleozoic and the Mesozoic Age was an ICE AGE (the Permian Ice Age, which lasted about 25 million years), as was the geologic boundary time between the Precambrian and the Paleozoic (the Cambrian Ice Age, which also lasted tens of millions of years).

Ice Ages are infrequent in the history of the Earth, but act as a major evolutionary force on the Planet and all its inhabitants.

Ice Ages are characterized by lower average global temperatures; extreme changes in sea levels (hundreds of feet) due to the appearance and disappearance of continental ice sheets; the differentiation of climates from one warm, wet environment into four major climate types (warm wet, cold wet, warm dry, and cold dry); and the compounded interest effects of island biogeography due to land masses being isolated and then rejoined as part of the rising and falling of sea levels.

On an evolutionary scale, Ice Ages rev up one of the components of Evolution – a changing environment to which organisms must adapt or go extinct.

Prior to the Cambrian Ice Age, there existed only single cells; afterwards, multicellular life made its appearance. It is during the Cambrian that all the major phyla of multicellular life evolved: some 32 phyla arose on this planet in a relatively short geologic time frame. In times of stress (rapidly changing environments) there was selection pressure for single cells to form multicellular structures.

Prior to the Permian Ice Age, there existed only water-based reproduction for plants and animals (the dominant plant forms were the ferns, which require water to reproduce; the dominant animal form was the amphibians, which also require water to reproduce). Afterwards there appear plants that had evolved air-based transport for reproduction (pollen) and animals that had evolved external eggs (Reptiles) or internal eggs (Mammals).

To tie this back to our current debates over Anthropogenic Global Warming: the history of the Ice Ages is the backdrop and context against which any impacts and changes from Anthropogenic Global Warming must be compared.

So, on the scale of AGW: anthropogenic projections call for a rise in sea level of five to ten feet, while the change in sea levels during an ICE AGE is hundreds of feet. And we are currently in an ICE AGE.

ICE AGES, by their very nature, have the most dynamic, changing climates in the history of the Planet Earth.

As a species, humans are children of this current ICE AGE.

Not to say that humans can’t screw up a great planet – we can.

Not to say that humans can’t make mistakes – we do.

But to posit as a scientific theory, as a consensus of the scientific community, that during an ICE AGE (which by its very nature and definition brings the most volatile and variable climate conditions the Planet Earth experiences over its long 4,000-million-year history) Human-Induced Climate Change becomes the number one driver of climate change and variability, over and above any of the 30-plus known factors that feed into the Milankovitch cycles, is pure magical thinking.

It’s like saying man-made lighting is a more important source of light on the planet today than light from the sun.

It is pure magical thinking – a con game, a diversionary trick, a snake oil sales job.

I admire the scientific method because it is a process, a way of examining nature and trying to come up with explanations that are functional and testable, not magical.

When science gets hijacked by magicians who don’t want the public to see how the magic is created, then we have big problems, and it can no longer be called science.

Pseudo-science is too kind a word.

Call it Magic.

39. Sorry if I missed it, but if urban site data need to be homogenized with reference to rural site data, why not just skip the urban site data altogether and just use the rural data?

40. dearieme says:
9 December 2009 at 5:36 pm

“When I first read about homogenisation, I must admit that my first reaction was not “golly, that opens the door to a lot of crookedness”, it was “golly, how surreally stupid”. I’m not quite sure how to get across to non-scientists just how extraordinarily dud these Climate Scientists are.”

Not to worry. :)) Some of us have just enough statistical knowledge (as in sitting in a classroom 20 years ago) to say, “Why in the world would they do something like that?” The first time I heard that’s what they were doing, alarm bells started going off. It just didn’t seem like an appropriate way to treat the data: it tells you what you WANT it to tell you, not what really is there.

We may be non-scientists but that doesn’t mean we can’t think.

I thought the first step was to decide which method of analysis you’re going to use **before** you look at the data?

41. Rodney,

“Which means that for every 1 million years Antarctica continues to occupy the South Pole, there will be about 10 glacial epochs in which ice sheets blanket most of North America, Europe and Asia. Care to hazard a guess at what impact those will have on human civilization?”

As the ice sheets have a very low velocity, I would say that so long as humanity does not otherwise snuff itself out, there will be no ice sheets blanketing most of North America, Europe and Asia ever again.

Canadians with jackhammers. Nuclear ice melters. On the one hand, Saskatchewan will have a hard time being a bread basket when growing seasons end; on the other hand, they’ll make a fortune pipelining fresh water to Southern California.

And your attacks on creationism are nonsense. Creationists openly admit the world was created by a supreme being unhindered by human reason. If a god can create a universe at all, nothing would stop him from creating one with billions of years of geology built in. That’s just adding more cup holders to a Corolla. The question is whether or not to believe in a god that creates things. Your choice.

42. Thank you. I will be back tomorrow.

The whole concept of ‘a global temperature’ is flawed; however, if they must produce one, let us see what it looks like with the raw data. The more I dig into the whole temperature mess, the more errors, adjustments and fudge factors become apparent. A modeller (in a non-climate field) recently explained to me, which is obvious really, that models are built to prove a theory, so it is easy to build all your pre(mis)conceptions into the model. In this case it is not so much ‘garbage in: garbage out’ as ‘fantasy in: fantasy out’.

The more I have investigated, the more I believe that the adjustments are a key to the warming. Why leave in such glaring mistakes as the counter-intuitive negative adjustments for UHI? There are some examples here:

http://diggingintheclay.blogspot.com/2009/11/how-would-you-like-your-climate-trends.html
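The “counter-intuitive negative adjustments” complaint is easier to see with a toy example: a physically sensible urban-heat-island correction should pull an urban station’s trend *down* toward its rural neighbours, not push it up. A minimal sketch in Python with made-up numbers (the data and the `linear_trend` helper are hypothetical illustrations, not GHCN’s actual algorithm):

```python
# Toy illustration of a UHI adjustment (hypothetical data, not GHCN's method).
# An urban series warms faster than its rural neighbours; a sensible UHI
# correction removes the EXCESS urban trend, bringing the urban record's
# slope back down to the rural one.

def linear_trend(values):
    """Ordinary least-squares slope per time step (no libraries needed)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    var = sum((x - mean_x) ** 2 for x in range(n))
    return cov / var

# Hypothetical annual temperature anomalies (deg C) over ten years.
rural = [0.00, 0.02, 0.01, 0.05, 0.04, 0.06, 0.05, 0.08, 0.07, 0.09]
urban = [0.00, 0.05, 0.08, 0.12, 0.18, 0.22, 0.27, 0.31, 0.36, 0.40]

# Excess warming of the urban site relative to its rural reference:
uhi_per_step = linear_trend(urban) - linear_trend(rural)

# A negative adjustment in the expected direction: subtract the excess trend.
adjusted = [t - uhi_per_step * i for i, t in enumerate(urban)]

# The adjusted urban trend now matches the rural trend.
print(round(linear_trend(adjusted), 4) == round(linear_trend(rural), 4))
```

The point of the sketch is only the sign convention: if the published adjustment for an urban site goes the other way (steepening the trend instead of flattening it), that is the counter-intuitive result the comment is questioning.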