
Homogenization of temperature series: Part IV

Be sure to see: Part I, Part II, Part III, Part IV, Part V

How much patience do you have left? On and on and on about the fundamentals, and not one word about whether I believe the GHCN Darwin adjustment, as revealed by Eschenbach, is right! OK, one word: no.

There is enough shouting about this around the rest of the ‘net that you don’t need to hear more from me. What is necessary, and why I am spending so much time on this, is a serious examination of the nature of climate change evidence, particularly with regard to temperature reconstructions and homogenizations. So let’s take our time.

Scenario 3: continued

We last learned that if B and A overlap for a period of time, we can model A’s values as a function of B’s. More importantly, we learned the severe limitations and high uncertainty of this approach. If you haven’t read Part III, do so now.

If B and A do not overlap, but we have other stations C, D, E, etc., that do, even if these are far removed from A, we can use them to model A’s values. These stations will be more or less predictive depending on how correlated they are with A (I’m using the word correlated in its plain English sense).

But even if we have dozens of other stations with which to model A, the resulting predictions of A’s missing values must still come with healthy, predictive error bounds. These bounds must, on pain of ignominy, be carried forward in any application that uses A’s values. “Any”, of course, includes estimates of global mean temperature (GMT) or of trends at A (trends, we learned last time, are another name for assumed-to-be-true statistical models).
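To make that concrete, here is a minimal sketch (made-up numbers and plain regression; an illustration of the principle, not a reconstruction of GHCN’s actual procedure) of predicting A from overlapping neighbors and keeping the predictive intervals that ought to accompany every infilled value:

```python
# Illustration only: infill station A from neighbors B and C using ordinary
# regression, and keep the *predictive* interval for every infilled value.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

n = 120                                                # months of overlap
B = 25 + 3 * np.sin(np.arange(n) / 6) + rng.normal(0, 1.0, n)
C = B + rng.normal(0, 1.5, n)                          # a noisier neighbor
A = 2 + 0.8 * B + 0.1 * C + rng.normal(0, 0.7, n)      # A during the overlap

fit = sm.OLS(A, sm.add_constant(np.column_stack([B, C]))).fit()

# Suppose A goes missing for a year while B and C keep reporting.
B_new = 25 + 3 * np.sin(np.arange(n, n + 12) / 6) + rng.normal(0, 1.0, 12)
C_new = B_new + rng.normal(0, 1.5, 12)
X_new = sm.add_constant(np.column_stack([B_new, C_new]), has_constant="add")

pred = fit.get_prediction(X_new).summary_frame(alpha=0.05)
# obs_ci_lower / obs_ci_upper are the prediction intervals (for new
# observations, not just the mean): these are the bounds that must follow
# the infilled values into any GMT or trend calculation that uses them.
print(pred[["mean", "obs_ci_lower", "obs_ci_upper"]].round(2))
```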

So far as I can tell (with the usual caveat), nobody does this: nobody, that is, carries the error bounds forward. It’s true that the older, classical statistical methods used by Mann et al. do not make carrying the error forward simple, but when we’re talking about billions of dollars, maybe trillions, and the disruption of lives the world over, it’s a good idea not to opt for simplicity when better methods are available.

Need I say what the result of the simplistic approach is?

Yes, I do. Too much certainty!

An incidental: For a while, some meteorologists/climatologists searched the world for teleconnections. They would pick an A and then search B, C, D, …, for a station with the highest correlation to A. A station in Peoria might have a high correlation with one in Tibet, for example. These statistical tea leaves were much peered over. The results were not entirely useless—some planetary-scale features will show up, well, all over the planet—but it was too easy to find something that wasn’t there.
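Here is a toy demonstration (pure noise, no real stations) of why that hunt is treacherous: the best correlation found among many candidates looks impressive even when, by construction, there is nothing to find.

```python
# Illustration only: hunting many candidate series for the one that best
# "matches" A manufactures an apparently strong teleconnection out of noise.
import numpy as np

rng = np.random.default_rng(42)

n_years = 30          # a typical overlap length
n_candidates = 500    # stations searched for the best match to A

A = rng.normal(size=n_years)
candidates = rng.normal(size=(n_candidates, n_years))  # independent of A

corrs = np.array([np.corrcoef(A, c)[0, 1] for c in candidates])
print(f"best |correlation| found: {np.abs(corrs).max():.2f}")
# Typically around 0.5 or better -- seemingly "significant", yet there is
# nothing there. The search itself creates the signal.
```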

Scenario 4: missing values, measurement error, and changes in instrumentation

Occasionally, values at A will go missing. Thermometers break, people who record temperatures go on vacation, accidents happen. These missing values can be guessed at in exactly the same way as outlined in Scenario 3. Which is to say, they are modeled. And with models comes uncertainty, etc., etc. Enough of that.

Sometimes instruments do not pop off all at once, but degrade slowly. They work fine for a while but become miscalibrated in some manner. That is, at some locations the temperatures (and other meteorological variables) are measured with error. If we catch this error, we can quantify it, which means we can apply a model to the observed values to “correct” them.

But did you catch the word model? That’s right: more uncertainty, more error bounds, which must always, etc., etc., etc.
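A back-of-the-envelope sketch, with assumed numbers chosen only for illustration, of what a calibration “correction” does to the uncertainty:

```python
# Illustration only: "correcting" a miscalibrated series with an estimated
# drift does not restore the original certainty; the uncertainty of the
# drift estimate adds to the measurement uncertainty.
import numpy as np

sigma_obs = 0.2      # assumed instrument noise, deg C
drift_hat = 0.15     # estimated calibration bias, deg C (itself from a model)
sigma_drift = 0.10   # standard error of that estimate, deg C

observed = np.array([25.3, 25.1, 25.6, 25.4])
corrected = observed - drift_hat

# With roughly independent errors, the corrected values carry
# sqrt(sigma_obs^2 + sigma_drift^2), not sigma_obs alone.
sigma_corrected = np.hypot(sigma_obs, sigma_drift)
print(f"before correction: +/- {sigma_obs:.2f} C")
print(f"after correction:  +/- {sigma_corrected:.2f} C  (wider, not narrower)")
```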

What’s worse is that we suspect there are many times we do not catch the measurement error, and we glibly use the observed values as if they were 100% accurate. Like a cook with the flu using day-old fish, we can’t smell the rank odor, but hope the sauce will save us. The sauces here are the downstream uses, like GMT or trend estimates, that rely on the mistaken observations.

(Strained metaphor, anybody? Leave me alone. You get the idea.)

A fishy smell

Now, miscalibration and measurement error are certainly less common the more recent the observations. What is bizarre is that, in the revelations so far, the “corrections” and “homogenizations” are more strongly applied to the most recent values, id est, those values in which we have the most confidence! The older, historical observations, about which we know a hell of a lot less, are hardly touched, or not adjusted at all.

Why is that?

But, wait! Don’t answer yet! Because you also get this fine empirical fact, absolutely free: the instruments used in the days of yore were many times poorer than their modern-day equivalents: they were less accurate, had slower response times, etc. Which means, of course, that they are less trustworthy. Yet, it appears, these are the most trusted in the homogenizations.

So now answer our question: why are the modern values adjusted (upwards!) more than the historical ones?

The grand finale.

If you answered “It’s the urbanization, stupid!”, then you have admitted that you did not read, or did not understand, Part I.

As others have been saying, there is evidence that some people have been diddling with the numbers, cajoling them so that they conform to certain pre-conceived views.

Maybe this is not so, and it is instead true that everybody was scrupulously honest. At the very least, then, a certain CRU has some fast talking to do.

But even if they manage to give a proper account of themselves, they must concede that there are alternate explanations for the data, such as those provided in this guide. And while they might downplay the concerns outlined here, they must admit that the uncertainties are greater than what has so far been publicly stated.

Which is all we skeptics ever wanted.

Be sure to see: Part I, Part II, Part III, Part IV, Part V

Update Due to popular demand, I will try to post a Part V, a (partial) example of what I have been yammering about. I would have done one before, but I just didn’t have the time. If only I had one of those lovely remunerative grants GHCN/CRU/GISS receive… One point on which the IPCC crowd beats us skeptics is compensation (and then they wonder why we don’t have as much to say officially).

Update It’s finished.


19 replies

  1. Oh dear, I think I put my replies in the wrong place. Sorry about that; put it down to senile dementia, or possibly too much of this fine old Rioja.

    Kindest Regards

  2. I have been lurking for a while, enjoying your thinking and style. Vis-à-vis “too much certainty”: I have been bothered by the tree ring stuff since Mann’s original paper. The claim is to 0.1 degree C precision, which is about 1 part in 3000. This also implies that the measurement of tree rings is at least this precise. Tree rings are what…~1mm in size? That means a measurement precision of 0.3 microns, which is quite absurd.

  3. For another interesting perspective, you might want to look at how the rules of evidence applied in federal courts would apply if AGW were the subject of a court action. To let a jury see a computer simulation, all parties have to have access to the raw data and the method by which that data is processed by the computer to produce the simulation. If the data is adjusted in a way that creates doubts as to its accuracy, or if the program is anything less than objective and reliable, then the simulation is toast… Take a look at this summary:
    http://www.animators.com/aal/pressarticles/legaltimes2.html

    So, if the original data is either withheld or lost, if the data used has been “value added” in ways that are less than unimpeachable, if the programming incorporates assumptions drawn from thin air (or air presumed thick with sulfates and particulates), then no attorney worth his fee would let the model see the inside of a courtroom…

    And keep in mind, the party trying to make its case just has to get the court to believe that its story is “more likely than not,” and not “beyond reasonable doubt.” Personally, I would suggest that if the likely cost of a judgment would be calculated in the multiple trillions of dollars, the party advancing the theory should be held to the highest possible standard of proof: their evidence should be so firmly and clearly rooted in undeniable data and observations, and their analysis so transparent and cogent, that doubts will dissolve in the face of it.

    That it ain’t happened yet kinda tells ya somethin…

  4. I expressed this in another way recently: once data is “adjusted” it is no longer data, it is a hypothesis. As such it is subject to much more uncertainty than the original data.

    The evidential trails from historic temperature measurements to global average temperatures are so murky that little confidence can be assigned to the results. Acceptance of them amounts to belief rather than science.

  5. All,

    Am a bit busy at the moment and am way behind in answering comments. Apologies. Will do so tomorrow.

  6. Matt:
    When you have time, I think a Part V is in order to clarify how all these sources of error get aggregated. Are they simply additive, or are there some bounds that exist, e.g., limited by the largest likely error? Also, if you put all this into a mathematical expression, what would it look like? Part of the reason for this is that the law of large numbers gets floated about to justify a lot of the aggregation and the precision in the numbers – see for example http://rankexploits.com/musings/2009/false-precision-in-earths-observed-surface-temperatures/
    The comments on this thread may be useful in identifying the different misapprehensions and more profound disagreements.

  7. Mr Briggs, I felt you muffed the finale (though I can sympathize with long posts on esoteric matters).

    You never stated with clarity why the modern temps are raised.

    I also would have liked it if you had produced a statistical model, and shown how the error bars integrate and propagate. It could all be notional. Say a modern sensor has a measurement accuracy of +/- 0.02°C and is linked to UTC within 1 sec, zero response lag. And let’s assume an 1880 sensor has a general accuracy of 0.5°C, but the time of measurement is off by 120 sec from UTC (there was none then of course) and again zero response lag (assume it is in the time reference error).

    How would the error bars on the parametric models grow as you integrated stations from a regional daily into a global annual index between now and 1880?

    Is this not (by comparing error bars between today and 1880) a good guesstimate of the error in any trend over this period (and I am assuming this will go into the many degrees C when all is said and done)?

    Nothing like a reasonable example to deliver the final blow.

    Cheers, AJStrata

  8. This is very helpful. Now I’m curious what the temperature trend should be with properly stated bounds. If that’s possible.

    My bigger question relates to the method of homogenization GHCN is using. From what I understand, it attempts to make up for a lack of information about station changes that would create a discontinuity by modeling relationships with other stations and using the results to assign adjustments. There’s more than statistics to the method, whatever it is, too.

    My question is, do we know enough about the method to determine whether it’s creating a bias? Looking at the Darwin Station 0 data, it’s clear that the adjustments made to remove a discontinuity seem to result in another discontinuity of the same size, but below the trend rather than above it. How the heck did that happen?

    And, some sources claim that GHCN’s data shows more adjustment for the tropics than other areas. Is it possible that the statistical analysis done against US data doesn’t work for a station like Darwin on the coast in the tropics where going 500 kilometers inland finds a semi-arid region?

  9. Some of you may not know, but AJStrata has a somewhat parallel exposition at http://strata-sphere.com/blog/index.php/archives/11824 which I read after my previous post; it sounds like there is a standard language for dealing with exactly the problem of all these multiple sources of error and uncertainty.
    AJ, your humility in not linking to your site is welcome but misplaced.
    Obviously, I agree with your idea of building an error model – perhaps you and Matt can propose to DOE, NSF, etc., to do exactly this, since I believe that it is far from trivial to construct such a model with reasonable parameters. DOE et al. should have some extra money lying around now that Jones et al. will have to hand some of it back!!

  10. Strick, an even more interesting observation made by Geoff Sherrington; rural Australian stations showing warming inland, but flat at the coastline [raw data]. Why?

    Thanks Matt/Briggs/William [delete whichever is inapplicable]. Nice exposition until the end. I’m with AJStrata that we need a better ending to this fine story 🙂

  11. Thanks for the link to AJStrata. It answers my most basic question: were these adjustments the right thing to do in the first place? No. I’m reminded of a comment Mark Steyn made recently: there’s no recipe for combining dog feces with ice cream that doesn’t make the final product taste like dog feces. Some things aren’t improved by mixing them.

    I’m also grateful my initial instinct is confirmed. When the data’s not what you expected, you don’t fix the data.

  12. All: see update. I will try (soon) to post a partial example, using made-up data to make it go faster for me. I’m a little stuck for time.

  13. “Don Guillermo” should not be rushed. Your faithful lackeys will wait. Put your feet up and grab some “Z’s”. Pork out on your favorite foods. Play and sight-see awhile. And only when you’re rested and relaxed should you postulate and post.

  14. Thanks Briggs! Commenters, I am in the same boat as Briggs, I am sitting on a system requirements review right now and have no real time to make these people do the job right (I should be paid for the training).

    Sadly, as we head towards X-mas the LJStrata (the chief) has less patience for me dawdling away on the computer!

    But will be back in force right after the kids open their presents!

    Happy Holidays to all.

  15. Don William wrote: “PJ, It’s “Don William” nowadays. “Emperor” to come.”

    Presumably that was supposed to be “PG”. Queen Elizabeth’s advice to John Howard when he was Prime Miniature of Australia and aspiring to be a king (ruling a kingdom), or emperor (ruling an empire) was to stick to ruling a country 😉
