Homogenization of temperature series: Part III

Be sure to see: Part I, Part II, Part III, Part IV, Part V

We still have work to do. This is not simple stuff, and if we opt for the easy way out, we are guaranteed to make a mistake. Stick with me.

Scenario 3: different spots, fixed flora and fauna

We started with a fixed spot, which we’ll keep as an idealization. Let’s call our spot A: it sits at a precise latitude and longitude and never changes.

Suppose we have temperature measurements at B, a nearby location, but these stop at some time in the past. Those at A began about the time those at B stopped: a little before, or the exact time B stopped, or a little after. We’ll deal with all three of these situations, and with the word nearby.

But first a point of logic, oft forgotten: B is not A. That is, by definition, B is at a different location than A. The temperatures at B might mimic closely those at A; but still, B is not A. Usually, of course, temperatures at two different spots are different. The closer B is to A, usually, the more correlated those temperatures are: and by that, I mean, the more they move in tandem.

Very well. Suppose that we are interested in composing a record for A since the beginning of the series at B. Is it necessary to do this?

No.

I’m sorry to be obvious once more, but we do not have a complete record at A, nor at B. This is tough luck. We can—and should—just examine the series at B and the series at A and make whatever decisions we need based on those. After all, we know the values of those series (assuming the data is measured without error: more on that later). We can tell if they went up or down or whatever.

But what if we insist on guessing the missing values of A (or B)? Why insist? Well, the old desire of quantifying a trend for an arbitrary length of time: arbitrary because we have to pick, ad hoc, a starting date. Additional uncertainty is attached to this decision: and we all know how easy it is to cook numbers by picking a favorable starting point.

However, it can be done, but there are three facts which must be remembered: (1) there is uncertainty in picking an arbitrary starting point; (2) any method will result in attaching uncertainty bounds to the missing values, and these must remain attached to the values; and (3) the resulting trend estimate, itself the output from a model which takes as input those missing values, will have uncertainty bounds—these will necessarily be larger than if there were no missing data at A. Both uncertainty bounds must be of the predictive and not parametric kind, as we discussed before.

Again, near as I can tell, carrying the uncertainty forward was not done in any of the major series. What that means is described in our old refrain: everybody is too certain of themselves.

How to guess A’s missing values? The easiest thing is to substitute B’s values for A, a tempting procedure if B is close to A. Because B is not A, we cannot do this without carrying forward the uncertainty that accompanies these substitutions. That means invoking a probability (statistical) model.

If B and A overlap for a period, we can model A’s values as a function of B’s. We can then use the values of B to guess the missing values of A. You’re tired of me saying this, but if this is done, we must carry forward the predictive uncertainty of the guesses into the different model that will be used to assess if there is a trend in A.
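To make that concrete, here is a minimal sketch of the overlap-regression idea, using the statsmodels library. The station values, and the choice of a simple linear model, are invented for illustration; this is not anyone’s actual homogenization method. The point is the difference between the two kinds of intervals the model hands back.

```python
import numpy as np
import statsmodels.api as sm

# Overlap period: years when both stations reported (numbers invented).
b_overlap = np.array([14.2, 14.8, 13.9, 15.1, 14.5, 14.0, 15.3, 14.7])
a_overlap = np.array([13.9, 14.5, 13.8, 14.9, 14.1, 13.7, 15.0, 14.4])

# Model A's values as a linear function of B's over the overlap.
fit = sm.OLS(a_overlap, sm.add_constant(b_overlap)).fit()

# B's values during the years A is missing: use them to guess A.
b_only = np.array([14.1, 14.9, 13.6])
pred = fit.get_prediction(sm.add_constant(b_only))
frame = pred.summary_frame(alpha=0.05)

# obs_ci_* are the predictive intervals for new observations of A;
# mean_ci_* are the narrower parametric intervals for the regression line.
# The guesses must travel with the obs_ci_* bounds, not the mean_ci_* ones.
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])
```

The predictive intervals are always wider than the parametric ones, which is exactly the “everybody is too certain” point: reporting guessed values with parametric bounds, or with no bounds, understates the uncertainty.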

An Objection

“But hold on a minute, Briggs! Aren’t you always telling us that we don’t need to smooth time series, and isn’t fitting a trend model to A just another form of smoothing? What are you trying to pull!?”

Amen, brother skeptic. A model to assess trend—all those straight-line regressions you see on temperature plots—is smoothing a time series, a procedure that we have learned is forbidden.

“Not always forbidden. You said that if we wanted to use the trend model to forecast, we could do that.”

And so we can: about which, more in a second.

There is no point in asking if the temperature at A has increased (since some arbitrary date). We can just look at the data and tell with certainty whether or not now is hotter than then (again, barring measurement error and assuming all the values of A are actual and not guesses).

“Hold on. What if I want to know what the size of the trend was? How many degrees per century, or whatever.”

It’s the same. Look at the temperature now, subtract the temperature then, and divide by the number of years between to get the year-by-year average increase.
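In code, that really is one line of arithmetic (the endpoint values here are invented):

```python
# The observed change needs no model (endpoint values invented).
temp_then, year_then = 13.8, 1910   # earliest actual reading at A
temp_now,  year_now  = 14.6, 2010   # latest actual reading at A

per_year = (temp_now - temp_then) / (year_now - year_then)
print(f"{per_year:.3f} deg/yr, i.e. {100 * per_year:.1f} deg/century")
```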

“What about the uncertainty of that increase?”

There is no uncertainty, unless you have used made-up numbers for A.

“Huh?”

Look. The point is that we have a temperature series in front of us. Something caused those values. There might have existed some forcing which added a constant amount of heat per year, plus or minus a little. Or there might have existed an infinite number of other forcing mechanisms, some of which were not always present, or were only present in varying degrees of strength. We just don’t know.

The straight-line estimate implies that the constant forcing is true, the one and only certain explanation of what caused the temperature to take the values it did. We can—even with guessed values of A, as long as those guessed values have their attached uncertainties—quantify the uncertainty in the linear trend assuming it is true.
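One way (not the only way) to carry the guesses’ uncertainty into the trend is brute-force simulation: redraw the guessed values from their predictive distributions and refit the line each time. A sketch with a fabricated series; the predictive standard deviation is a placeholder, and this shows only the extra spread the guesses contribute, on top of the usual regression uncertainty:

```python
import numpy as np

rng = np.random.default_rng(42)

# A fabricated 61-year record at A; pretend the first 15 values were
# guessed from B, each carrying a predictive standard deviation.
years = np.arange(1950, 2011)
temps = 14.0 + 0.01 * (years - 1950) + rng.normal(0, 0.3, years.size)
guessed = np.zeros(years.size, dtype=bool)
guessed[:15] = True
pred_sd = 0.5  # placeholder predictive sd attached to the guesses

# Redraw the guessed values from their predictive distributions and
# refit the straight line each time; the slope estimates spread out.
slopes = []
for _ in range(2000):
    sample = temps.copy()
    sample[guessed] += rng.normal(0, pred_sd, guessed.sum())
    slopes.append(np.polyfit(years, sample, 1)[0])

lo, hi = np.percentile(slopes, [2.5, 97.5])
print(f"slope spread from the guesses alone: [{lo:.4f}, {hi:.4f}] deg/yr")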

“But how do we know the linear trend is true?”

We don’t. The only way we can gather evidence for that view is to skillfully forecast new values of A; values that were in no way used to assess the model.
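A textbook way to check for skill, again with fabricated numbers: fit the trend on the early part of the record only, forecast the held-out years, and see whether the trend beats a naive climatology forecast.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
temps = 14.0 + 0.005 * (years - 1900) + rng.normal(0, 0.4, years.size)

# Fit the straight line on 1900-1980 only; 1981-2000 is held out.
train = years <= 1980
slope, intercept = np.polyfit(years[train], temps[train], 1)

forecast = slope * years[~train] + intercept
naive = np.full((~train).sum(), temps[train].mean())  # climatology benchmark

mse_trend = np.mean((forecast - temps[~train]) ** 2)
mse_naive = np.mean((naive - temps[~train]) ** 2)
# Positive skill means the trend model beat the naive forecast.
print(f"skill vs climatology: {1 - mse_trend / mse_naive:+.2f}")
```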

“In other words, even if everybody played by the book and carried with them the predictive uncertainty bounds as you suggested, they are still assuming the linear trend model is true. And there is more uncertainty in guessing that it is true. Is that right?”

You bet. And since we don’t know the linear model is true, it means—once more!—that too many people are too certain of too many things.

Still to come

Wrap up Scenario 3, Teleconnections, Scenario 4 on different instruments and measurement error, and yet more on why people are too sure of themselves.



8 replies

  1. Reading Mr. Briggs’ stuff always reminds me of the following caution from 140 years ago.

    “Mathematics may be compared to a mill of exquisite workmanship, which grinds you stuff of any degree of fineness; but, nevertheless, what you get out depends on what you put in; and as the grandest mill in the world will not extract wheat flour from peascods, so pages of formulæ will not get a definite result out of loose data.”

    ~Thomas Henry Huxley, 1825-1895, Quarterly Journal of the Geological Society of London 25: 38, 1869.

  2. Interesting point about trends. I agree entirely. Being a signal processor, I would regard the temperature “signal” as a band-limited Fourier series, in the first instance. This has physical resonance as it is the solution of a linear system with various forcing functions. Clearly, the longest period we can resolve in that series is determined by the record length, and if there are harmonics with periods longer than the record length, we artificially remove these as a trend to avoid discontinuities at the ends of the record. Since we are interested in the behaviour of the record at its end, the normal windowing procedure cannot be applied. If one looks at the NOAA ice series, it is clear that there are periodicities in the data which are longer than the instrumental temperature record. In this case, fitting a trend is meaningless as a model for the behaviour of the data and certainly does not supply prediction.
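
    A minimal sketch of this point with invented numbers: a harmonic whose period is several times the record length is absorbed almost perfectly by a fitted straight-line “trend.”

    ```python
    import numpy as np

    # A pure harmonic with a 600-year period, seen through a 100-year window,
    # is almost indistinguishable from a straight line over that window.
    years = np.arange(1900, 2000)
    signal = np.sin(2 * np.pi * (years - 1900) / 600.0)

    slope, intercept = np.polyfit(years, signal, 1)
    residual = np.max(np.abs(signal - (slope * years + intercept)))
    print(f"fitted 'trend': {slope:.5f}/yr; max deviation: {residual:.3f}")
    ```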

  3. I am continually mystified why data which is (a) fairly variable and (b) measured over a long term, such as temperature graphs, is so often overlaid with a single linear trend line.

    What does the line mean? Even if one assumes that the line has an underlying statistical basis (as opposed to one with a slightly different slope), it has no predictive value for future data points – since most of the existing data points do NOT lie on the trend line.

    The trend line also says nothing about real world causes for the slope of the line. Is one real world cause affecting the slope of the line, or four, or seventeen, or two hundred and twelve?

    The only reason I can figure why the linear trend lines appear is that they provide simplification for people reading about complex issues. Straight lines are simple and easy to understand and to describe in plain English to someone else. This is a substantial advantage when having a conversation or discussion or trying to make a decision on what to do next.

    The problem is that if the information graphed is indeed complex in nature, then simplifying it to a linear trend is deceitful. Complex issues need to be discussed with their complexity intact, even if it makes for more potentially confusing discussions and makes it harder and more time consuming to make decisions.

    Also, if uncertainty ranges are added to a graph, which linear trend line is the “best” ?

  4. Having nothing much to do this Sunday morning, let’s compare graphs:

    First off is the article from the blog “Watts Up With That?”, at http://wattsupwiththat.com/2009/12/08/the-smoking-gun-at-darwin-zero/

    You can read the article later, but check out Figure 7 for now. Look at the data series in blue. The trend line indicates a decrease in temperature over time. The data points actually show that the temperature more or less stays the same from 1880 to about 1930, then the data points drop downward from about 1930 to 1940, then the temperature more or less stays the same from 1940 to about 2005.

    Which is the better interpretation: (a) average temperature drops over the 100 year period as per linear trend or (b) something screwy happened in the data collection method between 1930 and 1940 as per the data points.

    Next one is from The Economist blog “Democracy in America”, at http://www.economist.com/blogs/democracyinamerica/2009/12/trust_scientists

    You can read this article later too. The graphs aren’t numbered in this article, but the ones you should be looking at are the 3rd, 4th and 5th on the page – labeled Katherine Aer, Wyndham Port, and Kalumburu respectively.

    What do they show? Well, when I scanned them the first time, the middle one (the Wyndham Port graph) showed what looked like an upward trend, which then influenced me to see upward trends in the other two.

    Except that the Wyndham Port graph is on a different scale to the other two. It starts at about 1900, while the Katherine Aer graph starts at about 1940 and the Kalumburu graph starts at about 1945. The graphs also end at different years, which really doesn’t help to compare information.

    What happens if you look only at the period 1945 to about 1970?

    Well, the Katherine Aer temperatures look to vary, but don’t seem to go particularly upwards or downwards.

    Wyndham Port temperatures also don’t seem to trend in either direction, though there is a noticeable increase from 1950 to about 1953.

    The Kalumburu data looks like it is trending upward somewhat, but it too has the sharp 1950-1953 increase. From 1953 to the end there seems to be a slight increase in temperatures. But this graph is complicated – I’m not sure if there is any pattern or not.

    Actually all three graphs show a sharp increase from 1950 to 1953. Is this an actual temperature change, or a change in data collection method?

    This is the state of data being presented to us laymen from people who are supposedly making sense of the data before telling us about it.

    Mr. Willis Eschenbach at Watts Up With That? needs to stay away from rulers.

    Whoever it was that wrote the article in The Economist Blog “Democracy in America” needs to understand what he or she is doing before posting graphs. The comment is that “They [the three graphs] all show basically the same rising trend.” That’s not clear from the graphs, and saying so doesn’t make it true.

  5. Matt,

    Maybe I’m missing something. If you are looking for a trend, it’s rarely as simple as taking the reading now, subtracting the reading then, and dividing by the interval length. All you get is an answer for exactly that interval, which is often not the real question. The real question is more often: is it generally hotter or colder now than before?

    Harder to answer that without making some assumptions. A somewhat robust regression of say splitting the interval into two or three partitions and fitting a line to the of each partition might be more useful.

  6. Hmmm…
    I see some of my last comment was eliminated by the curious blog software here.

    Last paragraph should have read:

    Harder to answer that without making some assumptions. A somewhat robust regression of say splitting the interval into two or three partitions and fitting a line to the (median x, median y) of each partition might be more useful.
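
    A rough sketch of that partition-median idea, with two partitions and invented data (it resembles Tukey’s resistant line):

    ```python
    import numpy as np

    def median_partition_line(x, y):
        """Fit a line through the (median x, median y) of the lower and upper
        halves of the data: a crude, resistant alternative to least squares."""
        order = np.argsort(x)
        x, y = np.asarray(x)[order], np.asarray(y)[order]
        half = x.size // 2
        x1, y1 = np.median(x[:half]), np.median(y[:half])
        x2, y2 = np.median(x[half:]), np.median(y[half:])
        slope = (y2 - y1) / (x2 - x1)
        return slope, y1 - slope * x1

    # Invented series to exercise the fit.
    rng = np.random.default_rng(1)
    years = np.arange(1950, 2001)
    temps = 14.0 + 0.01 * (years - 1950) + rng.normal(0, 0.3, years.size)
    slope, intercept = median_partition_line(years, temps)
    print(f"resistant slope: {slope:.4f} deg/yr")
    ```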

  7. Mr. Briggs have you seen this?

    http://strata-sphere.com/blog/index.php/archives/11824

    It doesn’t tell us anything we did not know, but puts it in a refreshingly different way. I haven’t myself worked on satellites since the 1960s, when it was a bit crash bang, sometimes literally; they are obviously a bit more sophisticated today.

    Worth a look I would say. Hope the link works.

    Kindest Regards

  8. Well Mr Briggs, having read all three parts I’m glad to say that I created my world map of old temperature datasets from 1660 strictly on the basis that a thermometer is measuring its own very narrowly drawn micro climate.

    http://climatereason.com/LittleIceAgeThermometers/

    If the station moves-to say an airport- it may still be called the same location in the records, but it sure as heck isn’t measuring the same micro climate.

    To try and glue together hundreds of micro climates which have been wandering round the general locality, average them out, and call the result a ‘global’ temperature is the height of folly.

    Tonyb
