# How to look at the RSS satellite-derived temperature data

It’s already well known that Remote Sensing Systems has released the January figures for its satellite-derived temperature data: the finding is that it’s colder this January than it has been for some time. I wanted to look more carefully at this data, mostly to show how to avoid some common pitfalls when analyzing time series data, but also to show you that temperatures are not linearly increasing. (Readers Steve Hempell and Joe Daleo helped me get the data.)

First, the global average. The RSS satellite actually divides the earth into swaths, or transects, which are bands across the earth whose widths vary as a function of the instrument that remotely senses the temperature. The temperature measured at any transect is, of course, subject to many kinds of errors, which must be corrected for. Although this is not the main point of this article, it is important to keep in mind that the number you see released by RSS is *only an estimate* of the true temperature. It’s a good one, but it does have error (usually depending on the location of the transect), which most of us never see and few of us actually use. That error, however, is extremely important to take into account when making statements like “The RSS data shows there’s a 90% chance it’s getting warmer.” Well, it might be 90% before taking into account the temperature error: afterwards, the probability might go down to, say, 75% (this is just an illustration; but no matter what, the original probability estimate will always go down).
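
To make that concrete, here is a toy calculation in Python. All the numbers are invented for illustration (they are not the RSS error estimates); the point is only that inflating the uncertainty always pulls the probability back toward 50%:

```python
import math

def p_warming(trend, sd):
    """P(true trend > 0) under a normal model for the trend estimate."""
    return 0.5 * (1.0 + math.erf(trend / (sd * math.sqrt(2.0))))

# Hypothetical numbers, not the RSS values: a trend whose sampling
# uncertainty alone gives roughly a 90% chance of warming.
trend = 0.10          # degrees per decade (illustrative)
sd_sampling = 0.078   # chosen so trend/sd is about 1.28, i.e. P is about 0.90

p_before = p_warming(trend, sd_sampling)

# Fold in an (assumed) measurement-error sd on top of the sampling sd;
# the total uncertainty grows, so the probability must shrink.
sd_measurement = 0.06
sd_total = math.sqrt(sd_sampling**2 + sd_measurement**2)
p_after = p_warming(trend, sd_total)
```

With these made-up values the probability drops from about 0.90 to roughly 0.85; the exact size of the drop depends on how large the measurement error is relative to the sampling error.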

Most people show the global average data, which is interesting, but taking an average is, of course, assuming a certain kind of statistical model is valid: one that says averaging transects gives an unbiased, low-variance estimate of the global temperature. Does that model hold? Maybe; I actually don’t know, but I have my suspicions it does not, which I’ll outline below.

So let’s look at the transects themselves. The ones I used are not perfect, but they are reasonable. They are: “Antarctica” (-60 to -70 degrees of latitude; obviously not the whole south pole), “Southern Hemisphere Extratropics” (-70 to -20 degrees; there is a 10 degree overlap with “Antarctica”), “Tropics” (-20 to 20 degrees), “Northern Hemisphere Extratropics” (20 to 82.5 degrees, a slightly wider transect than in the SH), and “Arctic” (60 to 82.5 degrees; there is a 22.5 degree overlap with NH Extratropics). Ideally, there would be no overlap between transects, global coverage would have been complete, and I would have preferred more instead of fewer transects, which would have allowed us to see greater detail. But we’ll work with what we have.

Here is the thumbnail of the transects. Click it (preferably open it in a new window so you can follow the discussion) to open the full-sized version.

All the transects are in one place, making it easy to do some comparisons. The scale for each is identical; each has only been shifted up or down so that they all fit on one picture. This is not a very sexy or colorful graph, but it *is* useful. First, each transect is shown with respect to its mean (the small, dashed line). Vertical lines have been placed at the *maximum* temperature for each. The peak for NH-Extratropics and Tropics was back in 1998 (a strong El Nino year). For the Arctic, the peak was in 1995. For the Antarctic, it was 1990. Finally, for the SH-Extratropics it was 1981.

You also often see what I have drawn on the plot: a simple regression line (dash-dotted line), whose intent is usually to show a trend. Here, it appears that there were upward trends from the Tropics to the north pole, no trend to speak of for the SH-Extratropics, and a downward trend for the Antarctic (recall there is overlap between the last two transects). Supposing these trends are real, they have to be explained. The obvious story is to say man-made increases due to CO2, etc. But it might also be that the northern hemisphere is measured differently (more coverage), or that there is obviously more land mass in the NH, or—don’t pooh-pooh this—the change of the tilt of the earth: the north pole tipped closer to the sun and so was warmer, the south pole tipped farther and so was cooler. Well, it’s true that the earth’s tilt has changed, and will always do so no matter what political party holds office, but effects due to its change are thought to be trivial at this time scale. Of course, there are other possibilities such as natural variation (which is sort of a cop out; what does “natural” mean anyway?).
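
For concreteness, here is how such a trend line is fit: ordinary least squares against a time index. The series below is made up (a small drift plus alternating noise), not an RSS transect:

```python
def ols_trend(y):
    """Least-squares slope and intercept of y against time index 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2.0
    ybar = sum(y) / n
    sxy = sum((i - xbar) * (v - ybar) for i, v in enumerate(y))
    sxx = sum((i - xbar) ** 2 for i in range(n))
    slope = sxy / sxx
    return slope, ybar - slope * xbar

# Invented anomaly series: drift of 0.01 per step plus alternating noise.
series = [0.01 * i + ((-1) ** i) * 0.1 for i in range(30)]
slope, intercept = ols_trend(series)   # slope comes out near 0.01
```

Note that the fit happily reports a slope no matter what the series looks like; whether that slope means anything is exactly the question the residual diagnostics below are meant to answer.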

To the eye, for example, the trend-regression for the Arctic looks good: there is an increase. Some people showing this data might calculate a classical test of significance (don’t get me started on these), but this is where most analysis usually stops. It shouldn’t. We need to ask what we **always** need to ask when we fit a statistical model: how well does it fit? The first thing we can do is to collect the *residuals*, which are the distances between the model’s predictions and the actual data. What we’d like to see is that there is no “signal” or structure in these residuals, meaning that the model did its job of finding all the signal that there was. The only thing left after the model should be noise. A cursory glance at the classical model diagnostics would even show you, in this case, that the model is doing OK. But let’s do more. Below is a thumbnail picture of two diagnostics that should always be examined for time series models (click for larger).

The bottom plot is a time-series plot of the residuals (the observed temperatures minus the regression line’s predictions). Something called a non-parametric (loess) smoothing line is over-plotted. It shows that there is some kind of cyclicity, or semi-periodic signal, left in the residuals. This is backed up by examining the top plot, which shows the autocorrelation function. Each time-series residual is correlated with the one before it (lag 1), with the one two before it (lag 2), and so on. The lag-one correlation is almost 40%, again meaning that the residuals are certainly correlated, and that some signal is left in the residuals that the model didn’t capture. (The “lag 0” correlation is always 1; the horizontal dashed lines indicate classical 95% significance bounds; the correlations have to reach above these lines to be significant.)
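
The lag-one check is simple to reproduce. Below is an illustrative Python sketch (not the RSS residuals; the AR(1)-style series with coefficient 0.5 is made up) showing a lag-one autocorrelation well above the classical 95% white-noise band:

```python
import math
import random

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    xbar = sum(x) / n
    var = sum((v - xbar) ** 2 for v in x)
    cov = sum((x[i] - xbar) * (x[i + lag] - xbar) for i in range(n - lag))
    return cov / var

# Made-up residuals with structure left in them: each value carries
# about half of the previous one (an AR(1) with coefficient 0.5).
random.seed(1)
resid = [0.0]
for _ in range(499):
    resid.append(0.5 * resid[-1] + random.gauss(0, 1))

r1 = autocorr(resid, 1)              # comes out well above zero
band = 1.96 / math.sqrt(len(resid))  # classical 95% band for white noise
```

If `r1` pokes above `band`, as it does here by construction, the model has left signal on the table.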

The gist is that the ordinary regression line is inadequate and we have to search for something better. We might try the non-parametric smoothing line for each series, which would be OK, but it is still difficult to ask whether trends exist in the data. Some kind of smoothing would be good, however, to avoid the visual distraction of the noise. We could, as many do, use a running mean, but I hate them and here is why.

Shown in black is a pseudo-temperature series with noise; the actual temperature is the dashed blue line. Suppose you wanted to get rid of the noise using a “9-year” running mean: the result is the orange line, which you can see does poorly, and shifts the actual peaks and troughs to the right. Well, that is only the start of the troubles, but I won’t go over any more here except to say that this technique is often misused, especially in hurricane research (two weeks ago a paper in *Nature* did just this sort of thing).
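
The peak-shifting is easy to reproduce in a few lines of code. This sketch uses a noise-free sine as the “temperature” so the shift is unmistakable (the 60-step period and 9-point window are arbitrary illustrative choices; the figure in the post used a noisy series):

```python
import math

def trailing_mean(y, k):
    """Running mean over the current value and up to k-1 prior values."""
    out = []
    for i in range(len(y)):
        window = y[max(0, i - k + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

# One clean cycle of a pseudo-temperature, peaking at index 15.
signal = [math.sin(2 * math.pi * i / 60.0) for i in range(60)]

smoothed = trailing_mean(signal, 9)

peak_true = max(range(len(signal)), key=signal.__getitem__)      # 15
peak_smooth = max(range(len(smoothed)), key=smoothed.__getitem__)
# The smoothed peak lands (k-1)/2 = 4 steps to the right of the true one.
```

Even with zero noise, the 9-point trailing mean moves the peak four steps later: the smoother itself distorts the timing of every feature in the series.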

So what do we use? Another thing to try is something called Fourier, or spectral, analysis, which is perfect for periodic data. This would be just the thing if the periodicities in the data were regular, but they do not appear to be. We can take one step higher and use something called *wavelet analysis*, which is like spectral analysis (which I realize I did not explain), but instead of analyzing the time series globally, as Fourier analysis does, it does so locally. This means it tends to under-smooth the data, and even allows some of the noise to “sneak through” in spots. This will be clearer when we look at this picture (again, just a thumbnail: click for larger).

You can see what I mean by some of the original noise “sneaking through”: these are the spikes left over after the smoothing; however, you can also see that the spikes line up with the data, so we are not introducing any noise. The somewhat jagged nature of the “smoothed” series has to do with the technicalities of using wavelets (I’ll have to explain this at a later time; but for technical completeness, I used a Daubechies orthonormal compactly supported wavelet, with soft probability thresholding by level). Anyway, some things that were hidden before are now clearer.
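
To give a concrete feel for the thresholding step, here is a stripped-down sketch in Python rather than R. It is not the analysis above: it uses a single level of the simple Haar wavelet instead of a Daubechies basis, a universal threshold instead of level-by-level probability thresholding, and it assumes the noise standard deviation is known. The signal and noise values are invented for illustration:

```python
import math
import random

def haar_step(x):
    """One level of the orthonormal Haar transform: (approx, detail)."""
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    detail = [s * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Invert haar_step exactly."""
    s = 1.0 / math.sqrt(2.0)
    x = []
    for a, d in zip(approx, detail):
        x.append(s * (a + d))
        x.append(s * (a - d))
    return x

def soft(c, t):
    """Soft thresholding: shrink the coefficient toward zero by t."""
    if c > t:
        return c - t
    if c < -t:
        return c + t
    return 0.0

def mse(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) / len(a)

random.seed(2)
n = 256
clean = [math.sin(2 * math.pi * i / 64.0) for i in range(n)]
noisy = [c + random.gauss(0, 0.3) for c in clean]

approx, detail = haar_step(noisy)

# Universal threshold sigma * sqrt(2 log n), assuming sigma = 0.3 is known.
thr = 0.3 * math.sqrt(2.0 * math.log(n))
denoised = haar_inverse(approx, [soft(d, thr) for d in detail])

err_noisy = mse(noisy, clean)
err_denoised = mse(denoised, clean)
```

Shrinking the detail coefficients kills most of the noise while the smooth part of the signal, which lives in the approximation coefficients, passes through untouched; `err_denoised` comes out well below `err_noisy`.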

It looks like there was an increasing trend for most of the series starting in 1996 to 1998, but ending in late 2004, after which the data begin trending down: for the tropics to north pole, anyway. The signal in the southern hemisphere is weaker, or even non-existent at Antarctica.

This analysis is much stronger than the regression shown earlier; nevertheless, it is still not wonderful. The residuals don’t look much better, and are even worse in some places (e.g. early on in the Tropics), than the regression’s. But wavelet analysis is tricky: there are lots of choices of the so-called wavelet basis (the “Daubechies” thing above) and choices for thresholding. (I used, more or less, the defaults in the R `wavethresh` package.)

But the smoothing is only a first start. We need to model this data all at once, and not transect by transect, taking into account the different relationships between each transect (I’m still working on a multivariate Bayesian hierarchical time-series model: it ain’t easy!). Not surprisingly, these relationships are not constant (shown below for fun). The main point is that modeling data of this type is difficult, and it is far too tempting to make claims that do not hold up upon closer analysis. One thing is certain: the hypothesis that the temperature is linearly increasing everywhere across the globe is just not true.

APPENDIX: Just for fun, here is a scatter-plot matrix of the data (click for larger): You can see that there is little correlation between the two poles, and more, but less than you would have thought, between bordering transects.

How to read this plot: it’s a scatter plot of each variable (transect), with each other. Pick a variable name. In that *row*, that variable is the y-axis. Pick another variable. In that *column*, that variable is the x-axis. This is a convenient way to look at all the data at the same time.

My next questions were going to be about testing trendlines. You sure have answered me in spades!

How significant are these oscillations? You seem to imply that they could signal a change in the earth’s tilt. Did I misinterpret this?

If this is true, could they not be signals of changes in the PDO, volcanoes, solar irradiation, etc.?

No trend from 1979 to 1996/8? Kind of like my funny bar graphs visually indicated?

Regarding your Appendix. It’s going to require a lot of explanation for a layperson like myself. Just looks like a bunch of blobs right now!

Eagerly anticipating your next installment!

Dr. Briggs, your conclusion is exactly what the GISS data also state:

that “global” warming is not global,

it is in fact a seasonal and latitude specific effect

http://data.giss.nasa.gov/gistemp/maps/

take a look at the maps and latitude plots by season, comparing 1880 to 1950 for the baseline period and 1951 to 2007 as the comparison set. The use of 1950 is entirely arbitrary, but could be considered a mid point for industrialization. Using the same baseline years, and comparing to say 1975 to 1980, you will produce maps which show 1975 to 1980 are colder than the baseline. A plot of this nature shows seasonal oscillation of the arctic anomalies and about a .2 degree change over 50 years for about 90 percent of the surface of the earth

for example

http://www.box.net/shared/3o6wsyuscs

If you wish I will post the rest if this is successful

Any idea why the variations are greater in the Arctic and Antarctic?

And regarding the oscillations in my first comment, could they be inherent in the instrumentation?

Thomas,

Thanks for the useful links; more are always appreciated, except your latter link appears to go nowhere.

Steve,

Well, the earth-tilt thing was just a wild guess; truly, I think, like I said, while the earth is always changing its position, 30 years is probably not enough time to notice the differences.

I don’t think there are any “trends”, where by that word I mean “straight line changes”. There are oscillations and changes, and discovering exactly what they are—which I do not claim to have completely done—is very, very difficult.

Your supposition about differing variances might be correct, or partially correct, but it also might be that the higher variance is because of a reduced sample size. That is, the Arctic and Antarctic transects contain a smaller area, hence are averaging over fewer points, and standard statistical arguments show that a smaller number of data points leads to larger variances.
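
That sampling argument can be checked with a quick simulation. This is a sketch with made-up numbers (unit-variance noise; the transect sizes of 10 and 100 points are arbitrary stand-ins), not the actual RSS sampling scheme:

```python
import math
import random

random.seed(3)

def sd_of_mean(n_points, trials=2000):
    """Empirical standard deviation of an average of n_points noisy readings."""
    means = []
    for _ in range(trials):
        means.append(sum(random.gauss(0, 1) for _ in range(n_points)) / n_points)
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)

sd_small = sd_of_mean(10)    # stand-in for a narrow polar transect: few points
sd_large = sd_of_mean(100)   # stand-in for a wide transect: many points
# Theory says these land near 1/sqrt(10) ~ 0.32 and 1/sqrt(100) = 0.10.
```

The transect averaging over ten times fewer points shows roughly three times the standard deviation, exactly the 1/sqrt(n) behavior the argument relies on.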

Everybody else,

By the way, another technical note: an ARIMA(1,1,2) seems to fit all the series somewhat OK. At times. But I found standard time series (as I always do) to be somewhat unilluminating.

Briggs

I’ve never understood the phenomenon whereby a moving average always seems to shift data forward in time. Why is this?

Very interesting post, by the way.

Andrew,

It’s because the running mean, to estimate a value at some point t, only uses data prior to time t, whereas more modern methods of smoothing use data on both sides of t. Because of this, it can make sense to use running means as a crude forecast tool, because, obviously, you only have data up to time t to predict future data. But running means are horrible smoothing and function-estimating tools; their use is, unfortunately, ubiquitous.
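
The lag is easiest to see on a noise-free rising series: a k-point trailing mean reports, at time t, the true value from (k−1)/2 steps earlier, while a centered mean has no such lag. A small illustrative sketch (the ramp and window size are arbitrary choices):

```python
def trailing_mean(y, k):
    """k-point running mean using only the current and prior k-1 values."""
    return [sum(y[i - k + 1: i + 1]) / k for i in range(k - 1, len(y))]

def centered_mean(y, k):
    """k-point running mean using (k-1)/2 values on each side (k odd)."""
    h = k // 2
    return [sum(y[i - h: i + h + 1]) / k for i in range(h, len(y) - h)]

ramp = [0.1 * i for i in range(40)]   # a steadily rising series, no noise

k = 9
trail = trailing_mean(ramp, k)   # trail[0] belongs to time index t = 8
cent = centered_mean(ramp, k)    # cent[0] belongs to time index t = 4

# trail[0] averages ramp[0..8] and equals ramp[4]: the value from
# (k-1)/2 = 4 steps before the time it is plotted against.
# cent[0] also averages ramp[0..8] but is plotted at t = 4: no lag.
```

Plot the trailing mean against its own time index and the whole curve sits four steps behind the truth; the centered version sits exactly on it.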

I wrote about this earlier here and here:

Hope that makes sense.

Briggs

might be fun to look at arctic sea ice extent and the upper latitude temps…..over time

http://www.flickr.com/photos/23668657@N07/

try this please. sorry for the other link.

please use the sea data in addition to the land data if you want to recreate these

Note that doing this exercise, baseline 1881-1950, and using comparator years of 1975-1980 or thereabouts, provides maps and latitude plots which are frankly cooling.

please also note that the maps and plots are not corrected for surface area and therefore might mislead an observer to think that a .5 degree rise at latitude 85 is as significant as a .5 degree rise at the equator in terms of amount of warmed surface. I think you have to correct by 1 − sin(lat) to get an idea of area.
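
The 1 − sin(lat) rule of thumb is on the right track: the fraction of a sphere’s surface poleward of latitude φ is (1 − sin φ)/2, so the area of a band between two latitudes goes as the difference of their sines. A minimal Python sketch (the two 5-degree bands are arbitrary choices for illustration):

```python
import math

def band_area_fraction(lat1, lat2):
    """Fraction of a sphere's surface lying between two latitudes (degrees)."""
    s1 = math.sin(math.radians(lat1))
    s2 = math.sin(math.radians(lat2))
    return abs(s2 - s1) / 2.0

# Two bands of equal 5-degree width, one at the equator, one near the pole:
equator_band = band_area_fraction(-2.5, 2.5)
polar_band = band_area_fraction(82.5, 87.5)
ratio = equator_band / polar_band   # the equatorial band is ~11x larger
```

So an anomaly painted over a high-latitude band on an unprojected map represents an order of magnitude less warmed surface than the same-looking band at the equator.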

my kudos to nasa for making the data available. you can download the text data file also.

The raw giss data can be inspected station by station also

http://data.giss.nasa.gov/gistemp/station_data/

It is very instructive to look at the raw data. Pick an area in the middle of the USA and start pulling up graphs station by station, rural and otherwise. I did this to a randomly selected area or two and clicked through over 20 stations. It’s a good source of data to study

Yes, thank you, I understand now.

Dr Briggs, I also look carefully at climate data, and have received direct from Prof Spencer some of his data. What I would really like to get hold of are the actual numbers that you used to prepare your really illuminating recent analyses. I use totally different techniques and it might be very interesting to compare our conclusions. If you could post a url I’d be most grateful.

Matt

Is this:

http://www.climateaudit.org/?p=2720#comments

related to what you were describing in the 3rd graph (on smoothing)?

Re: Analysis of temperature data.

Anthony Watts has been testing variations of Stevenson screens for the measurement of ambient air temperature. His blog at

http://wattsupwiththat.wordpress.com/2008/01/14/a-typical-day-in-the-stevenson-screen-paint-test/#com

shows some of his recent findings. Anthony has given us glimpses but not proceeded to complete his analysis. I am a hobbyist in statistics and have my own thoughts. How would you analyze this data? Please contact Anthony if more information is required. My own impression is that spectral analysis would be suitable. Or perhaps some model using a phase shift and amplitude multiplier.

Can you really trust those wavelets near the extremities? Look at the preponderance of positive departures in the most recent data from the two NH series.

Max,

Wavelets don’t suffer as much from boundary effects as some other non-parametric function estimation methods, but there is always more uncertainty at the ends than in the middle.

Briggs