Two New Papers vs. BEST: Guest Post by Lüdecke, Link, and Ewert

Horst Lüdecke is a professor of physics and a member of EIKE (European Institute for Climate and Energy), Heidelberg; Dr. Link is a physicist with EIKE; Prof. Dr. Friedrich-Karl Ewert is a geologist and a member of EIKE.

1. General

Our papers [1; here], hereafter LU, and [2; here], hereafter LL, were published almost in parallel with the BEST curve of global temperature. The basic intention of all of them is the same: to document reliably the surface temperature of the Earth from the beginning of the 19th century until the present.

LU analyzes the period from 2000 years BP until the present, whereas LL examines the 20th century only. The BEST curve covers the period 1800-2010. Quite different methods are used. BEST, as a global temperature curve, is a patchwork of more than 35,000 mostly short temperature series; the result is a global temperature run established by special algorithms. LU analyzes five of the longest available instrumental series and two proxies, a stalagmite and a tree-ring stack. LL examines 2249 exclusively unadjusted local surface temperature records. Further, both LU and LL use a new method [3], [4] that combines detrended fluctuation analysis (DFA), synthetic records, and Monte Carlo simulation. With it, the exceedance probability that an observed temperature change is natural is evaluated. Finally, LL derives the overall probability that the global warming of the 20th century was a natural 100-year fluctuation. The instrumental records used by LU and LL are monthly means, because the DFA requires a minimum of about 600 data points (a 50-year record of monthly means contains 600 values).

2. The 19th century

The database of instrumental temperatures used by LU consists of the following long-term series, all going back at least to 1791 AD: Hohenpeissenberg, Paris, Vienna, Munich, and Prague. During the 100-year period 1791-1890, all of them show an overall temperature decline of roughly the same magnitude and natural probability as their corresponding 20th century rise. Further long-term instrumental records not analyzed by LU, for instance Innsbruck, Kremsmünster, Stockholm, and Copenhagen, show consistently similar temperature declines. Finally, the discussed 19th century cooling is confirmed by reconstructions [5]. However, extending the NH temperature decline to a global phenomenon is limited by the absence of appropriate instrumental temperature series for the SH. Note that the 19th century cooling is not present in the BEST curve.

3. The 20th century

LL takes the data from the GISS temperature pool, which contains about 7500 series in total. From this number, 2249 reliable continuous records of monthly means were selected: 1129 stations over the 100-year period 1906-2005, 427 stations over the 50-year period 1906-1955, and 693 stations over the 50-year period 1956-2005. As the criterion for selection, no more than 10.5% voids are allowed in any record. This constraint ensures that the DFA analysis remains reliable.

Every record of the GISS pool satisfying this condition was selected. It can be assumed that the GISS pool contains most of the long-term records of monthly means available worldwide. LL used only unadjusted raw data. In addition, no homogenization, smoothing, or gridding procedures were applied, apart from linear interpolation to fill voids in the records. Figure 1 depicts the frequencies of the station latitudes and of the temperature changes for the 1129 records of 100 years length.
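For illustration only, a minimal sketch of such a selection and gap-filling step is given below, assuming each record is an array of monthly means with missing months stored as NaN; the function name and array layout are our own assumptions, not the code actually used by LL.

```python
import numpy as np

def select_and_fill(monthly_temps, max_void_fraction=0.105):
    """Accept a monthly record only if its fraction of missing values (NaN)
    does not exceed 10.5%; fill accepted voids by linear interpolation
    between neighbouring months."""
    t = np.asarray(monthly_temps, dtype=float)
    voids = np.isnan(t)
    if voids.mean() > max_void_fraction:
        return None                      # record rejected: too many voids
    idx = np.arange(t.size)
    # linear interpolation across the missing months (the only adjustment applied)
    t[voids] = np.interp(idx[voids], idx[~voids], t[~voids])
    return t
```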

[Figure 1]
Fig. 1: Frequencies of station latitudes (left panel) and of the temperature changes Δ of the linear regression lines (right panel).

The results from the first step of LL’s analysis are as follows:

  1. In the period 1906-2005 the 1129 stations of 100-year duration show a mean warming of 0.58 °C. About one quarter of all these stations show cooling. The mean reduces to 0.52 °C if only stations with populations below 1000 are admitted, which documents the UHI (urban heat island) effect. Figure 2 depicts further evidence of the UHI. The mean value of global warming reduces further to 0.41 °C if only stations below 800 m above sea level are allowed. Figure 3 depicts this effect, whose cause is not known. (A minimal sketch of this filtering step is given below this list.)
  2. As the left panel of Figure 1 demonstrates, the available stations are concentrated between 20° and 70° latitude. In particular, the station density is sparse in the SH. However, the warming is weaker in the SH than in the NH.
  3. In the period 1906-1955 the mean of all 125 SH stations is actually negative (see Table 1 and Table 2, group B5, in LL). As a consequence, the global warming as the mean of all local stations worldwide for the first 50 years of the 20th century can be assumed to be somewhat weaker if it were established from surface stations distributed with equal density over the Earth.
  4. A total of 1386 stations with no voids within the appropriate periods shows a mean temperature change of -0.34 °C for 1998-2010 and -0.15 °C for 2000-2010.

The results of items 1 and 4 are not in accordance with BEST.
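The filtering referred to in item 1 amounts to averaging the regression-line temperature change Δ over subsets of stations. The sketch below illustrates the principle under the assumption that station metadata (population, elevation) are available as plain arrays; all names are illustrative and do not reproduce LL's actual code.

```python
import numpy as np

def delta_from_record(monthly_temps):
    """Temperature change of the linear regression line through a record:
    regression slope (per month) times the record length in months."""
    t = np.asarray(monthly_temps, dtype=float)
    x = np.arange(t.size)
    slope, _ = np.polyfit(x, t, 1)
    return slope * (t.size - 1)

def group_mean(deltas, mask):
    """Mean regression-line change over the stations selected by a boolean mask."""
    return float(np.mean(np.asarray(deltas)[mask]))

# Illustrative use, assuming 'deltas', 'populations' and 'elevations' are
# equal-length arrays describing the selected stations (our own names):
#   group_mean(deltas, populations < 1000)   # rural stations only
#   group_mean(deltas, elevations < 800)     # stations below 800 m a.s.l.
```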

[Figure 2]
Fig. 2: UHI in 1129 records of the 100-year period 1906-2005

[Figure 3]
Fig. 3: Warming due to increasing station elevation in records of the 100-year period 1906-2005

4. Probability analysis for the 20th century

Temperature records are persistent (long-term correlated). This is well known, since a warm day is more likely to be followed by another warm day than by a cold day, and vice versa. Short-term persistence of weather states on time scales of days to several weeks is caused by general weather situations and meteorological blocking. However, the causes of long-term persistence over many years and even several decades are largely unknown. Persistence, a purely natural phenomenon, is measured by the Hurst exponent α (the DFA fluctuation function scales as F(s) ∝ s^α; α ≈ 0.5 indicates an uncorrelated record, α > 0.5 long-term persistence) and is to be strictly distinguished from external trends such as the UHI or warming by anthropogenic CO2.

Both autocorrelated real temperature records without external trends and autocorrelated synthetic temperature records, which can be generated by special algorithms, are termed 'natural.' The main feature of autocorrelated natural records is that extremes arise which look like external trends. This poses a fundamental problem, because without further effort an external trend and an apparent 'trend' caused by persistence are not distinguishable. Figure 4 depicts this effect.

[Figure 4]
Fig. 4: A synthetic purely autocorrelated (natural) record that nevertheless seems to be determined by external trends.
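Records like the one in Figure 4 can be produced, for example, by the standard Fourier-filtering method for long-term correlated noise. The sketch below is our own illustrative generator, not the one used in [3], [4]; it assumes the usual relation β = 2α - 1 between the DFA exponent α and the power-spectrum exponent β of a stationary long-term correlated record.

```python
import numpy as np

def synthetic_record(n, alpha, seed=None):
    """Generate a long-term correlated ('natural') Gaussian record of length n
    whose DFA exponent is approximately alpha, by filtering white noise in
    Fourier space with S(f) ~ f**(-beta), beta = 2*alpha - 1."""
    rng = np.random.default_rng(seed)
    beta = 2.0 * alpha - 1.0
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                    # avoid division by zero at f = 0
    spectrum *= freqs ** (-beta / 2.0)
    record = np.fft.irfft(spectrum, n)
    return (record - record.mean()) / record.std()

# A persistent record (e.g. alpha = 0.65) of 100 years of monthly values often
# contains stretches that look deceptively like deterministic trends:
x = synthetic_record(1200, alpha=0.65, seed=1)
```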

The method of [3], [4] that tackles this problem of attribution is based on the assumption that an observed real record has two constituents: a natural part, which is governed by autocorrelation, and (possibly) an external trend. Next, the probability that an observed real record is 'natural' has to be determined. To this end, only two parameters are needed: its relative temperature change Δ/σ and its Hurst exponent α, where the DFA ensures that α is derived from the 'natural' part only. Δ is the temperature difference of a linear regression line through the record and σ is the standard deviation of the data around this line.
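For readers who want to reproduce the two parameters, the following sketch shows a minimal DFA1 estimate of α (linear detrending in windows of increasing size) and the computation of Δ/σ from the regression line. It assumes the record is a de-seasonalized series of monthly anomalies and is an illustrative reimplementation, not the authors' code.

```python
import numpy as np

def dfa_alpha(record, min_win=10, n_windows=20):
    """DFA1 estimate of the Hurst exponent alpha: integrate the mean-subtracted
    record, detrend it linearly in non-overlapping windows of size s, and fit
    the scaling F(s) ~ s**alpha of the RMS fluctuation."""
    y = np.cumsum(np.asarray(record, dtype=float) - np.mean(record))
    sizes = np.unique(np.geomspace(min_win, len(y) // 4, n_windows).astype(int))
    fluct = []
    for s in sizes:
        n_seg = len(y) // s
        segments = y[:n_seg * s].reshape(n_seg, s)
        x = np.arange(s)
        sq_res = [np.mean((seg - np.polyval(np.polyfit(x, seg, 1), x)) ** 2)
                  for seg in segments]
        fluct.append(np.sqrt(np.mean(sq_res)))
    alpha, _ = np.polyfit(np.log(sizes), np.log(fluct), 1)
    return alpha

def delta_over_sigma(record):
    """Relative change Delta/sigma: difference of the regression line over the
    record, divided by the standard deviation of the data around that line."""
    t = np.asarray(record, dtype=float)
    x = np.arange(t.size)
    line = np.polyval(np.polyfit(x, t, 1), x)
    return (line[-1] - line[0]) / np.std(t - line)
```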

The analysis yields the exceedance probability W for the occurrence of the value Δ/σ, including all stronger values, in a natural record with a fixed α, namely the α of the observed real record as evaluated by DFA. Next (for warming), one has to check whether the value of W falls below a defined confidence limit. If this is the case, the observed real record is judged to be determined by an external trend. Otherwise it is assessed as 'natural.' The method provides no information about the nature of the trend. In a last step, the overall natural probability of the stations in a group is evaluated from all W values. As a result, the probabilities of naturalness lie between 40% and 90%, depending on the station characteristics and the periods considered (1906-2005, 1906-1955, or 1956-2005).
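W can then be estimated by Monte Carlo simulation: generate many synthetic natural records with the α of the observed record and count how often their Δ/σ reaches or exceeds the observed value. The sketch below reuses the helper functions from the previous examples and only illustrates the principle described in [3], [4].

```python
import numpy as np

def exceedance_probability(observed_record, n_trials=2000, seed=0):
    """Monte Carlo estimate of W: the probability that a purely natural record
    with the same alpha as the observed record shows a relative change
    Delta/sigma at least as large as the observed one (one-sided, warming)."""
    alpha = dfa_alpha(observed_record)           # from the previous sketch
    observed = delta_over_sigma(observed_record)
    n = len(observed_record)
    hits = 0
    for i in range(n_trials):
        synth = synthetic_record(n, alpha, seed=seed + i)
        if delta_over_sigma(synth) >= observed:
            hits += 1
    return hits / n_trials

# If W falls below the chosen confidence limit, the record is judged to contain
# an external trend; otherwise it is assessed as 'natural'.
```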

It is stressed that, in general, the procedures applied to establish global records from local ones result in unrealistically small values of the standard deviation σ. This is particularly obvious when inspecting the BEST curve by eye. Therefore, in general, we assume that global records are not suitable for an autocorrelation analysis.
     
5. Conclusion

LL demonstrates that the 20th century's global warming was predominantly a natural 100-year fluctuation. The remainder is attributed to the UHI, the warming effect of increasing station elevation, changes to the screens and their environments in the 1970s, variations in the Sun's magnetic field that could influence the amount of clouds, warming caused by increasing anthropogenic CO2, and further unknown effects. However, the station density over the Earth is strongly irregular, which makes any global record, and also the results given by LL, disputable. The SH stations of the GISS data pool show less warming (or stronger cooling) than the NH ones. Since the available stations worldwide are concentrated in the NH, the real mean of the 20th century warming could be even somewhat smaller than LL have evaluated. Comparing LU and LL with BEST reveals differences in the following items:

  • the magnitude of the 20th century warming,
  • the 19th century cooling (not present in BEST),
  • the exceptionally small standard deviation of BEST.

6. References

[1] H.-J. Lüdecke, Long-Term Instrumental and Reconstructed Temperature Records Contradict Anthropogenic Global Warming, Energy & Environment, Vol. 22, No. 6 (2011).

[2] H.-J. Lüdecke, R. Link, and F.-K. Ewert, How Natural is the Recent Centennial Warming? An Analysis of 2249 Surface Temperature Records, International Journal of Modern Physics C, Vol. 22, No. 10 (2011).

[3] S. Lennartz and A. Bunde, Trend evaluation in records with long-term memory: Application to global warming, Geophys. Res. Lett. 36, L16706, doi:10.1029/2009GL039516 (2009).

[4] S. Lennartz and A. Bunde, Distribution of natural trends in long-term correlated records: A scaling approach, Phys. Rev. E 84, 021129 (2011).

[5] T. J. Crowley et al., Causes of Climate Change Over the Past 1000 Years, Science 289, 270 (2000), doi:10.1126/science.289.5477.270.

11 Comments

  1. El Sabio

    Wasn’t the Best paper pushed out early, and before proper peer review?

  2. Alan Bates

    El Sabio

    Yes, they (4 papers) were. Along with a press release and press articles which bore little relationship to the material in the papers.

    To (slightly mis-) quote Charles Dickens from “A Tale of Two Cities”:

    “It was the best of times, it was the worst of times; it was the age of wisdom, it was the age of foolishness …”

  3. RC Saumarez

    I find this approach interesting, although I am not entirely convinced. I have always been struck by the “trends” that occur in Random walk computations.

    The argument comes round to how one models temperature. If there is a high-frequency or non-deterministic component in the measurements, coupled with a lag (as shown by the autocorrelation in temperature records), are some of the trends in the temperature record distinguishable from a superimposed random walk process? In this case, does it increase the uncertainty surrounding the temperature record itself and also the certainty of attribution?

  4. dearieme

    “Wasn’t the Best paper pushed out early, and before proper peer review?” Improper peer review is the custom in Climate Science.

  5. Les Johnson

    William: what is your take on the issues raised by Richard Tol at Climate Etc on these papers? In your opinion, is his criticism justified? (of the statistics, not his shots aimed at JC)

  6. Milton Hathaway

    @RC Saumarez

    Your mentioning of a “Random Walk” strikes a chord with me. I associate a random walk with systems that have a large ‘memory’ relative to the magnitude of the succession of random or chaotic impulses acting upon it. The temperature of water seems to fit the bill for such a system – water has a high heat capacity, a low thermal conductivity, and, for a large mass of water, a low rate of mixing.

    Which leads me to ask: does air temperature matter for determining climate? I mean, like . . . at all? If I have a perfect measurement of the entire global 3D air temperature profile, does that help me predict what the weather will be next month, or even next week? It seems like air has an almost negligible ability to hold heat, given how much the air temperature changes during a 24-hour period, or how quickly the air temperature drops on a cloudless night.

    It seems like I’d have a much better chance of predicting the weather a month from now if I had a perfect measurement of the entire global 3D ocean temperature profile.

    So, I’ll ask again, does air temperature matter?

  7. POUNCER

    I second Les Johnson’s question.

    Isn’t this going towards an answer to Doug Keenan’s question about what number falls in the parenthesis after the AR() ?

    @Milton Hathaway — I’ll see oceans and raise icecaps. It’s hard for me to imagine that a disc 2 miles deep and 15 million square miles around at 70 degrees below zero is going to “melt” in response to an air temperature rise of 1.6 to 10 degrees. (miles? kilometers? I don’t remember my Antarctic geography. )

  8. RC Saumarez

    @Milton Hathaway.
    A very interesting question. I should explain that I know nothing about climate – I am a physician/bioengineer, with some interest in statistics.
    This is interesting because the papers here use Hurst statistics which is a time scaling of the autocorrelation function to allow for processes with different speeds.
    It seems likely that, if one can account for some of the variability by a lagged random walk, there will be a number of components in the delays in recovery (or “persistence” in Hurst terminology) that correspond to different processes. One might assume that the time course of the atmospheric temperature has a short-term effect on current temperature and that the sea would have a longer-term effect. This would give rise to a Hurst-type process.

  9. JayTee

    (The following is a comment I made at Climate Etc.)

    In the first paper (LU), “Long-Term Instrumental and Reconstructed Temperature Records Contradict Anthropogenic Global Warming”, a cooling is observed in the 1800s, followed by a warming in the 1900s. Along with the small number of stations presented, I am concerned with the readings shown from the late 1700s through the 1800s. Following citations mentioned in comments made earlier […in the Climate Etc. thread…], I read “Revision and necessary correction of the long-term temperature series of Hohenpeissenberg, 1781-2006” by P. Winkler (Theor. Appl. Climatol. (2009) 98:259-268), doi:10.1007/s00704-009-0108-y.

    This paper goes into great detail as to the known errors inherent in this data set (+0.5 R until at least 1878). The discussion raises reasonable questions as to the usefulness of uncorrected data from the Hohenpeissenberg observatory as well as other similar observatories, as I would assume they also used similar instruments which would be subject to similar error sources.

    I do not see that Lüdecke removed this bias from the datasets before analysis. It would be useful if this problem were addressed before any great stock is put on the records shown in the 1800s. Has anyone found any papers detailing the thermometers and observational methods and practices at the other stations included in this paper?

  10. JayTee

    “Finally, the discussed 19th century cooling is confirmed by reconstructions [5]. ”

    While the paper cited in [5] does show a decrease in NH temperature in the reconstructions, it certainly does NOT show the 1800’s decrease to be anywhere near the 1900’s increase.
