BEST’s Worst Work; What Is Significant?

You will have already seen this comparison graph from the Daily Mail.

Daily Mail BEST

That paper points out correctly that the version of the plot provided by BEST is presented in such a fashion as to obscure that not much of interest has happened over the past decade.

But this graph also obscures the uncertainty in the curve. It appears certain that historical temperatures were lower in years prior to 1950. This is false. The further back in time we go, the less certain we are of what the global average temperature was. We cannot tell with sufficient confidence whether years before 1900 were warmer or cooler than today.

(See this, this, and this for more details about the importance of quantifying this uncertainty and what it means.)

According to the Daily Mail story:

[Muller] admitted it was true that the BEST data suggested that world temperatures have not risen for about 13 years. But in his view, this might not be ‘statistically significant’, although, he added, it was equally possible that it was — a statement which left other scientists mystified.

‘I am baffled as to what he’s trying to do,’ Prof Curry said.

Prof Ross McKittrick, a climate statistics expert from Guelph University in Ontario, added: ‘You don’t look for statistically significant evidence of a standstill.

‘You look for statistically significant evidence of change.’

Muller’s and McKittrick’s reported1 comments betray a (common) misunderstanding of what statistical “significance” means. Here is what it does mean in the context of temperature change.

To achieve statistical “significance” requires three things: a start date from which the analysis begins, an end date on which the analysis ends, and a fixed probability model. All three are arbitrary, at least partially ad hoc, and changing any of them will give different results. Which is the best start date? Depends on what question you want to ask. Is the best end date always today? No. And what probability model is best? Nobody knows.
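To make the arbitrariness concrete, here is a minimal sketch (Python, with invented numbers, not BEST's actual data; the series, dates, and noise level are all assumptions for illustration) showing how the estimated trend swings with nothing but a change of start date:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented monthly "anomalies": a rise up to 2000, flat afterward, plus noise.
months = np.arange(1979, 2011, 1 / 12)
signal = np.where(months < 2000, 0.02 * (months - 1979), 0.02 * 21)
temps = signal + rng.normal(0, 0.1, months.size)

def ols_slope(x, y):
    """Least-squares slope of y on x, in degrees per year."""
    xc = x - x.mean()
    return (xc @ (y - y.mean())) / (xc @ xc)

slope_full = ols_slope(months, temps)                    # analysis starts in 1979
recent = months >= 2000
slope_decade = ols_slope(months[recent], temps[recent])  # analysis starts in 2000

print(round(slope_full, 4), round(slope_decade, 4))
```

Same data, same model; only the start date moved, and the "trend" shrinks from a clear rise to roughly nothing. Change the model instead (a different error distribution, a nonlinear curve) and the answer moves again.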

Any analysis also assumes that the data underlying these three choices is perfect and error free, and that it represents the question asked. For example, are the few land surface stations we have chosen truly representative of global average temperature? Let’s not argue about that: assume that BEST’s data is perfect, representative, etc.

Take the (assumed perfect) Daily Mail graph at January 2001. What is the probability that the temperature was warmer in January 2007? Do look at it before answering.

The answer is 1, or 100%. And the reason is that it was certainly warmer in 2007 just because the measurements showed that it was. Is this increase—for we already know it was an increase—“statistically significant”? We have start and end dates, we have assumed pure data, so all we need is a model. Which should we choose?

People are inordinately fond of (various forms of) straight-line regression. Why? I’m guessing when I say simplicity. This is true even though there is not much solid physical evidence that the atmosphere responds linearly to forcings and feedbacks over any scale measured in years, while there is plenty of evidence that the atmosphere instead responds non-linearly, perhaps even chaotically. What physics dictates that temperature should increase only in a straight line between any two arbitrary time points?
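A toy demonstration of why the straight line is suspect: fit one to purely cyclic data (a bare sine wave, nothing physical, no noise) and the "trend" you recover is entirely an artifact of which segment of the cycle you happened to sample:

```python
import numpy as np

t = np.linspace(0, 60, 721)           # years, monthly resolution
y = np.sin(2 * np.pi * t / 60)        # one 60-year cycle, no long-run trend

def slope(x, v):
    """Least-squares slope of v on x."""
    xc = x - x.mean()
    return (xc @ (v - v.mean())) / (xc @ xc)

upswing = (t >= 0) & (t <= 15)        # sampled trough to crest
downswing = (t >= 15) & (t <= 45)     # sampled crest to trough

slope_up = slope(t[upswing], y[upswing])        # positive "trend"
slope_down = slope(t[downswing], y[downswing])  # negative "trend"
print(round(slope_up, 3), round(slope_down, 3))
```

Both fits are "significant" by the usual machinery, and both are meaningless: the underlying process has no trend at all.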

But never mind that. For as soon as we start asking these kinds of questions, we have already gone off the rails. Remember: we already know that the temperature was higher in 2007 than in 2001. We were done before we began.

To insist on answering whether the change was “statistically significant” is nonsensical unless one of two things holds. Either we believe the data was measured with error, and we are trying to ascertain the real change in temperature given our belief that we have modeled both the error and the time course of temperature accurately; or we believe the data sources differed over this time span, and we want to quantify the probability of a change given no difference in the sources. Either way we still require a model of the change and of the time course of temperature.

As it is, for data over the last decade, which has a large component of satellite measurements, there is (probably) negligible error, thus there is no need for any statistical model2. But there is still this notion of “no change” we would like to get at. We cannot say there has been no change, because (looking at the graph) clearly there has been. But we can say things like “Only thrice have monthly temperatures increased more than 1.6 degrees centigrade over the 1950 to 1980 average.” And so on.

Incidentally, why 1.6 degrees? Why not? It too is arbitrary: pick whatever number is meaningful to the exact decision you want to make.
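With data assumed perfect, statements like these are plain counting, and no probability model is required. A sketch with invented anomaly values (both the numbers and the 1.6-degree threshold are illustrative only):

```python
import numpy as np

# Invented monthly anomalies relative to a 1950-1980 baseline (degrees C).
anomalies = np.array([0.9, 1.2, 1.7, 0.8, 1.65, 1.3, 1.61, 1.1])

threshold = 1.6   # arbitrary: pick whatever matters to your exact decision
count = int((anomalies > threshold).sum())
print(count)      # 3 of these invented months exceed the threshold
```

No start date, no end date, no model: just arithmetic on the observations themselves.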

For historical data we do need a statistical model, because temperature was measured with error and the components that went into creating the (operationally defined) global average temperature (GAT) have changed. We can’t directly measure what the GAT was, so we have to model3. Since a model is necessary to declare a change (within some level of certainty), so are beginning and end dates. Which to choose? And which model? What physical justification do you offer for these three choices?
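The footnote's distinction between predictive uncertainty and parameter uncertainty can be sketched numerically. In ordinary regression (simulated data below; all numbers invented) the standard error for an actual new observation is always larger than the standard error of the fitted line, because the observation noise never averages away:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated series: a small trend plus noise (all numbers invented).
n = 120
t = np.arange(n)
y = 0.01 * t + rng.normal(0, 0.2, n)

# Ordinary least squares via the design matrix [1, t].
X = np.column_stack([np.ones(n), t.astype(float)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)        # residual variance estimate

XtX_inv = np.linalg.inv(X.T @ X)
x_new = np.array([1.0, float(n)])       # one step past the data

# Uncertainty of the fitted line vs. uncertainty of an actual observation.
se_line = np.sqrt(sigma2 * (x_new @ XtX_inv @ x_new))
se_obs = np.sqrt(sigma2 * (1 + x_new @ XtX_inv @ x_new))
print(round(float(se_line), 3), round(float(se_obs), 3))
```

Reporting se_line as the uncertainty of a temperature, rather than se_obs, is exactly the mistake the footnote warns against.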

Change focus: The best picture that gets at the idea of uncertainty comes from Anthony Watts:

GISS and BEST for LA

This shows NASA’s GISS and BEST’s data for Los Angeles. The difference between these two sources—for years that are not historically remote, even—is as much as two degrees. Is it any wonder, then, that some of us are concerned when we hear predictions that, say, fifty years hence the GAT will be 0.5 degrees higher than some arbitrary number, when we cannot even say within that accuracy what the average temperature of Los Angeles was last year?

Update Still assuming the perfection of the data points in the last decade, something caused the GAT to change. What? A statistical model (chosen from an infinite number of models) might be fit and insight might—might—be gleaned from it, but far better would be to investigate the physics over this period. And if that statistical model has any value, it should be able to skillfully (used in the technical sense) predict data into the future. We cannot tell whether the model is actually good until we wait to see what happens.
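Skill, in the technical sense used above, is a comparison against a naive reference forecast on data the model never saw. A hedged sketch (everything here is simulated and illustrative; "persistence" stands in for the reference forecast):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated monthly series (a random walk; purely illustrative).
y = np.cumsum(rng.normal(0, 0.1, 240))
train, test = y[:180], y[180:]

# Candidate model: extrapolate the trend fitted on the training period.
slope, intercept = np.polyfit(np.arange(180), train, 1)
model_pred = slope * np.arange(180, 240) + intercept

# Naive reference: persistence, i.e. the last observed value carried forward.
persist_pred = np.full(test.size, train[-1])

mse_model = np.mean((model_pred - test) ** 2)
mse_persist = np.mean((persist_pred - test) ** 2)
skill = 1 - mse_model / mse_persist   # positive only if the model beats persistence
print(round(float(skill), 3))
```

Until a model clears this bar on genuinely future data, a good in-sample fit proves nothing.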

————————————————————————————

1I say “reported” because of my suspicion that these words are imperfect representations of what each gentleman said.

2A model is only needed for these years if one wants to predict temperatures beyond the end date. There is no reason to predict what we already know.

3And we should give our predictive uncertainty of this temperature, not the uncertainty of the parameters in our model. See this.

47 Comments

  1. dearieme

    “and incite might—might—be gleaned from it”: yup, there’s plenny incitin’ in these here parts.

  2. Briggs

    dearime,

    Egads! I’ve been hacked!

  3. Grzegorz Staniak

OMG. Mr. Briggs, for a week or two you behave like a reasonably sensible guy, and then off you go again, parroting none other than David Rose, a famous scientist and a paragon of impartiality. Have a look at what Tamino has to say about it:

    http://tamino.wordpress.com/2011/10/30/judith-curry-opens-mouth-inserts-foot/

    Pay special attention to the uncertainties graph (the third one). Can you see a red flag there? No? Then read what moyhu found in the data:

    http://moyhu.blogspot.com/2011/10/gwpf-is-wrong-warming-has-not-stopped.html

    See where the dip at the end comes from? Here’s the link to the original data, see for yourself:

    http://www.berkeleyearth.org/data.php

How can you claim any professional integrity, devote pages and pages to the importance of quantifying uncertainty, and then join an incompetent journalist in a FUD campaign based on total disregard of uncertainty in drawing his lame conclusions? How can you take part in discussions of data quality, talk about station dropout etc. and then suddenly and silently accept “data points” for April/May 2010 based on, literally, 47 stations in Antarctica, as opposed to 14488 entries for March 2010?

    This is not even double standards, this is plain and simple disinformation and dissemination of cheap denialist propaganda.

  4. Briggs

    Grzegorz Staniak,

    Not one of your points is in response to anything I have written about the meaning and use of statistics. You need to learn to read more carefully before reacting. Particularly examine my assumptions. Try again.

  5. Grzegorz Staniak

    Mr. Briggs,

    I’m not talking about your comments on the use of statistics. I’m talking about “that paper points out correctly that the version of the plot provided by BEST is presented in such a fashion as to obscure that not much of interest has happened over the past decade”.

This is simply bullshit. You uncritically accept nonsense spread by Curry and Rose and thus contribute to the denialist echo chamber. Have you checked the BEST data? Seen the uncertainties graph? Counted the data records for April/May 2010? On what basis did you use the word “correctly” above? On what basis did you imply anyone’s intention to “obscure that not much of interest has happened over the past decade”? On what basis did you even assume that “not much of interest has happened over the past decade”? If anything, the BEST project did what they could to hide the incline, by attaching a steep dip to the end of the temperature series based on very scarce and uncertain data.

    Do you often build your opinions on scientific research by reading tabloid press, Mr. Briggs? Or is it just climate science that you single out for this?

  6. Briggs

    Watch the language Grzegorz.

  7. Grzegorz Staniak

    Mr. Briggs,

    Are you going to answer any of my questions?

  8. Andy

    And on this thread we learn that being attacked by mediocrity is not being attacked at all.

Go read Dr Briggs’ resume, fool, and then apologise.

  9. Grzegorz Staniak

    @Andy

    I don’t care about arguments from authority. He’s very wrong, just as he was about the CLOUD project. But he won’t acknowledge that, because of his very clear, irrational, ideologically and/or politically motivated bias towards the climate change denialist side. Better ask him to apologize to his readers for the hypocrisy.

  10. KDavis

    Grzegorz,

    So you are saying that the data is no good, but it proves that you are correct and Briggs is wrong? What a very interesting deduction method you have! Do you think for yourself or does someone else always do it for you?

  11. Briggs

    A little keyword searching, eh Grzegorz?

  12. DAV

Regardless of whether or not he meant “statistical significance” in the right way, McKittrick’s “You don’t look for statistically significant evidence of a standstill. You look for statistically significant evidence of change” doesn’t make sense. Is there some theoretical reason why, in general, testing for A is more valid than testing for Not-A? Granted, doing one may be easier than the other but being easier normally doesn’t convey more validity (usually, the opposite — nothing’s easy).

    Grzegorz,

    Can’t speak for you but, when it comes to statistics, Tamino would not be at the top of my reference list.

  13. Grzegorz Staniak

    Mr. Briggs,

Are you going to answer any of my questions? Somehow you’ve found time to look at the blog logs, then maybe you could spare a minute to explain what made you say “that paper points out correctly that the version of the plot provided by BEST is presented in such a fashion as to obscure that not much of interest has happened over the past decade”? I just don’t want to think that you just accepted a lie from a tabloid as a basis for your opinion.

  14. Grzegorz Staniak

    @DAV

That’s beside the point. At the top or not, in this specific case Tamino/moyhu are right, and Briggs is wrong. The only reason why Curry and Rose can blabber about “hiding the decline” again is a pair of clearly defective data points at the very end of the series. They’re not even “outliers”; it’s very obviously a data glitch — if this kind of mistake had been made in the other direction, the whole denialist blogosphere would be on fire with righteous wrath at the fraudulent scoundrels who hide the decline. But somehow it never bothers them if the mistakes agree with their prejudice. And anyway, looking for decadal trends in climate data is simply stupid. The WMO reference period for detecting a climate signal has been 30 years, for a long time, and for a good reason.

  15. DAV

    Grzegorz,

The last 10 years of BEST are pretty flat and to a lot of people that’s of primary interest. The BEST version does indeed obscure the last 10 years. Just look at it. Why do you think this is a lie? Why don’t you think Judy Curry is qualified to state opinions on the content of a paper she co-authored? Because Tamino thinks so? If so, how does that jibe with not accepting arguments from authority?

  16. Grzegorz Staniak

    @DAV

Do you have any doubts that the two data points are corrupt? When you remove the April and May of 2010, suddenly the last 10 years have a linear trend of 0.14°C per decade, i.e. no slowdown of the warming is shown. The average delta for March 2010 is calculated on the basis of 1440 measurements, and then April and May 2010 use only 47 stations, all in the Antarctic. And uncertainties spike up by an order of magnitude. The glaringly spurious dip at the end of the series is the only reason Curry and Rose could pull the “Muller is hiding the decline” trick. It’s exactly the opposite — the data glitch is effectively hiding the incline. If you’re naive enough to look for trends in a single decade, that is.

    I’m in no position to judge whether Curry is qualified or not to speak about the BEST project results. I don’t know what exactly was her role in it. However, I know that Tamino/moyhu are right, because I checked it myself. Yes, for April/May 2010 there are only measurements from 47 Antarctic stations in the data. Yes, when you remove April/May 2010 from the series, you get a radically higher linear trend. These are facts, go check them yourself. If in spite of this you insist that Curry is a competent scientist and knows the data she’s talking about, then the alternative is that she’s intentionally lying.

  17. Bruce

Grzegorz: “The average delta for March 2010 is calculated on the basis of 1440 measurements, and then April and May 2010 use only 47 stations”

    1440? What happened to the other 38,000 stations data?

  18. Grzegorz Staniak

    @Bruce

    You’re kidding, right? Or channeling Watts or some such? Read the papers. The main selling point of the BEST project findings is the range of data used in their analyses, and statistical methods used to extract value from shorter records. Because, you know, the fact itself that they managed to replicate the results that people have been getting for decades, using algorithms that can be implemented in two days, is not impressing anybody much.

    They used GHCN-M (7280 stations), they used METAR and SYNOP reports (from more than 39 000 stations), they used whatever they could lay their hands on. You know, graphs are nice to play with, but you should do some background reading before — CRU doesn’t extrapolate the temperatures in the Arctic, which is warming faster than other areas, so they persistently show a bit less warming than other series. That’s why your two graphs are not aligned.

  19. DAV

    Grzegorz,

    Corrupt in what way? I’ve spent nearly the last 40 years working with spacecraft data. The Acme Super Unpredictable Rube Goldberg Extractor was well noted for suffering from the electronic version of PMS and alas not on a predictable schedule. These periods saw a lot of data glitches. They weren’t totally unexpected because the SURGE was a very complicated device with high probability of failure.

Thermometers OTOH are really simple devices that rarely glitch because they are so simple. The glitches in thermometer data arise from the people using them. There’s always someone who writes ‘6’ for ‘8’ or vice versa, or neglects to write down the minus sign, causing unexpected 30C jumps in daily temperatures.

Outliers can (sometimes) be spotted because they are far out of line with the other values. But how does one spot a “glitch” in data that is hiding amongst the normal (presumably valid) values? The short answer is: you can’t. The only way a flat temperature reading would be a glitch is if the thermometer is broken. The only way 10 years of thermometer readings can be a glitch is if ALL thermometers were broken. And that really stretches credibility.

Comparing today’s reading with a reading 10 years ago assumes that data 10 years ago should be the same as today’s. Asking whether today’s reading should conform to a trend from 10 years ago assumes a predetermined answer to the question supposedly being asked, as does the assumption that the present will be the same as the past.

    “And uncertainties spike up by an order of magnitude” won’t change the average in any way. It’s still the average. The latter paragraphs in Briggs’s post cover some of this. Try reading it this time.

  20. Grzegorz Staniak

    @Bruce

    My mistake, lost a zero. For 2010.208 it’s 14502 stations. Merely three orders of magnitude more than for April/May.

  21. Grzegorz Staniak

    @DAV

    What are you talking about? What “thermometer glitches”? It’s lack of data, a methodological glitch in the project, not mechanical glitches in thermometers. From month to month, the sample was suddenly reduced by three orders of magnitude. You really don’t comprehend what it means for the reliability of analyses based on that sample?

  22. Dr Briggs,
    I think Mr Staniak can’t just be brushed off. The graph you have shown does have bogus points. This is not trivial.

    But you should also give some attention to sourcing. The Daily Mail is a poor choice. The graph you have described as “Best’s Worst Work” is in fact from the Global Warming Policy Foundation. Your graphic says it is from the BEST papers, but that is not true.

The comparable graphs from the paper “Berkeley Earth Temperature Averaging Process” are Figs. 1, 5, 6, and 8, which are liberally adorned with uncertainty information. It’s a large part of what the paper is about. And while Fig 1 in the Decadal Variations paper does not show uncertainty limits, it is a comparison of the four indices, and so does indicate plenty of variability.

  23. DAV

    Grzegorz,

Grzegorz: “The average delta for March 2010 is calculated on the basis of 1440 measurements, and then April and May 2010 use only 47 stations” Outside of whether or not this is true, it will only affect the last year in the plot. How do you explain away the preceding years?

Well, I see your problem. Whatever in the world makes you think a linear regression is called for here? The data are cyclic. More Tamino? Applying a straight line to cyclical data rarely tells you anything and depends on the end points for one thing. I say the graph is essentially flat. And, in my view, it’s the flattest part of the last 35 years. My guess is that the last 10 years is the top of a crest. If the previously seen 60 year cycle continues (the last minimum was circa 1975 and the last peak was ca. 1940) the temperatures will begin to accelerate downward just as they have in previous cycles. What causes these changes is anyone’s guess.

There is a bias signal starting from the last Ice Age. These cycles (plus the bias) are a counter-indication of the claimed underlying causes associated with AGW. No wonder the Taminos of the world are so agitated.

  24. Steve Hempell

    Grzegorz/Nick

    Here are two plots of UAH/RSS (land) vs BEST.

There is a large discrepancy between the two. The BEST results are close to double the trend of RSS and UAH in all cases. I am a fan of Bob Tisdale and the idea that the temperature seems to occur in steps. I like to take the 1998 El Nino out of the plots and look at the temperatures before and after. The 1998 El Nino seems to have been an unusually large event.

    I take the dates of the El Nino from here beginning and ending with the red readings. Seems to me not too cherry picked.

    http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml

Until this discrepancy is addressed I’ll look at the BEST results with a very jaundiced eye, and not much changed from the GISS, GHCN etc.

    http://www.woodfortrees.org/plot/best/from:1979/to:2010/mean:3/plot/rss-land/from:1979/to:2010/offset:0.6/mean:3/plot/best/from:1979/to:2010/trend/plot/rss-land/from:1979/to:2010/offset:0.6/trend/plot/best/from:1979/to:1997.42/trend/plot/rss-land/from:1979/to:1997.42/trend/offset:0.6/plot/best/from:1998.42/to:2010/trend/plot/rss-land/from:1998.42/to:2010/trend/offset:0.6

    http://www.woodfortrees.org/plot/best/from:1979/to:2010/mean:3/plot/uah-land/from:1979/to:2010/offset:0.6/mean:3/plot/best/from:1979/to:2010/trend/plot/uah-land/from:1979/to:2010/offset:0.6/trend/plot/best/from:1979/to:1997.42/trend/plot/uah-land/from:1979/to:1997.42/trend/offset:0.6/plot/best/from:1998.42/to:2010/trend/plot/uah-land/from:1998.42/to:2010/trend/offset:0.6

    DAV

    “And, in my view, it’s the flattest part of the last 35 years”

    If you look at the raw data from the following plots, UAH 1979 to JJA (NOAA data), is a pretty flat time too. A little less so is RSS. Compare to BEST.

  25. Bruce,
    That’s not bogus. They have 1484 SH stations (Jan 2007), which is more than enough. Their Krigish weighting should avoid the NH preponderance biasing the result. 47 Antarctic stations, on the other hand, is not globally representative.

    But my main beef here is the sourcing of this post. It seems BEST is being trashed for a graph which actually comes from GWPF.

  26. Bruce

    Nick: “They have 1484 SH stations (Jan 2007), which is more than enough.”

    Asserted. Unproven.

    “Their Krigish weighting should avoid the NH preponderance biasing the result.”

    It doesn’t. They should attempt to prove it does.

    “BEST is being trashed for a graph which actually comes from GWPF”

    And who released data with only 47 antarctic stations? And way too few SH stations?

    BEST.

  27. Artifex

    Nick says:

    “47 Antarctic stations, on the other hand, is not globally representative.”

I could believe this, and I could easily believe that the points left off are outliers. The argument is convincing. On the other hand, I find people such as Nick and Tamino arguing this point incredibly funny. When outliers or problematic data such as Yamal or the Bristlecones were considered, I recall hearing rationalizations about how it would be scientifically dishonest to leave them out, from pretty much the same people who are now arguing that we should remove the outliers and who are absolutely outraged (outraged I tell you) that Briggs would display a graph with them included.

    Seems sort of disingenuous to defend the Yamal tree out of one side of your mouth while attacking the number of stations with the other side of your mouth. On the bright side, I don’t need physics or even statistics to predict whether Tamino will exclude a given outlier. I model his response based on the ideological story Tamino wants to sell and note that my predictive rate for his actions approaches 100%. How this correlates to the value of Tamino’s physics and statistics, I leave as an exercise for the reader.

  28. Richard

A thought: if the two data streams shown above need reconciling, then the most obvious, eyeball step would be to scissor the data into separate sets at 1991. From there on (and prior) the two series seem to be well correlated. This would imply that there is a ‘break’ in one stream or the other. That is, an upward or downward step adjustment has been made in one of them. The two streams then resolve quite nicely over the whole record.

Is this really the case, or is it just coincidence that a ‘better’ solution exists with a simple cut? Of course it could just be that this is what the data really, truly shows, but…

  29. Grzegorz Staniak

    @DAV

    Are you for real? Or just happily trolling away?

  30. Grzegorz Staniak

    @Artifex

The data for April/May 2010 are not “outliers”, they’re simply a result of sampling error. You could just as well track weekly political preferences on a representative sample of the US population and then for two consecutive weeks restrict the sample to a dozen rich white males from Houston.

  31. Briggs

    Nick, DAV, others,

    Staniak can be brushed off. His comments are irrelevant to the main point of the article. BEST’s “worst” work was obviously about the demonstration of “statistically significant” change (note the whole title) and what that meant.

Remember that I supposed for the sake of example that the data was error free. Whether or not it was error free is entirely irrelevant: it just doesn’t matter whether the Daily Mail‘s graph is perfect or flawed or wherefore it arose (but note Judy Curry’s comments; see my review of BEST’s Fig. 5 from the links above).

    The point of this post is what does “statistical significance” mean in terms of temperature change. I tried to show how that term does not mean what most people think it does.

    If Staniak, or if anybody, wants to discuss this, then let’s do so. But we cannot let ourselves be distracted by cheap debating tricks.

  32. Stosh from the Sticks

    Briggs,

    It is very kind of you to provide a blog site for this Grzegorz Staniak person, but I think the bandwidth might be better invested in individuals who demonstrate their gratitude with a bit more courtesy.

  33. Ken

    Phil Jones, of the CRU & “Climategate” e-mail fame, has gone on record stating that there has been no statistically significant warming for about 15 years: http://www.dailymail.co.uk/news/article-1250872/Climategate-U-turn-Astonishment-scientist-centre-global-warming-email-row-admits-data-organised.html?ITO=1490

    K. Trenberth got a lot of mention in the CRU e-mails–especially for some of what he not only conceded, but lamented about such as: http://fumento.com/environment/showme.html & http://www.fumento.com/weblog/archives/2009/11/global_warming.html

    “It has become evident that the planet is running a ‘fever’ and the prognosis is that it is apt to get much worse. ‘Warming of the climate system is unequivocal’ and it is ‘very likely’ due to human activities. This is the verdict of the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), known as AR4 . . . . Warming of the climate system is unequivocal as is now clear from an increasing body of evidence showing discernible physically consistent changes.”

    – Kevin Trenberth, head of the Climate Analysis Section at the Colorado-based National Center for Atmospheric Research and a lead author of the warmist bible, the 2007 Intergovernmental Panel on Climate Change (IPCC) report, congressional testimony of February 2007.

    THEN WE READ:

    “We can’t account for the lack of warming at the moment and it is a travesty that we can’t,” and “any consideration of geoengineering [is] quite hopeless as we will never be able to tell if it is successful or not!”

    – Kevin Trenberth, unintentionally released email to various recipients, October 14, 2009.

    CONCLUSION: When the High Priests of the avowed AGW “warmists” complain (and Trenberth isn’t the only one on record addressing the non-warming) about a trend that doesn’t fit with the orthodoxy, one can be pretty certain that whatever the figures are–they’re “statistically significant.” Otherwise, why so much griping???

  34. RC Saumarez

    I agree that statistical significance is a somewhat abused term.

    One could ask the question:

    What is the probability that the mean temperature from 2000-2010 is different from that of 1990-2000? This appears to me to be a hypothesis that can be tested. It would involve other questions about the distributions of temperature over those periods (assume they are normal?) and the serial correlations to determine the degrees of freedom. Why pick these periods, why not 30 year periods?

    Or one could ask:

    What is the probability of a 10 year period in the temperature record with no, or negative, temperature rise? This, again would have some implications: Should one just count the periods in the whole record, or should one “bootstrap” the problem and adjust for the observed serial correlation in the data?

    Or:
What is the probability that the temperature is distinguishable from a linear trend with a random walk of certain characteristics away from that trend?

    Personally I prefer the third question, but that is prejudice on my part. I suspect that there is no question that is so stupid that one cannot assign a p value to it. What it means is another matter.

  35. Grzegorz Staniak

    Mr. Briggs,

I specifically pointed out what I object to in your article — the smuggling in of denialist nonsense as aside assumptions for discussions on related topics. Your bias in the climate change debate has been very clear, as in the case of the CLOUD project, when you managed to directly contradict the lead author’s own conclusions with your “translation” of his results. You’re of course within your rights to disseminate on your blog whatever propaganda fits your political sympathies. I’m within mine to object to mixing it with science.

    And as far as the science is concerned, you’re the hammer expert to whom every problem is some kind of nail. Climate science is physics, not statistics. It’s a study of physical, measurable phenomena. Like, for example, solar variation, volcanic activity, or ENSO cycles. Whose influences, by the way, together with the anthropogenic radiative forcing, explain 75% of monthly variance of the temperature anomaly records:

    http://pubs.giss.nasa.gov/docs/2009/2009_Lean_Rind.pdf

    Sound use of statistical methods is important, but nowhere near as important for the AGW theory as you’ve been painting it on your blog. You don’t need to deliberate how to detect the signal in the noise when you know that this spike is the 1998 El Nino, this dip is the Pinatubo eruption, this slope is steeper due to the rise in solar activity etc. etc. Have a look at Figure 1 in the above paper. Do you see any flattening in the “anthropogenic influence” component of the temperature record in the last decade? Do you think statisticians can change this picture in any way, just by theorizing about what climatologists can measure?

  36. Grzegorz Staniak

    @Ken

    I think you should consult a dictionary and check what “cherrypicking” means. And please, no more this “Climategate” nonsense picked up from tabloids. Choose your sources more carefully. No, Burt Rutan quoting Beck is no good either.

  37. Ken

    Gregorz,

    RE: check what “cherrypicking” means, and, no more this “Climategate” nonsense picked up from tabloids

    GOT IT.

– Cherrypicking — it’s only “cherry picking” when the person doing it reaches a conclusion you disagree with, otherwise, ok. Good ole Grant Foster has an arrow pointing to the point of his interest. That is cherrypicking. It would be even more pronounced if ole G.F. retained older posts–he had one some years ago addressing the statistical significance of the past few years (maybe less than 10 at that point) — concluding that seasonal & other variations made weather trends ‘statistically non-significant’ at periods of less than 10 years. In context with his earlier, now apparently deleted, lengthy blog entry, this very recent one from him is clearly hypocritical relative to his own self-proclaimed standards.

    – “”Climategate” nonsense … from tabloids” — that seems oxymoronic & more.

    First, your reference to “picked up from tabloids” suggests that if it’s picked up from a non-tabloid source it is then OK? Given your propensity to disregard the data and instead attribute value and truth to the sources via which the data was conveyed, it’s just this sort of thing that’s made you a curious source of whimsy on this blog, though I doubt you comprehend how you’re being received — even in the “black & white” of words on the screen.

    Second, the quotes I quoted are, ultimately, sourced directly from the persons making them — that was/is Phil Jones & K. Trenberth, in case you got distracted by the source passing along their statements. Such a profoundly obvious fact seldom needs stating, but, again, your inclination to ignore the facts based on how they were relayed, or by whom, forces mention. Both Phil Jones & Kevin Trenberth are on record addressing facts which you find disagreeable. Who reports them, or who reports the reports, matters not. Except to you.

    Third, your rejection of quotes from Phil Jones & Kevin Trenberth — which is precisely what you have done, unless you can show how/that they were misquoted — indicates a very particular type of Emotional Reasoning. By dismissing facts (reality) before actually confronting them, which is what you’re doing when you dismiss a quote because it is relayed by someone with whom you disagree or whom you fear, you are effectively retaining a type of make-believe world. Fundamentally, this is no different than dismissing the content of a magazine, letter, etc. delivered by a mail carrier who has Down syndrome. This raises the question “why?” — why do you believe this facade as fact, and why even engage it?

  38. Grzegorz Staniak

    @Ken

    “Cherrypicking” means selecting your data to fit your purposes. Please explain where exactly Tamino selected anything this way.

    And please spare me these attempts at psychological analysis. You base radical opinions on tabloids quoting, out of context, things that you don’t understand, and spinning them for political purposes. And it’s evident that you haven’t even tried to understand them. That’s your problem, not mine. There’s plenty of info on the web about who said what and why. If you prefer to live in the denialist mutual admiration society and close your eyes to uncomfortable information, at least try not to project this attitude on others.

  39. John Vetterling

    While I think I understand the point you are making, I disagree with this application.
    We cannot measure Global Average Temperature. We only measure temperature at specific points and times. We then have to calculate GAT using a statistical model. So it would seem that if we want to compare the temperature difference between, say, 2001 and 2007, then the question of whether that difference is statistically significant would be relevant.

    Or am I missing something?
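    [A minimal sketch of the point raised here, using synthetic station anomalies — every number below is hypothetical. Each “global average” is itself an estimate with a standard error, so the difference between two years carries uncertainty of its own.]

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical station-level anomalies for two years (synthetic data):
    # 50 stations each, true means 0.40 C and 0.45 C, station scatter 0.25 C
    anoms_2001 = rng.normal(0.40, 0.25, 50)
    anoms_2007 = rng.normal(0.45, 0.25, 50)

    # Each yearly "global average" is an estimate with a standard error
    mean_01 = anoms_2001.mean()
    mean_07 = anoms_2007.mean()
    se_01 = anoms_2001.std(ddof=1) / np.sqrt(len(anoms_2001))
    se_07 = anoms_2007.std(ddof=1) / np.sqrt(len(anoms_2007))

    # Difference between the two estimates, and its combined standard error
    diff = mean_07 - mean_01
    se_diff = np.hypot(se_01, se_07)

    print(f"difference = {diff:+.3f} C, std. error = {se_diff:.3f} C")
    ```

    [With 50 stations and 0.25 C scatter, the standard error of the difference is around 0.05 C — comparable to the 0.05 C true difference assumed here, so the sign of the difference is genuinely in doubt.]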

  40. Hoi Polloi

    If Staniak, or if anybody, wants to discuss this, then let’s do so. But we cannot let ourselves be distracted by cheap debating tricks.

    I’m afraid that, especially in the last 2 years, cheap debating tricks are all the AGW advocates have left…

  41. DEEBEE

    Grzegorz, hats off. Obviously your mastery of the English language is much better honed than mine.
    I read and re-read the other posts and could not divine, the way you do, the posters’ intent.
    Is that attained by going to Tamino’s pit-bull school of argumentation?

  42. DEEBEE

    Oh, and Nick, as usual your neurons always tend to focus on the most insignificant points.

  43. Grzegorz Staniak

    @Hoi Polloi

    Yeah, you keep repeating that to yourself. Hard proofs are cheap debating tricks. Lies repeated after tabloids are “scientific debate”. War is peace. Freedom is slavery. Ignorance is strength.

  44.

    DEEBEE says: 2 November 2011 at 7:28 am

    “Oh and Nick as usual your neirons always tend to focus on the most insignificant points”

    Not sure what they are, but the heading of this post is:
    BEST’s Worst Work; What Is Significant?

    I don’t think the fact that what is discussed is not from BEST but from GWPF is insignificant. Especially as the BEST paper is very heavily focussed on the estimation of error.
