Be sure to see: Part I, Part II, Part III, Part IV, Part V
Aside: counterfactuals
A counterfactual is a statement of what would have been the case had its condition been true. Like, “Germany would have won WWII if Hitler had not invaded Russia.” Or, “The temperature at our spot would be X if no city existed.” Counterfactuals do not make statements about what really is, but only about what might have been, given that something that wasn’t true was true.
They are sometimes practical. Credit card firms face counterfactuals each time they deny a loan and say, “This person would default if we issued him a card.” Since the decision to issue a card is based on some model or other decision process, the company can never directly verify whether its model is skillful: it will never issue the card to find out whether or not the holder defaults. In short, counterfactuals can be interesting, but they cannot change what physically happened.
However, probability can handle counterfactuals, so it is not a mistake to seek their quantification. That is, we can easily assign a probability to the Hitler, credit card, or temperature question (given additional information about models, etc.).
Asking what the temperature would be at our spot had there not been a city is certainly a counterfactual. Another is to ask what the temperature of the field would have been had there been a city. This, too, is a strange question to ask.
Why would we want to know what the temperature of a non-existent city would have been? Usually, to ask how much humans who don’t live in the city at this moment might have influenced the temperature in the city now. Confusing? The idea is that if we had a long series in one spot, surrounded by a city that was constant in size and makeup, we could tell whether there were a trend in that series, a trend caused by factors not directly associated with our city (but related to, say, the rest of the Earth’s population).
But since the city around our spot has changed, if we want to estimate this external influence, we have to guess what the temperature would have been had the city either always been there or never been there. Either way, we are guessing a counterfactual.
The thing to take away is that the guess is complicated and surrounded by many uncertainties. It is certainly not as clear cut as we normally hear. Importantly, just as with the credit card example, we can never verify whether our temperature guess is accurate or not.
Intermission: uncertainty bounds and global average temperature
This guess would—should!—have a plus and minus attached to it, some guidance as to how certain we are of the guess. Technically, we want the predictive uncertainty of the guess, and not the parametric uncertainty. The predictive uncertainty tells us the plus and minus bounds in the units of actual temperature. Parametric uncertainty states those bounds in terms of the parameters of the statistical model. Near as I can tell (which means I might be wrong), GHCN and, inter alia, Mann use parametric uncertainty to state their results: the gist being that they are, in the end, too confident of themselves.
(See this post for a distinction between the two; the predictive uncertainty is always larger than the parametric, usually by two to ten times as much. Also see this marvelous collection of class notes.)
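To make the distinction concrete, here is a minimal Python sketch (all data invented; this is not GHCN’s or Mann’s method, just an ordinary regression) showing that statsmodels reports both kinds of bounds, and that the predictive pair is always the wider:

```python
import numpy as np
import statsmodels.api as sm

# Invented 30-year annual series, for illustration only
rng = np.random.default_rng(42)
years = np.arange(1980.0, 2010.0)
temps = 12.0 + 0.02 * (years - 1980) + rng.normal(0, 0.5, years.size)

fit = sm.OLS(temps, sm.add_constant(years)).fit()

# Predict a few new years and pull both sets of bounds
new_x = sm.add_constant(np.arange(2010.0, 2013.0))
frame = fit.get_prediction(new_x).summary_frame(alpha=0.05)

# mean_ci_*: parametric bounds (uncertainty in the fitted line only)
# obs_ci_*:  predictive bounds (uncertainty in an actual new temperature)
print(frame[["mean_ci_lower", "mean_ci_upper"]])
print(frame[["obs_ci_lower", "obs_ci_upper"]])  # always the wider pair
```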
OK. We have our guess of what the temperature might have been had the city not been there (or if the city was always there), and we have said that that guess should come attached with plus/minus bounds of its uncertainty. These bounds should be super-glued to the guess, and coated with kryptonite so that even Superman couldn’t detach them.
Alas, they are usually tied loosely with cheap string from a dollar store. The bounds fall off at the lightest touch. This is bad news.
It is bad because our guess of the temperature is then given to others who use it to compute, among other things, the global average temperature (GAT). The GAT is itself a conglomeration of measurements from sites covering a very small (and changing) portion of the globe. Sometimes the GAT is a straight average, sometimes not, but the resulting GAT is itself uncertain.
Even if we ignored the plus/minus bounds from our guessed temperatures, and also ignored them from all the other spots that go into the GAT, the act of calculating the GAT ensures that it must carry its own plus/minus bounds—which should always be stated (and stated with respect to the predictive, not the parametric, uncertainty).
But if the bounds from our guessed temperature aren’t attached, then the eventual bounds of the GAT will be far, far too narrow. The gist: we will be way too certain of ourselves.
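Here is a toy numpy sketch of why the dropped bounds matter (every number invented; stations assumed independent and equally weighted, which real gridding is not):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                # hypothetical station count
guesses = rng.normal(14.0, 1.0, n)     # homogenized station estimates, deg C
sigma = 0.8                            # each estimate's predictive sd, deg C

gat = guesses.mean()

# Bounds computed as if each station estimate were exact:
se_naive = guesses.std(ddof=1) / np.sqrt(n)

# Bounds that carry each station's own uncertainty through the average
# (assuming independent errors):
se_full = np.sqrt(se_naive**2 + sigma**2 / n)

print(f"GAT = {gat:.2f} C, naive +/-{1.96 * se_naive:.2f}, "
      f"full +/-{1.96 * se_full:.2f}")
```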
We haven’t even started on why the GAT is such a poor estimate of the global average temperature. We’ll come to these objections another day, but for now remember two admonitions. No thing experiences the GAT; physical objects can only experience the temperature of where they are. And since the GAT contains a large (but not large enough) number of stations, any individual station—as Dick Lindzen is always reminding us—is, at best, only weakly correlated with the GAT.
But enough of this, save that we should remember these admonitions hold in whatever homogenization scenario we are in.
Next time
More scenarios!
My travails of a week ago, battling the air at thirty-eight thousand feet, were such a life-affirming experience that I have decided to repeat it. From this afternoon, I will be out of contact for a day or so.
Hi –
Having, at one point for my sins, had to splice multiple base years together without overlapping data, I can say this is nothing terribly difficult, but you do need to understand what is to be done.
I think – and fear – that the climatologists who were tasked to do this were given neither the time nor the resources to do a proper job. I say that because this is going to be very, very painful for quite a number of people who consider themselves professionals, and it was/is completely unnecessary. Perhaps it was deliberate, so that they could get out from the mucky world of empirical data and apply themselves to their theoretical models; perhaps it is simply sloppy; perhaps we’ll never know.
Simply put, the most recent data have to be inviolable: they are, after all, what is actually being produced by the station, regardless of where it is located. What should be of interest is not the level of temperature, but rather the vector of temperature changes: this is what is being empirically verified, n’est-ce pas? The claim, after all, is that the world is warming!
There is one fundamental set of data points that has apparently not survived: max and min. We have the average (I imagine this is simply (max+min)/2) but not the variance. This is severely unfortunate, since the variance is what is really interesting when watching the temperatures and, more importantly, would actually give us a handle on what could be used to homogenize a vector over time. After all, urbanization doesn’t just raise temperature levels, it also changes the min/max relationship because of heat retention. With that information, you could normalize the temperature variations; without it, you have to leave the data as is.
You can deal with this in two ways: either by accepting the temperature breaks or by making an assumption at the transition points (say, by working with the most recent vectors of series 1 and series 2 at the point of transition, which can work rather nicely to avoid transition errors in industrial time series, though one can also use the average vectors at the transition points based on calendar analysis… but I digress).
From a statistical viewpoint, leaving the vector breaks in the series can be quite useful: you then use a dummy variable in your regression analysis to get a fairly accurate rendition of long-term growth rates.
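A minimal sketch of that dummy-variable idea (invented data; the break date is assumed known, say from station metadata):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
t = np.arange(60.0)                         # hypothetical years of record
temps = 10 + 0.01 * t + rng.normal(0, 0.3, 60)
temps[30:] += 1.5                           # a splice: level shift at year 30

step = (t >= 30).astype(float)              # dummy: 0 before the break, 1 after
X = sm.add_constant(np.column_stack([t, step]))
fit = sm.OLS(temps, X).fit()

# params = [intercept, trend, shift]: the trend is estimated cleanly,
# without rewriting a single raw observation
print(fit.params)
```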
Nonetheless there are huge problems with the way the numbers have been crunched and modified: the climatologists appear to be making the error of treating the oldest numbers as their baseline and adjusting the newest numbers away from their empirical values as a result. This is an egregious error of the first order. As I said, the most recent data should never be changed: this is reflected in Eschenbach’s analysis in Figure 6, where he shows how it should have been done.
Instead, it appears that the climatologists cross-reference temperature scalars from neighboring stations (even when these are hundreds of miles apart) and apply correction factors to harmonize temperatures within a spatial cell based on an apparently arbitrary value or set of values determined by … I’m not sure what.
And neither can the climatologists with … any degree of accuracy (pun intended).
Such fundamental stuff, & so fundamentally important….
It turns out our favorite “open minded” blogger has touched on this theme, if somewhat obliquely, at:
http://tamino.wordpress.com/2009/12/07/riddle-me-this/#comments
There’s lots of fodder there for a specific example involving uncertainties, etc. & the curious remark that effectively states that 10 years of climate data is too little (“a fool’s exercise”).
I couldn’t help but notice the remark: “…with GISS data I started at 1975 because that’s a natural turning point in the temperature trend…”
For a nice example of that “turning point” (which seems closer to 1970 than 1975, but who’s quibbling) see:
http://wattsupwiththat.com/2009/06/28/nasa-giss-adjustments-galore-rewriting-climate-history/
John:
I think what you are saying is relevant to forecasting, hindcasting and reconstruction.
I am not sure I quite get what you are saying about tmax and tmin, since these in themselves hide the actual temperature record. I would assume that the actual temperature of interest depends on what question you are asking. For example, tmin strikes me as being more relevant when talking about a GHG effect. The average temperature itself has no special value, or does it?
Bernie:
Given that the full analogue spread of a day’s temperature is not available – in other words, a continuous temperature feed – the use of tmax and tmin is a decent proxy for determining the “average” temperature (I use scare quotes because we know how sudden temperature extremes can distort an otherwise pristine Gedankenexperiment), simply by removing the minimum temperature from the maximum and dividing the difference by two to get the average temperature. Hence if tmax = 100° and tmin = 32°…
Woops: my bad. That’s not (tmax-tmin)/2, but rather (tmax+tmin)/2. Hence the average temperature would be 66°. I spend too much time with vectors, not enough time with scalars…
But that average temperature represents a large solution set in terms of tmax and tmin, i.e. you can provide a very large set of pairs that give you 66°: 90° + 42°, 110° + 22°, 200° + (−68°)… having the min and max gives you the personality of that single calculated value.
You are right: the average temperature doesn’t have any special value. But tmax and tmin don’t “hide” the actual temperature, but rather give the parameters to understanding temperatures.
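A small illustration of that solution-set point:

```python
# Many (tmax, tmin) pairs collapse to the same 66-degree average; the pair
# itself carries the "personality" (the daily range) that the average hides.
pairs = [(100, 32), (90, 42), (110, 22), (200, -68)]
for tmax, tmin in pairs:
    print(tmax, tmin, "mean:", (tmax + tmin) / 2, "range:", tmax - tmin)
```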
Matt, wishing you Nice Air today. Here’s a link from 2007 concerning “extended facts” and “post-normal science” that is interesting.
On using Tmax and Tmin: Dr. Christy, with Dr. Pielke Sr. and I forget who else, did a study back in 2006 in California that showed that average surface temperatures are poor proxies for greenhouse detection because of contamination by human development.
Starting at 13:45 of this link, Dr. Christy, in a lecture at Auburn, shows why it’s a poor proxy:
http://www.youtube.com/watch?v=-WWpH0lmcxA
boballab
Many thanks for the link. It was long but worth it. John Christy is not a great public speaker but he sure is credible.
But tmax and tmin don’t “hide” the actual temperature, but rather give the parameters to understanding temperatures.
John, you forgot to add “error-free” parameters, because none of the measurement, parametric, or probabilistic error is calculated, described, or even considered. It’s there, but the crunchers ignore it, or throw out some jimmied “confidence” bounds at odds with all statistical logic.
Further, “warmth” is temperature across time, i.e. the area under the curve. If warmth is what you are interested in, I suggest you don’t throw away (or smooth) any of the raw data, but be mindful of the fact that times and temperatures are paired data points.
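A sketch of warmth as area under the curve, on an invented diurnal series (a real analysis would integrate the actual paired readings):

```python
import numpy as np

hours = np.arange(25.0)                             # 0..24 h, one toy day
temps = 10 + 8 * np.sin((hours - 9) * np.pi / 12)   # invented diurnal curve

# Integrate the paired (time, temperature) points as they stand; smoothing
# the series first would change this area, which is the warning above
warmth = np.trapz(temps, hours)                     # degree-hours
print(f"{warmth:.1f} degree-hours of warmth")
```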
In the Darwin case, however, statistical logic does not apply to the data, directly, because the data themselves had been jimmied. Quoting Willis Eschenbach:
Those, dear friends, are the clumsy fingerprints of someone messing with the data Egyptian style… they are indisputable evidence that the “homogenized” data has been changed to fit someone’s preconceptions about whether the earth is warming.
How do you calculate the error of manufactured data?
It’s not really a problem of estimating counterfactuals or their confidence bounds. It’s fraud. That’s something else entirely. Eschenbach used statistics and statistical logic to uncover a fraud.
It’s generally understood that the uncertainty (e.g., standard error) of estimation and the efficiency of a test should be quantified and examined. Any paper without those measures will probably be rejected by a statistical journal.
Good luck on the trip. Air looking forward to a long weekend off can become evilly testy when forced to work on a Friday.
Keep up the good work Briggs.
Update on non-warming in Copenhagen: As the emergency AGW conference continues, it is perhaps ironic to calmly study some weather information readily available on the internet. I apologize that the figures are not absolutely precise, because they are taken from graphs at weatheronline.co.uk. Anyone can check my calculations, which took about 20 minutes and were not taxpayer funded.
In the last 28 years (as far back as the online records go), the highest December temperature in Copenhagen was 11 degrees C, and that was back in 1983. Over these years, the average highest December temperature was around 7 C.
First day: a high of 7 C, exactly the same as the 28-year average high and 4 degrees COOLER than the 28-year record high.
Second day: a high of 7 C, the same.
Third day: a high of 6 C, 5 degrees COOLER than the 28-year record December high.
Fourth day: a high of 6 C.
Fifth day: a high of 5 C, 6 degrees COOLER than the 28-year record December high.
Can someone please point this out to the Met, the BBC and all the eminent and learned delegates?
A point, perhaps… NOAA keeps temperature on an hourly basis (I think). Is this what is averaged, or do they use the max/min only? On the same quest: what is a NOAA “Degree Day”, as reported for what seems like forever by our local utility? The utility seems to believe it equates to the energy needed to heat and cool.
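On the degree-day question: the usual US convention computes heating and cooling degree days from the daily max/min midpoint against a 65°F base, which is why utilities treat them as an index of heating and cooling demand. A minimal sketch of that convention (check NOAA’s own documentation for the details of any particular product):

```python
BASE_F = 65.0  # conventional US base temperature for degree days

def degree_days(tmax_f: float, tmin_f: float) -> tuple[float, float]:
    """Return one day's (heating, cooling) degree days."""
    tmean = (tmax_f + tmin_f) / 2        # the familiar max/min midpoint
    return max(0.0, BASE_F - tmean), max(0.0, tmean - BASE_F)

print(degree_days(50, 30))   # cold day: (25.0, 0.0) heating degree days
print(degree_days(90, 70))   # hot day:  (0.0, 15.0) cooling degree days
```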
It is much worse than any of you think.
The “alleged” raw data are a measurement, once a day, of the readings on a maximum/minimum thermometer. The maximum is on a different calendar day from the minimum. Some stations switch to the right day, others do not. The time of observation varies within a country and from time to time.
The temperature follows an irregular, skewed progression. The midpoint of the two extremes is no true “average” of such a skewed distribution.
Most readings have been taken by opening the screen, thus changing the temperature suddenly, usually cooling it. The automatic stations are thus different from the older stations.
There is no standard screen. I was surprised to learn that there are two different screens used in the USA.
The average of a maximum and a minimum reading cannot be converted into a true time-averaged temperature by any mathematical process. No empirical comparison has been made as far as I know.
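A quick numerical check of that point, with two invented diurnal curves sharing the same extremes:

```python
import numpy as np

hours = np.linspace(0, 24, 2000)
phase = (hours - 9) * np.pi / 12

# Both curves run between 10 and 20 degrees, so both have midpoint 15
symmetric = 15 + 5 * np.sin(phase)
skewed = 10 + 10 * ((np.sin(phase) + 1) / 2) ** 3   # lingers near the minimum

for name, curve in [("symmetric", symmetric), ("skewed", skewed)]:
    midpoint = (curve.max() + curve.min()) / 2
    true_mean = np.trapz(curve, hours) / 24
    print(f"{name}: (max+min)/2 = {midpoint:.2f}, time average = {true_mean:.2f}")
```

The midpoint is identical for both, but the skewed curve’s true time average comes out nearly two degrees lower.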
The urban effect is judged by comparison between “urban” sites and “rural” sites, but the errors of “rural” sites are ignored.
Instruments vary, stations move or are upgraded, the number of stations varies, observers are usually “volunteers”, and at times were slaves or unpaid employees.
There is no quality control. Anthony Watts reckons that most US stations are unable to measure temperature better than ±2°C. He found that even changing from whitewash to latex paint caused a half-degree bias.
Finally, it is a fraud. The 1990 paper quoted by the IPCC to prove no urban warming was fraudulent, since two of the same authors published another paper the same year, using some of the same data, which showed urban warming, and the data have been shown to be contaminated. Climategate gives details of how the computer code is “fudged” (their own words).
Mr. Gray–are you the same Vincent Gray from IPCC? Just asking.
“The ‘alleged’ raw data are a measurement, once a day, of the readings on a maximum/minimum thermometer. The maximum is on a different calendar day from the minimum. Some stations switch to the right day, others do not. The time of observation varies within a country and from time to time.”
Well, as a non-scientist, that seems like a very poor way to do business. That’s zero quality control…which means that, in effect, none of the data are reliable?
“Finally, it is a fraud. The 1990 paper quoted by the IPCC to prove no urban warming was fraudulent, since two of the same authors published another paper the same year, using some of the same data, which showed urban warming, and the data have been shown to be contaminated. Climategate gives details of how the computer code is ‘fudged’ (their own words).”
Are you referring to these two papers?
Jones, P. D., P. Ya. Groisman, M. Coughlan, N. Plummer, W. C. Wang & T. R. Karl, 1990. Assessment of urbanization effects in time series of surface air temperature over land. Nature 347, 169–172.
Wang, W.-C., Z. Zeng & T. R. Karl, 1990. Urban Heat Islands in China. Geophys. Res. Lett. 17, 2377–2380.
Also, I have a question for John. Please keep in mind I’m not a statistician, just a curious observer. What value does the average temperature have? If greenhouse theory is correct (and I have doubts about that, too, since it seems to violate the laws of thermodynamics), then the minimum temperature would be more important than the maximum, yes? Or, to put it in the context of so-called global warming, hotter summers wouldn’t be the issue. It would be warmer winters: the Tmin would rise and skew the average over time.
Am I at least in the ballpark? 🙂
Mr. Gray’s comments about the “raw data” are, in my opinion and experience, factual.
This comes from a volunteer/slave weather observer.