Several readers asked me to look at Ross McKitrick’s paper “HAC-Robust Measurement of the Duration of a Trendless Subsample in a Global Climate Time Series”, which is receiving the usual internet peer-reviewing (here, here, and here).
Before we begin, it is absolutely crucial that you understand the following point: both the IPCC (you know I mean the people and groups which contribute to it) and McKitrick have produced time series models.
Many people and groups have created time series models of the temperature, including the rank amateurs who attended the People’s Climate March. The latter model is, in essence, “The End Is Nigh”. This is simplistic, yes, and stupid certainly, but it is still a time series model.
Now we know, without error, that the IPCC’s time series model stinks. That it should not be trusted. That decisions should not be made based on its forecasts. That it is, somewhere, in error.
How do we know this? Because it has consistently and for many, many years said temperatures would be high when in reality they were low (relative to the predictions). People who refuse to see this are reality deniers.
Because the IPCC’s model said temperatures would be high these past eighteen or so years, when in reality the temperature bounced around but did nothing special, the IPCC has taken to calling reality a “pause” or “hiatus”. Everybody must understand that this “hiatus” is model-relative. It has nothing to do with reality. Reality doesn’t know squat about the IPCC’s model. The reality versus the model-relative “hiatus” is how we know the IPCC’s model stinks.
If the IPCC’s model did not stink, it would have predicted the reality we saw. It did not predict it. Therefore the model stinks. The debate really is over.
Now where the IPCC’s model goes wrong is a mystery. Could be it represents deep ocean circulation badly; could be that cloud parameterizations are poor. Could be a combination of things. It’s not our job to figure that out. The burden is solely on the IPCC to identify and fix what’s busted.
Enter McKitrick, who has his own model (or models; but for shorthand, I’ll speak of one). McKitrick’s model is a standard econometric model, which uses the Dickey-Fuller test (economists are always using the Dickey-Fuller test; I just like to say, “Dickey-Fuller test”; try it).
Is McKitrick’s model any good? There is no reason to think so. (Sorry, Ross.) It’s just a simplistic set of equations which is scarcely likely to capture the complexity of the atmosphere. If McKitrick’s model should be trusted, there is one test it could take to prove it. The same test the IPCC took—and failed.
McKitrick needs to use his creation to predict data he has never before seen. He hasn’t done that; and in fairness, he hasn’t had time. We need to wait a decade or so to see whether his model’s predictions have skill. But in a decade, I predict nobody will care.
The objection will be raised: but McKitrick’s model was built not to make predictions but to measure how long the “hiatus” was.
We needed a model for that? No, sir. We did not. We could just use our eyes. We need no model of any kind. We just take reality as she comes. To show you how easy it is to fool yourself with time series, here’s Figure 1 from McKitrick’s paper:
It shows “Globally-averaged HadCRUT4 surface temperature anomalies, January 1850 to April 2014. Dark line is lowess smoothed with bandwidth parameter = 0.09.” Let’s don’t argue about the dots, i.e. the temperature, a.k.a. reality, which really should have accompanying error bounds. Let’s just assume that the dots were the reality, full stop.
The black line is a chimera, a distraction, put there to fool the eye into believing the author has discovered some underlying "signal" in the reality. Well, he might have done. But if he has, he should be able to pass the reality test mentioned above. Unfortunately, you can't make forecasts with that kind of black line. The black line is not what happened! To say it is, is to commit the Deadly Sin of Reification.
We must take reality as she is. All we need is a working definition of trend. Easy, right? No, sir. Not really. See this post. But skip all that and call a trend, “Over any ten year period, the temperature increased more than it decreased.” That’s one possible definition of trend.
Accepting that definition (but feel free to make up your own, using the post as a guide), there is no trend in the last two decades. But then there are many other periods since 1850 without trends. So maybe bump up the time window to 20 years. Still no trend in the latter years.
And so on. No model is needed. None. We just look. There is no need for “statistical significance”, or any other pseudo-quantification.
Listen: make sure you get this. It doesn’t even matter if the IPCC or McKitrick perfectly predicted reality. We still do not need their models to see whether there was a trend. A trend only depends on (1) its definition, and (2) reality.
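For anybody who wants to try that counting definition on the numbers themselves, here is a minimal sketch in Python. The numbers are made up for illustration (substitute the actual HadCRUT4 anomalies if you like); the point is that the check is mere counting, not modeling.

```python
# A minimal sketch of the counting definition of "trend" given above.
# The numbers below are invented; swap in real monthly anomalies if you like.

def trend_by_counting(series, window):
    """Over the last `window` values, did the series go up month-to-month
    more often than it went down?"""
    recent = series[-window:]
    ups = sum(1 for a, b in zip(recent, recent[1:]) if b > a)
    downs = sum(1 for a, b in zip(recent, recent[1:]) if b < a)
    if ups > downs:
        return "up"
    if downs > ups:
        return "down"
    return "no trend"

# A flat-ish toy series: as many upticks as downticks.
temps = [0.40, 0.42, 0.39, 0.41, 0.40, 0.43, 0.38, 0.41, 0.40, 0.40]
print(trend_by_counting(temps, 10))  # prints "no trend" for this toy series
```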
Update Ken below discovered this gem, which shows Richard Feynman destroying the IPCC’s global warming models.
??? Why do I need to predict anything? The definition I propose is for the purpose of measuring something. I propose a definition of something measured in years, and then apply the technique to come up with an estimate of the number of years in question. At no point do I claim that it can predict anything out of sample.
The technique in my paper is simply not a prediction model. Nor, for that matter, is it useful for hunting rabbits, baking muffins, or refinishing furniture. I suppose any number of website posts could be written pointing out all the things my model cannot do.
What would be interesting, though, would be a comment along the lines of: Here is the definition McKitrick proposes; then either (i) it’s a good definition and I concur with the results, or (ii) here’s a better definition and this is what it implies. So my question is, assuming you read the paper, do you think the definition I propose of the hiatus is suitable, and if not, what would you propose instead?
Ross,
Your comment was anticipated above. All statistical models are predictive; or rather, predictions are implied by them. You have in no way demonstrated your model is any good. (We know the IPCC’s isn’t.) A hundred different statistical models can be fit to the same data, each telling us different things. Why trust any of them before they have proven their worth? Answer: why indeed?
I don’t think you, and probably new readers, really grasp what statistical models are. Ignore the points made by Doug and assume you’ve calculated everything correctly. So what? Like I said (I think you missed this), to see whether there was a trend requires only our eyes (and a working definition of trend). Your model isn’t needed. Nobody’s is. So don’t take it personally.
You say, “assuming you read the paper.” Good grief, Dickey-Fuller!
Oh, that link about trends is worth a read.
Someone might want to make a transcript/quote of the following 61 second explanation by Richard Feynman, and include that with every discussion on models, especially global warming models, that don’t comport with reality:
http://www.youtube.com/watch?v=viaDa43WiLc
Many of those that espouse the flawed models as credible also idolize Feynman. Being confronted with his clear refutation, those ‘espousers’ will be faced with ‘cognitive dissonance’ and will be annoyed, perhaps driven insane (it’s possible)…which is the next best thing to changing their minds; changing their minds, though possible, is even more remote than driving them insane.
All,
To understand the real purpose of statistical models, be sure to read items from the Classic Posts on the subject. If you’re familiar with regression, and most users of statistics are, read those first.
Statistics just isn’t what you think (probably).
Dickey-Fuller.
Update Ken, Love it! I had never seen that clip before. It’s being promoted to the post proper.
All,
I see I missed Ross’s last request, where he asked “do you think the definition I propose of the hiatus is suitable, and if not, what would you propose instead?”
I propose reality instead. The “hiatus”, as I took some pains to define, is model relative. There is in reality no hiatus. To say there is a “hiatus” is to assume, against all evidence, that the model was true. But how could it be true when it blew the forecasts? Answer: it cannot be true. Therefore to speak of a “hiatus” is quite literally to speak nonsense.
Hey Matt,
What I seem to be picking up here is that it seems impossible to describe (in words) both succinctly AND accurately some sort of physical behavior–like the earth’s temperature history over the past couple of decades–other than perhaps, “it is what it is.”
Given limited word counts in many media outlets, how then do you suggest we best address the situation?
-Chip
The flaw is your assumption that the "dots" are reality. They are not. They are dots created from equations that attempt to measure the global average temperature. They have been averaged across surface area and assumed to have a meaning.
Ross is effectively averaging across surface area and time to create another “non-reality” which may or may not be useful, but is not to be assumed to be more flawed than the first “non-reality”.
I get the point that we should simply examine observations, in which case we should simply plot the output of every thermometer on earth every millisecond over a century and see what we have. We would get a huge wide band of dots much larger on the vertical axis in which trends would largely be invisible (and no doubt this is a useful exercise for those convinced this “large” temperature rise is alarming).
But even this has serious flaws. Certain areas of the globe are poorly covered and actually have the largest temperature changes.
The point in the post is almost purely academic. I think this is really a trade off of data visualization techniques (not models!) and which one shows the data in the most useful way.
Ken @ 10:20 am
No, I don’t think that the espousers will change their minds or become insane. They just put forward the CAGWers’ versions of epicycles, then epicycles on epicycles, epicycles on epicycles on epi……..
Anthony Watts has a list of 52 of them so far http://wattsupwiththat.com/
Chip,
If you’re picking that up, that means I’ve described things badly.
The temperature is what it is. We look and see and report. Finished!
Now you either have an explanation of why the temperature was what it was, or you don’t. Simply reporting what happened does not require us to say why what was, was.
But the IPCC purports to say why, and so does McKitrick, and so do many others (like those marchers). Very well, they have made a claim saying they know. Let’s test it, à la that Feynman quote.
We have already seen that the IPCC fails the test (multiple times). We don’t know whether McKitrick will or not, but it’s reasonable to suppose he will. I know Ross doesn’t see that his model implies a prediction of new data, but I accept responsibility for that, because that part of the philosophy of probability is rarely taught (and never in econometrics courses, to my knowledge). Nevertheless, his model does, indeed, imply a prediction. We can let his model make it (supply it with times t+1, t+2, … and see what happens) and then wait. But I don’t see the profit in it, because that’s not why he made his model.
He made his model to replace reality with a model, and to say the model says the “hiatus” is this many years. This doesn’t fly. We already have the reality, we don’t need the model.
And see my earlier comment about the “hiatus.” To say there is one is to say the IPCC model is correct and that reality is wrong. It just makes no sense whatsoever.
But see tomorrow’s post (if you haven’t already read the recommended links). I’ll have another go explaining the purpose of statistical models.
Tom S,
You’ll have noted where I said I accept the dots arguendo. It’s valid to discuss this assumption elsewhere, but it’s irrelevant here.
To be more accurate the dots are already averaged over time (usually a year) and Ross’s technique is simply averaging over a larger period of time. I use “average” in the most basic sense, the techniques actually used are mathematically sophisticated forms that effectively do averaging and “noise” removal in a more advanced manner.
Hopefully the result is more useful than simple area or time averaging. This should not be assumed however. “Advanced” statistical methods are sometimes used to willfully torture data to find a preconceived result. Hockey Stick anyone?
In my opinion, the more advanced the methods, the more wary one should be of the result. At the very least the authors should show the results of the basic methods (simple averaging, etc.) and then demonstrate why the advanced methods were superior for the intended result and show the data transformations through each step. What we mostly get is raw information popped into this black box of mathamagic, and out pops truth.
Go examine the raw tree ring data used in reconstructions and compare that to the alleged HS “truth” output. Ever seen the media show a plot of simple averaged raw tree ring data? Spaghetti plots? Ever wonder why not?
This is not to say there are not legitimate uses for advanced methods. Separating the legitimate uses is pretty difficult for the layman. Science needs to be trusted to do it right, and I have begun to lose faith that the charlatans are being properly marginalized; they seem to be rewarded all too often.
First, I agree with Briggs that measurements and trend lines based upon them are somewhat meaningless.
I’ve always thought that averaging temperature across many sites was a fool’s game. In reality there is no average temperature for the earth that makes any sense. There are just not enough data points.
Instead I would like to see temp records for the individual stations. Then regional comparisons over time might be valid.
I have had many bizarre conversations with people that worship the statistical measurements of data. The phrase “I can simply look at this and tell you this linear estimation of the raw data is equal to exactly crap” doesn’t always go over well.
An example of this is tornado and hurricane trends. There is nothing to see here and you can tell this in 2 seconds looking at the raw information. But, but, but there is a 2% increase in the trend over the last 35 years (but not the last 37 years) and if we extrapolate that over time and exponentially model it due to carbon hand waving, Armageddon is certainly here, already, just look out the window, this is what climate change looks like.
@ VftS at 12:08: “No, I don’t think that the espousers will change their minds or become insane. They just put forward [more & increasingly creative rationalizations]”
UNDOUBTEDLY TRUE. (One might debate their sanity as-is…) One curious facet of human nature is that the more effort [& time] expended, the more one is inclined to believe in the thing invested in (or, the less likely one is inclined to reject a strongly held belief obtained at great effort).
The global warming climate change & disruption cultists will undoubtedly respond to repeated failures with renewed vigor of their own belief. It’s a curious trait of human nature:
Typical examples include instances of religious cults expecting the 2nd coming/end-of-times/etc. and nothing happens — repeatedly the cultists believe even more vigorously in their leader & beliefs rather than seeing him [usually a him] for the self-serving manipulator [or nutcase] he is…often the cult grows in response to prophetic failures! If some of the more abusive cult leaders understood this they wouldn’t be inclined to have their flock commit mass suicides when they realized their “jig” was up…instead they’d concoct some rationalization because their “brainwashed” flock will believe even more when the prediction is proven false.
Related are anti-smoking campaigns that included graphic imagery (e.g. of black lungs) seemingly sure to induce revulsion–but in fact leading to increased smoking; etc.
Psychologists–in this area a subgroup heavily lopsided to those working for marketing firms–have some very interesting studies regarding this consistently observed behavioral pattern (but they’re basically descriptive of what various prompts lead to what response themes — what will happen if this or that approach is used … with minimal or less explanations for why people respond this way).
Also consider military basic training — We all “know” the US Marine Corps (USMC) is, in the U.S., the “elite” of the combat arms soldiers (not getting into the special cases of the various special forces). Many/most of us have heard about the USMC’s brutal basic training culminating in “hell week” and so on & so forth, illustrating how only the most elite graduate.
Contrast that behavioral stereotype [one “everybody knows”] with some numbers: USMC basic training washout rates are lowest among the combat services (excluding USAF, which emphasizes brains over brawn & whose low washout rate ought surprise no one); see: http://usmilitary.about.com/od/joiningthemilitary/l/blbasicattrit.htm Intuitively, the tougher & more demanding the training the higher the washout rate — and the actual numbers generally come as a surprise to everyone.
So, a question is, is USMC training really that much tougher (as any Marine will proudly assert and most “everybody knows”), or, is the psychological indoctrination just that much more refined?
If you wanna debate that — or address other factors as more relevant — you might first want to do a background check on who’s involved or listening in.
That’s presented to highlight a strong indicator: If/when one confronts/challenges a popular truism [as being something other than the “truism” it’s held to be] and the response is highly emotional, chances are the belief is held for emotional, not logical/objective/rational, reasons. In such situations, changing such a belief is very very difficult … and not always possible regardless of the facts…application of some alternative emotion is often more effective in changing the belief than objective presentations of facts & tutoring, (many consider the emotional approach, often correctly, as ‘pursuing the right thing for the wrong reasons’ or the ‘wrong means justified by a proper end’).
I personally cannot decide whether the global-averaged temperatures during the last, say, 15 years have overall leveled off or decreased slightly without the black smoothed line on the monthly data plot (Figure 1). The graph also indicates the data values after the year 2000 are mostly higher than those prior to 2000.
Mr. Briggs,
Imply a prediction of what?
Now this paragraph really raises the question of whether you have read McKitrick’s paper. He proposes a method (not a model) to statistically estimate the duration of the hiatus/levelling-off. Yes, he has to assume a model with a deterministic linear trend for the proposed method since linear trend over time is what he is talking about. I have no time to explain more about what’s in the paper. However, his conclusion:
I am not sure what "test" it could take to prove whether he is correct, but aren't you happy he actually confirms your claim that the global-averaged temperatures have leveled off during the past however "many, many years"?
So how did you decide the global-averaged temperatures have leveled off by looking at the monthly data plot shown here? You did not calculate the difference between consecutive months and then count if there are more negatives than positives, did you?
Ah, the eye-ball method rocks! Yes, it can work and no thinking is needed. Sometimes, we do see what we want to see.
It would be nice if you can show your readers how to run a Bayesian analysis on the monthly HadCRUT4 data shown in Figure 1 and make predictions!
JH: “He proposes a method (not a model) to statistically estimate the duration of the hiatus/levelling-off. ” If estimating the duration of the hiatus is not prediction, then what is it? It’s a prediction, plain and simple. He is attempting to estimate a future point where the temperature starts to go up or down. Future point=prediction. If he is simply reporting the temperature average and how it changes, that’s just taking numbers and putting them on a graph. No statistical method required, other than averaging the temperatures to obtain a single point per year.
Question on statistics: When is the use of average appropriate? In data sets with huge variation, does average tell us anything? I cannot see how average is useful with widely varying data. It’s like hitting a target with a shotgun blast and then giving us the “average” of where the pellets hit. It means nothing whatsoever. Anyone?
“If the IPCC’s model did not stink, it would have predicted the reality we saw. It did not predict it. Therefore the model stinks. The debate really is over.”
You’ve never got the hang of the “how to apply for a green research grant” thing have you….:-)
“Dickey-Fuller test”
Motion at the next international stats shebang to rename it the “Fuller-Dickey”; rolls off the tongue better…..
Sheri,
McKitrick proposes a definition for “the duration of hiatus” and then finds when the hiatus started based on statistical estimation results.
I give grades of A to top 15% scores. My definition of A. Now I examine the scores to find the minimum score for the grade of A. Would you say that I “predict” the minimum score?
I won’t argue with you if you want to call it a prediction; however, the paper does not make a prediction of a future point or future value of temperature (the variable of interest).
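A tiny sketch of that grading analogy, with made-up scores: the cutoff is computed from the scores already in hand; nothing about future scores is forecast.

```python
import numpy as np

# Hypothetical exam scores, invented for illustration.
scores = np.array([55, 62, 68, 71, 74, 78, 81, 84, 86, 88, 90, 93, 95, 97, 99])

# The cutoff for the top 15% is the 85th percentile of the scores in hand;
# it is estimated from the data, not predicted.
cutoff = np.quantile(scores, 0.85)
print("minimum score for an A:", cutoff)
```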
Monthly data are used in the paper. I assume that you know what average monthly temperatures tell you. If you want to know more what an average annual temperature means, check out realclimate.org. They have several posts on that, if I remember correctly.
If you compare the time series plot for the monthly (average) temperature (12 data points varying about its annual average temperature) with the one for the annual temperature time series plot, you might notice that the overall trends appear the same in both plots and the seasonality is absent in the plot of the annual temperatures. What does this tell you? The annual temperatures can be useful in studying the overall trend or pattern.
Averaging and smoothing can be very useful. They can be used to eliminate seasonality for certain purposes in time series analysis. The usefulness also depends on the context. For example, see Calendar Adjustment in https://www.otexts.org/fpp/2/4. (One of the authors of the textbook writes a great statistics blog.)
I charge $250 per hour.
One area where averaging makes sense unquestionably is when the noise (or variation, etc.) is due to measurement error, not actual variation in the “real value”.
For example reading an analog to digital converter in electrical circuitry is often plagued by electrical noise that has leaked into the signal of interest. By over-sampling (sampling multiple times and averaging) this can be reduced and you actually get a better representation of the “real value”.
Now averaging simply to smooth things out so it looks pretty may not always be wise and can be misleading. Numerically it depends on the type of “noise” you are encountering and wish to remove. It just so happens that simple averaging is the perfect filter to remove Gaussian (truly random) noise.
Unfortunately in the real world the noise you encounter is often not really random. Again with the electrical example, a signal is often plagued by occasional glitches that are large and only spike in a single direction. If you run an averaging filter on this type of signal it will have the effect of moving the baseline and this is not a true reflection of the signal. In this case a median filter is a better choice, but I digress. Needless to say you need to understand what the undesirable part of your signal is in order to select the best noise reduction method.
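To make the filtering point concrete, here is a small sketch with a purely synthetic signal (the noise level, spike size, and window length are all made up): a moving average gets dragged off the baseline by one-sided glitches, while a moving median does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic signal: a true level of 1.0 plus small Gaussian noise.
n = 200
true_level = 1.0
signal = true_level + rng.normal(0.0, 0.05, n)

# Add occasional one-sided glitches: large spikes in a single direction.
spikes = rng.random(n) < 0.05
signal[spikes] += 2.0

window = 9

def moving_mean(x, w):
    # Simple boxcar average over a sliding window.
    return np.convolve(x, np.ones(w) / w, mode="valid")

def moving_median(x, w):
    # Median over the same sliding window.
    return np.array([np.median(x[i:i + w]) for i in range(len(x) - w + 1)])

# The averaging filter is pulled off the baseline by the spikes;
# the median filter stays near the true level.
print("mean filter level:  ", round(moving_mean(signal, window).mean(), 3))
print("median filter level:", round(moving_median(signal, window).mean(), 3))
print("true level:         ", true_level)
```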
With temperature trends, determining what is noise and what is signal may be difficult, and confirmation bias can rear its ugly head. Certainly averaging over a year, a day, a month, a solar cycle, etc. may make sense to visualize a trend better.
JH: I know of no one worth $250/hr. I’m guessing you’re not either.
I stopped reading Realclimate long ago. When I want “real science” on climate, that is ranked just above SkS on sites that provide little useful data. Plus, I generally read the research papers, not blogs, for such information.
Average monthly temperatures tell me absolutely nothing. I don’t know what they tell you.
Again, the plot tells me nothing. I assume it’s supposed to. I don’t find any of this useful for finding trends in a world where temperatures varying by over 200 degrees. I cannot see any way the average tells me anything other than ballpark guessing.
Yes, averaging can remove a lot of things–many of which probably should not be removed. Average, so far as I can tell, is one of the most used and abused statistics out there. I will check out the link you posted when I have the time.
Tom: I can see where averaging might be useful when the noise is due to measurement error, but being the unrepentant stickler that I am for accuracy, I would hope that instead of averaging data, one would get a more precise instrument and thus better data. In the absence of the ability to obtain better instrumentation, the average could be useful while more accurate measuring techniques are being developed. My fear is that very often averages tell us nothing, but because they are “statistics” and “science”, we blindly assume that the numbers don’t lie.
Sheri,
More precise instruments typically do this exact thing. They either over sample internally or use electrical low pass filtering / integrating capacitors internally to effectively average. If you know you are going to get Gaussian noise, averaging will absolutely positively solve it. It’s real.
This is evident by the fact that almost all more precise instruments are slower than less precise instruments. There are of course better and worse technologies that do make a real difference.
Many measurement instruments allow you to trade off accuracy/precision for speed.
There’s much more wrong with McKitrick’s paper than flawed analysis. As was pointed out in an earlier comment, what is the meaning of a global average of temperature? Are the data points taken from randomly spaced stations (including those near air conditioner outlets and on parking lots, as monitored by the IPCC)? Are they taken from ocean points? Arctic and Antarctic points? Siberian tundra points? Are they taken from equally spaced points of longitude and latitude?
Let me put a homely example: I live in an exurb of a small Pennsylvania town. Driving back from the center of town this evening to our home on top of a hill outside of town, the temperature fluctuated from 75F (center of town) to 67 F (our house) in the space of 25 minutes–no cold front was passing by. We see temperature differentials of at least 2F between the shaded valley of our property and the open driveway of our garage. And these mini-climates are evidenced by such phenomena as different dates for first blooming of daffodils and other perennials.
So, I’m not sure if Briggs or others commenting on this post would agree, but I’ll quote Mark Twain (or was it Lord Disraeli?):
“There are three kinds of lies: lies, damn lies and statistics.”
Bob,
Just what it says; the sum of observations divided by the number of observations. But McKitrick’s graph isn’t looking at global average temperature, it’s looking at global average temperature anomaly, which is the change in temperature for each station from some arbitrary baseline period averaged together.
Yes, of course. Handling the dearth of observations near the poles is the subject of much study and discussion in literature.
Clearly not. Even if we were currently doing so, we’d still be stuck with the unequally distributed data from the past.
Indeed. That’s one reason for using temperature anomaly, not absolute temperature. As well, predicting the temperatures of your driveway and a bed of peonies near a shade tree 20 yards away is something a weather forecaster couldn’t do for you a week in advance, much less a decade.
Briggs, on 1 October 2014 at 10:18 am, said:
“I don’t think you [Ross McKitrick] … really grasp what statistical models are”.
I think that you have hit the nail on the head here. Several things that McKitrick has said, in comments at Bishop Hill and in e-mails to me, support your idea that McKitrick does not really know what a statistical model is. It did not occur to me, previously, that such a thing was possible. After all, McKitrick is a full professor at a respected university where he teaches time series.
The statistical model that McKitrick relied upon is described in the cited paper by Vogelsang & Franses. The paper’s first three pages are on my web site. (Only the first three, because the paper is under copyright.) Page 3 describes the model (§2.1). The model is misrepresented by McKitrick’s paper, as well as by McKitrick’s e-mails and comments at Bishop Hill.
Storms and teacups. Could we not just all agree that:
1. It got a bit warmer in the 80s and 90s
2. Since then it hasn’t got warmer
3. You can come up with a method to quantify how long it’s not been getting warmer
4. But that answer will depend on what method you use.
I had thought that the McKitrick/Telford/Keenan spat was the most meaningless of the current climate disputes. But now people are getting very excited over whether the meaningless 2C “target” should be replaced by an even vaguer and more meaningless one.
Brandon, thank you, but I think you missed my point. It would be more meaningful to take five or six specified locations, say the Antarctic, the Sahara, the Amazonian jungle, London, and a rural Canadian town, and follow each individually over a period of time.
The average over global locations is meaningless.
Bob Kurland,
I think an average global temperature could have meaning. In the U.S. it’s generally warmer in the summer than winter. Doesn’t that mean the average temperature is higher? The same idea for the entire globe.
What I find questionable is the expressed precision. Can we really get a global average to a precision of ±0.1K considering the problems you’ve outlined? Satellite measurements from the EOS series do cover the entire globe; even though, with an inclination of 98.2 degrees (if I recall the orbital parameters correctly; it’s been a while since I worked on EOS), they don’t go directly over the poles, they can still view them. So maybe the precision of their averages is warranted, if only for the atmosphere.
Sheri,
If average monthly temperatures tell you absolutely nothing, I guess the terms of spring, summer, fall, and winter don’t mean anything to you either.
So, the best thing to do is to read the paper yourself. I hope you have a ball or two!
Yes, one may say the definition or derivation of the duration of hiatus doesn’t mean anything, but it’s mean-spirited to trash the paper and the person without understanding what has been done in the paper first.
Have you ever heard the Chinese idiom “Rotten wood cannot be carved?”
I still charge $250 an hour.
It was McKitrick who reminded this engineer several years ago that the average of two different (i.e. spatially separated) temperatures is not the ‘temperature’ of anything; it is merely a statistic.
The simplest demonstration of this truth is the average of (i) the temperature of a cup of coffee (80 deg C) and the temperature of (ii) an ice-cube (0 deg C), both sitting on the same bench; while the average of the numerical parts of these two data is clearly ’40’, that number does not represent a temperature of ’40 deg C’ belonging to any object at any position, anywhere; it is simply a mental construct and is not entitled to have the unit name ‘deg C’ applied to it.
numerical parts of these two data is clearly ’40’, that number does not represent a temperature of ’40 deg C’ belonging to any object at any position, anywhere; it is simply a mental construct
That’s true of any average. It may be a meaningless quantity in your example but that doesn’t mean all averages — even of temperature — are meaningless. Your example is using (presumably) unlinked objects and doesn’t necessarily apply to an atmosphere. If it’s hot on one side of a room and cold on the other you may want to know what to expect in the middle.
Tom: Part of the misunderstanding here appears to be my definition of precise equipment. Sometimes I forget that most of the world is made up of digital and electronic equipment rather than the good old mercury thermometer. I see that if you know you are going to get noise, averaging would be okay. It still offends my precise nature (I like slow but accurate) but I can deal with it! However, in the case of noise, the variables tend to be closely grouped, right? So averaging would make sense. Unless the noise comes in at 3 to 4 times the signal–then averaging would not make sense.
Brandon: Yes, it is looking at an anomaly from an arbitrary starting point. It does matter. As the 30 year intervals roll forward, the anomaly becomes smaller and smaller, due to the arbitrary starting point generally being the 30 year average (not in all cases, which adds to the confusion in this mess). That arbitrary starting point does matter. If I use an average from a colder period, the anomaly is larger–I think that’s where the “cherry-picking” (pie making, baking away) charges come from.
Bob: I think it was NOAA who said they are down to 1200 data points for temperature. Some areas are over represented and some under represented. It seems there are fewer and fewer stations as the estimates become “more and more precise”. I don’t really think that works, but that is the claim. (I guess it’s like “doing more with less”.)
You make the same argument I have repeatedly. Why must we average or integrate or whatever the entire globe and get one single number?
Paul: I don’t think “Could we not all agree” is going to work here. We are talking statistics and temperature changes of a hundredth of a degree that will dictate political actions globally if that can be done. I agree on the vagueness of it all, but vague is a politician’s best friend. It didn’t used to be a scientist’s best friend, but some seem to have taken up the cause.
JH: As far as the temperature, no the seasons don’t tell me anything. There can be very cool summers, very warm winters, etc. Even with averaging, I doubt you could correctly identify the month it matches even if you know the location. Might be an interesting experiment though–I will have to get onto that one…….
Why do I need a ball to read a paper?
I don’t really care what you charge. Have I not made that clear?
The “idea” of global average temperature doesn’t bother me in the least. We would like to know what the uniform warming rate of the earth is (or isn’t).
Now if the earth wasn’t spinning and the atmosphere and oceans weren’t moving the heat around in non-uniform chaotic ways in both short and long time spans, then we could determine this rate of warming with a single long term thermometer. Unfortunately these confounding influences make this near impossible.
So one answer is to measure the temperature across the globe and average it out in the hope this will show you the overall uniform warming rate (although its not really uniform).
I get that some will debate whether the particular methods in place are adequate for this purpose, but what I don’t get is that some believe this objective is somehow not meaningful at all.
Now I think Spencer has a point in that if we want to understand the warming of the atmosphere, this obsession with temperature measurements 2 m off the ground is a bit strange and makes the problem harder, not easier. One would be better off measuring the entire volume of the atmosphere, if this is feasible.
Tom: It is bothersome because once you reduce something to an average–a single number–there seems to be a tendency to believe that number somehow represents reality. The anomaly from average global temperature is often treated as if it has some reality outside of being a number. It does not reflect anything in reality. It’s purely a mathematical concept. (As DAV noted)
If you have temperatures of 100, 50 and 0, the average is 50. If the temperatures are 50, 50 and 50, the average is 50. If the temperature at the North pole rises to 100 and the equator freezes over, the average can still be the same and the anomaly is still the same. Average says nothing about the distribution of variables within the set, which in the case of temperature changes, would seem to be important. This is why I question it–there are many ways the average can change, and many of them result in catastrophic changes while many are minor inconveniences. As you note, there is no uniformity. I just cannot see what value the statistic has in the case of widely varying data points.
Sheri,
Only when comparing two or more different timeseries. If they have different baselines, it’s trivial, but necessary, to adjust one timeseries to match the baseline of the other. Or one can adjust both timeseries to a third, completely arbitrary baseline as suits one’s whimsy.
If I chose 20 ka as my baseline, then today’s GAT anomaly would be north of 12 C. That’s not cherry-picking, it’s (potentially) playing games with numbers to make them sound bigger. But even with that point in time as my baseline period, I could still take the difference between the anomaly today and the anomaly in 1880 and tell you that the change is 0.87 C. I could convert the GISTEMP anomaly (1951-1980 baseline) to Kelvins and the change from 1880-2014 would be 273.80 – 272.93 = 0.87 K, which is still 0.87 C.
It does not matter what the baseline period is unless two or more timeseries are being compared.
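A tiny sketch of the point, with made-up anomaly numbers: shifting a series to a different baseline subtracts a constant from every value, so differences (and trends) within the series are untouched.

```python
import numpy as np

# Made-up anomaly series, in degrees C relative to some baseline.
anoms = np.array([-0.20, -0.05, 0.10, 0.35, 0.55, 0.67])

# Re-express the same series against a different, arbitrary baseline:
# that just subtracts a constant from every value.
shifted = anoms - 0.30

# The change from first to last value is identical either way.
print(f"original baseline: {anoms[-1] - anoms[0]:.2f}")
print(f"shifted baseline:  {shifted[-1] - shifted[0]:.2f}")
```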
Brandon: If we are not comparing time series, then what are we comparing? I thought the warming today was more than it was in the past, which would be comparing time series. If we are just talking about the changes in a specific period, okay. However, that eliminates the ability to claim that today’s rate is higher than in the past and only tells us that things were getting warmer for a while.
Bob,
Regional-level analyses are done and published. Frex, you can go to GISS and get an anomaly timeseries for the US. Or if you’re in a masochistic mood you can download gridded data. The bleeding edge GCMs spit out temperatures on a 0.25-degree grid (or higher?) (in Kelvins), but 1-degree grids are the highest resolution with which I’ve tortured my laptop to make pretty pictures:
https://docs.google.com/file/d/0B1C2T0pQeiaSRy1kZ2pzbjRUSFU
(Safe for work — no nude San Franciscans, I promise.)
If there’s any meaningless average of temperature to represent the Earth it would be one computed from GCM output. It’s entirely made up. Talk about reification!
The point is to determine if the total amount of heat content in the atmosphere has changed over the last 100 years. If it has, why?
One can say this parameter doesn’t matter because of X
One can say it is unmeasurable due to technology limitation Y
One can say they don’t trust the measurement due to Z
One can say it is an irrelevant measurement due to A
I think they have established that warming has happened, even with Phil Jones antics and so forth.
I don’t think they have answered the “why” question very clearly. What drives heat content is pretty much scrambled eggs, and I don’t think they have unscrambled these very well. If the models had good prediction skill and worked on a regional level I think they might have a story, as of now it is a lot of hand waving. They could be right, but my money is on they have carbon sensitivity too high in the models, and they are politically unable to walk this back.
You’d have to be one brave modeler to put out a model with significantly lower projections at this point. You would get kicked off the reservation in a nanosecond.
Tom Scharf @ 2.54 pm
“I think they have established that warming has happened, even with Phil Jones antics and so forth.”
Please explain and/or justify “they have established that warming has happened”… recently? up to 198x ??? or ???
Thanks.
Sheri,
Whatever we want to compare. If we want to look for divergence between two different time series, say GISSTEMP and HADCRUT then we need to make sure the baselines for both are taken over the same time period. Or perhaps more relevant, divergence between UAH lower troposphere, HadCRUT4 and CMIP5 model runs:
http://www.drroyspencer.com/2014/02/95-of-climate-models-agree-the-observations-must-be-wrong/
By the way, Spencer didn’t do it correctly … he took 5 year averages of the observations and zeroed all three time series at 1983. The way to do it is to baseline the annual data (’79-’83 would have worked ok, but standard practice is to baseline over at least 20 years) then compute the 5-year running means from the baselined data, then graph the output.
Not necessarily. We can do that comparison in one and only one time series … simply pick a starting time t0 and an ending time t1 separated by n years. Then subtract the value at t0 from the value at t1, divide by n years, and that gives you the annual rate of change. Then shift t0 and t1 backward (or forward) 25, 50, 100, 1,000, 10,000, 100,000 … 1 bazillion years (or whatever … it depends on how many years are covered by the series), do the same calculation and compare the rates of change.
Better to do a regression over all the data points between t0 and t1 though, as that somewhat (maybe) reduces sensitivity to which t0 and t1 we pick. As n number of years gets longer, the less sensitive the slope of the regression is to moving t0 around. Even so, one of my favorite Briggs posts is, “How To Cheat, Or Fool Yourself, With Time Series: Climate Example”: https://www.wmbriggs.com/blog/?p=5107
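Here is a minimal sketch of those two calculations on a synthetic series (the trend and noise levels are invented for illustration): the endpoint difference divided by the span, versus an ordinary least-squares slope over all the points in the window.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented annual series: a gentle 0.015/year trend plus noise.
years = np.arange(1980, 2015)
values = 0.015 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Endpoint method: (value at t1 minus value at t0) divided by n years.
endpoint_rate = (values[-1] - values[0]) / (years[-1] - years[0])

# Regression method: least-squares slope over every point in the window.
regression_rate = np.polyfit(years, values, 1)[0]

print(f"endpoint rate:   {endpoint_rate:.4f} per year")
print(f"regression rate: {regression_rate:.4f} per year")
```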
Which seems the most common discussion in the press and among us climate hobbyists/pundits. Talking about the 1st derivative is not terribly common and I’ve rarely seen Jean/Joe Public discussing the 2nd derivative.
As a practical matter, more than one time series is often necessary the further back one goes in time — HadCRUT4 doesn’t go back to the Lower Dryas.
If you think about it, something like HadCRUT4 is nothing more than a bunch of time series combined into one. Even though calculating the temperature anomaly from some arbitrary baseline is a useful way to reduce location bias when computing a regional or global anomaly it doesn’t solve all geographical bias issues as Bob has rightly pointed out. In the US, the surface station density is much higher on the East Coast than anywhere else in the country, so taking a simple average of all US station anomalies will skew the results to whatever trend is happening in, say, New England, and vastly understate the trend in Nevada or Montana — all of which are going to be different because they’re three distinctly different regions and climate regimes.
Better to do a regression over all the data points between t0 and t1 though, as that somewhat (maybe) reduces sensitivity to which t0 and t1 we pick.
No. A linear regression is highly sensitive to the endpoints when the data contain cycles. Take a sine curve over several periods and linearly regress using random endpoints. The steepest line will be when one is at a nadir and the other at a peak. The flattest would be peak-to-peak or nadir-to-nadir. The same is true when computing the slope using just the endpoints.
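A quick sketch of that sine-curve point, purely synthetic: fit a straight line over a nadir-to-peak window and again over a peak-to-peak window, and watch the slope swing from steep to essentially flat.

```python
import numpy as np

# A noiseless sine wave over two full periods.
t = np.linspace(0, 4 * np.pi, 2000)
y = np.sin(t)

def fitted_slope(t0, t1):
    # Least-squares slope of the sine restricted to the window [t0, t1].
    mask = (t >= t0) & (t <= t1)
    return np.polyfit(t[mask], y[mask], 1)[0]

# Nadir (3*pi/2) to the next peak (5*pi/2): a steep apparent "trend".
print("nadir to peak:", round(fitted_slope(1.5 * np.pi, 2.5 * np.pi), 3))

# Peak (pi/2) to the next peak (5*pi/2): essentially flat.
print("peak to peak: ", round(fitted_slope(0.5 * np.pi, 2.5 * np.pi), 3))
```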
Tom,
Climate sensitivity to GHGs is an emergent phenomenon in GCMs based on — among many other things — published absorption lines for each species. They get to a sensitivity figure by running a statistical model against the output of the physical simulation, NOT by cobbling together some statistical model before the run and feeding a sensitivity parameter to the GCM.
And IIRC, the IPCC did downgrade CO2 sensitivity in AR5, prompting a feeding frenzy in the contrarian community. In politics it’s damned if you do, damned if you don’t.
http://pubs.giss.nasa.gov/abs/ha05200y.html
A common view is that the current global warming rate will continue or accelerate. But we argue that rapid warming in recent decades has been driven mainly by non-CO2 greenhouse gases (GHGs), such as chlorofluorocarbons, CH4, and N2O, not by the products of fossil fuel burning, CO2 and aerosols, the positive and negative climate forcings of which are partially offsetting
Here’s a more recent paper from last year:
http://iopscience.iop.org/1748-9326/8/1/014024
Here, for the first time, we combine multiple climate models into a single synthesized estimate of future warming rates consistent with past temperature changes. We show that the observed evolution of near-surface temperatures appears to indicate lower ranges (5–95%) for warming (0.35–0.82 K and 0.45–0.93 K by the 2020s (2020–9) relative to 1986–2005 under the RCP4.5 and 8.5 scenarios respectively) than the equivalent ranges projected by the CMIP5 climate models (0.48–1.00 K and 0.51–1.16 K respectively). Our results indicate that for each RCP the upper end of the range of CMIP5 climate model projections is inconsistent with past warming.
It’s being openly discussed and … hotly … debated in the consensus community, which in and of itself doesn’t prompt a full-on witch hunt. Even MET’s latest decadal forecast update begins toward the lower end of the CMIP5 runs:
http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc
The forecast of continued global warming is largely driven by continued high levels of greenhouse gases. However, the forecast initially remains towards the lower end of the range simulated by CMIP5 models that have not been initialised with observations (green shading), consistent with the recent pause in surface temperature warming.
However, by the end of the MET forecast (2020) they cover pretty much the full range of the CMIP5 envelope. We’ll know in six years whether MET will have to eat crow.
they get to a sensitivity figure by running a statistical model against the output of the physical simulation
“physical simulation” is an oxymoron. It’s no more physical than a mental one.
The sensitivity figure assumes a significant causal relationship. See tomorrow’s post on the ice cream cone/drownings relationships. You can get forcing there too. It’s a fictitious lumping variable (F): G –> F –> C.
While there is a post determination, just exactly how do you think the models have been tuned? The short answer: by computing the sensitivity over a training period and feeding it to the model.
DAV,
Of course; however, as I stated previously a regression is less sensitive to endpoints than subtracting one endpoint from the other and dividing by n.
Since the topic is global temperature anomaly — which does contain cycles — here’s a look at annual GISTemp data from 1880 to present:
https://drive.google.com/file/d/0B1C2T0pQeiaSTlpSbFh2R3dHTFU
There are 4 sets of curves at the top of each graph representing 5 and 30 year moving trends, each one calculated with a linear regression and the difference in endpoints for comparison. Units are degrees C/decade.
Temperature curves are at the bottom of each graph for reference (I shifted down 1 degree C to avoid interference). For the 5 year trends, the regression and endpoint calculations are not much different from each other. However, for the 30 year trends the endpoint curve wiggles around more than the regression curve.
If you know of a way to calculate trends for cyclical time series that is not as sensitive to endpoints, I’d be glad to hear it.
I’m having deja vu all over again. Do you really think GHGs only absorb at IR frequencies in lab tests and/or instruments (such as IR thermometers) that rely on the absorption properties of a given gas?
There’s no need to speculate, you can get an introduction here:
http://cmip-pcmdi.llnl.gov/cmip5/getting_started_CMIP5_experiment.html#_T1
This document provides more detail:
http://cmip-pcmdi.llnl.gov/cmip5/docs/Taylor_CMIP5_design.pdf
The short answer: predetermined forcings are plugged in as parameters for very narrow and specific purposes such as cross-model comparisons. Otherwise, the bulk of the experiments described — including hindcasts — use GHG concentrations as parameters, not pre-calculated forcing values.
Of course; however, as I stated previously a regression is less sensitive to endpoints than subtracting one endpoint from the other and dividing by n.
Neither is appropriate for data containing cycles.
I’m having deja vu all over again. Do you really think GHGs only absorb at IR
No. As you often do, you missed the point.
DAV,
Repeat: If you know of a way to calculate trends for cyclical time series that is not as sensitive to endpoints, I’d be glad to hear it.
You really didn’t expect me to take that thingy about ice cream cones seriously, did you? If substance A exhibits behavior y for some condition x in laboratory-controlled conditions, why wouldn’t it be reasonable to assume that the same substance exhibits the same behavior outside the lab? If the bench experiment tells you that x and y are correlated and you also find a similar correlation out in the wild, why is it so outlandish to conclude that ice cream melts inside your mouth as well as on hot sidewalks?
@ DAV
“What I find questionable is the expressed precision. Can we really get a global average to a precision of ±0.1K considering the problems you’ve outlined? ”
How dare you slander the fine climate scientists by implying that they can only measure the ‘Temperature of the Earth’ with a precision of +/- 0.1 K?
In fact, according to this NASA press release:
http://news.google.com/newspapers?id=MqQgAAAAIBAJ&sjid=5mgFAAAAIBAJ&pg=1268,931341
their ACTUAL precision is two orders of magnitude BETTER!
From the above article: “The NASA scientists, using NOAA and other data, calculated an average 1998 worldwide temperature of about 58.496 degrees F., topping the record, set in 1995 of 58.154.”
They DID sound a bit apologetic about not being able to do better than a millidegree (‘about’ 58.496 F), but at least they gave it the old college try and should get ‘extra credit’.
Brandon: You might want to reread DAV’s comments on ice cream cones. You did appear to miss the point. Consider it “homework”.
Bob Ludwick ,
their ACTUAL precision is two orders of magnitude BETTER!
Oops. My bad. All of that from a paltry bunch of poorly spaced thermometers, too. 🙂
I’ll give them credit but it has more to do with the possession of brass spheres.
If you know of a way to calculate trends for cyclical time series that is not as sensitive to endpoints, I’d be glad to hear it.
I would think the person passing out homework assignments and extra credit points would already know. Try searching for “regression models of cyclical data” and skip over those attempting to force-fit a linear regression.
Add this to the homework assignment Sheri passed out.
You really didn’t expect me to take that thingy about ice cream cones seriously, did you?
It has more to do with this than you apparently think. And no, it’s rarely a good idea to assume the mechanisms in a simple system can be carried over willy-nilly to a complex one.
Sheri,
“Correlation is not causation” is stats 101 material, and greatly oversimplified to boot. I meant what I said about this being deja vu all over again.
DAV,
I know that I don’t know everything, so I ask questions of people that I think may have answers.
I accept your homework, thank you. 6th hit is this: http://www.ce.utexas.edu/prof/maidment/grad/kaough/webpage/public_html/mreport/fourier/fourier.html
I’ll play with it tomorrow.
I don’t recall reading any 19th-century predictions that eating ice cream increases the risk of drowning in swimming pools. Nor are people near bodies of water constrained by known laws of physics to behave in exactly the same way under certain specific circumstances. Apples and oranges have more in common than kids with ice cream do with CO2 and IR radiation.
By the same token, when observations depart from predictions in a complex system, arbitrarily falsifying one part of the model in a willy-nilly fashion is ill-advised.
Brandon,
Fourier analysis will require long time records. Here is a write-up of a 240 year central European analysis. As pointed out there, you can identify the cycles but not the causes. Cycles that jitter are sometimes misidentified. Also, it likely won’t help in detecting trends. Still something to look at.
Regardless of method, analyzing cyclical data is not a trivial undertaking.
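For concreteness, a minimal sketch of the kind of spectral check being discussed, run on a synthetic series with a made-up 60-unit cycle: the analysis identifies the dominant period, but it says nothing about what causes it.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic series: 240 evenly spaced values with a 60-unit cycle plus noise.
n = 240
t = np.arange(n)
series = np.sin(2 * np.pi * t / 60) + rng.normal(0.0, 0.5, n)

# Real-valued FFT of the mean-removed series; skip the zero-frequency bin.
spectrum = np.abs(np.fft.rfft(series - series.mean()))
freqs = np.fft.rfftfreq(n, d=1.0)

dominant_freq = freqs[1:][np.argmax(spectrum[1:])]
print("dominant period:", 1.0 / dominant_freq, "time units")  # about 60
```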
You are still missing the point about forcings and sensitivities. They are little more than expressions of correlation although the terms imply assumption of causality even if none exists. You can even compute them for the ice cream/drowning example. They actually tell or explain little. Analyzing GCM output for them will only yield the forcings built into the model.
When observations depart from prediction, the entire model is falsified. The largest driver in the models is GHGs, as evidenced by which forcings are being calculated. The most prominent one waved around is CO2. Its supposed effect on the climate is also falsified by the observations. CO2 is what all of this carbon-footprint (and now hand-print from the looks of it) is all about. So it shouldn’t come as a surprise that it is foremost in most discussions.
JH: It was 63 degrees yesterday for a high, snowed last night and today is expected to reach 60F. Monthly “average” high is 77F (not that I believe that number, trust me) and 43F average low. What season am I in?
Brandon: How, exactly, is “correlation is not causation” greatly oversimplified? It’s a statement of fact, pure and simple.
Explain how this is “arbitrarily” falsifying one part of the model in a willy-nilly fashion. CAGW has always claimed, up to the point when temperatures flattened out maybe, that CO2 was THE driver of climate and if we didn’t stop adding our tiny bit of CO2 to the .04% in the atmosphere, we were all going to die. (Yes, being melodramatic based on the statements and actions of those most prominent in the CAGW activist brigade. Don’t like it? Sorry, but Michael Mann does and he’s a science guy and a really important one. Hansen too. This IS CAGW whether you find it offensive or not.) As for that tiny percentage of CO2, it was necessary to explain CO2 behaving basically as a catalyst to warming, with back radiation, etc. to effect such a huge influence on global temperature. CO2 alone failed. So what is it acting as a catalyst to? Back radiation, most likely—which then failed to warm the atmosphere and instead went into the ocean (or somewhere). Bottom line, if the CO2 aspect cannot be correctly accounted for and verified, the theory dies. You can revive whatever parts you still believe work and prove that theory, but the original one is dead, DOA, etc. That’s how science works.
An MIT professor clearly stated that just because each separate part of a model works, IT DOES NOT MEAN THE MODEL ITSELF WILL WORK. Interactions exist and cannot always be accounted for. Are you disagreeing with a guy who teaches climate science? If so, congratulations, you have reached the level of the rest of the anti-science crowd who dare to disagree with an authority.
William: Is there anything fundamentally wrong with providing a more quantitative estimate of the length of a pause in a time series that one can see by eye but not quantify?
As I see it, the problem comes from interpreting the meaning of the pauses that Ross has quantified. Unlike the IPCC, Ross isn’t making a prediction about the future. The idiots who think the pause means global warming has ended are almost certainly wrong in the long run (given what we understand about the physics of global warming).
Frank,
Nothing fundamentally wrong, no, not in principle. But. We can see the data here, so no method beyond counting is needed. See the post on the next day.
And if we’re to use McKitrick’s method, we must have some evidence that it’s a good model. Have we? No. Any number of models can be fit to the data, each giving different answers. Why not just trust our eyes?
Your reply is also characterized by unsubstantiated belief. You say the pause means global warming has not ended. That might be true, because it is a contingent proposition. But what is your proof? The IPCC’s models which are known failures? Simple desire? What?
Frank: Thank you so much for calling people idiots because they disagree with you. Way to win friends and influence people. Reading the talking points of the climate change advocates again, aren’t we? Given what we understand of physics, modeling and politicized science run amok, I’d give equal odds that the temperature will either go up or down. Remember that—it’s profound and very important to understand.
You don’t need a “quantitative estimate” of the length of the pause. You graph the annual temperatures and look at the line. You can add 5-, 10- or 20-year averages in another color if it makes you feel better. However, there is absolutely no need for anything other than a graph showing the actual data. As for “eye-balling it”, use a larger graph scale. I’m pretty sure even a half-blind person can measure the length of the pause on a big enough scale. No additional model or math needed.
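As an illustration of what Sheri describes, here is a minimal plotting sketch with synthetic anomalies (not HadCRUT data); the overlays are the 5-, 10- and 20-year averages she mentions:

```python
# Minimal sketch: plot the annual values and, if you like, overlay 5-, 10-
# and 20-year moving averages in other colors. The series is synthetic;
# substitute a real annual anomaly file if you have one.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(2)
years = np.arange(1850, 2015)
temps = np.cumsum(rng.normal(0.002, 0.08, years.size))  # fake anomalies

plt.plot(years, temps, "k.", label="annual values")
for window, color in [(5, "tab:blue"), (10, "tab:orange"), (20, "tab:red")]:
    smooth = np.convolve(temps, np.ones(window) / window, mode="valid")
    plt.plot(years[window - 1:], smooth, color=color, label=f"{window}-yr average")
plt.xlabel("Year")
plt.ylabel("Anomaly (C)")
plt.legend()
plt.show()
```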
Frank, here’s a physics idiot who doesn’t think (anthropic) global warming has ended because it has never begun (significantly).
@ Bob Kurland
“Frank, here’s a physics idiot who doesn’t think (anthropic) global warming has ended because it has never begun (significantly).”
Au contraire, Bob. I think that as climate data continues to accumulate, along with growing awareness of how that data was collected and processed, it is becoming obvious that the anthropogenic component of ‘Global Warming’ approaches 100%.
Of course the primary ‘drivers’ of the warming are computers, pencils, tax dollars, international conferences, and progressive politics, rather than any CO2 emitted as a byproduct of our energy infrastructure, but that is another story.
DAV,
I skimmed the JoNo post to get the gist; I’ll dig in later for better comprehension of her argument. I like her ending note: “Don’t read too much into the cycle lengths or the predictions.” Good advice.
Clearly not. My intention here is to apply as many methods as I can to real-world data and compare the results as a learning exercise.
Here again is your favorite graph from Spencer:
http://www.drroyspencer.com/wp-content/uploads/CMIP5-90-models-global-Tsfc-vs-obs-thru-2013.png
You can easily see that individual model runs wiggle around quite a bit over short periods of time.
From your reply to Kurland above: What I find questionable is the expressed precision. Can we really get a global average to a precision of ±0.1K considering the problems you’ve outlined?
How big do you think the error bars really are? ±0.25K? Which way do you want it — are the thermometers good or not?
Sheri,
http://en.wikipedia.org/wiki/Correlation_does_not_imply_causation#Usage
Quoting Edward Tufte:
* “Empirically observed covariation is a necessary but not sufficient condition for causality.”
* “Correlation is not causation but it sure is a hint.”
That’s still an oversimplification.
It lacks consideration of interactions (known and unknown) in the total system. See your climatology professor.
2,130,000,000,000 kg of atmospheric CO2 per ppmv
395.30 ppmv atmospheric CO2 concentration
841,989,000,000,000 kg total mass of atmospheric CO2
29,000,000,000,000 kg CO2 emitted per year by humans
Lacking other context, those numbers are just as meaningless as 0.04%. But they’re impressively large, aren’t they?
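For what it’s worth, the quoted figures are at least internally consistent; here is a minimal arithmetic check that uses only the numbers as given above (the percentage is derived from them, not an independent estimate):

```python
# Arithmetic check of the figures quoted above, taken at face value (no claim
# here about whether the kg-per-ppmv conversion itself is right).
kg_per_ppmv = 2.13e12        # kg of atmospheric CO2 per ppmv, as quoted
concentration = 395.30       # ppmv, as quoted
human_emissions = 2.9e13     # kg CO2 per year, as quoted

total_mass = kg_per_ppmv * concentration
print(f"Total atmospheric CO2 mass: {total_mass:.6g} kg")            # 8.41989e+14
print(f"Annual human emissions / total mass: {human_emissions / total_mass:.1%}")
```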
Why you might think I’d have any problem with taking shots at alarmism and the politics of fear is frankly beyond me.
It’s so easy to falsify a strawman, innit?
Any object dispersing energy via radiation does so omni-directionally.
Verification requires measurements which are subject to uncertainty and error, as you are wont to remind everyone. Same question I had for DAV: are the thermometers good or aren’t they?
He and I are in perfect agreement on that point. Stats 101 again.
How big do you think the error bars really are? ±0.25K?
You are overly fascinated with numbers.
The short answer: don’t know. I doubt anyone does without the raw data (HadCRU claims theirs has been lost) and the actual interpolation methods. Some interpolations are perforce between points thousands of miles apart. Imagine estimating the temperature at St. Louis using measurements at San Francisco and Maine, or estimating the temperature over the mid-Atlantic using measurements in Spain and Mexico. I once saw the temperature at Quantico differ from that at National in DC by 20F over a distance of only 15-20 miles, so what can the error be in estimating values at much larger distances? Even as anomalies, the assumption is that they would change at the same rate, which is a really bold assumption. Then there’s the BEST data (I have the initial 45GB release), which only increases the concern over the quality of the thermometer record. Most of the record is in the U.S., which is hardly global.
Stick with the satellite data. It’s the only truly global measurement, taken at more than one altitude and constantly calibrated, and thus the most accurate. It’s rather short, but if it weren’t for the alarmist preoccupation with bringing down industrialization, no one would really care that much or even know about it.
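A toy sketch of the interpolation worry raised above, assuming a simple inverse-distance weighting scheme (the station pairs, distances, and temperatures are invented, and this is not the actual HadCRU or GISS gridding method):

```python
# Toy illustration only: estimating a point from two distant stations by
# inverse-distance weighting. Names, distances and temperatures are made up.
def idw_estimate(stations):
    """stations: list of (temperature_F, distance_km) pairs."""
    weights = [1.0 / d for _, d in stations]
    return sum(t * w for (t, _), w in zip(stations, weights)) / sum(weights)

# Pretend we estimate "St. Louis" from "San Francisco" and "a Maine station".
guess = idw_estimate([(60.0, 2800.0), (45.0, 1900.0)])
print(f"Interpolated estimate: {guess:.1f} F")
# If the true reading were, say, 75 F, the error would dwarf a 0.1 K precision claim.
```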
You can easily see that individual model runs wiggle around quite a bit over short periods of time.
Is there a point here? If anything it only goes to show how unreliable the models must be. Note how much they differ from each other. Consult enough broken clocks and at any given moment you might find one that’s right but how would you know which one is the right one?
DAV,
More like overly intolerant of selective specificity. Recall that you were the one to bring precision into this discussion … now suddenly talking about numbers is verboten. Thanks for my first busted irony meter of the day.
But you do know that “[CO2’s] supposed effect on the climate is also falsified by the observations”.
Yet you’re absolutely certain that “[CO2’s] supposed effect on the climate is also falsified by the observations”.
How then is it that you’re so sure that “[CO2’s] supposed effect on the climate is also falsified by the observations”? I’m not sure it’s possible for me to put a finer point on this one.
Yes. Quite useful, but rather short. It would be nice to send modern instrumentation back in time but we can’t. We’re stuck with the data we’ve got.
It really is not possible for you to resist ranting about fringe wingnuttery while pretending that sane, evidence-based, practical discussion is completely nonexistent, is it?
Yes, one I’ve made many many times in this very forum. With your engineering and math background it truly boggles my mind that you’re so obdurately opposed to deconflating short-term variations from long-term trends. It’s no secret that the instantaneous forcing of CO2 is a fractional determinant of atmospheric and surface temps. Carping about flat air temperatures for the past 16 years is a bit like going down to the jetty for an hour and then claiming that tidal theory must be wrong because the most significant sea level change you observed was the difference between the peaks and troughs of the breakers.
Call it the winddiddit theory of tides.
If you conveniently ignore the data which tell a more complete story, you’re going to come to the wrong conclusions. There are two time periods you like to talk about: 400,000-10,000 years ago, and 1998-2014. Heaven forbid someone mentions ocean heat content trend estimates below 700 m. I can only begin to imagine the names you’d call me for having the temerity to suggest that upwelling cooler water might just have something to do with the hiatus in atmospheric temperature. [gasp!]
It speaks more directly to the difficulty of forecasting those wiggles on an annual basis.
I have, many times.
That depends on what the chosen standard of correct is, which answer from you is perennially elusive. It couldn’t possibly be that the reason you don’t answer is that you really do see the wiggles in the record on the order of a tenth to a quarter of a degree over a half-decade and realize that demanding forecasting precision on an annual basis inside that envelope is an impossibly high standard to achieve … could it?
Recall that you were the one to bring precision into this discussion
More like overly intolerant of selective specificity. What I said was that I find a precision of ±0.1K questionable, and I meant the error is likely greater. How exactly is that specific, let alone selectively specific? It is only you who is trying to be specific. Stop putting words in my mouth.
Yes, based on values more precise than those of HadCRU or GISS, and comparing them to the models, which claim to know the effect of CO2 yet can’t demonstrate it.
It’s no secret that the instantaneous forcing of CO2 is a fractional determinant of atmospheric and surface temps.
It’s also no secret that the effect of CO2 concentrations on the atmosphere is unknown. If it were known, the models would be closer to the observations. Worse, the models don’t agree with each other. So again, what’s your point?
It speaks more directly to the difficulty of forecasting those wiggles on an annual basis.
Then why are you dwelling on it? You want to point at them and say what, exactly?
It speaks more directly to the difficulty of forecasting those wiggles on an annual basis.
Not to mention the most recent period.
it truly boggles my mind that you’re so obdurately opposed to deconflating short-term variations from long-term trends
Putting words in my mouth again?
Carping about flat air temperatures for the past 16 years is a bit like going down to the jetty for an hour …
No it isn’t.
FYI: “carping” means “characterized by petulant faultfinding”. I find no fault with flat air temperatures. And “petulant” means “sulky”. I am pointing to the discrepancy as a clear indication of model prediction failure, which also implies the modelers don’t have a working theory of the atmosphere. Why would I sulk about this? It seems you are the one doing the sulking.
DAV,
And that’s a bad thing … why? If the devil is in the details, what lies in ambiguities?
It’s no secret that the instantaneous forcing of CO2 is a fractional determinant of atmospheric and surface temps.
Translation: over short periods of time (annual to decadal) the global average atmospheric temperature fluctuates due to variability in other parts of the system, which dominates the marginal changes in radiative forcings due to GHGs, aerosols, solar output, etc. CO2, CH4, O3, CFCs and the like only become dominant factors in the trend over longer periods of time. The canonical definition of “longer periods of time” is 15-30 years.
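A minimal sketch of that 15-30 year point, using a synthetic series with a small built-in trend; it shows only that short windows give wildly varying trend estimates while longer ones settle down, not that the long-term trend has any particular cause:

```python
# Sketch of the window-length point: on a noisy series with a small underlying
# trend, short windows give widely varying trends; 15-30 year windows do not.
# All data are synthetic.
import numpy as np

rng = np.random.default_rng(3)
true_trend = 0.015                      # C per year, made up
years = np.arange(1950, 2015)
temps = true_trend * (years - 1950) + rng.normal(0, 0.12, years.size)

for window in (5, 10, 15, 30):
    # Fit a straight line over every window of this length, then report the spread.
    slopes = [np.polyfit(years[i:i + window], temps[i:i + window], 1)[0]
              for i in range(years.size - window + 1)]
    print(f"{window:2d}-yr windows: trends range "
          f"{min(slopes):+.4f} to {max(slopes):+.4f} C/yr")
```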
Graph of the day:
https://drive.google.com/file/d/0B1C2T0pQeiaSczJHSG1oU0p6M2c
The UAH anomaly has been adjusted to the GISTemp 1951-1980 baseline by moving it up 0.3984 C.
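Re-baselining like that is just a constant shift. Here is a minimal sketch with placeholder series (not actual UAH or GISTemp data), aligning over an assumed 1981-2010 overlap rather than the published 0.3984 C constant mentioned above:

```python
# Minimal sketch of putting two anomaly series on a common baseline, which is
# all a fixed "+0.3984 C" shift amounts to. Both series below are placeholders.
import numpy as np

years = np.arange(1979, 2015)
series_a = np.linspace(-0.10, 0.30, years.size)   # fake anomalies, baseline A
series_b = np.linspace(0.25, 0.70, years.size)    # fake anomalies, baseline B

# Shift A so both series have the same mean over a common overlap period.
overlap = (years >= 1981) & (years <= 2010)
offset = series_b[overlap].mean() - series_a[overlap].mean()
series_a_rebaselined = series_a + offset
print(f"Applied offset: {offset:+.4f} C")
```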
The short-term wiggles in temperature cannot be explained by short-term changes in CO2 forcing. No regressions or other stats are required to see this; you can look at published plots without my regressions and eyeball it yourself.
I note that once again you’ve fixated on 1998-2014 and ignored 1880-1997. Ok, fine, you trust the UAH anomaly series — I’ve got no problem with it. How about dealing with something else you persistently dodge: 700-2000 m OHC from ’79 to present?
My dictionary says “complain or find fault continually, typically about trivial matters”. Not to argue which is the more correct definition, but I did not mean petulant. Nor did I mean trivial, so it was the wrong word to use.
The short-term wiggles in temperature cannot be explained by short-term changes in CO2 forcing.
Nor has it ever been shown that CO2 affects the longer-term ones.
The canonical definition of “longer periods of time” is 15-30 years.
Anything, as long as it is in the future, eh? See: https://www.wmbriggs.com/blog/?p=13721#comment-129932.
I’m guessing you REALLY mean 30 years (or more) as the current flat observation period has been 17-20 years (depending on source) and you want to reject it. Ross summed it up quite well:
Not to mention that more money to fix the models would be needed. (Wink, wink, nudge, nudge.)
Ben Santer should be eating crow about now. I’m with Anthony and place my bet on option (2). The probability of (3) is zero.
But then, if the 60-year cycle is real, temperatures will turn around again in 30 years and you’ll be crowing about how right you and the models were and are. Keep dancing and eventually it rains; therefore the dance is what caused it, eh?
—
You have gone from pointing at the high frequencies in computer model output (you initially linked Spencer’s chart saying, “You can easily see that individual model runs wiggle around quite a bit over short periods of time,” and nothing else) to now substituting the GISS high frequencies, without explaining how they have anything to do with GCM output.
Why did you suddenly change your focus? Was there something in Spencer’s plot that embarrasses you?
These are rhetorical. It takes too long to get to your points (when and if there are any), and you act evasive in the process. It is way too tiring, so I quit. You of course will insist on responding, but unless others want to pick it up here, you’ll be talking to and playing with yourself.
Dear Dr. Briggs:
I read this a few days ago and took a few minutes just now to confirm that you have proven something utterly amazing!
As you know, much of the foundation for the “social sciences” is made up of studies postulating edificial theories by comparing two halves of a sample of three – and that’s sort of what bothered me about your experimental design. Doing it right means collecting at least 1000 or more candidates, getting the right kind of saw, and cutting them into 3, not 2, groups: one which gets the treatment, one which gets the placebo, and one which gets no attention at all. A quick review of influential “studies” suggests, however, that many experimental designs are limited to a placebo control – and, in those cases, the results are generally broadly similar to yours: some percentage of the treated get better, and a smaller percentage of placebo users get better too.
Think in terms of three groups, however, and the correct interpretation is obvious: 60% of sugar pill users recovered from the screaming willies – but, assuming that a much smaller number of the imaginary third group would have, what you’ve proven is that sugar pills cure 60% of SW sufferers.
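A toy simulation of that three-group design, with recovery rates that are entirely hypothetical; the only point is that the placebo-versus-nothing comparison is what isolates the pill’s apparent effect:

```python
# Toy simulation of the three-group design described above. All recovery
# rates are hypothetical; the point is only that without a no-treatment arm
# you cannot tell how much of the placebo group's recovery is the pill.
import random

random.seed(4)
n = 1000
recovery_rate = {"treatment": 0.70, "placebo": 0.60, "none": 0.50}  # made up

for group, p in recovery_rate.items():
    recovered = sum(random.random() < p for _ in range(n))
    print(f"{group:9s}: {recovered / n:.0%} recovered")

# Judged against the placebo arm alone, "60% recovered" looks like a cure rate;
# judged against the no-treatment arm, the pill's apparent effect is only the
# difference between the two rates.
```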
A quick review suggests, in fact, that sugar pills are the most effective form of generic medication we have. Pick an ailment, real or otherwise, and studies exist proving that sugar pills cure anywhere from about one third to more than two thirds of victims.
Sugar is cheap, no professional intervention is required to get it, and it works far better than some expensive, professionally administered treatments.
So you know what that makes sugar, right? The pill that will finally make Obamacare work!