January 24, 2008

AMS conference report: day 4

The AMS is re-issuing its statement on the necessity of using probability in forecasts. I am on the committee that is re-drafting, or, as they say, “wordsmithing”, it. If you know anything about how committees write “statements” you’ll know exactly what to expect. Months ago I wrote, using material people had generated before, a just-over-one-page document and gave it to the committee. The thing then bloomed into a ten-page monster and reads a lot like a news report. Did you ever notice how people switch to a sort of stilted news-speak vocabulary when talking to the press? Well, the statement reads like that.

But the gist is still important: all forecasts need to include a statement of their uncertainty, that is, they need to be probabilistic. For example, you shouldn’t just say that tomorrow’s max temperature will be “50 degrees”, but “there’s a 90% chance it will be between 47 and 52 degrees.” The same thing goes for climate forecasts, too. You cannot just say, “Mankind is surely causing all ills” but that “Mankind is surely causing all ills unless you vote for me to solve the crisis.”
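To make the temperature example concrete, here is a minimal sketch (not from the statement) of how a point forecast can be turned into a probabilistic one, assuming, purely for illustration, that forecast errors are roughly normal with a standard deviation estimated from past forecast performance:

```python
# Sketch: attach a 90% prediction interval to a point forecast.
# The error standard deviation below is a made-up, hypothetical
# value; in practice it would be estimated from forecast history.
from statistics import NormalDist

point_forecast = 50.0   # tomorrow's max temperature, degrees F
error_sd = 1.5          # hypothetical historical forecast-error SD

z = NormalDist().inv_cdf(0.95)  # z for a two-sided 90% interval
low = point_forecast - z * error_sd
high = point_forecast + z * error_sd
print(f"90% chance the max temperature is between {low:.0f} and {high:.0f}")
```

The point is only that the interval, not the single number, is the forecast.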

My friend Tom Hamill led a Town Hall meeting of a new group that wants to lead the way in inserting uncertainty into forecasts programmatically. Tom’s with NOAA and has a lot of experience with ensemble forecasting, which I’ll explain later. The point is: people are just starting to come around to the idea that predictions are not certain. Yes, even the ones you read about in the newspaper.

I spent the rest of the day chairing a session of statisticians and “artificial” intelligence computer guys. Lovely people, all. But rather too inclined to believe their own press about neural nets being “universal approximators”, meaning, to them, that you don’t need any other kind of probability model except neural nets. I’ll explain these things later, too, except I’ll note that when neural nets first gained notoriety, there was a consensus among computer scientists that all modeling problems were solved, that intelligent machines able to think were just about to happen, and so on. Yes, a consensus. Needless to say, but I’ll say it anyway: it didn’t happen.

The conference does go for one more day, but I’m out this morning. Besides, it’s an unwritten but well-known fact that they don’t always schedule the best talks on the last day. Because of this, they even resort to bribery, awarding door prizes to people who show up to the exhibit hall today.

January 23, 2008

AMS conference report: day 3

More on hurricanes today. Jim Elsner, with co-author Tom Jagger, both from Florida State University, started off by warning against using naive statistical methods on count data, such as hurricane counts. Especially don’t use such methods without an indication of the uncertainty of the results, because it is too easy to mislead yourself.

Elsner also exhorted the audience to pay more attention to statistical models. What my readers might not realize is that, in meteorology and climatology, there has always been a tension between the “dynamicists” and the “probabilists.” The former are the numerical modelers and physicists who want a deterministic mathematical equation to describe every feature of the atmosphere. The latter want to show why the dynamical models are not perfect representations of the climate and weather system and to quantify the uncertainty in those models. So, for example, if you have limited resources, you can either refine the “grid scale” of a forecast model, thereby increasing the computational cost, or you can take the model as is but apply statistical corrections to its output; the dynamicists would opt for the former, the probabilists the latter. Very broadly, the dynamicists have won the day, but this is slowly changing.

After the plea for statistical cooperation, which I of course wholeheartedly support, Elsner described statistics of hurricane trends using Jim Kossin and others’ new satellite-derived database of hurricanes. The “best-track” data, which I and most other people use, has definite flaws, and Kossin and his co-workers try to correct for them by reanalyzing and reestimating tropical cyclones using statistical algorithms applied to satellite observations. This presents a unique opportunity: because we know the uncertainty in the Kossin database, we can propagate this uncertainty through to other models, such as Elsner’s, which predict hurricane trends.

What I mean by this is, Elsner gave statistical confidence intervals that said hurricanes have not increased in number. These confidence intervals are conditional on Kossin’s data being perfect. But since Kossin’s data is not perfect, we have to incorporate the uncertainty in Kossin’s data into Elsner’s trend predictions. What this does in practice is widen Elsner’s original confidence intervals. Elsner and Jagger are working on this right now.
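Why must the intervals widen? A toy simulation, with entirely made-up numbers and a deliberately crude error model (not Elsner and Jagger’s actual method), shows the mechanism: once you admit the data themselves are measured with error, repeated “re-measurements” of the same years give scattered trend estimates, and that scatter has to be added on top of the ordinary sampling uncertainty.

```python
# Illustrative only: how measurement error in the data inflates the
# spread of a least-squares trend estimate, and hence widens any
# honest confidence interval built around it.
import random
import statistics

random.seed(4)

def trend(ys):
    """Least-squares slope of ys against 0, 1, 2, ..."""
    n = len(ys)
    xbar, ybar = (n - 1) / 2, statistics.mean(ys)
    num = sum((i - xbar) * (y - ybar) for i, y in enumerate(ys))
    den = sum((i - xbar) ** 2 for i in range(n))
    return num / den

# Thirty "years" of hypothetical counts with no real trend:
true_counts = [10 + random.gauss(0, 2) for _ in range(30)]

def spread(noise_sd, reps=500):
    """SD of the trend estimate when each datum carries
    measurement error with the given standard deviation."""
    ts = []
    for _ in range(reps):
        ys = [y + random.gauss(0, noise_sd) for y in true_counts]
        ts.append(trend(ys))
    return statistics.pstdev(ts)

# Perfect data: re-measuring changes nothing. Imperfect data: the
# trend estimate scatters, so the interval around it must widen.
print(spread(0.0), spread(3.0))
```

The larger the measurement error, the larger the extra scatter, which is exactly the widening described above.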

What Elsner did find was that the maximum wind speed of any hurricane within a year has been increasing through time. I find this to be true, too, but I couch it in terms of increasing variance of wind speed. Now, there is no way to know whether this increase (since 1980 in Elsner’s study) is due to actual physical trends or due to changes in measurements. It is more likely to be due to physical trends because the data come from Kossin’s more consistent hurricane database, but since the satellites themselves have changed over time, it is not certain that Kossin and others have captured the difference in wind-speed measurement capability between satellites. This is just another place to incorporate the uncertainty in the database through to the prediction of increasing wind speed. And, of course, even if the increase is real, we cannot know with certainty that it is due to man-made causes.

When I got up to speak, I felt I was part of a tag team, because I presented this picture, very similar to one Elsner showed:

[Figure: misuse of running means]

This looks like hurricane numbers through time, doesn’t it? Well, it’s not: it’s just a bunch of random numbers with no correlation from “year” to “year”, overplotted with a 9-year running mean. I wrote about this earlier: running means applied to count data that actually has no correlation can far too easily show you trends (to the eye, anyway) where none exist. But you see these kinds of plots all the time, even in the best of the best journals, like Science and Nature. You should not show these kinds of plots without some measure of the uncertainty that the trend is real, but it happens all the time, and people make decisions using these plots assuming the trends are real.
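You can reproduce the trick in a few lines. This is a minimal reconstruction of the demonstration, not the actual code behind the figure: generate uncorrelated “yearly” counts, then smooth them with a 9-year running mean, and watch the smoothed curve wander in long excursions that are pure noise.

```python
# Uncorrelated "yearly" counts overplotted with a running mean look
# trendy to the eye even though there is, by construction, no trend.
import random

random.seed(1)
years = 150
counts = [random.randint(5, 15) for _ in range(years)]  # no year-to-year correlation

def running_mean(xs, window=9):
    """Centered running mean; drops the half-window at each end."""
    half = window // 2
    return [sum(xs[i - half:i + half + 1]) / window
            for i in range(half, len(xs) - half)]

smoothed = running_mean(counts)
# The smoothed series drifts in slow "excursions" that are pure noise.
print(min(smoothed), max(smoothed))
```

Plot `smoothed` over `counts` and the eye will find decades-long “regimes” that do not exist.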

The rest of my talk went well, though I noticed that, for our session, we had only about one-quarter the number of people listening as listened to the people who were sympathetic to the idea that mankind causes increasing temperatures. I still owe the AMS (and you, my readers) a simple summary of my work, which I’m getting to.

I ran into Jim O’Brian, also from FSU, in the COAPS group. Jim tried to recruit me to that group last year: it was a good job, but circumstances would not let me move from New York City. Anyway, Jim was wearing his Nobel Peace Prize pin, somewhat ironically. He was, and is, one of the many hundreds of scientists on the IPCC, but Jim is openly skeptical of man-made global warming. Which should give you pause next time you hear the word “consensus” used to describe scientific thought about the matter.

January 22, 2008

AMS conference report: day 2

The convention center in New Orleans is impossibly overcrowded; the last time I saw lanes of people so thick was at the Ann Arbor Arts Fair many years ago. And I heard, from the Prob & Stat committee, that the AMS will likely choose to come to New Orleans more often in the future.

There were about two dozen sessions going on at any one time, meaning it is impossible to hear most talks (this is true of nearly any academic conference). I spent most of the day listening to technical statistics and probability talks that won’t be of much interest to you, and I missed some talks on climate change “impact”, which are always nothing but forecasts with no attempts at verification, and thus of little use.

But there were four talks that had some meat.

1. Kerry Emanuel spoke on a hurricane “downscaling” method his group at MIT developed. Most weather and climate models give results at the very large scale: they are computed at “grid points” over the earth’s surface, and these grid points can be very far apart. This means that phenomena that occur between those grid points are not modeled or not seen. But they can be estimated using statistical methods of downscaling. Emanuel’s method is to infer, or downscale, hurricanes from global climate models. He showed some results comparing their method with actual observations, which did well enough, except in the Pacific, where it fared poorly.

The main point was to ask whether or not hurricane intensity would increase in the years 2180-2200, the time when CO2 is expected to be twice what it was in pre-industrial days. Intensity is measured by his “power dissipation index”, which is a function of wind speed: obviously, windier hurricanes are stronger. The gist was that this PDI would increase only very slightly, because hurricane numbers themselves would increase only slightly, if at all.
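For readers who want the flavor of the index: roughly speaking, the PDI accumulates the cube of a storm’s maximum sustained wind speed over its lifetime, summed over all storms in a season. The sketch below is my own illustration with invented wind series and an assumed 6-hour observation interval, not Emanuel’s code.

```python
# Hedged sketch of a power-dissipation-style index: cube the maximum
# sustained wind, integrate over each storm's lifetime, sum over the
# season. The storms and the 6-hour interval are illustrative.
def pdi(storm_winds, dt_hours=6):
    """storm_winds: list of per-storm wind-speed series (m/s),
    one value per dt_hours observation interval."""
    dt_seconds = dt_hours * 3600
    return sum(v ** 3 * dt_seconds
               for winds in storm_winds
               for v in winds)

season = [[20, 35, 50, 45, 30], [18, 25, 40, 33]]  # two hypothetical storms
print(f"seasonal PDI: {pdi(season):.3g} m^3/s^2")
```

The cube is why a handful of intense storms can dominate a whole season’s index.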

But aren’t hurricanes supposed to spiral out of control in a warmer world? Not really. He gave a technical discussion of why not: broadly, some levels of the atmosphere are projected to dry, which, through various mechanisms, lead to fewer storms.

He gave no measure of the uncertainty of his results.

2. Tom Knutson asked “Have humans influenced hurricanes yet?” or words to that effect. He showed that Emanuel’s yearly summary of PDI correlates nicely with sea surface temperatures (SSTs): higher SSTs lead to higher PDIs. Well, kind of. Actually, the graph of his that people like to show is not of the actual SSTs and PDIs but of a “low-frequency filtered” version of each. There is an inherent and large danger in applying these kinds of filters: it is too easy to produce spurious correlations. Nobody mentioned this.

The obvious question to ask: why filter the data in the first place? The answer is that the signal is not there, or not there too obviously, in the raw data. But people want to see the signal, so they go after it by other means. And there are good physical reasons to expect that the signal should be there: all things being equal, warmer water leads to windier storms. But as I stress again and again: all things are rarely equal.

Knutson looked for an anthropogenic signal in hurricane number and did not find any and cautioned that we cannot yet tell whether man has influenced tropical storms. He gave no quantitative measure of the uncertainty in his results.

3. Johnny Chan looked at land-falling tropical storms in the West Pacific. He showed that there were large amounts of inter-decadal and inter-annual variations in typhoon numbers, but there was no increase in number. Again, no quantitative measure of uncertainty.

4. Chris Landsea showed some of his well-known results: before 1966, wide swaths of the North Atlantic were not accounted for in hurricane measurements. This is because, before that time, there were no satellites; measurements were serendipitous: if a ship traversing the ocean ran into a hurricane, it was noted, but, obviously, if no ship was there, the hurricane made no sound. Too, since 1966, changes in observation practice, in the instruments used to measure, and in the algorithms processing the raw data have led to quantitative differences in the number and qualities of tropical storms recorded. This basically means that, recently, we are able to see smaller, shorter-lived storms that previously went unnoticed.

Now, if you look at raw plots of storm number through time, it looks, sometimes, like these are increasing. But how much of this increase is real and how much is due to increased observation power? Knutson and his group tried to answer that, but it’s difficult, and, of course, there will never be a way to be certain.

My talk, which I give this morning, echoes Landsea’s. I find that the variability of storm intensity has increased: this could be accounted for if more small storms are now able to be observed.
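The mechanism behind that last sentence can be sketched in a few lines. In this toy simulation (my own illustration, with invented numbers, not the analysis from my talk), the true distribution of storm intensity never changes; only the detection threshold drops over time as instruments improve, and the variance of the observed intensities rises anyway:

```python
# If better satellites see weaker storms, the variance of OBSERVED
# intensities grows even when the true climate is unchanged.
# Thresholds and distribution below are illustrative assumptions.
import random
import statistics

random.seed(3)
true_intensities = [random.gauss(60, 20) for _ in range(5000)]  # knots, hypothetical

early = [v for v in true_intensities if v > 60]  # only strong storms detected
late = [v for v in true_intensities if v > 30]   # weaker storms now detectable

print(statistics.pstdev(early), statistics.pstdev(late))  # late spread is larger
```

Lowering the threshold widens the observed distribution, which mimics an increase in intensity variance with no physical change at all.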

The best thing is that all these scientists spoke just like you would think good scientists should: a lot of “maybes”, “conditional on this being right”, and “I could be wrongs” were heard. There was none of the apocalyptic language you hear in the press.

January 21, 2008

Prominent philosopher commits global warming fallacy

This post was supposed to be titled, “Conference Report: Day 1,” because I intended to give a blow-by-blow of the American Meteorological Society meeting which started yesterday here in New Orleans. But since I spent the day slowly dying in my hotel room, I have nothing I wish to report. I only missed the opening ceremonies, however. This is some loss, but not a big one; these introductory speeches usually have something worth teasing. Anyway, today I find I am still alive, and can go to the actual talks.

This one has been making its way around the net: original post. It is a story about the University of Amsterdam philosopher Marc Davidson, who has written a peer-reviewed paper claiming that people who “deny” that global warming is catastrophic are just the same as those people who defended slavery! Yes: the full dull academic title of this pearl of an argument is “Parallels in reactionary argumentation in the US congressional debates on the abolition of slavery and the Kyoto Protocol.”

It seems that some U.S. Congressperson, about 200 years ago, said something stupid along the lines of “if we get rid of slavery, we will lose too much money.” The parallel that Davidson found, to his horror, is that some modern politicians are saying something like, “Punitive laws and tariffs to reduce CO2 may be unnecessarily costly and premature.” To Davidson, this only meant one thing: scientist-hating lynch mobs were just around the corner.

No, he didn’t actually say “lynch mobs”, but his hint is sufficiently strong.

My careful readers will have noticed, however, that Davidson, despite his prestigious academic pedigree, has committed the logical fallacy of the idiotic argument. This has an official Latin name, as all fallacies do (something like fatuus headus argumentum), but I can’t recall the exact phrase. The reasoning amounts to saying that, because some nincompoop once incorrectly applied an argument from economics, all future arguments based on economics imply sympathy with slavery.

Think about that the next time you clip coupons: just don’t say you are doing it to save money, because that is an economic argument.