January 23, 2008
More on hurricanes today. Jim Elsner, with co-author Tom Jagger, both from Florida State University, started off by warning against using naive statistical methods on count data, such as hurricane counts. Especially, don't use such methods without some indication of the uncertainty of the results, because it is too easy to mislead yourself.
Elsner also exhorted the audience to pay more attention to statistical models. What my readers might not realize is that, in meteorology and climatology, there has always been a tension between the “dynamicists” and “probabilists.” The former are the numerical modelers and physicists who want to have a deterministic mathematical equation to describe every feature of the atmosphere. The latter want to show why the dynamical models are not perfect representations of the climate and weather system and to quantify the uncertainty on dynamical models. So, for example, if you have limited resources and you can either refine the “grid scale” of a forecast model, hence increasing the computational costs, or you can take the model as is but apply statistical corrections to the output, the dynamicists would opt for the former, the probabilists the latter. Very broadly, the dynamicists have won the day, but this is slowly changing.
After the plea for statistical cooperation, which I of course wholeheartedly support, Elsner described statistics of hurricane trends using the new satellite-derived hurricane database from Jim Kossin and colleagues. The “best-track” data, which I and most other people use, has definite flaws, and Kossin and his co-workers try to correct for them by reanalyzing and reestimating tropical cyclones using statistical algorithms applied to satellite observations. This presents a unique opportunity: because we know the uncertainty in the Kossin database, we can propagate this uncertainty through to other models, such as Elsner’s, which predict hurricane trends.
What I mean by this is, Elsner gave statistical confidence intervals that said hurricanes have not increased in number. These confidence intervals are conditional on Kossin’s data being perfect. But since Kossin’s data is not perfect, we have to incorporate the uncertainty in Kossin’s data into Elsner’s trend predictions. What this does in practice is widen Elsner’s original confidence intervals. Elsner and Jagger are working on this right now.
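The idea can be shown with a small simulation. This is a hedged sketch, not Elsner and Jagger's actual method: refit a trend many times with the data perturbed by a stated measurement error, and compare the spread of slopes to a fit that treats the data as perfect. All the numbers here (the rate, the error size, the year range) are made up for illustration.

```python
# Sketch: how known measurement uncertainty in the data widens a
# trend confidence interval. All quantities are illustrative, not
# taken from Kossin's database or Elsner's analysis.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1980, 2008)
counts = rng.poisson(lam=8, size=years.size).astype(float)  # stand-in data
sigma = 3.0  # assumed per-year measurement uncertainty in the counts

def ci_width(slopes):
    lo, hi = np.percentile(slopes, [2.5, 97.5])
    return hi - lo

# Pairs bootstrap with the data taken as perfect: this is the
# interval "conditional on the data being right".
naive = []
for _ in range(4000):
    i = rng.integers(0, years.size, years.size)
    naive.append(np.polyfit(years[i], counts[i], 1)[0])

# Same bootstrap, but each count is also jittered by its stated
# measurement error, propagating the data uncertainty into the slope.
propagated = []
for _ in range(4000):
    i = rng.integers(0, years.size, years.size)
    y = counts[i] + rng.normal(0.0, sigma, years.size)
    propagated.append(np.polyfit(years[i], y, 1)[0])

print("CI width, data assumed perfect:", round(ci_width(naive), 3))
print("CI width, uncertainty included:", round(ci_width(propagated), 3))
```

The second interval always comes out wider: the extra noise in the data has nowhere to go except into the uncertainty of the slope.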
What Elsner did find was that the maximum wind speed of any hurricane within a year has been increasing through time. I find this to be true, too, but I couch it in terms of increasing variance of wind speed. Now, there is no way to know whether this increase (since 1980 in Elsner’s study) is due to actual physical trends or due to changes in measurements. It is more likely to be due to physical trends because the data comes from Kossin’s more consistent hurricane database, but since the satellites themselves have changed over time, it is not certain that Kossin and others have captured the difference in wind-speed measurement capability between satellites. This is just another place to incorporate the uncertainty in the database through to the prediction of increasing wind speed. And, of course, even if the increase is real, we cannot know with certainty that it is due to man-made causes.
When I got up to speak, I felt I was part of a tag team, because I presented this picture, very similar to one Elsner showed:
This looks like hurricane numbers through time, doesn’t it? Well, it’s not: it’s just a bunch of random numbers with no correlation from “year” to “year”, overplotted with a 9-year running mean. I wrote about this earlier: a running mean applied to count data that actually has no correlation can far too easily show you trends (to the eye, anyway) where they do not exist. But you see these kinds of plots all the time, even in the best of the best journals, like Science and Nature. You should not show these kinds of plots without some measure of the uncertainty that the trend is real, but it happens all the time, and people make decisions using these plots assuming such trends are real.
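You can make a picture like that one in a few lines. The mean rate (8 per “year”) and the number of years are arbitrary assumptions; the point is only that the smoothed series wanders even though the underlying process has no trend at all.

```python
# Generate uncorrelated Poisson "hurricane counts" and smooth them
# with a 9-year running mean; the smoothed curve shows apparent
# regimes and trends that are pure artifacts of the smoothing.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1900, 2008)
counts = rng.poisson(lam=8, size=years.size)  # no year-to-year correlation

window = 9
kernel = np.ones(window) / window
running_mean = np.convolve(counts, kernel, mode="valid")

print("raw counts range:  ", counts.min(), "-", counts.max())
print("running-mean range:", round(running_mean.min(), 2), "-",
      round(running_mean.max(), 2))
```

Plot `running_mean` against the years and the eye will happily find “active” and “quiet” periods in what is, by construction, pure noise.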
The rest of my talk went well, though I noticed that our session drew only about one-quarter as many listeners as the sessions sympathetic to the idea that mankind causes increasing temperatures. I still owe the AMS (and you, my readers) a simple summary of my work, which I’m getting to.
I ran into Jim O’Brien, also from FSU, in the COAPS group. Jim tried to recruit me to that group last year: it was a good job, but circumstances would not let me move from New York City. Anyway, Jim was wearing his Nobel Peace Prize pin, somewhat ironically. He was, and is, one of the many hundreds of scientists on the IPCC, but Jim is openly skeptical of man-made global warming. Which should give you pause next time you hear the word “consensus” used to describe scientific thought about the matter.