AMS conference report: day 2

The convention center in New Orleans is impossibly overcrowded; the last time I saw lanes of people so thick was at the Ann Arbor Arts Fair many years ago. And I heard, from the Prob & Stat committee, that the AMS will likely choose to come to New Orleans more often in the future.

There were about two dozen sessions going on at any one time, meaning it is impossible to hear most talks (this is true of nearly any academic conference). I spent most of the day listening to technical statistics and probability talks that won’t be of much interest to you, and I missed some talks on climate change “impact”, which are always nothing but forecasts with no attempts at verification, and thus of little use.

But there were four talks that had some meat.

1. Kerry Emanuel spoke on a hurricane “downscaling” method his group at MIT developed. Most weather and climate models give results at very large scales: they are computed at “grid points” over the earth’s surface, and these grid points can be very far apart. This means that phenomena that occur between those grid points are not modeled or not seen. But they can be estimated using statistical methods of downscaling. Emanuel’s method is to infer, or downscale, hurricanes from global climate models. He showed some results comparing their method with actual observations, which did well enough, except in the Pacific, where it fared poorly.
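
To give the flavor of the problem (this is the generic idea of downscaling, not Emanuel’s actual method): the model only knows values at its grid points, so anything between them must be estimated. The crudest estimate is plain interpolation; statistical downscaling dresses this up with relations fitted to local observations. A minimal sketch:

```python
# Minimal sketch of the generic downscaling idea (not Emanuel's method):
# a coarse model gives values only at grid points; we estimate the field
# between them. Here, plain bilinear interpolation inside one grid cell.
import numpy as np

def bilinear(x, y, grid):
    """Estimate a value at fractional position (x, y) in [0, 1]^2
    from the four surrounding grid-point values."""
    (v00, v01), (v10, v11) = grid
    top = v00 * (1 - x) + v01 * x
    bot = v10 * (1 - x) + v11 * x
    return top * (1 - y) + bot * y

# Grid points can be hundreds of km apart; a hurricane-scale feature
# sits between them, invisible to the model itself.
coarse = np.array([[20.0, 22.0],
                   [21.0, 25.0]])   # e.g., sea surface temps at 4 grid points
print(bilinear(0.5, 0.5, coarse))   # estimate at the cell's center -> 22.0
```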

The main point was to ask whether hurricane intensity would increase in the years 2180-2200, the time when CO2 is expected to be twice what it was in pre-industrial days. Intensity is measured by his “power dissipation index”, which is a function of wind speed: obviously, hurricanes that are windier are stronger. The gist was that this PDI would increase only very slightly, because hurricane numbers themselves would increase only slightly, if at all.
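
For the curious, Emanuel’s PDI is, roughly, the cube of a storm’s maximum sustained wind speed integrated over its lifetime, summed over every storm in the season. A toy calculation, with invented numbers, looks like this:

```python
# A back-of-the-envelope power dissipation index: roughly, the cube of
# maximum sustained wind speed integrated over each storm's lifetime,
# summed over the season. The sample numbers below are pure invention.
import numpy as np

def pdi(storms, dt_hours=6.0):
    """storms: list of arrays of max sustained wind (m/s), one reading
    every dt_hours. Returns the season's summed integral of v^3 dt."""
    dt = dt_hours * 3600.0  # seconds
    return sum(np.sum(v**3) * dt for v in storms)

season = [np.array([20., 35., 50., 45., 30.]),   # one storm's wind history
          np.array([18., 25., 33., 22.])]        # a weaker, shorter storm
print(f"PDI ~ {pdi(season):.3g} m^3/s^2")
```

Cubing the winds is why intensity matters so much more than mere storm counts: one strong storm can dominate a season’s index.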

But aren’t hurricanes supposed to spiral out of control in a warmer world? Not really. He gave a technical discussion of why not: broadly, some levels of the atmosphere are projected to dry, which, through various mechanisms, leads to fewer storms.

He gave no measure of the uncertainty of his results.

2. Tom Knutson asked “Have humans influenced hurricanes yet?” or words to that effect. He showed that Emanuel’s yearly summary of PDI correlates nicely with sea surface temperatures (SSTs): higher SSTs lead to higher PDIs. Well, kind of. Actually, the graph of his that people like to show is not of the actual SSTs and PDIs but of “low-frequency filtered” versions of them. There is an inherent and large danger in applying these kinds of filters: it is too easy to produce spurious correlations. Nobody mentioned this.
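
To see how easy it is, take two series of pure noise, uncorrelated by construction, smooth each with a low-pass filter, and watch the correlation between them inflate. A minimal simulation (the window length and seed are arbitrary):

```python
# Demonstration of the danger: two *independent* white-noise series,
# low-pass filtered with a moving average, routinely show much larger
# sample correlations than the raw data do.
import numpy as np

rng = np.random.default_rng(1)
n, window = 100, 15

def lowpass(x, w):
    """Simple moving-average low-pass filter."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

raw_r, filt_r = [], []
for _ in range(1000):
    a, b = rng.standard_normal(n), rng.standard_normal(n)  # unrelated by construction
    raw_r.append(np.corrcoef(a, b)[0, 1])
    filt_r.append(np.corrcoef(lowpass(a, window), lowpass(b, window))[0, 1])

# Typical |correlation| inflates several-fold after filtering, from nothing.
print("mean |r| raw:     ", np.mean(np.abs(raw_r)))
print("mean |r| filtered:", np.mean(np.abs(filt_r)))
```

Smoothing throws away independent data points, so the effective sample size shrinks and large correlations appear by chance alone.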

The obvious question to ask: why filter the data in the first place? The answer is that the signal is not there, or not there too obviously, in the raw data. But people want to see the signal, so they go after it by other means. And there are good physical reasons to expect that the signal should be there: all things being equal, warmer water leads to windier storms. But as I stress again and again: all things are rarely equal.

Knutson looked for an anthropogenic signal in hurricane numbers, did not find one, and cautioned that we cannot yet tell whether man has influenced tropical storms. He gave no quantitative measure of the uncertainty in his results.

3. Johnny Chan looked at land-falling tropical storms in the West Pacific. He showed large inter-decadal and inter-annual variations in typhoon numbers, but no increase in their number over time. Again, no quantitative measure of uncertainty.

4. Chris Landsea showed some of his well-known results: before 1966, wide swaths of the North Atlantic were not accounted for in hurricane measurements. This is because before that time there were no satellites; measurements then were serendipitous: if a ship traversing the ocean ran into a hurricane, it was noted, but, obviously, if no ship was there, the hurricane made no sound. Too, since 1966, changes in observation practice, in the instruments used to measure, and in the algorithms processing the raw data have led to quantitative differences in the number and qualities of tropical storms. This basically means that, recently, we are able to see smaller, shorter-lived storms that previously went unnoticed.

Now, if you look at raw plots of storm number through time, it looks, sometimes, like these are increasing. But how much of this increase is real and how much is due to increased observation power? Knutson and his group tried to answer that, but it’s difficult, and, of course, there will never be a way to be certain.
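
A toy version of the difficulty, with entirely invented numbers: hold the true storm rate perfectly constant, let the chance of detecting a storm improve through the century, and the observed counts trend upward anyway.

```python
# Simulate a *constant* true storm rate but a detection probability that
# improves over the decades (all numbers invented). The observed counts
# show an upward "trend" that is purely an artifact of better observation.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1900, 2008)
true_rate = 10.0                                   # storms per year, held fixed
p_detect = np.clip(0.6 + 0.4 * (years - 1900) / 66, 0.6, 1.0)  # ~1.0 after 1966

true_counts = rng.poisson(true_rate, size=years.size)
observed = rng.binomial(true_counts, p_detect)     # thinned by imperfect detection

# A naive least-squares line through the observed counts slopes upward
# even though nothing about the storms themselves changed.
slope = np.polyfit(years, observed, 1)[0]
print(f"apparent trend: {slope:+.3f} storms/year per year")
```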

My talk, which I give this morning, echoes Landsea’s. I find that the variability of storm intensity has increased: this could be accounted for if more small, short-lived storms are now able to be observed.
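
A quick sketch of that mechanism, again with made-up numbers: start with the storms we could always see, add the weak ones that satellites now catch, and the spread of recorded intensities widens even though the storms themselves never changed.

```python
# If recorded intensities were clustered, and better observation adds
# previously missed weak storms at the low end, the *variance* of the
# record goes up even with an unchanged storm climate (numbers invented).
import numpy as np

rng = np.random.default_rng(3)
strong = rng.normal(60, 8, size=200)   # m/s; the storms we always saw
weak = rng.normal(30, 5, size=60)      # small storms satellites now catch

print("std, strong storms only:    ", np.std(strong).round(1))
print("std, with weak storms added:", np.std(np.concatenate([strong, weak])).round(1))
```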

The best thing is that all these scientists spoke just like you would think good scientists should: a lot of “maybes”, “conditional on this being right”, and “I could be wrongs” were heard. There was none of the apocalyptic language you hear in the press.
