
Category: Statistics

The general theory, methods, and philosophy of the Science of Guessing What Is.

January 22, 2008 | No comments

AMS conference report: day 2

The convention center in New Orleans is impossibly overcrowded; the last time I saw lanes of people so thick was at the Ann Arbor Arts Fair many years ago. And I heard, from the Prob & Stat committee, that the AMS will likely choose to come to New Orleans more often in the future.

There were about two dozen sessions going on at any one time, meaning it is impossible to hear most talks (this is true of nearly any academic conference). I spent most of the day listening to technical statistics and probability talks that won’t be of much interest to you, and I missed some talks on climate change “impact”, which are always nothing but forecasts with no attempts at verification, and thus of little use.

But there were four talks that had some meat.

1. Kerry Emanuel spoke on a hurricane “downscaling” method his group at MIT developed. Most weather and climate models give results only at very large scales: they are computed at “grid points” over the earth’s surface, and these grid points can be very far apart. This means that phenomena occurring between those grid points are not modeled, or not seen. But they can be estimated using statistical methods of downscaling. Emanuel’s method is to infer, or downscale, hurricanes from global climate models. He showed some results comparing their method with actual observations, which did well enough, except in the Pacific, where it fared poorly.
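
To make “downscaling” a little more concrete, here is a toy sketch of the simplest statistical flavor of the idea: fit a relationship between coarse grid-point values and a locally observed quantity, then use that fit where only the coarse model output exists. This is only an illustration of the general idea; it is not Emanuel’s method, which is far more elaborate.

```python
import numpy as np

# Toy statistical downscaling: predict a "local" quantity from coarse
# grid-point predictors with ordinary least squares. Purely illustrative;
# this is NOT Emanuel's hurricane downscaling technique.

rng = np.random.default_rng(42)

n_days = 500
coarse = rng.normal(size=(n_days, 4))           # values at 4 surrounding grid points
true_weights = np.array([0.5, 0.2, -0.3, 0.6])
local = coarse @ true_weights + rng.normal(scale=0.5, size=n_days)  # local observations

# Fit the downscaling relation on a training period...
X_train, y_train = coarse[:400], local[:400]
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# ...then use it where only the coarse model output is available.
local_estimate = coarse[400:] @ beta
rmse = np.sqrt(np.mean((local_estimate - local[400:]) ** 2))
print(f"fitted weights: {beta.round(2)}, holdout RMSE: {rmse:.2f}")
```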

The main point was to ask whether or not hurricane intensity would increase in the years 2180-2200, the time when CO2 is expected to be twice what it was in pre-industrial days. Intensity is measured by his “power dissipation index” (PDI), which is a function of wind speed: obviously, hurricanes that are windier are stronger. The gist was that this PDI would increase only very slightly, because hurricane numbers themselves would increase only slightly, if at all.
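
For reference, the PDI is commonly defined in the literature (and I take that standard definition as given here) as the integral of the cube of a storm’s maximum sustained wind speed over its lifetime, summed over all storms in a season. A minimal sketch of the bookkeeping, with made-up numbers:

```python
import numpy as np

# Power-dissipation-style index: integrate v_max^3 over each storm's
# lifetime, then sum over the season's storms. Standard definition assumed;
# normalizing constants are omitted and the wind values below are invented.

def storm_pdi(max_winds_ms, dt_hours=6.0):
    """Integrate v_max^3 over one storm's best-track record."""
    winds = np.asarray(max_winds_ms, dtype=float)
    return np.sum(winds ** 3) * dt_hours * 3600.0  # convert interval to seconds

storm_a = [18, 25, 33, 42, 50, 45, 30]  # m/s at 6-hourly intervals
storm_b = [20, 28, 35, 33, 26]

season_pdi = storm_pdi(storm_a) + storm_pdi(storm_b)
print(f"toy seasonal PDI: {season_pdi:.3e} m^3 s^-2")
```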

But aren’t hurricanes supposed to spiral out of control in a warmer world? Not really. He gave a technical discussion of why not: broadly, some levels of the atmosphere are projected to dry, which, through various mechanisms, leads to fewer storms.

He gave no measure of the uncertainty of his results.

2. Tom Knutson asked “Have humans influenced hurricanes yet?” or words to that effect. He showed that Emanuel’s yearly summary of PDI correlates nicely with sea surface temperatures (SSTs): higher SSTs lead to higher PDIs. Well, kind of. Actually, the graph of his that people like to show is not of the actual SSTs and PDIs but of “low-frequency filtered” versions of each. There is an inherent and large danger in applying these kinds of filters: it is too easy to produce spurious correlations. Nobody mentioned this.

The obvious question to ask: why filter the data in the first place? The answer is that the signal is not there, or at least not there obviously, in the raw data. But people want to see the signal, so they go after it by other means. And there are good physical reasons to expect that the signal should be there: all things being equal, warmer water leads to windier storms. But as I stress again and again: all things are rarely equal.
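
To see how easily filtering can manufacture a relationship, consider a little simulation: two completely independent noise series, whose correlation is of course near zero, show a much larger apparent correlation after each is run through a simple moving-average (low-frequency) filter. This is a toy demonstration of the general danger, not a reanalysis of the SST and PDI series.

```python
import numpy as np

# Low-pass filtering two INDEPENDENT noise series inflates their apparent
# correlation (the smoothing removes degrees of freedom). Toy example only;
# these are not the actual SST or PDI data.

rng = np.random.default_rng(0)

def moving_average(x, window=11):
    return np.convolve(x, np.ones(window) / window, mode="valid")

n_years, n_trials, window = 55, 2000, 11
raw_r, smooth_r = [], []

for _ in range(n_trials):
    a = rng.normal(size=n_years)   # stand-in "SST" series: pure noise
    b = rng.normal(size=n_years)   # stand-in "PDI" series: pure noise
    raw_r.append(abs(np.corrcoef(a, b)[0, 1]))
    smooth_r.append(abs(np.corrcoef(moving_average(a, window),
                                    moving_average(b, window))[0, 1]))

print(f"typical |correlation|, raw series:      {np.mean(raw_r):.2f}")
print(f"typical |correlation|, filtered series: {np.mean(smooth_r):.2f}")
```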

Knutson looked for an anthropogenic signal in hurricane number and did not find any and cautioned that we cannot yet tell whether man has influenced tropical storms. He gave no quantitative measure of the uncertainty in his results.

3. Johnny Chan looked at land-falling tropical storms in the West Pacific. He showed that there were large amounts of inter-decadal and inter-annual variations in typhoon numbers, but there was no increase in number. Again, no quantitative measure of uncertainty.

4. Chris Landsea showed some of his well-known results: before 1966, wide swaths of the North Atlantic were not accounted for in hurricane measurements. This is because before that time there were no satellites; measurements then were serendipitous: if a ship traversing the ocean ran into a hurricane, it was noted, but, obviously, if no ship was there, the hurricane made no sound. Too, since 1966, changes in observation practice, in the instruments used to measure, and in the algorithms processing the raw data have led to quantitative differences in the number and qualities of tropical storms. This basically means that, recently, we are able to see smaller, shorter-lived storms that previously went unnoticed.

Now, if you look at raw plots of storm number through time, it looks, sometimes, like these are increasing. But how much of this increase is real and how much is due to increased observational power? Knutson and his group tried to answer that, but it’s difficult, and, of course, there will never be a way to be certain.

My talk, which I give this morning, echoes Landsea’s. I find that the variability of storm intensity has increased: this could be accounted for if more of the smaller storms are now being observed.

The best thing is that all these scientists spoke just like you would think good scientists should: a lot of “maybes”, “conditional on this being right”, and “I could be wrongs” were heard. There was none of the apocalyptic language you hear in the press.

January 19, 2008 | 7 Comments

National Post says statisticians needed too

Canada’s National Post, in a piece from a little more than a year ago, made a call for more statisticians to be involved in climate change research, much as the American Meteorological Society recently did. It’s relevant again, because the Post article is highlighted today on the indispensable ClimateDebateDaily.com.

The article is somewhat ridiculously titled “The Deniers — Part I,” as if the only opposite course is to be devout and be a “Believer.” But let that pass. There are IX (excuse me, 9) other parts to the series, highlighting topics like “warming has benefits” (true), “the sun moves climate change” (true), and “limited role for CO2” (also true). None of these are strictly “denialist” positions; they are, in fact, attempts to fully understand the physics of the climate system. To call those who study these areas “deniers” shows, then, how far we’ve slipped from sanity.

Anyway, Part I is more or less an interview with Ed Wegman, who is “professor at the Center for Computational Statistics at George Mason University, chair of the National Academy of Sciences’ Committee on Applied and Theoretical Statistics, and board member of the American Statistical Association.” Wegman was the guy who investigated Michael Mann’s “hockey stick” temperature curve and found the statistical methods behind its data analysis wanting.

Ed says that if “statistical methods are being used, then statisticians ought to be funded partners engaged in the research to insure as best we possibly can that the best quality science is being done, [and] there are a host of fundamental statistical questions that beg answers in understanding climate dynamics.”

One place to recruit these statisticians is from the American Meteorological Society’s Probability & Statistics Committee, but Ed is suspicious: “I believe it is amazing for a committee whose focus is on statistics and probability that of the nine members only two are also members of the American Statistical Association, the premier statistical association in the United States, and one of those is a recent PhD with an assistant-professor appointment in a medical school” (emphasis mine).

My readers, who are lovers of logic, will have instantly noticed that Ed has committed the “appeal to authority” fallacy when he implies that the poor schmuck stuck in the medical school cannot possibly know anything of climatology.

My friends, that schmuck is me. The other ASA member is Tilmann Gneiting, of the University of Washington, who is brilliant and one of the world’s biggest sweethearts. I can also set your minds at ease by telling you that the non-ASA members are no slouches at statistics, and they know a great deal of physics.

I wrote Ed to let him know that if he wanted to check my record, he might find that he and I are not too far apart. But Ed’s pretty busy and I’m still, at 1+ years, waiting for his response.

No, my point in writing this post was not just to stick it to Ed a little. It turns out the Post article and Wegman’s comments are topical, because starting tomorrow the AMS holds its annual meeting in New Orleans. The Probability & Statistics Committee will meet at this time, and also sponsor a four-day conference. I give my global “hurricanes have not increased” paper; Gneiting has a paper, and so do several other statisticians. I’ll be writing (or trying to write) daily updates about major papers and so forth.

I’ll also let you know if I see Wegman haunting the halls.

January 17, 2008 | 8 Comments

Statisticians’ global warming plea: don’t forget about us!

Who doesn’t love to read about statistics and statisticians? That’s a rhetorical question, my friends, so don’t bother answering. But I will allude to an answer, by telling you that I begin the statistics classes that I teach by asking the students whether they’d like to learn a magic trick. They always say yes. It goes like this:

Next time you are at a social gathering and somebody introduces themselves to you and asks what you do, say these magic words, “I am a statistician.” And…Poof! They will vanish before your eyes! It never fails.

So it’s not surprising that some of us feel left out from time to time. Which explains why the American Statistical Association (of which I am a member) has issued this statement “endorsing” the conclusions of the IPCC report while also admonishing climatologists to include more statisticians in their work.

The ASA recently convened a meeting of statisticians to ask them how they could be more involved with climate change. The statement was their answer. These sorts of meetings do not always go well. The ASA had another such confab back in ’95 and invited Chicago high school students to listen to the delights that awaited them if they chose a career in statistics. The lecture was given by the distinguished ASA president, who was thorough, as all statisticians are. At the end of his talk, he opened the floor for questions. There was a period of silence until, finally, one brave young man shouted out, “Yeah. Why are you so goofy?” So you can see the danger.

Anyway, except for the blanket political* “endorsement”, given only to show that we’re willing to play along, the rest of the statement is pretty good, including this, “Over the course of four [IPCC] assessment reports, a small number of statisticians have served as authors or reviewers. Although this involvement is encouraging, it does not represent the full range of statistical expertise available.”

And this, “Even in the satellite era, the best observed period in Earth’s climate history, there are significant uncertainties in key observational datasets. Reduction of these uncertainties will be crucial for evaluating and better constraining climate models.”

Most importantly, this, “The design and analysis of computer experiments is an area of statistics that is appropriate for aiding the development and use of climate models. Statistically based experimental designs, not currently used in this field, could be more powerful. It is also important to understand how to combine the results of experiments performed with different climate models. Despite their sophistication, climate models remain approximations of a very complex system and systematic model errors must be identified and characterized.”
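
For readers wondering what a “statistically based experimental design” for computer experiments looks like, the textbook example is a Latin hypercube: instead of varying one model input at a time, each uncertain parameter’s range is cut into equally probable slices and the model runs are arranged so every slice of every parameter gets sampled exactly once. A bare-bones sketch follows; the parameter names and ranges are invented purely for illustration.

```python
import numpy as np

# Bare-bones Latin hypercube design: n_runs model runs over k uncertain
# inputs, each input's range stratified into n_runs slices and sampled once
# per slice. Parameter names and ranges are illustrative placeholders only.

rng = np.random.default_rng(1)

def latin_hypercube(n_runs, n_params):
    """Return an (n_runs, n_params) design with entries in (0, 1)."""
    design = np.empty((n_runs, n_params))
    for j in range(n_params):
        strata = (np.arange(n_runs) + rng.uniform(size=n_runs)) / n_runs
        design[:, j] = rng.permutation(strata)  # one draw per stratum, shuffled
    return design

param_ranges = {                     # hypothetical uncertain model inputs
    "cloud_feedback_scale": (0.5, 1.5),
    "ocean_mixing_coeff":   (0.1, 2.0),
    "aerosol_forcing":      (-1.5, -0.5),
}

unit_design = latin_hypercube(n_runs=10, n_params=len(param_ranges))
lows  = np.array([lo for lo, _ in param_ranges.values()])
highs = np.array([hi for _, hi in param_ranges.values()])
runs = lows + unit_design * (highs - lows)    # scale to physical ranges

for i, row in enumerate(runs, 1):
    print(f"run {i:2d}: " + ", ".join(f"{v:6.2f}" for v in row))
```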

The main thrust is that climate scientists have not done as well as they could quantifying the uncertainty in their models, results, and speculations, and that statisticians should be more frequently consulted, because if we’re good at anything, this is it.

We’re also not too bad at magic.

*Of course it’s political, because you cannot simultaneously plead for proper statistical analysis of climate models while at the same time concluding that those analyses are already sound.

January 15, 2008 | 7 Comments

Ralph Peters gets his stats right: the New York Times purposely misleads

I’m a veteran and haven’t killed anybody in years. But if you read the New York Times you’d be right to worry that I might.

The Sunday, 13 January 2008, edition of the Times spent four pages! detailing that, in the four and three-quarter years since the Iraq war began, returning soldiers, sailors, and airmen came home horribly scarred—mentally, of course—and committed 121 murders. Which is a big number, no question; and probably some, or even most, of the people killed didn’t even have it coming to them.

Military writer Ralph Peters, in today’s column for the New York Post, shows that about 350,000 soldiers have come back from the Iraq and Afghanistan wars. That makes the per-year murder rate equal to about 7.3 per 100,000.
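
The arithmetic behind that 7.3 is easy to check on the back of an envelope: 121 murders over roughly four and three-quarter years is about 25 per year, and 25 per 350,000 troops works out to about 7.3 per 100,000 per year.

```python
# Back-of-the-envelope check of the per-year murder rate quoted above.
murders = 121        # murders attributed to returning vets by the Times
years = 4.75         # roughly the length of the Iraq war at that point
troops = 350_000     # returning Iraq/Afghanistan veterans, per Peters

rate_per_100k = murders / years / troops * 100_000
print(f"{rate_per_100k:.1f} murders per 100,000 per year")  # ~7.3
```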

Time to seriously fret about the mental health of soldiers? Perhaps we should lock them down for a cooling-off period until they lose their aggressiveness.

It was at this point that Peters did what any good statistician would have done: he refused to look at the statistic in isolation. He asked: is 7.3 a lot, or is it a little? How can you find out? It’s easy: by going to the Bureau of Justice web site and looking at the murder rates per 100,000 in the demographic most similar to that of GIs, namely 18-24 year-olds:

The civilian murder rate is 26.5 per 100,000

which is more than 3.5 times higher than for GIs! Incidentally, the murder rate for 14-17 year-olds is 9.3, and for those 25-34 it is 13.5, both higher rates than for GIs. It isn’t until you reach the 35-49 year-olds that you find a lower rate, at 5.1 per 100,000. As Peters says, the Times

unwittingly makes the case that military service reduces the likelihood of a young man or woman committing a murder.

But his best work comes when he notes

In 2005 alone, 8,718 young Americans from the same age group [as GIs] were murdered in this country. That’s well over twice as many as the number of troops killed in all our foreign missions since 2001. Maybe military service not only prevents you from committing crimes, but also keeps you alive?

Peters has called on the Times’ “public editor” Clark Hoyt (who is in charge of correcting errors) to acknowledge the paper’s purposeful character assassination of our veterans. Add your voice to Peters’s: Hoyt’s email is public@nytimes.com.

Update: 16 January 2008. Good thing I bought a bigger hat. NYPost.com