December 17, 2007

You have to die of something: or, COPD deaths “skyrocket”

If you think it’s good news that death rates from stroke, heart disease, cancer, and just plain accidents declined last year (diabetes deaths remained steady), then you’re not trying hard enough to find the dark lining to this silver cloud.

But, thank goodness, death rates from COPD have “skyrocketed”, so we don’t need to stop worrying! The New York Times even supplied a graph (below) as evidence of this calamity.

COPD death rates for males and females (figure: New York Times)

There are two things wrong with this bleak outlook. The first is an error in logic, the second is one of bad graphics.

Can you see what’s wrong with the statistical graph? Looks like a dramatic increase in COPD deaths, right? Well, maybe. But hasn’t the population, for men and women, also increased—skyrocketed—since 1980? I have only been able to discover (from this site) the COPD deaths per 100,000 up until 2004 (not 2005 like the Times picture), but here is that picture:

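The per-capita correction the post calls for is simple arithmetic. Here is a minimal sketch: the population figures are approximate Census values, but the COPD death counts are invented purely to show how a rising raw count can hide a flat per-100,000 rate.

```python
# Approximate US population (Census Bureau figures).
pop_1980 = 226_000_000
pop_2004 = 293_000_000

# Hypothetical COPD death counts, chosen so the per-capita rate is flat.
deaths_1980 = 52_000
deaths_2004 = 67_000

# Convert raw counts to deaths per 100,000 population.
rate_1980 = deaths_1980 / pop_1980 * 100_000
rate_2004 = deaths_2004 / pop_2004 * 100_000

print(f"1980: {deaths_1980:,} deaths = {rate_1980:.1f} per 100,000")
print(f"2004: {deaths_2004:,} deaths = {rate_2004:.1f} per 100,000")
```

The raw count climbs by nearly 30% while the per-capita rate barely moves: a graph of counts alone would look like a “skyrocket” even though the underlying risk is unchanged.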


“The Future of Everything” by David Orrell

The Future of Everything by David Orrell. Thunder’s Mouth Press, New York.

I wanted to like this book, which was supposed to be an examination of how well scientists make predictions—my special area of interest—but I couldn’t. It wasn’t just Orrell’s occasional use of juvenile and gratuitous political witticisms: for example, at one point in his historical review of ancient Greek prediction-making, Orrell sarcastically assures us that the “White House” would not, as dumb as its occupants are, stoop so low as to rely on advice gained from examining animal entrails. It also wasn’t that the book lacked detailed explanations of the three fields he criticizes—weather and climate forecasts, economic forecasts, and health predictions. Nor was it that Orrell was sloppy in some of his historical research: for example, he repeats the standard, but false, view that Malthus predicted mankind would overpopulate the world (more on this below).

No. What is ultimately dissatisfying about this book is that Orrell wants it two ways. He spends the first half of the book warning us that we are, and have been over our entire history, too confident in our forecasts, that we are unaware of the amount of error in our models, and that we should expect the unexpected. He then spends the second half warning us that, based on these same forecasts and models, we are heading toward a crisis, and that if we are not careful, the end is near. He softens the doom and gloom by adding an unsatisfactory “maybe” to it all. He cannot make up his mind and make a clear statement.

Now, it might be that the most dire predictions of climate models, economic forecasts, and emergent-disease predictions are true and should be believed. But it cannot also be true that the models that produced these guesses are bad and untrustworthy, as he assures us they are. So, which is it? Are scientists too confident in their predictions, given their less-than-stellar history at predicting the future? Almost certainly. For example, we recall Lev Landau saying of cosmologists, “They are often wrong, but never in doubt.” Could this also apply to climatologists and economists? If so, how is it we should believe Orrell when he says we should prepare for the worst?

To solve that conundrum, Orrell approvingly quotes Warren Buffett who, using an analogy of Pascal’s wager, says it’s safer to bet global warming is real. Pascal argued that if God exists you’d better believe in him because the consequences of not believing are too grim to contemplate; but if He does not exist, you do not sacrifice much by believing anyway. This argument is generally acknowledged as unconvincing—almost certainly Orrell himself does not hold with it, as he shows no sign of devoutness. Orrell does, sometimes, allow himself to say that people are too sure of themselves and their predictions. To which I say, Amen.

You now need to understand that weather and climate models both require a set of observations of the present weather or climate before they can run. These are called initial conditions, and the better we can observe them, the better the forecasts can be. Ideally, we would be able to measure the state of the atmosphere at every single point, see every molecule, from the earth’s surface, way up to where the solar wind impacts on the magnetosphere. Obviously, this is impossible, so there is tremendous uncertainty in the forecasts just because we cannot perfectly measure the initial conditions. There is a second source of uncertainty in forecasts, and that is model error. No climate model accurately models the real atmosphere. Moreover, it is impossible that they can do so. Approximations, many of them crude and no better than educated guesses, are made for many physical phenomena: for example, the way clouds behave. So some of the error in forecasts is due to model error and some due to uncertainty in the initial conditions.
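The initial-condition problem can be illustrated without any meteorology. Below is a minimal sketch using the logistic map, a textbook chaotic system standing in here for a forecast model (it demonstrates sensitive dependence on initial conditions; it is not a weather model):

```python
# Two runs of a chaotic system from nearly identical initial conditions.
# The logistic map with r = 4 is a standard toy example of chaos.
def step(x, r=4.0):
    return r * x * (1.0 - x)

# Initial conditions differing by one part in a million: think of this
# as the unavoidable error in observing the atmosphere's starting state.
x, y = 0.400000, 0.400001

for _ in range(15):
    x, y = step(x), step(y)

# After only 15 steps the two "forecasts" have diverged by far more
# than the original one-in-a-million measurement error.
print(abs(x - y))
```

This is why even a hypothetically perfect model would produce imperfect forecasts: the error in the starting point grows, roughly exponentially, as the forecast runs forward.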

Orrell makes the claim that most of the error in weather forecasts is due to model error. Maybe so—though this is far from agreed upon—but he goes further to say that these weather models do not have much, or any, skill. (Skill means that the model’s forecast is better than just guessing that the future will be like the past.) This is certainly false. Orrell is vague about this: at times it looks like he is saying something uncontroversial, like long-range (on the order of a week) weather forecasts do not have skill. Who disagrees with that? Perhaps some private forecasting companies providing these predictions—but that is another matter. But often, Orrell appears to lump all weather forecasts, short- and long-term, into the same category and hints they are all error-filled. This is simply not true. Meteorologists do a very good job forecasting weather out to about three or four days ahead. Climatologists, of course, do a very poor job of even forecasting “past” weather; i.e., most climate models cannot even reproduce past known states of the atmosphere with any degree of skill.
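The notion of “skill” used above can be made concrete. Here is a hedged sketch with made-up numbers: compare a model’s mean squared error against a persistence forecast (“tomorrow will be like today”); a skill score above zero means the model beats the naive guess.

```python
# Hypothetical observations and model forecasts (temperatures, say).
obs      = [10.0, 12.0, 15.0, 14.0, 11.0, 9.0]
forecast = [10.5, 12.5, 14.0, 13.5, 11.5, 9.5]

def mse(pred, actual):
    """Mean squared error between two equal-length sequences."""
    return sum((p - a) ** 2 for p, a in zip(pred, actual)) / len(actual)

# The persistence "forecast" for day t is simply the observation at t-1.
persistence = obs[:-1]

mse_model = mse(forecast[1:], obs[1:])
mse_persist = mse(persistence, obs[1:])

# Skill score: 1 = perfect, 0 = no better than persistence, < 0 = worse.
skill = 1.0 - mse_model / mse_persist
print(f"skill = {skill:.2f}")  # -> skill = 0.93 for these invented data
```

With these invented numbers the model handily beats persistence; the empirical claim in the paragraph is that real short-range weather forecasts do too, while climate hindcasts often do not.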

Lovelock’s Gaia hypothesis is lovingly detailed in Orrell’s warning that we had better treat Mother Nature nicely. This curious—OK, ridiculous—idea treats the earth itself as a finely tuned, self-regulating organism. Orrell warmly quotes some “environmentalists” as saying that Gaia treats humans as a “cancer”, and that it sometimes purposely causes epidemics, which are its way of keeping humans in check and curing the cancer. Good grief.

Of course, the Gaia idea is invoked only after humans come on the scene. The earth is only in its ideal state right before humans industrialized. But where was Gaia when those poor, mindless and apolitical, anaerobic bacteria swam in the oceans so many eons ago? The finely tuned earth-organism must have decided these bacteria were a cancer too, as the oxygen dumped as their waste product poisoned these poor creatures and killed them off. So too have other species come and gone before humans came down out of the trees. Belief in Gaia in this sense is no better than the belief that the climate we now have is the one, the perfect one that would always exist (and didn’t it always exist?) if only it weren’t for us people, and in particular the Bush “Administration.”

But again, Orrell is wishy-washy. He assures us that Gaia is “just another story” (though by his tone, he indicates it’s a good one). His big-splash conclusion is that models should not be used as forecasts per se, that they should only be guides to give us “insight”. Well, a guide is just another word for a forecast, particularly if the guide is used to make a decision. Making a decision is nothing but making a guess and a bet on the future. So, once again, he tries to have it both ways.

A note on Malthus. What he argued was that humans, and indeed any species, reproduced to the limit imposed upon them by the availability of food. If the food supply increased, the population would increase. Both would also fall together. What Malthus said was that humans are in *equilibrium* with their environment. He never said that people would overpopulate and destroy the earth. He was, though, in a sense, an early eugenicist and did worry that a March of the Morons could happen if somebody didn’t do something about the poor; but that is a story for another day.

December 16, 2007

12 million kids are being held hostage by a psychiatric disorder

Ad campaign image for the “12 million kids” campaign (source: New York Times)

My God! 12 million! Something must be done!

This bulletin board appears at various places in New York City (as reported in the New York Times). It is sponsored by the New York University Child Study Center. Its purpose is to—wait for it—raise awareness! (Of autism and other maladies that afflict the young.)

But the only thing I was made aware of is that this number almost certainly cannot be true. And that this ad is yet another example of a group nobly, but wildly, exaggerating a claim in order to make a point. The inherent dishonesty in this practice is ignored or explained away because the topic is so awful. This isn’t the place to talk about it, but if the lessons of Chicken Little or the Boy Who Cried Wolf have taught us anything, it is that exaggeration ultimately undermines its very purpose. When people discover the original claim is false, they tend to discount whatever else the claimant might say.

Anyway, how do I know that there can’t be 12 million kids with psychiatric disorders? Let’s figure it out together.

How many people live in the United States? According to the Census Bureau, a little over 300 million. And how many of these people are “kids”? Well, what’s a “kid”? Somebody under 12? Under 18? We can’t be sure what the advertisement actually intends, but let’s say anybody under 14 (which I chose because that’s a break-point in the Census Bureau tables; it makes little difference to my conclusion).

About 21% of all people, then, are kids, which is about 63 million. And if 12 million kids are being “held hostage”, that means about 1 out of every 5 kids must have some sort of ransom (in the form of prescriptions?) paid for them.

To put that number into perspective, if we were to walk into a typical school classroom with 30 kids, then there is a 50% chance that we would see 6 or more of these kids currently being “held hostage”!

I need hardly tell you that that number is not consonant with our experience. That is to say, the original “12 million” estimate is almost certainly false. And probably by an order of magnitude, too: which is a fancy way of saying we should divide the 12 million by 10 or so. That makes roughly a 50% chance that we see one kid held hostage per room. And that number is more realistic.
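The classroom figures above are just binomial arithmetic and are easy to check. A sketch, taking p = 12/63 ≈ 0.19 for the ad’s implied rate and p = 1.2/63 ≈ 0.019 for the deflated one:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The ad's implied rate: 12 million out of roughly 63 million kids.
p_ad = 12 / 63
print(f"P(6+ hostages in a class of 30) = {prob_at_least(6, 30, p_ad):.2f}")

# Deflated by an order of magnitude: 1.2 million kids.
p_low = 1.2 / 63
print(f"P(1+ hostage in a class of 30)  = {prob_at_least(1, 30, p_low):.2f}")
```

With the ad’s rate, the chance of seeing six or more “hostages” in a 30-kid classroom comes out a bit over one half, which is the implausible figure quoted above; with the deflated rate, the chance of seeing even one is in the neighborhood of one half.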

So this ad gets a 5 on the Briggs Statistical Deception Scale. Of course, the original 12 million number could be right if we are allowed to, as is unfortunately increasingly the case, define “normal” behavior ever more narrowly, with even slight deviations from the accepted norm being declared due to newly discovered (or expanded in scope) “disorders” and “syndromes.”