
Category: Book review

Please email me at matt@wmbriggs.com before sending books to be reviewed.

January 1, 2008 | 1 Comment

Calculated Risks: How to know when numbers deceive you: Gerd Gigerenzer

Gerd Gigerenzer, Simon and Schuster, New York, 310 pp., ISBN 0-7432-0556-1, $25.00

Should healthy women get regular mammograms to screen for breast cancer?

The surprising answer, according to this wonderful new book by psychology professor Gerd Gigerenzer, is, at least for most women, probably not.

Deciding whether to have a mammogram or other medical screening (the book examines several) requires people to calculate the risk that is inherent in taking these tests. This risk is usually poorly known or poorly communicated and, because of this, people can make the wrong decisions and suffer unnecessarily.

What risk, you might ask, is there for an asymptomatic woman in having a mammogram? To answer that, look at what could happen.

The mammogram could correctly indicate no cancer, in which case the woman goes away happy. It could also correctly indicate a cancer that is really there, in which case the woman goes away sad and must consider treatment.

Are these all the possibilities? Not quite. The test could also indicate that no cancer is present when it is really there—the test could miss the cancer. This gives false hope and causes a delay in treatment.

But also scary, and far more likely, is that the test could indicate that cancer is present when it is not. This outcome is called a false positive, and it is Gigerenzer’s contention that these false positives are ignored or minimized by both the medical profession and by interest groups whose existence is predicated on advocating frequent mammograms (or other disease screenings, such as for prostate cancer or AIDS).

Doctors like to provide an “illusion of certainty” when, in fact, there is always uncertainty in any test. Doctors and test advocates seem to be unaware of this uncertainty; they have different goals than do the patients who will receive the tests; and they ignore the costs of false positives.

How is the uncertainty of a test calculated? Here is the standard example, given in every introductory statistics book, that does the job. This example, using numbers from Gigerenzer, might look confusing, but read through it, because its complexity is central to understanding his thesis.

If the base rate probability of breast cancer is 0.8% (the rate of cancer among women in the entire country), and the sensitivity (the ability to diagnose the cancer when it is truly there) and specificity (the ability to diagnose no cancer when it is truly not there) of the examination for cancer are 90% and 93%, then, given that someone tests positive for cancer, what is the true probability that this person actually has cancer?

To answer the question requires a tool called Bayes Rule. Gigerenzer has shown here, and in other research, that this tool is unnatural and difficult to use, and that people consistently estimate the answer poorly. Can you guess what the answer is?

Most people incorrectly guess 90% or higher, but the correct answer is only 9%, that is, only 1 woman out of every 11 who tests positive for breast cancer actually has the disease, while the remaining 10 do not.
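For readers who want to check the arithmetic, here is a minimal sketch (mine, not the book’s) of the Bayes Rule calculation using the numbers above:

```python
# P(cancer | positive test) via Bayes Rule:
# base rate 0.8%, sensitivity 90%, specificity 93%.

def posterior_given_positive(base_rate, sensitivity, specificity):
    true_pos = base_rate * sensitivity               # has cancer and tests positive
    false_pos = (1 - base_rate) * (1 - specificity)  # healthy but tests positive
    return true_pos / (true_pos + false_pos)

p = posterior_given_positive(base_rate=0.008, sensitivity=0.90, specificity=0.93)
print(f"P(cancer | positive mammogram) = {p:.1%}")   # about 9.4%
```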

If people instead get the same question with the background information in the form of frequencies instead of probabilities, they do much better. The same example with frequencies is this: out of every 1,000 women, 8 have breast cancer; of these 8, 7 will test positive; and of the 992 women without breast cancer, about 70 will also test positive. Given that someone tests positive for cancer, what is the probability that this person actually has the disease?

The answer now jumps out—7 out of 77—and is even obvious, which is Gigerenzer’s point. Providing diagnostic information in the form of frequencies benefits both patient and doctor because both will have a better understanding of the true risk.
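In code, the frequency version is nothing more than counting (the rounding to whole women follows the figures above):

```python
# The same calculation as natural frequencies: imagine 1,000 women.
women_with_cancer = 8   # 0.8% of 1,000
true_positives = 7      # roughly 90% of the 8 test positive
false_positives = 70    # roughly 7% of the 992 healthy women test positive

positives = true_positives + false_positives
print(f"{true_positives} of {positives} positive tests are real")  # 7 of 77, about 9%
```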

What are the costs of false positives? For breast cancer, there are several. Emotional turmoil is the most obvious: testing positive for a dread disease can be debilitating, and the increased stress can negatively affect the patient’s health. There is also the pain of undergoing unnecessary treatment, such as mastectomies and lumpectomies. Obviously, there is also a monetary cost.

Mammograms can also detect a noninvasive cancer called ductal carcinoma in situ, which is predominantly nonfatal and needs no treatment, but which initially shows up as suspected cancer. There is also evidence that the radiation from the mammogram itself increases the risk of true breast cancer!

These costs are typically ignored, and doctors and advocates usually do not acknowledge that false positives are possible. Doctors suggest many tests to be on the safe side—but what is the safe side for them is not necessarily the safe side for you. Better for the doctor to have ordered a test and found nothing than not to have ordered it and missed a tumor, thus risking malpractice.

This asymmetry shows that the goals of patients and doctors are not the same. The same is true for advocacy groups. Gigerenzer studied brochures from these (breast cancer awareness) groups in Germany and the U.S. and found that most do not mention the possibility of a false positive, nor the costs associated with one.

Ignoring these costs makes it easier to frighten women into having mammograms, and he stresses that “exaggerated fears of breast cancer may serve certain interest groups, but not the interests of women.”

Mammograms are only one topic explored in this book. Others include prostate screenings, “where there is no evidence that screening reduces mortality,” AIDS counseling, wife battering, and DNA fingerprinting.

Studies of AIDS advocacy groups’ brochures revealed the same as in the breast cancer case: the possibility of false positives in screenings, and the costs associated with these mistakes, were ignored or minimized.

Gigerenzer even shows how attorney Alan Dershowitz made fundamental mistakes calculating the probable guilt of O.J. Simpson, mistakes that would have been obvious had Dershowitz used frequencies instead of probabilities.
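The blunder is conditioning on the wrong event. The numbers below are purely illustrative (they are not the book’s), but they show the shape of the mistake:

```python
# Illustrative counts only: imagine 100,000 battered women in a given year.
battered = 100_000
killed_by_batterer = 40  # hypothetical: murdered by their batterers
killed_by_other = 5      # hypothetical: murdered by someone else

# The exculpatory-sounding number: very few batterers go on to kill.
print(killed_by_batterer / battered)        # 0.0004

# The relevant number conditions on the murder having happened:
murdered = killed_by_batterer + killed_by_other
print(killed_by_batterer / murdered)        # about 0.89
```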

The book closes with tongue-in-cheek examples of how to cheat people by exploiting their probabilistic innumeracy, and includes several fun problems.

Gigerenzer stresses that students have a high motivation to learn statistics but that it is typically poorly taught. He shows that people’s difficulties with numbers can be overcome and that it is in our best interest to become numerate.

December 17, 2007 | 1 Comment

“The Future of Everything” by David Orrell

The Future of Everything by David Orrell. Thunder’s Mouth Press, New York.

I wanted to like this book, which was supposed to be an examination of how well scientists make predictions—my special area of interest—but I couldn’t. It wasn’t just Orrell’s occasional use of juvenile and gratuitous political witticisms: for example, at one point in his historical review of ancient Greek prediction-making, Orrell sarcastically assures us that the “White House” would not, as dumb as its occupants are, stoop so low as to rely on advice gained from examining animal entrails. It also wasn’t that the book lacked detailed explanations of the three fields he criticizes—weather and climate forecasts, economic forecasts, and health predictions. Nor was it that Orrell was sloppy in some of his historical research: for example, he repeats the standard, but false, view that Malthus predicted mankind would overpopulate the world (more on this below).

No. What is ultimately dissatisfying about this book is that Orrell wants it two ways. He spends the first half of the book warning us that we are, and have been over our entire history, too confident in our forecasts, that we are unaware of the amount of error in our models, and that we should expect the unexpected. He then spends the second half warning us that, based on these same forecasts and models, we are heading toward a crisis, and that if we are not careful, the end is near. He softens the doom and gloom by adding an unsatisfactory “maybe” to it all. He cannot make up his mind and make a clear statement.

Now, it might be that the most dire predictions of climate models, economic forecasts, and emergent-disease predictions are true and should be believed. But it cannot also be true that the models that produced these guesses are bad and untrustworthy, as he assures us they are. So, which is it? Are scientists too confident in their predictions, given their less-than-stellar history at predicting the future? Almost certainly. For example, we recall Lev Landau saying of cosmologists, “They are often wrong, but never in doubt.” Could this also apply to climatologists and economists? If so, how is it we should believe Orrell when he says we should prepare for the worst?

To solve that conundrum, Orrell approvingly quotes Warren Buffett who, in an analogy to Pascal’s wager, says it’s safer to bet that global warming is real. Pascal argued that if God exists you’d better believe in Him, because the consequences of not believing are too grim to contemplate; but if He does not exist, you do not sacrifice much by believing anyway. This argument is generally acknowledged as unconvincing—almost certainly Orrell himself does not hold with it, as he shows no sign of devoutness. Orrell does, sometimes, allow himself to say that people are too sure of themselves and their predictions. To which I say, Amen.

You now need to understand that weather and climate models both require a set of observations of the present weather or climate before they can run. These are called initial conditions, and the better we can observe them, the better the forecasts can be. Ideally, we would be able to measure the state of the atmosphere at every single point, see every molecule, from the earth’s surface way up to where the solar wind impinges on the magnetosphere. Obviously, this is impossible, so there is tremendous uncertainty in the forecasts simply because we cannot perfectly measure the initial conditions. There is a second source of uncertainty in forecasts, and that is model error. No climate model accurately models the real atmosphere, nor is it possible for one to do so. Approximations, many of them crude and no better than educated guesses, are made for many physical phenomena: for example, the way clouds behave. So some of the error in forecasts is due to model error and some is due to uncertainty in the initial conditions.
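A toy demonstration of the initial-condition problem (my illustration, not Orrell’s): run a chaotic map twice from starting points that differ by one part in a billion, and the two “forecasts” soon part company entirely.

```python
# Two runs of the logistic map, a standard stand-in for chaotic dynamics.
x, y = 0.400000000, 0.400000001  # initial conditions differing by 1e-9
for _ in range(60):
    x, y = 4 * x * (1 - x), 4 * y * (1 - y)
print(abs(x - y))  # order 1: the two runs now have nothing in common
```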

Orrell makes the claim that most of the error in weather forecasts is due to model error. Maybe so—though this is far from agreed upon—but he goes further to say that these weather models do not have much, or any, skill. (Skill means that the model’s forecast is better than just guessing that the future will be like the past.) This is certainly false. Orrell is vague about this: at times it looks like he is saying something uncontroversial, such as that long-range (on the order of a week) weather forecasts do not have skill. Who disagrees with that? Perhaps some private forecasting companies providing these predictions—but that is another matter. But often Orrell appears to lump all weather forecasts, short- and long-term, into the same category and hints they are all error-filled. This is simply not true. Meteorologists do a very good job forecasting weather out to about three or four days ahead. Climatologists, of course, do a very poor job of even forecasting “past” weather; i.e., most climate models cannot reproduce past known states of the atmosphere with any degree of skill.
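Skill, in this sense, is easy to compute. Here is a sketch under my own conventions (the data are made up): compare the model’s mean squared error with that of a persistence forecast, i.e., guessing “tomorrow will be like today.”

```python
import numpy as np

obs = np.array([12.0, 14.0, 13.5, 11.0, 10.5, 12.5])    # observed temperatures
model = np.array([12.5, 13.5, 13.0, 11.5, 11.0, 12.0])  # hypothetical model forecasts
persistence = obs[:-1]             # forecast each day with the previous day's value

mse_model = np.mean((model[1:] - obs[1:]) ** 2)
mse_persistence = np.mean((persistence - obs[1:]) ** 2)
skill = 1 - mse_model / mse_persistence  # positive means the model beats "no change"
print(f"skill score = {skill:.2f}")
```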

Lovelock’s Gaia hypothesis is lovingly detailed in Orrell’s warning that we had better treat Mother Nature nicely. This curious—OK, ridiculous—idea treats the earth itself as a finely tuned, self-regulating organism. Orrell warmly quotes some “environmentalists” as saying that Gaia treats humans as a “cancer”, and that it sometimes purposely causes epidemics, which are its way of keeping humans in check and curing the cancer. Good grief.

Of course, the Gaia idea is invoked only after humans come on the scene. The earth is only in its ideal state right before humans industrialized. But where was Gaia when those poor, mindless and apolitical, anaerobic bacteria swam in the oceans so many eons ago? The finely tuned earth-organism must have decided these bacteria were a cancer too, as the oxygen they dumped as waste poisoned these poor creatures and killed them off. So too have other species come and gone before humans came down out of the trees. Belief in Gaia in this sense is no better than the belief that the climate we now have is the one, the one that is perfect and would always exist (and didn’t it always exist?) if only it weren’t for us people, and in particular the Bush “Administration.”

But again, Orrell is wishy-washy. He assures us that Gaia is “just another story” (though by his tone, he indicates it’s a good one). His big-splash conclusion is that models should not be used as forecasts per se, that they should only be guides to give us “insight”. Well, a guide is just another word for a forecast, particularly if the guide is used to make a decision. Making a decision is nothing but making a guess and a bet on the future. So, once again, he tries to have it both ways.

A note on Malthus. What he argued was that humans, and indeed any species, reproduced to the limit imposed upon them by the availability of food. If the food supply increased, the population would increase. Both would also fall together. What Malthus said was that humans are in *equilibrium* with their environment. He never said that people would overpopulate and destroy the earth. He was, though, in a sense, an early eugenicist and did worry that a March of the Morons could happen if somebody didn’t do something about the poor; but that is a story for another day.

December 6, 2007 | No comments

The Algebra of Probable Inference: Richard T. Cox

This is a lovely, lovely book, and I can’t believe it has taken me this long to find and read it (November 2005: I was led to this book via Jaynes, the author who also recommended Stove). Cox, a physicist, builds the foundations of logical probability using Boolean algebra and just two axioms, which are so concise and intuitive that I repeat them here:

1. “The probability of an inference on given evidence determines the probability of its contradictory on the same evidence.”

2. “The probability on given evidence that both of two inferences are true is determined by their separate probabilities, one on the given evidence, the other on this evidence with the additional assumption that the first inference is true.”
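In symbols (a standard modern rendering, not a quotation from Cox), the axioms say there exist functions f and g such that

```latex
\text{Axiom 1:}\quad P(\lnot A \mid B) = f\bigl(P(A \mid B)\bigr),
\qquad
\text{Axiom 2:}\quad P(A \wedge B \mid C) = g\bigl(P(A \mid C),\ P(B \mid A \wedge C)\bigr).
```

Cox’s achievement is to show that, up to a choice of scale, the only consistent solutions are the familiar sum and product rules: P(A|B) + P(not-A|B) = 1 and P(AB|C) = P(A|C) P(B|AC).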

Cox then begins to build. He shows that probability can be, should be, and is represented by logic; he shows what type of function probability is, the relation of uncertainty and entropy, and what expectation is. He ends by deriving Laplace’s rule of succession, and argues when this rule is valid and when it is invalid. And he does it all in only 96 pages! This is one of the rare books for which I also recommend you read each footnote. If you have any interest in probability or statistics, you have a moral obligation to read this book.
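For reference, here is the rule of succession in its standard form (the statement is standard, not quoted from Cox): having observed s successes in n trials, the probability of success on the next trial is

```latex
P(\text{success on trial } n+1 \mid s \text{ successes in } n \text{ trials}) = \frac{s+1}{n+2}.
```

With n successes and no failures this gives (n+1)/(n+2), the form behind Laplace’s famous sunrise example.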

October 6, 2007 | 1 Comment

The Rationality of Induction: David Stove

Is deductive logic empirical? No. Is inductive logic also empirical? No. Is induction justified and, if so, is it just an extension of logic? Yes.

These are Stove’s conclusions as he takes Hume (and current-day relativists, such as Popper) to task and shows that, yes, induction is rational. He also shows that the common belief that ordinary logic is formal is a myth. Knowledge of the validity of certain arguments must come from intuition, as Carnap argued and Stove proves. He shows that certain forms of logical arguments do not always give valid conclusions, and that all arguments must be judged individually. In his words, “Cases Rule”.

This is another in a series of books that I think are largely unknown to most statisticians and probabilists, especially those who tend toward so-called pure mathematics. But this book, like those by Jaynes and Cox, argues the case for logical, as opposed to subjective, probability forcefully and conclusively. These books deserve to be more widely read because, I believe, they have a great deal to say about the foundations of our field.