Uncertainty Reviewed Again: It is such a good book, and I recommend it without reservation

Don Aitkin, author of Moving On: A Tale of the Millennium, also available here, and feted here, and reviewed here, has reviewed my Uncertainty: The Soul of Modeling, Probability and Statistics. Here is that review (also available on his site) and my comments.

Aitkin’s Review

I have written before about William Briggs, the American statistician, and have corresponded with him, too. He has now published a book called Uncertainty: The Soul of Modeling, Probability and Statistics (Springer, 2016). A friend lent his copy to me, and I’ve now read it twice. It is as much about philosophy as it is about probability, but then Briggs would say that ‘probability’ is at its heart a matter of philosophy. I started to read the book a third time, but I’ll soon have to give it back. A pity, because like all good and hard books it needs re-reading unless you are an expert in the field, and I am by training neither a philosopher nor a statistician.

Why is it so good? Because, for me, he picks up areas of doubt that many people will have about uncertainty and probability (there are many books on the matter), and illuminates them helpfully. Judith Curry, whose website is on my blogroll, thinks that uncertainty is at the core of the problem with the orthodox view of ‘climate change’ (the inverted commas signify that this is the UNFCCC version of the term, meaning climate change caused by human activity). As I have tried to establish in my Perspective series, around every major proposition put forward by the orthodoxy there is considerable uncertainty. The orthodox, the believers, dismiss the uncertainty. They will tell you that 97 per cent of climate scientists agree, or that there are many separate illustrations of whatever point it is, or that the learned academies all say the same thing. But the uncertainty doesn’t go away. As Briggs would say, it is inherent in the data that are brought forward, in the models used, and in the construction of the models themselves. Moreover, it should always be mentioned. Always.

For Briggs, uncertainty is about truth, and it is a sign that we do not know the truth. The whole point of science is to discover the truth about something, usually the truth about the cause of something. But truth, apart from the question of whether particular objects exist, resides for the most part in our heads. Probability is an approach to truth. Some probabilities you can ascribe numbers to, but most you can’t. Most enlightening to me was his assertion, well argued, that ‘chance’ is not a cause of anything, and we shouldn’t think it is, let alone accept that others have somehow learned its mysterious causative potency. Equally, ‘randomness’ is not a cause of anything either. It is yet another sign that we do not know the cause of something.

In the well-known coin-toss experiment, it is not ‘chance’ that determines whether the coin shows heads or tails. The outcome of each toss is in principle knowable, if we could measure all the variables: the surface the coin fell on, the air movement at the time of the toss, the particular force imparted by the coin-tosser, and so on. But, at least as yet, we can’t. So to say that the result is random, or ‘determined by chance’ (a phrase Briggs would detest), is simply to say that we don’t know what has caused the result, unless we have good reason to believe that the coin has been tampered with.
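The point can be made concrete with a toy sketch (my illustration, not from the book): model the toss as a deterministic function of its initial conditions. The ‘physics’ below is an invented stand-in, but it shows that once the inputs are known the outcome is knowable, and that ‘randomness’ names only our ignorance of those inputs.

```python
def toss(force: float, spin: float) -> str:
    """Deterministic toy physics: the outcome depends only on the inputs.

    Hypothetical rule: the total number of rotations decides which face
    lands up. Nothing here is 'caused by chance'.
    """
    rotations = force * spin
    return "heads" if int(rotations) % 2 == 0 else "tails"

# With the inputs measured exactly, the result is knowable in advance:
result = toss(force=2.5, spin=4.0)   # always the same answer for these inputs

# Without knowledge of the inputs, we can only quantify our uncertainty,
# e.g. by assigning probability 1/2 to each face -- a statement about our
# knowledge, not about a mysterious causative power called 'chance'.
```

The design point is that the probability of heads lives in the head of the person who lacks the measurements, not in the coin.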

As Briggs moves from philosophy into probability we get to see his philosophical position in practice, and I found that process most impressive. He argues that probability has been misunderstood and consistently misused because of what he calls ‘the We-Must-Do-Something fallacy’. Decision-makers need clear results on which to make decisions, and that pushes statisticians to construct their models so that they will produce numbers, and numbers that have ‘significance’. There is a real distinction between ‘significance’ in statistics and ‘importance’ in the real world. For Briggs the only viable way to go is to construct a model with a clear predictive purpose, and then test it on new data. If the outcome accords with the model (theory, hypothesis, supposition) then the model has some skill. It does NOT mean that the model causes the outcome, let alone that variation from the predicted outcome suggests that the data aren’t quite right.
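A minimal sketch (mine, not Briggs’s code) of the predictive approach described above: fix a model for a clear purpose, then judge it only on data it has never seen. The model here is deliberately trivial, predicting the training mean, and the data are hypothetical.

```python
def fit_mean_model(train):
    """The 'model': predict the training mean for every new case."""
    return sum(train) / len(train)

def prediction_error(prediction, new_data):
    """Mean squared error of the prediction on genuinely new observations."""
    return sum((y - prediction) ** 2 for y in new_data) / len(new_data)

train = [9.8, 10.1, 10.0, 9.9, 10.2]   # hypothetical past observations
new = [10.0, 10.3, 9.7]                # observations made after the model was fixed

model = fit_mean_model(train)
error = prediction_error(model, new)   # low error = some skill; it says nothing about cause
```

Low error on the new data shows the model has some skill in Briggs’s sense; it does not show that the model, or anything in it, causes the outcome.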

Briggs loathes ‘scientism’, which he says rests on the belief that only that which is measurable is important. He also loathes the use of regression, because it assumes a straight-line relationship between the variables observed. His proposal is that you locate each of the paired observations on some sort of grid, and look at what you have found. Such ‘eye-balling’ will tell you something, but applying regression is a poor idea. There will be a temptation to get rid of outliers, to introduce ‘smoothing’ techniques of one kind or another, all in the pursuit of what he calls, disparagingly, a ‘wee p-value’. Even worse, having found a nice trend line that supports your view, you will forget that the implication of a trend line is that the data should continue to show that slope both before and after your starting and ending points in time.

Most of his awful examples, drawn from the literature, are from the medical and epidemiological fields, and he offers what he calls ‘the epidemiological fallacy’, which I have observed elsewhere but without knowing that title. This is where the researcher says ‘X causes Y’ but never actually measures X. Worse, the researcher then uses standard statistics to impute proof of cause. To give one of Briggs’s examples, a paper exploring the formation of Republican loyalties saw 4th July celebrations as the formative cause, but in fact used precipitation data for the 4th July in the towns where participants said they lived when they were young, the assumption being that where 4th July parades were (presumably) washed out, no Republican loyalties were generated. All you can say is, oh dear. Yes, it was peer reviewed.

It is such a good book, and I recommend it without reservation. But it is a book to read and study. There are no silver bullets or quick fixes.

9 Comments

  1. It is such a good book! I have it on my office shelf next to all my other, lower-quality, statistics texts.

    I also continue to profit from knowing how to spot the epidemiologist fallacy. It’s fun to watch heads spin when you say “but you didn’t actually measure X, then”.

  2. Jerry

    I finally broke down and ordered a copy. Found one for only $45, which is still obnoxiously expensive, but Briggs’s blog is great, so I think the book will be even better.

    I still disagree with Briggs about sartorial habits, though.

  3. Don Jackson

    A proper suit of clothes, topped by a fedora, is sartorially conservative… It requires no defense!

  4. Yoj

    Anything you want to know about statistics, ask Briggs. He is the world leading expert.

    Ask someone else if you want to know anything about sartoriality.

    Emperors make a habit of not making a serious effort.

    The book is priced exactly right. Average books today in Waterstones were £25 or £30, and £35 for a small coffee-table book. Churned-out books, I mean.

    Test books were never affordable even for poor students.

    You get what you pay for.

  5. Yoj

    Text books. dum di dum di dum. comment too short.

  6. Oldavid

    And yet the Briggslean/Fesorial nonsense cartel still claim that “chance” is the “cause” of all that is… i.e. the World and all that’s in it are “becoming” what it will be by pure accidental mechanics and dialectics.

  7. Joy

    No, that’s somebody else. You’re confused.

  8. Joy

    Oldavid, I mean you’re mixed up.
