
A Deep Philosophical Account Of Probability


Another review of Uncertainty: The Soul of Modeling, Probability & Statistics, this one taken from an Amazon customer.

It was 7 or 8 years ago, and I was sitting in my office at the university. At that time I was a Ph.D. student in the Machine Intelligence group at the computer science department. One of the department professors knocked on my door; he was looking for my supervisor because he needed advice on how to perform some test to verify certain properties of his recent experiments. Well, Finn is not here, but maybe I can help. What’s your problem? He wanted to know about different statistical tests and how and why they work. Why they work? This question puzzled me immensely because I had taken many advanced classes in statistics and read countless books on the subject, but I could not recall any of them answering, or even discussing, “why they work?”.

This brings us to Briggs’ new book, “Uncertainty: The Soul of Modeling, Probability & Statistics”. It is a deep philosophical treatment of probability written in plain language and without the interference of unnecessary math, which makes the book accessible to most university students. The books “Probability Theory: The Logic of Science” by E.T. Jaynes and J. Pearl’s “Causality” are the ones that have influenced my thinking most profoundly. Until now. Briggs explains why the subjective, Bayesian, and frequentist interpretations of probability are somewhat unfortunate and argues that the fundamental view should be to see probability as the extension of logic to the domain of uncertainty. I have encountered this view in other places before, but this is the first comprehensive treatment of it I have read. He also argues that all probability must be conditional; again, it is not the first time I have seen this view, but it is the first time I have seen a deep analysis of why that must be so. Now, this may not sound like much, but it really is. It has already given me a fresh perspective on one of my AI research problems that has plagued me for years.

Have you ever speculated about what randomness really is? This book will tell you. Is there a mathematical definition of falsifiability? Oh yes. Do you ever wonder what the relationship is between probability and causality? And what is the role of statistical significance testing in relation to causality? A few years ago I read “The Cult of Statistical Significance” by S. T. Ziliak and D. N. McCloskey, which basically argues against certain statistical practices while emphasizing the focus on effect sizes. Briggs’ book stands out because his analysis is much deeper (mathematically and philosophically) and because he goes much further by proposing that both p-values and relative risk should be abandoned, although he dislikes p-values the most. To read Fisher’s old gibberish, which led to this sad situation, is simply astounding.
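
To make the effect-size point concrete, here is a minimal sketch in Python (my own illustration, not taken from the book; the effect, standard deviation, and sample sizes are all invented). A fixed, trivially small difference between two groups becomes “highly significant” once the sample is large enough, which is exactly why a p-value by itself says nothing about practical importance:

    # Minimal sketch: a tiny, fixed effect becomes "significant" with enough data.
    from math import sqrt
    from statistics import NormalDist

    effect = 0.01   # assumed true difference in means (tiny on a sd = 1 scale)
    sd = 1.0        # assumed common standard deviation in both groups

    for n in (100, 10_000, 1_000_000):          # per-group sample sizes
        se = sd * sqrt(2 / n)                   # standard error of the difference
        z = effect / se                         # z statistic for that difference
        p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
        print(f"n per group = {n:>9,}   z = {z:6.2f}   p = {p:.2e}")

    # The effect never changes, yet the p-value collapses toward zero,
    # "detecting" a difference of no practical relevance.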

I also have a little critique. First of all, I had hoped there was a section on the notion of “unbiased” estimators, but maybe Briggs can add that to a second edition. Secondly, there are brief discussions of machine learning algorithms for causality. The reader could get the impression that people in this field think they can prove causality. If so, that is certainly not the case. From the little I know, they always assume some kind of faithfulness of the distribution, or they take the graphical model as inductive knowledge (e.g. in the case of Pearl). Of course, the problem is, as pointed out by Briggs, that once the techniques get into the hands of less rigorous scientists, they tend to forget that and immediately think causality has been proven. Briggs is kind enough to remind us that there is a difference between conditional and necessary truths, and once you start to assemble all your assumptions, the conditional truths may quickly become very uncertain.

In general, this book should be relevant for anybody working with probability models and for anyone consuming the output of such models. That’s a lot of people, including almost every scientist and university student. If you are a journalist, read it too. It will give you a much better basis for assessing the nature and validity of all the research circulating in the media.

This leads us back to the question I was asked by the professor 7 or 8 years ago: “why do they work?”. I said that I didn’t really have a book on that (and I have many books), and that he might have to get the original paper(s) and see what’s in them. Then we discussed his problem a little, and I suggested a chi-square test and sent him out the door with a bunch of books, among them one with the reassuring title “100 Statistical Tests”. Today I would simply have given him Briggs’ book and said: they don’t work, and here’s why!

Best regards,
Thorsten Jørgen Ottosen, Ph.D.
Director of Research


Here is my reply to his two criticisms.

Ottosen is quite right that I can do a better job describing efforts to determine causality. My criticism in the book is not just that people take “machine learning” models, and indeed all probability and statistical models, and assume they have proved cause, which they cannot, but that no model by itself can demonstrate cause where it is not previously known. I have more about this in the article “The Hierarchy Of Models: From Causal (Best) To Statistical (Worst)”. Automated processes can never identify cause because knowledge of cause is different from knowledge of determinism, though machines might be able to discern determinism in simple cases, or in multiple-choice (oracle) setups.

About “unbiased estimators”: there are no such things. I mean, there are no such things as parameters at all. Parameters are not ontic. All parameter-based methods, which comprise the vast bulk of existing practice, should be eschewed. I argue this at some length in the book, but the fact that I didn’t get the point across proves I need to be clearer here.

In place of parameters and hypothesis testing? Understanding cause and making probabilistic predictions of observables.
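
Here is a minimal sketch of what that looks like, with invented data and the simplest possible model (a uniform prior over the chance of success, i.e. Laplace’s rule of succession). The point is only that the output is the probability of an observable, conditional on the stated premises, and not an estimate of an unobservable parameter:

    # Minimal sketch: state the probability of a new observable, not a parameter.
    # The data and model premises here are invented for illustration; every
    # probability below is conditional on those premises.
    observed_successes = 7    # hypothetical past data
    observed_trials = 10

    # Predictive probability that the next trial succeeds, given the data and a
    # uniform prior on the chance of success (Laplace's rule of succession for
    # the beta-binomial model):
    p_next = (observed_successes + 1) / (observed_trials + 2)

    print(f"Pr(next trial succeeds | data, model premises) = {p_next:.3f}")
    # This is a claim about something observable and checkable, not about an
    # unobservable parameter or a null hypothesis.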

Categories: Book review, Statistics

6 replies

  1. What an amazing review!

    I really am going to put this on a higher priority; philosophically speaking, give it a higher probability of purchase.

    I can never think of what I really want for Christmas; maybe I’ll put it on my Christmas wish list.

  2. Went to the reviews on Amazon – the “one star” review (if anyone can call it that) said: “Overpriced”. One has to smile at times.

  3. Actually, I should have added: perhaps you could double the size of the font for the reprint, Matt? That seems to be key for some.

  4. Science is the new State religion and statistics the cudgel used to swindle the masses. 97% has been burned into the brains of billions.

  5. Terrifying! Thirty years of medical practice, ascribing varying degrees of wisdom to treatments “validated” by p-values and relative risk, reassured by editorial review that all is well within the enclosed statistics. This will be a must-read, along with who knows what else, to ever understand and critically read another published article.
