
An Introduction To Uncertainty: Probability, Statistics, and Modeling of All Kinds

Spencer Tracy is trying to fit round pegs into square holes.


This is a teaser, the first part of a 3,200-word narrative outline for the book that I’ve started to shop around. The current title is in the headline. Regular readers know it has undergone many changes; thus it is rational to conclude it might change again.

Why is this rotten thing taking so long? It took me forever to realize what I could leave out—which is a lot. I wanted to introduce people not used to it to Aristotelian epistemology, and to what this fine and true subject meant for the practical understanding and communication of uncertainty. But there’s no way to be complete about this without going on and on, at book length, by which time the reader, anxious to get to the “good stuff”, will have been put to sleep.

So out goes everything except the bare necessities. Besides, if readers are into that sort of thing, there are plenty of other books to read. What’s left is an explanation of what probability is, what it means to “do” modeling, how to communicate results properly, and how to purge the magical thinking from our midst.

I sent the outline to one well-known publisher, who that very same day wrote back and called my bluff. The editor labeled the proposal “intriguing” and said that it “raises a lot of important points” but then asked me to immediately ship off two chapters. Sure. As if these were ready, and as if, even if they were, I could pick the right two.

Finishing these chapters so that they are at least not embarrassing is what I’ll be doing for the next week.

Incidentally, the “Why?” which follows, suitably fleshed out, will become either the Preface or Chapter 1.

Why?

Fellow users of probability, statistics, and computer “learning” algorithms; physics and social science modelers; big data handlers; spreadsheet mavens; other respected citizens. We’re doing it wrong.

Not completely wrong: not everywhere: not all the time: but far more pervasively, far more often, and in far more places than you’d imagine.

What are we doing wrong? Probability, induction, statistics, the nature of causality, modeling, communicating results, expressing uncertainty. In short: everything.

Your natural reaction will be (this is a prediction based on observation and induction), “Harumph.” I can’t and shouldn’t put a probability measure to this guess, though. That would lead to over-certainty, which I will prove to you is already at pandemic levels.

You may well say “Harumph”, but consider: there are people who think statistical models are causal, that no probability can be known with certainty until the close of the universe, that probabilities can be read from mood rings, that induction is a “problem”, that randomness is a magical cause, that parameters exist, that computers learn, that models are realer than observations, that model fit is more important than model performance.

And that is only a sampling of the oddities which beset our field. How did we get this way? Best answer is that it is well known that the human race is insane.

More practically, our training lacks a proper foundation, a philosophical grounding. Introductory books plunge the student into data and never look back. The philosophical concepts which are necessarily present aren’t discussed well or openly. This is rectified if, and once, the student progresses to the highest levels, but by that time his interest has been turned either to mathematics or to solving specific problems. And when the student finally and inevitably weighs in on, say, “What models really are”, he lacks depth. Points are missed. Falsity is embraced.

So here is a philosophical introduction to uncertainty and the practice of probability, statistics, and modeling of all kinds. The approach is Aristotelian, even Thomistic. Truth exists, we can know it, we can sometimes (but not always) measure its uncertainty, and there are good and bad ways of doing so.

This isn’t a recipe book. Except for simple (but common: regression, “binomial”) examples, this book does not contain lists of algorithms. Rather, this is a guide on how to create such recipes and lists. It is thus ideal for students and researchers looking for problems upon which to work. The mathematical requirements are modest: this is not a math book.

Do I have everything right? Well, I’m as certain I do as you were that you had everything right before you read this introduction. One thing which is certain is that we’re not done.

14 replies

  1. “Not completely wrong: not everywhere: not all the time: but far more pervasively, far more often, and in far more places than you’d imagine.”

    “What are we doing wrong? Probability, induction, statistics, the nature of causality, modeling, communicating results, expressing uncertainty. In short: everything.”

    The two paragraphs seem to contradict each other–if it's everything, then it is completely wrong, everywhere, all the time. Perhaps add "virtually" in front of everything?

    “probability can known” =”probability can be known”, probably!

    realer=more real?

    You are quite right about how we arrive at this problem–I have degrees in psychology and chemistry, with a minor in philosophy. For the psychology, we had "kiddie stats"; I took "real stats" because I wanted to understand how stats really worked, and all the time I have the voice of my logic teacher in my head (yes, they have drugs for that now!) saying "think, think, think". It was a unique combination of degrees but gave me a clear view from all sides.

  2. Your proposal is worthwhile and fills a gap/need, but let me ask (and it is truly out of ignorance), are there any other books/texts on the philosophical foundations of probability/statistics?

  3. Bob,

    Sure, a few, but of varying direction. Try Howson & Urbach (Bayesian SomethingOrOther), for example. They expose frequentism fine but adopt subjective Bayes, which is the wrong thing to do. Bayes is okay, but not subjective. See: mood rings.

    Sheri,

    Realer, dammit!

  4. Terry: Not as hard as one might think. The stat course I took for psych majors had very little math in it. It was mostly about what the statistics can “prove” or “disprove”, how certain we are of the outcomes, etc. The actual math was mostly just mentioned but not actually included.

  5. typo: “…no probability can [be] known with certainty…”

    I’ve never ever heard anyone say “Harumph.” I’ve heard them say lots of other things and in much more aggressive terms.

    How about telling just one juicy example of how we’re doing it wrong, particularly one that illustrates the insanity of humans? We all like gawking at idiots.

  6. “What models really are”
    I thought they were just math. For instance, we electrical engineers use a computer program called SPICE to model electronic circuits. SPICE is nothing but a bunch of equations. Everybody knows the model is not the actual physical circuit; however, you can obtain useful information without having to build the circuit. This can save lots of time and money. SPICE can also give you nonsense answers if you aren't careful.
    http://bwrcs.eecs.berkeley.edu/Classes/IcBook/SPICE/

  7. Gary,

    This is only an outline, indeed only 400 words of the full 3,200 of the outline. It’s not meant to be comprehensive.

  8. Have you thought of self-publishing? There are several models in the tech world of self-publishing books in progress, which provide early revenue and feedback. Stating the obvious, data science is pretty hot and there are a lot of natural outlets to promote it in the tech and analytics community.

  9. A preface is about the book as an object. The "Why?" might be part of an introduction, or chapter 1.

  10. Subjective is where you start with the Bayesian approach. That approach works because it isn't subjective very long. In seventh grade I built Sci Am's game-playing matchbox and bead game (a rough sketch of the bead idea follows these replies). I didn't know it was machine learning. I didn't know it was Bayesian. But it worked. The probabilities converged to the win.

    Stewart Brand's book "How Buildings Learn" tells us that asymmetries and accumulations on one side or the other are learning. Neurons need not be involved.

    Thought leaves gaps compared to evolution addressing a constraint. Evolution has beaten thought since zero. Hell, it made thought.
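For anyone who never met the matchbox-and-bead machines from the old Scientific American columns, here is a minimal sketch, in Python, of the bead-reinforcement idea described in reply 10. The toy game, the move names, and the win rates are invented purely for illustration; this is not the commenter's device, only the draw-a-bead, reinforce-the-winners scheme in miniature.

    # Bead-reinforcement sketch: a "box" holds beads, one colour per candidate
    # move. The machine draws a bead at random to pick its move, then adds
    # beads for a move that won and removes one for a move that lost.
    # The moves and win rates below are invented for illustration only.
    import random

    WIN_PROB = {"a": 0.8, "b": 0.3, "c": 0.1}   # hypothetical chance each move wins
    beads = {move: 10 for move in WIN_PROB}     # start with equal beads per move

    def play_once():
        moves, counts = zip(*beads.items())
        # Probability of choosing a move is proportional to its bead count.
        move = random.choices(moves, weights=counts, k=1)[0]
        won = random.random() < WIN_PROB[move]
        if won:
            beads[move] += 3                    # reinforce a winning move
        elif beads[move] > 1:
            beads[move] -= 1                    # punish a losing move, never below one bead

    for _ in range(2000):
        play_once()

    total = sum(beads.values())
    for move, count in sorted(beads.items()):
        print(f"move {move}: {count / total:.2%} of beads")

Run it a few times: nearly all the beads end up on the best move, which is the sense in which the probabilities "converged to the win", with nothing resembling a neuron involved.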
