Got this email from VD. I’ve edited it to remove any personal information and to add blog-standard style and links. I answered, and I remind all readers of the on-going class, but I thought I’d let readers have a go at answering, too.
I greatly appreciate the wealth of material contained on your website, and I am an avid reader of both your articles and papers and a consumer of your videos/lectures/podcasts on YouTube. You bring a clarity to the oft misunderstood, and—to an uncultured pleb such as myself—seemingly esoteric field of magical, complex formulae known as statistics.
I have a twofold question: First, do you have any plans to produce a textbook for students utilizing the principles within Uncertainty: The Soul of Modeling, Probability and Statistics—something along the lines of an updated Breaking the Law of Averages? I confess I have not yet read Uncertainty but assure you that it is at the top of my books-to-purchase list (although I’m under the impression much of the content therein is elucidated on your blog). If Uncertainty is the book I’m looking for then please let me know. I am also working through Breaking the Law and find it extremely helpful, lacking only in solutions to check my work.
If I simply need to go through Breaking the Law a few more times, please let me know if that’s the best route. In any event, I would appreciate a sequel that is an even better synthesis of the ideas since developed and distilled in Uncertainty, while also functioning as an introductory-to-intermediate text on logical probability/objective Bayesian statistics. I appreciate your approach of using logic, common sense, and observation to quantify the uncertainty for a given set of premises, rather than becoming so consumed with parameter fiddling that I forget the nature of the problem I am trying to solve.
Second, if no new book is in the works, do you know of any good textbooks or resources for undiscerning novices such as myself for learning logical probability/objective Bayesian statistics that aren’t inundated with the baggage of frequentist ideals or the worst parts of classical statistics, baggage still dragged around by many of the currently available textbooks and outlets for learning statistics? It seems every other book or resource I pick up has at least a subset of the many errors and problems you’ve exposed and/or alluded to in your articles. If no such “pure” text exists, can you recommend one with a list of caveats? I have also found a copy of Jaynes’ Probability Theory, so I’ve added that to the pile of tomes to peruse. Since reading your blog I now make a conscious effort to mentally translate all instances of “random”, “chance”, “stochastic”, etc. to “unknown”, as well as actively oppose statements that “x entity is y-distributed (usually normally, of course!)” and to recognize the fruits of the Deadly Sin of Reification (models and formulae, however elegant, are not reality).
I currently work to some degree as an analyst in Business Intelligence/Operations for a [large] company—a field where uncertainty, risk, and accurate predictive modeling are of paramount importance—and confess my grasp of mathematics and statistics is often lacking (I am in the process of reviewing my high school pre-calculus algebra and trigonometry so I can finally have a good-spirited go at calculus and hopefully other higher math). I think my strongest grasp at this point is philosophy (which I studied in undergrad with theology and language), and then logic and Boolean algebra, having spent a bit of time in web development and now coding Business Intelligence solutions. It’s the math and stats part that’s weak. If only I could go back 10 years and give myself a good talking to; hindsight’s 20-20 I suppose.
While not aiming to be an actuary by any measure, I want to be able to understand statements chock full of Bayesian terminology like the following excerpt from an actuarial paper on estimating loss. I want to discern whether such methods and statistics are correct:
“We will also be assuming that the prior distribution (that is, the credibility complement, in Bayesian terms) is normal as well, which is the common assumption. This is a conjugate prior and the resulting posterior distribution (that is, the credibility weighted result) will also be normal. Only when we assume normality for both the observations and the prior, Bayesian credibility produces the same results as Bühlmann-Straub credibility. The mean of this posterior normal distribution is equal to the weighted average of the actual and prior means, with weights equal to the inverse of the variances of each. As for the variance, the inverse of the variance is equal to the sum of the inverses of the within and between variances (Bolstad 2007).” (Uri Korn, “Credibility for Pricing Loss Ratios and Loss Costs,” Casualty Actuarial Society E-Forum, Fall 2015).
I understand maybe 25% of the previous citation.
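[For the curious: stripped of the credibility jargon, the excerpt describes the standard normal-normal conjugate update. In my notation (not Korn’s), if the prior is normal with mean $\mu_0$ and “between” variance $\tau^2$, and the observed mean $\bar{x}$ has “within” variance $\sigma^2$, then the posterior is again normal with

$$\mu_{\text{post}} = \frac{\bar{x}/\sigma^2 + \mu_0/\tau^2}{1/\sigma^2 + 1/\tau^2}, \qquad \frac{1}{\sigma^2_{\text{post}}} = \frac{1}{\sigma^2} + \frac{1}{\tau^2};$$

that is, the posterior mean is the inverse-variance-weighted average of the actual and prior means, and the posterior precision is the sum of the within and between precisions.]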
My end goal is to professionally utilize the epistemological framework given on your blog and in Uncertainty. I want to be able to do modeling and statistics the right way, based on reality and observables, without the nuisances of parameters and infinity if they are not needed. I deal mostly with discrete events and quantifications bounded by intervals far smaller than (-infinity, +infinity) or (0, infinity).
I appreciate any advice you could share. Thank you sir!
Cordially,
VD
I suggest to the novice Larry Gonick’s “Cartoon Guide to …” series. Statistics, algebra, calculus, chemistry, physics, etc. They’re quite readable, and not actually for children.
I’d recommend Stanford’s (free) book and (free) online course An Introduction to Statistical Learning with Applications in R, taught by Hastie, Tibshirani, and their students. For a reader of Briggs who wants to reify the ideas of the blog and books, the caveat is that you must actually download, install, and run R and RStudio (dead easy on any platform) and work through all the examples and assignments – which I think are designed to be as straightforward as possible.
While Hastie and Tibshirani are highly empirical, I make no attempt to align their philosophy with Briggs’s – but they do seem to share the view that predictive performance against out-of-sample data is the primary concern of the working statistician.
The real value in re Briggs is that you will gain some ability to test the ideas discussed: for example, building a model and then making predictions about things that did not exist when it was built. It is also helpful that Briggs uses R. Finally, to the degree that the instructors present methods of which Briggs is critical (or if you explicitly seek out R packages implementing such methods), you gain a hands-on understanding of that criticism.
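To make the out-of-sample habit concrete, here is a minimal sketch in base R (invented data and names, not an exercise from the course): fit only on the observations you had, then judge the model by its predictions on observations held back from the fit.

```r
# Minimal out-of-sample sketch: invented data standing in for anything real.
set.seed(42)

n   <- 200
x   <- runif(n, 0, 10)
y   <- 3 + 2 * x + rnorm(n, sd = 2)
dat <- data.frame(x = x, y = y)

train <- dat[1:150, ]   # "the past": what existed when the model was built
test  <- dat[151:200, ] # "the future": observations withheld from the fit

fit  <- lm(y ~ x, data = train)      # build the model on the training rows only
pred <- predict(fit, newdata = test) # predict the withheld observations

# Judge the model by how far its predictions fall from what actually happened.
sqrt(mean((test$y - pred)^2))        # root mean squared prediction error
```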
– B
“Data Analysis: A Bayesian Approach” (by Sivia and Skilling) is an introductory book to statistics along the lines of Jaynes’ objective Bayesianism. However, I don’t think the book is any easier to read than Briggs’ Uncertainty. I guess the only _really_ introductory book might be Breaking the Law of Averages, which, unfortunately, lacks solutions.
Got another one – two actually. BTW, I am interpreting the original request as “given that I follow Briggs together with some examples of what I’m comfortable with technically, what works am I most likely to be interested in?”
Causality: there was once a guest post here which spoke of the notion, questioning whether the trigger of a gun causes the bullet to fire. I noticed that in the discussion that followed, nobody compared the validity of that model against the Nobel Prize-winning notion that if one is half a mile away from the gun, then the muzzle flash is the cause of the sound that one hears 2.3 seconds later (at sea level, 68 degrees F).
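(For what it’s worth, the timing is just distance over the speed of sound: at sea level and 68 degrees F sound travels at roughly 1,125 ft/s, and half a mile is 2,640 ft, so $2640/1125 \approx 2.3$ seconds.)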
It is certain that said Nobel Prize, awarded to an economist, caused many engineers and physicists to cry.
Anyway, Judea Pearl has not one but two new books: one is sort of a “Causality 101” intended for people with introductory-level probability and statistics under their belt, and it’s relatively cheap – thirty bucks for the e-book on Google Play. It’s co-written with a couple of other authors who, I suspect, were charged with keeping the book readable (I’ve found many passages in Pearl’s other books to be needlessly difficult). I’ve just skimmed it and gone through the early sections; it looks good, and I’m planning on giving it a thorough go. In theory I know the material, but I’ve always found that’s not a barrier to learning something new from a well-written introductory text.
Second is Pearl’s general-readership book on causality – once again, written with a science journalist to keep things moving along. I have not listened to it yet – but will start in about three minutes when I finish this and go for a run. Anyway, to talk about causality and not give Pearl some scrutiny would be like studying behavioral biases without reading Kahneman’s book. Even if you question the work or the premises, you must know the thing you question.
And speaking of things that call the foundations of behavioral biases into question …
PS – Everyone should read (or listen to the audiobook) “Algorithms to Live By” by Christian and Griffiths – it’s written for a general audience, funny, entertaining, and somehow also manages to be the best book I – as a Computer Scientist – have ever encountered on the fundamentals of Computer Science. All the people I know who read it have enjoyed it, independent of whether they are quanty or fuzzy.
While we’re mentioning “Breaking the Law of Averages”, could we have the pages relating to the book restored? Its big brother seems to have shouldered it aside.
Thanks for the suggestions!
McChuck, I’ll check out your suggestions because the format looks like an intriguing and unorthodox approach.
Pedro, I’ll add that one to the list.
B. Student, your intuition “given that I follow Briggs together with some examples of what I’m comfortable with technically, what works am I most likely to be interested in?” was correct. I am not an absolute beginner in stats and math (I can do a little calculus and at this point can conceptually understand some ideas even if I don’t know how to solve them yet), but I’m certainly no mathematician or engineer (my undergraduate studies were mostly non-math related). Do you know the titles of those two books that you suggested?
[Sussibar]
The Book of Why by Judea Pearl et al. (the more general one, though I was incorrect to call it general-readership; no problem at all for your level, but I’m not sure that audiobook-plus-running works for this one).
He’s a bit over the top on what a great contribution he’s made. He certainly was the one who lit the fire and built the foundation, but he gives the impression, perhaps because he believes it, that it was through his hard-fought effort that the ideas spread and eventually caught on. No, actually engineers and computer scientists immediately took his ideas, which filled gaps in what they were already doing, and ran with them. All that being said, it is still worth reading so far, especially because he promises to get into the story of R.A. Fisher’s nemesis Sewall Wright, who championed the exploration of causality and was constantly being smacked down.
Causal Inference in Statistics: A Primer by Judea Pearl et al. It’s very good, full stop.
Judea Pearl is a fairly well-known figure outside of CS/Stats too, but it is unfortunately because he is the father of Danny Pearl, the Wall Street Journal reporter kidnapped and murdered by terrorists in Pakistan.
Also, I cannot stress enough that, though it may seem off-topic, in its own way Algorithms to Live By by Christian and Griffiths is as key to the ideas of probability and action-selection problems as any statistics book. To say any more than that would be a spoiler.