Statistics, which is to say probability, is supposed to be about uncertainty. You would think, then, that the goal of the procedures developed would be to quantify uncertainty to the best extent possible in the matters of interest to most people. You would be wrong. Instead, statistics answers questions nobody asked. Why? Because of mathematical slickness and convenience, mostly.
The result is a plague, an epidemic, a riot of over-certainty. This means you, too. Even if you’re using the newest of the new algorithms, even if you have “big” data, even if you call your statisticians “data scientists”, and even if you are pure of heart and really, really care.
What are some (not all) of the indicators that you’re doing it wrong?
- You were seduced by a wee p-value: “Ooh, this one is 0.001!”
- You performed a hypothesis test: “I like to say ‘null hypothesis.’”
- You examined a posterior: Not that kind.
- You have used the word “learning” in the context of algorithms: Lift yourself up by the bootstraps.
- You thought “randomizing” was a good thing: adding noise makes you surer?
- You thought of your situation in a predictive sense: no, wait, this is a good thing.
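The first bullet is easy to demonstrate for yourself. Here is a minimal sketch (my illustration, not anything Briggs endorses as a method) of how a “wee p-value” seduces: crank up the sample size and even a practically negligible effect yields a tiny p-value, which is then mistaken for an important discovery. The shift of 0.01 and the known-variance z-test are assumptions chosen purely for simplicity.

```python
import math
import random

random.seed(1)

n = 1_000_000
true_shift = 0.01  # a practically negligible effect
data = [random.gauss(true_shift, 1.0) for _ in range(n)]

mean = sum(data) / n
se = 1.0 / math.sqrt(n)  # known sigma = 1, for simplicity
z = mean / se
p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value

print(f"mean = {mean:.4f}, z = {z:.2f}, p = {p:.2e}")
```

The p-value comes out wee indeed, yet the effect is of no conceivable practical consequence: the over-certainty is manufactured by n, not by the world.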
So where’s the talk? Right where you want it to be. For a small fee, of course. Well, maybe not so small: but worth every smallest-unit-of-currency-you-can-imagine. Teaser: I do not use PowerPoint. See these articles for a preview of the sort of fun you can expect (of course I’m opinionated: that is the point).
W.M. Briggs is a Data Philosopher, or Scientist, if you like; or if you don’t like, he is also that well-known Statistician to the Stars! and adjunct Professor of the same subject at Cornell. His specialty is the philosophy of probability and statistics, a field of study even less remunerative than you’d guess. Use the Contact Page to inquire about availability.