William M. Briggs

Statistician to the Stars!

The Hobbit Reviewed—Guest Post by John Henry Briggs

The Hobbit

The Hobbit was quite good, but disappointing.

One of the biggest flaws in Lord of the Rings was the endless, video-game-like orc slaughter, gratuitous at best and silly at worst (e.g., Legolas skating down the stairs of Helm’s Deep on a shield while shooting orcs). The Hobbit continues that tradition: the goblins seem capable of nothing more than getting their heads chopped off. Any tension that might have existed when our heroes are surrounded by goblins is preemptively dissipated.

The much-touted high frame rate makes everything feel hyper-realistic, but that only serves to drag the modern CG back 15 years. The higher the filming quality, the harder it is to trick the brain with CG.

The movie suffered the same fate as The Hitchhiker’s Guide to the Galaxy, another beloved sci-fi/fantasy novel adapted for the screen starring Martin Freeman: where the movie followed the book, it was good enough, and where it didn’t, it fell short.

As a pretty devout fan of Tolkien, I noticed a lot of little things that made the movie worse, errors that someone who doesn’t own a replica One Ring wouldn’t catch. For example, the composer, Howard Shore, reused the Witch-King’s leitmotif from LotR for the Goblin King and for Azog (whose role was extended for the movie), which felt like a lazy attempt to connect the two series thematically. Almost the whole soundtrack is borrowed wholesale from Lord of the Rings.

The reason why the movie didn’t work is that Peter Jackson apparently felt he was not done telling the story of Lord of the Rings and decided that The Hobbit was a good way to deliver it. We see the dark lord Sauron gathering his strength and the White Council deliberating their plan of action, which is unneeded in an action/adventure type movie; an allusion here and there is all that was needed. But the type of obsession that caused Peter Jackson to have all the props created just for the movies (chainmail and all) is what caused these other unneeded details.

It looked and felt a lot like Lord of the Rings, with the sweeping shots of mountains and forests, and, as I mentioned, the soundtrack doesn’t help to differentiate the two series. I went in hoping for a more magical feel, for lack of a better word. The world of The Hobbit, though of course the same, felt more innocent and less dismal than that of Lord of the Rings, and that difference is not captured here. The Hobbit has a dragon and a battle of five armies. The Lord of the Rings has the Elves leaving Middle-earth, the Shire being destroyed (not in the movies, though), and the One Ring scarring both Frodo and Bilbo.

I’ve been focusing on the negatives because of the disappointment, but there is a good movie here. All the actors did a good job, and Martin Freeman is great as always. I wish they had spent more time on the dwarves; only half of them are familiar, and the rest seem unimportant. The 3D was quite excellent, even though I really dislike the fad. And at the end, it somehow didn’t feel like 2 hours and 40 minutes had passed. I wanted it to keep going despite all my complaints. Though Radagast riding a sled pulled by a team of rather large rabbits was absurd, to say the least.

I wouldn’t be surprised if fan edits with all the extra stuff cut out end up being sold at conventions, and I will buy one.

John Henry Briggs is the number-two son of Yours Truly.

The Data Is The Data, Not The Model: With Climatology Time Series Example

How not to plot

The following plot was sent to me yesterday for comment. I cannot disclose the sender, nor the nature of the data, but neither is the least bit essential to understanding this picture and what has gone horribly, but typically, wrong.

How not to think about a time series

There is one data point per year, measured unambiguously, with the values ranging from the mid 20s to the high 50s. Let’s suppose, to avoid tortured language, that the little round circles represent temperatures at a location, measured, as I say, unambiguously, without error, and such that the manner of measurement was identical each year.

What we are about to discuss applies to any—as in any—plot of data measured in this fashion. It could be money instead of temperature, or counts of people, or numbers of an item manufactured, etc. Do not fixate on temperature; it is merely handy for illustration, since the abuses of which we’ll speak are common there.

The little circles, to emphasize, are the data, and they are accurate. There is nothing wrong with them. As the box to the right tells us, there are 18 values. The green line represents a regression (values as a linear function of year); as the legend notes, the gray area shows the 95% confidence limits. Let’s not argue here about why frequentism is wrong and why, if anything, we should have produced credible intervals; just imagine these are credible intervals. The legend also has an entry for “95% Prediction Limits”, but these aren’t plotted. Ignore this for now. The box to the right gives details on the “goodness” of fit of this model: R-Square, MSE, and the like.
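To make the picture concrete, here is a minimal sketch of how such a plot is typically constructed: a straight line fit by ordinary least squares plus a 95% confidence band for that line. Python is used for illustration (R would do as well), and the years, values, and seed are all invented; the real series behind the plot was not disclosed.

```python
import numpy as np

# Invented stand-in for the 18 yearly values (1993-2010); the actual
# series behind the plot was not disclosed.
rng = np.random.default_rng(42)
years = np.arange(1993, 2011)                  # 18 years
temps = rng.uniform(25, 58, size=years.size)   # values mid-20s to high-50s

# Ordinary least squares: temps as a linear function of year.
n = years.size
xbar, ybar = years.mean(), temps.mean()
Sxx = np.sum((years - xbar) ** 2)
slope = np.sum((years - xbar) * (temps - ybar)) / Sxx
intercept = ybar - slope * xbar
fitted = intercept + slope * years

# Residual standard error on n - 2 = 16 degrees of freedom.
s = np.sqrt(np.sum((temps - fitted) ** 2) / (n - 2))

# 95% confidence band for the fitted *line* (t critical value for
# 16 df hard-coded to avoid extra dependencies).
t_crit = 2.120
half_width = t_crit * s * np.sqrt(1 / n + (years - xbar) ** 2 / Sxx)
lower, upper = fitted - half_width, fitted + half_width
```

Note that the band quantifies uncertainty in the fitted line’s position, not in the observations themselves; that distinction does a lot of work in what follows.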

Questions of the data

Now let me ask a simple question of this data: did the temperature go down?

Did you say yes? Then you’re right. And wrong. Did you instead say no? Then you too are right. And just as wrong.

The question is not simple, and it is ill phrased: as written, it is ambiguous. Let me ask a better question: did the temperature go down from 1993 to 2010? The only answer is yes. What if instead I asked: what is the probability the temperature went down from 1993 to 2010? The only answer (given this evidence) is 1. It is 100% certain the temperature decreased from 1993 to 2010.

How about this one? Did the temperature go down from 1993 to 2007? The answer is no; it is 100% certain the temperature increased. And so forth for other pairs of dates (or other precise questions). The data is the data.

Did the temperature go down in general? This seems to make sense; the eye goes to the green line, and we’re tempted to say yes. But “in general” is ambiguous. Does it mean that, from year to year, there were more decreases than increases? There were half of each: so, no. Does the question mean that temps in 2001 or 2002 were lower than in 1993 but higher than in 2010? Then yes, but barely. Does it mean I should take the mean of the temps from 1993 to 2001 and compare it against the mean from 2002 to 2010? Then maybe (I didn’t do the math).
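A quick sketch of how each precise version of the question gets a certain answer. The yearly temperatures are made up (the real data were not disclosed); the point is the questions, not these particular numbers.

```python
# Made-up yearly temperatures standing in for the undisclosed data.
years = list(range(1993, 2011))
temps = [31, 35, 29, 40, 44, 38, 50, 47, 25, 39,
         52, 46, 41, 55, 57, 33, 36, 28]
data = dict(zip(years, temps))

# Precise questions have certain answers: the data is the data.
went_down_1993_2010 = data[2010] < data[1993]   # 28 < 31: True
went_down_1993_2007 = data[2007] < data[1993]   # 57 < 31: False

# "In general" must first be made precise. One reading: count
# year-to-year rises versus falls.
rises = sum(b > a for a, b in zip(temps, temps[1:]))
falls = sum(b < a for a, b in zip(temps, temps[1:]))

# Another reading: compare the mean of the first nine years
# against the mean of the last nine.
first_half_mean = sum(temps[:9]) / 9
second_half_mean = sum(temps[9:]) / 9
```

Every one of these quantities is known with certainty once the question is stated precisely; no model is consulted anywhere.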

Asking an ambiguous question lets the user “fill in the blank,” so different opinions can be had merely because nobody is being precise. What we should do is just plot the data and leave it at that. Any precise question we ask of it can be answered with 100% certainty. The data is the data. That green line—which is not the data—and particularly that gray envelope are an enormous distraction. So why plot them?

What is a trend?

It appears as if somebody asked: was there a trend? Again, this is ambiguous. What’s a “trend”? This person thought it meant the straight line one could draw with a regression. In effect, this person claimed it was 100% certain that this regression model was the right one; that no other model could represent the uncertainty in the observed data. But there are many, many, many other meanings of “trend,” and many other models remain possibilities.

No matter which model is chosen, no matter what, the data trumps the model. The green line is not the data. The data is the data. It makes no sense to abandon the data and speak only of the model (or its parameters). You cannot say “temperatures decreased,” for we have already seen this is false or true depending on the years chosen. You can say “there was a negative trend,” but only conditional on the model being true. And even then a negative trend in the model does not always correspond to a negative change in the data.

Assume the regression is the best model of our uncertainty. Is the “trend” causal? Does that regression line (or its parameters) cause the temperatures to go down? Surely not. Something physical causes the data to change; the model does not. There are no hidden, underlying forces which the model captures. The model is only a model of the data’s uncertainty, quantifying the chance the data takes certain values.

But NOT the observed data. Just look at the line: it passes through only one data point. The gray envelope contains half or fewer of the data points, not 95% of them. In fact, the model is SILENT on the actual, already observed data, which is why it makes no sense to plot a model on the data, when the data does not need this assistance. Since the model quantifies uncertainty, and there is no uncertainty in the observed values, the model is of no use to us here. It can even be harmful if, as many do, we substitute the model for the data.

We cannot, for instance, say “The mean temperature in 2001, according to the model, was 38.” This is nonsensical. The actual temperature in 2001 was 25, miles away from 38. What does that 38 mean? Not a thing. It quite literally carries no meaning, unless we consider it another way of saying “false.” It was 100% certain the temperature in 2001 was 25, so there is no plus-or-minus to consider, either.

What’s a model for?

Again I say, the data is the data, and the model something else. What, exactly?

Well, since we are supposing this model is the best way to represent our uncertainty in the values the data will take, we apply it to new data, yet unseen. We could ask questions like, “Given the data observed and assuming the model true, what is the probability that temperatures in 2011 are greater than 40?” or “Given etc., what is the probability that temps in 1992 were between 10 and 20?” or whatever other years or numbers tickle our fancy. It is senseless, though, to ask questions of the model about data we have already seen. We should just ask the data itself.
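Assuming, for illustration only, that the model’s predictive distribution for a new year is normal, such a question becomes a one-line calculation. The predictive mean and standard deviation below are invented, not derived from the real series.

```python
import math

# Hypothetical predictive mean and standard deviation for 2011,
# assuming (for illustration) a model with normal predictive errors;
# these numbers are invented, not derived from the real series.
mean_2011 = 36.0
sd_2011 = 11.0

def prob_greater(threshold, mean, sd):
    """Pr(X > threshold) for X ~ Normal(mean, sd), via the error function."""
    z = (threshold - mean) / (sd * math.sqrt(2.0))
    return 0.5 * math.erfc(z)

# "Given the data observed and assuming the model true, what is the
# probability that temperatures in 2011 are greater than 40?"
p = prob_greater(40.0, mean_2011, sd_2011)   # roughly 0.36 here
```

The answer is a genuine probability only because 2011 is unseen; for an already observed year the probability of what happened is 1, and the model has nothing to add.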

Then we must wait, and this is painful, for waiting takes time. A whole year must pass before we can even begin to see whether our model is any good. Even then, it might be that the model “got lucky” (itself an ambiguous notion), so we’d want to wait several years to quantify our uncertainty that the model is good.

This pain is so acute in many that they propose abandoning the wait and substituting for it measures of model fit (the R-Squared, etc.). These being declared satisfactory, the deadly process of reification begins: the green line becomes reality and the circles fade to insignificance (right, Gav?). “My God! The temperatures are decreasing out of control!” Sure enough, by 2030 the world looks doomed—if the model is right.

Measures of model fit are of very little value, though, because we could always find a model which recreates the observed data perfectly (fun fact). That is, we can always find a better fitting model. And then we’d still have to wait for new observations to check it.
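The “fun fact” is easy to demonstrate: any n points with distinct x-values can be interpolated exactly by a polynomial of degree n - 1, giving a “perfect” fit that predicts nothing. A sketch with invented numbers:

```python
import numpy as np

# Six invented points; any n points with distinct x-values admit an
# exact fit by a polynomial of degree n - 1.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([31.0, 44.0, 25.0, 52.0, 41.0, 28.0])

coeffs = np.polyfit(x, y, deg=len(x) - 1)   # degree 5 through 6 points
fitted = np.polyval(coeffs, x)

residuals = y - fitted                      # essentially zero everywhere
r_squared = 1 - residuals.var() / y.var()   # 1.0 to machine precision
```

A perfect R-Squared, and a model that would swing wildly the moment it is asked about a seventh point: fit statistics alone cannot tell you this.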

Lastly, if we were to plot future values, then we’d want to use the (unseen) prediction limits, and not the far-far-far-too-narrow confidence limits. The confidence limits have nothing to say about actual observable data and are of no real use.
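The gap between the two kinds of limits is easy to see numerically: the standard formulas differ only by a “1 +” under the square root, which makes the prediction limits strictly wider at every year. The data below are invented for illustration.

```python
import numpy as np

# Invented yearly values standing in for the undisclosed series.
rng = np.random.default_rng(0)
years = np.arange(1993, 2011).astype(float)
temps = rng.uniform(25, 58, size=years.size)

# Ordinary least squares fit.
n = years.size
xbar = years.mean()
Sxx = np.sum((years - xbar) ** 2)
slope = np.sum((years - xbar) * (temps - temps.mean())) / Sxx
intercept = temps.mean() - slope * xbar
fitted = intercept + slope * years
s = np.sqrt(np.sum((temps - fitted) ** 2) / (n - 2))
t_crit = 2.120  # two-sided 95% t value for 16 degrees of freedom

# Confidence limits: uncertainty in the fitted *line*.
conf_half = t_crit * s * np.sqrt(1 / n + (years - xbar) ** 2 / Sxx)
# Prediction limits: uncertainty in an actual *observation*; note
# the extra "1 +", which makes them strictly wider.
pred_half = t_crit * s * np.sqrt(1 + 1 / n + (years - xbar) ** 2 / Sxx)

inside_conf = int(np.sum(np.abs(temps - fitted) <= conf_half))
inside_pred = int(np.sum(np.abs(temps - fitted) <= pred_half))
```

With data this noisy the confidence band typically covers only a fraction of the circles, while the prediction band, being built for observations, covers nearly all of them.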

Today’s lesson

The data is the data. When desiring to discuss the data, discuss the data; do not talk about the model. The model is always suspect until it can be checked, and checking always takes more time than people are willing to give.

Fats Waller—Your Feet’s Too Big

I was reminded of Fats because an album of his was one of the best Christmas presents I ever received. And ’tis the season.

On-Line Statistics Course: Ideas And Your Opinions

We talked about this a couple of months ago, but now the time is nigh to create an on-line statistics course, or courses.

There are several problems to solve: content, manner of delivery, costs, credit or acknowledgement, and advertising. I’ll sketch my thoughts and request yours.


Content

Statistics ranges from pure philosophy to pure mechanics, and I have the idea that more people would enjoy the latter. I mean, I could offer courses on epistemology, philosophy of science, point (God help us) estimation, measure theory, Bayesian theory, and the like, with more or less technical content, but I think the audience for such material is limited, and those wanting it are more anxious for “credit,” discussed below.

Instead, my guess is that practicum would be more popular. Courses like: Introductory Data Analysis, Introduction to R, Advanced Modeling, Regression, Predictive Analytics, and the like, again with more or less meatiness depending on the audience.

Therefore, at the start, I propose three: (1) Introduction to the New Statistics: Bayes, Prediction, and All That. (2) Introduction to R. (3) Philosophy of probability. If you have other suggestions, please do list them below. I’m open to most things.

Introduction to the New Statistics: Bayes, Prediction, and All That

A sketch of the two major philosophies of probability, settling on the correct one (probability as logic, Bayesian statistics). Evidence, argument, logic. Learning to count. Basic probability models. Uncertainty. Modeling: regression and logistic regression. Students must find, present, and explain their own data sets, with guidance and within certain limitations. A project in which this data is analyzed and explained fully, complete with an explanation of the many reasons why the results might be wrong, comprises the grade. This is the course I teach, with some success, at Cornell, even if I say so myself. The R software (free) will be used, but the computer work is not the main focus.

Introduction to R

Reading in, storing, outputting, and manipulating data. GUIs, why they’re nice and why they should not be used. Data frames, variables, basic coding, modeling, graphics (base, lattice, an introduction to ggplot2). The world of packages (plug-ins of analysis software, all freely available). Interacting with JAGS and Excel. A project comprises the grade (a file or files of code designed to do a set task).

Philosophy of probability

A reading and discussion course, with some but minimal lectures (videos). Sketch of authors list (incomplete): Aristotle, Laplace, Keynes, Hacking, Ramsey, Carnap, Jeffreys, de Finetti, Howson & Urbach, Williamson, Stove, Jaynes. Plus a few more modern papers. Some of these are too technical or mathematical, so only select pieces of authors are used. A paper—and a phone call with me—discussing some aspect of probability comprises the grade.

Manner of delivery

I could do the whole thing here, in a separate room on the site, complete with videos, emails, chat sessions, and so forth. Or I could use a site like Udemy or StraighterLine. The advantages of sites like these are: infrastructure already in place, easy course creation, the possibility of “credit,” and outsourced billing. The disadvantages: courses can get lost (there are many on offer), I’d have to share the money with the site, and they are a little more impersonal.

Another advantage is that the material is paced to your ability. The course does not have to start and stop in (say) twelve weeks for everybody. Some might blaze through in six weeks, others might take fifty-two; some might join at the beginning and then leave, and others might skip the intro and wade in downstream. All would be fine.


Costs

It’s got to cost something. How much? That is, what weight of green stuff are you willing to part with to take these courses? Let me know below. When answering, remember the beauty of the phrase, “Give ’till it hurts.” The good news is that different courses can cost different amounts. The Philosophy of Probability course would be cheapest, because most of the burden of the material is on you, the student. The others would cost more because they’d take more of my time.

Credit or acknowledgement

At the least, anybody who takes a course with me can ask for a letter or a phone call acknowledging that they took the course, in which I could give my opinion of the student’s ability. I could also devise a certificate; sites like Udemy have such a mechanism already in place. They also have approaches to (and here is where it gets complicated) official accreditation, wherein courses might—I say might—transfer in part or in whole to more formal institutions of higher learning. Instead of my going on about this, I encourage you to wade through the sites linked above, or this article, and learn more.


Advertising

How best to get the word out? It would be quite an investment of my time to create these courses, particularly the first two, so that they fit well on-line. I’d hate to make that investment only to find a handful of students. Advertising is where I’m weakest. I’m terrible at blowing my own horn, finding it embarrassing. Yet this is how one must get on, so I’m prepared to do it. I just don’t know how.

All ideas welcome. I envision, if all goes well, beginning this or these courses early next year, perhaps February or March. Thanks everybody!


© 2017 William M. Briggs
