Global Average Temperature: An Exceedingly Brief Introduction To Bayesian Predictive Inference

Update This post is mandatory reading for those discussing global average temperature.

I mean it: exceedingly brief and given only with respect to a univariate time series, such as an operationally defined global average temperature (GAT). Let him that readeth understand.

GAT by year are observed (here, assumed without error). If we want to know the probability that the GAT in 1980 was lower than in 2010, then all we have to do is look. If the GAT in 1980 was less than the GAT in 2010, then the probability that the GAT in 1980 was lower than in 2010 is 1, or 100%. If you do not believe this, you are a frequentist.

Similarly, if you ask what is the probability that the GAT in the 2000s (2001-2010) was higher than in the 1940s (1941-1950), then all you have to do is (1) settle on an operational definition of "higher," and (2) just look. One such operational definition is that the warmer decade is the one containing the greater number of warmer years. If the number of warmer years in the 2000s outnumbers the tally of warmer years in the 1940s, then the probability that the 2000s were warmer than the 1940s is 1, or 100%.

There is no model needed to answer these or similar simple questions.
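The decade comparison above can be sketched in a few lines of Python. The anomaly values here are invented purely for illustration; substitute the observed record and the logic is unchanged.

```python
# The 2000s-vs-1940s comparison needs no model: just look and count.
# These anomaly values are hypothetical, for illustration only.
gat_1940s = [-0.10, 0.02, 0.05, 0.08, 0.13, 0.04, -0.07, -0.03, -0.11, -0.10]  # 1941-1950
gat_2000s = [0.40, 0.46, 0.47, 0.45, 0.48, 0.44, 0.41, 0.36, 0.44, 0.47]       # 2001-2010

# One operational definition of "higher": pair the decades year by year
# and count which decade supplies the warmer year more often.
warmer_2000s = sum(a > b for a, b in zip(gat_2000s, gat_1940s))
warmer_1940s = sum(b > a for a, b in zip(gat_2000s, gat_1940s))

# Under this definition the answer is deduced, not estimated: the
# probability is 1 if the 2000s win the count, 0 if they do not.
prob_2000s_warmer = 1.0 if warmer_2000s > warmer_1940s else 0.0
print(warmer_2000s, warmer_1940s, prob_2000s_warmer)
```

Change the operational definition and the count changes, but the answer remains a matter of observation, not inference.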

If you want to ask what is the probability that the GAT increased by at least X degrees C per year from 1900 to 2000, then all you have to do is look. If the GAT increased by at least X degrees C per year from 1900 to 2000, then the probability that the GAT increased by at least X degrees C per year from 1900 to 2000 is 1, or 100%. There is no need, none whatsoever, to ask whether the observed increase of at least X degrees C per year was “statistically significant.” The term is without meaning and devoid of interest.

At this writing, the year is 2011, but the year is incomplete. I have observed GATs from at least 1900 until 2010. I want to know the probability that the GAT in 2011 (when complete) will be larger than the GAT (as measured) in 2010. I cannot observe this now, but I can still compute the probability. Here is how.

I must propose a model which relates the GAT to time. The model can be fixed, meaning it assumes that the GAT increases X degrees C a year: by which I mean, it does not increase by X – 0.1, nor by X + 0.3, nor by any other number besides X. In my model, the predicted GAT in 2011 will be the GAT as it was in 2010 plus X. Conditional on this model—and on nothing else—the probability that the GAT in 2011 is larger than the GAT in 2010 is 1, or 100%. This is not necessarily the same as the probability that the eventually observed GAT in 2011 is larger than the GAT in 2010.

It is easy to see how I might adjust this fixed model by assigning the possible increase to be one of several values, each with a fixed (in advance) probability of occurring. I might also eschew fixing these increases and instead assume a parametric form for the possible increases. The most commonly used parametric form is a straight line (which has at least three parameters; there are different kinds of straight lines used in time series modeling). How do I know which kind of parametric model to use? I do not: I guess. Or I use the model that others have used because conformity is both pleasing and easy.

I choose the straight line which has, among its parameters, one indicating the central tendency of a probability distribution related to—but not identical with—the increase in GAT through time. To call this parameter the “trend” can only cause grief and misunderstanding. This parameter is not, and cannot be, identical with the observed GAT.

Bayesian statistics allows me to say what values this parameter (and all the other parameters) is likely to take. It will allow me to say that, if this model is true and given the past years’ GATs, then the probability the parameter is greater than 0 is y, or Y%. This is the parameter posterior distribution. Suppose that y = 0.9 (Y = 90%). Can I then answer the question what is the probability that the GAT in 2011 is larger than the GAT in 2010? NO. This is the only probability that means anything to me, but I cannot yet answer it. What if y = 0.999999, or however many 9s you like: can I then say what is the probability the GAT in 2011 is larger than the GAT in 2010? No, no, and no, with just as many “no”s as 9s. Again, “statistical significance” of some parameter (mistakenly called “trend”) is meaningless.

However, Bayesian statistics allows me to take the parameterized model and to weight it by each possible value of the parameters. The end result is a prediction of the possible values of the GAT in 2011, complete with a probability that each of these possible values is the true one, assuming the model is true. This is the posterior predictive distribution; it is free of all parameters and only speaks in terms of observables, here year and GAT.

I can use the posterior predictive distribution and directly ask what is the probability that the GAT in 2011 is larger than the GAT in 2010. This probability assumes the model is true (and assumes the previous values of GAT are measured without error).
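The whole exercise can be sketched in Python for a straight-line model. Everything below is hypothetical: the "GAT" series is simulated, the flat-prior conjugate results for normal linear regression stand in for whatever model one actually fits, and none of it is Muller's (or anyone's) real analysis. The point is the contrast between the parameter posterior and the posterior predictive.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical GAT anomalies, 1900-2010: a gentle trend plus noise
# stands in for the real record, purely for illustration.
years = np.arange(1900, 2011)
gat = 0.006 * (years - 1900) + rng.normal(0, 0.1, years.size)

# Straight-line model with a flat prior: sample the joint posterior
# of (intercept, slope, sigma^2) using the standard conjugate results.
X = np.column_stack([np.ones_like(years, dtype=float), years - 1900])
n, k = X.shape
beta_hat, *_ = np.linalg.lstsq(X, gat, rcond=None)
resid = gat - X @ beta_hat
s2 = resid @ resid / (n - k)
XtX_inv = np.linalg.inv(X.T @ X)

m = 20000
# sigma^2 | data ~ scaled inverse chi-square(n - k, s2)
sigma2 = (n - k) * s2 / rng.chisquare(n - k, size=m)
# beta | sigma^2, data ~ Normal(beta_hat, sigma^2 * (X'X)^-1)
L = np.linalg.cholesky(XtX_inv)
beta = beta_hat + np.sqrt(sigma2)[:, None] * (rng.standard_normal((m, k)) @ L.T)

# Parameter posterior: P(slope parameter > 0 | model, data).
# This is NOT the question we care about.
p_slope_pos = np.mean(beta[:, 1] > 0)

# Posterior predictive for 2011: weight the model by every plausible
# parameter value, then add the observation-level noise back in.
x_new = np.array([1.0, 2011 - 1900])
y_2011 = beta @ x_new + np.sqrt(sigma2) * rng.standard_normal(m)

# The question we actually care about, conditional on the model:
p_2011_warmer = np.mean(y_2011 > gat[-1])
print(p_slope_pos, p_2011_warmer)
```

The slope parameter here is "almost surely positive," yet the predictive probability that 2011 beats 2010 is far more modest, because year-to-year noise never goes away. That gap is the entire argument.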

If I have more than one model, then I will have more than one probability that the GAT in 2011 is larger than the GAT in 2010. Each probability assumes that the model that generated it is true. Which model is really true? I can only judge by external evidence. This evidence (or these premises) tells me the probability that each model is true. I can then use these probabilities, together with the per-model probabilities that the GAT in 2011 is larger than the GAT in 2010, to produce a final probability that the GAT in 2011 is larger than the GAT in 2010. This probability is not conditional on the truth of any one of the models.

But it still is conditional on the premise that at least one of the models in our set is true. If none of these models in our set is true—which we could only know using external evidence—then the probability that the GAT in 2011 is larger than the GAT in 2010 is likely to be wrong (it still may be right by coincidence).
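Combining the models is nothing more than the total-probability rule. The per-model answers and the model weights below are hypothetical numbers chosen for illustration:

```python
# Bayesian model averaging, in one line of arithmetic.
# Both lists are hypothetical, for illustration only.
p_warmer_given_model = [0.62, 0.55, 0.70]  # P(GAT 2011 > GAT 2010 | model m, data)
p_model = [0.5, 0.3, 0.2]                  # P(model m | external evidence); sums to 1

# Total probability: weight each model's answer by that model's probability.
p_warmer = sum(p * w for p, w in zip(p_warmer_given_model, p_model))
print(round(p_warmer, 3))
```

The result is conditional not on any single model but on the premise that one of the three is true, exactly as described above.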

I hope you can see that I can ask any question about the observables prior to 2011 and the observable in 2011. For example, I can ask what is the probability that the GAT in 2011 is Z degrees C higher than in 2010. Or I can ask, what is the probability that the GAT in 2011 is W degrees C higher than the average of the years 2001-2010. And so on.

This is how Richard Muller’s group should issue their statements on the GAT.

Global Average Temperature: What It Isn’t

Update See also: Global Average Temperature: An Exceedingly Brief Introduction To Bayesian Predictive Inference

Word is going round that Richard Muller is leading a group of physicists, statisticians, and climatologists to re-estimate the yearly global average temperature, from which we can say such things as that this year was warmer than last but not warmer than three years ago. Muller’s project is a good idea, and his named team is certainly up to it.

The statistician on Muller’s team is David Brillinger, an expert in time series, which is just the right genre to attack the global-temperature-average problem. Dr Brillinger certainly knows what I am about to show, but many of the climatologists who have used statistics before do not. It is for their benefit that I present this brief primer on how not to display the eventual estimate. I only want to make one major point here: that the common statistical methods produce estimates that are too certain.

I do not want to provide a simulation of every aspect of the estimation project; that would take just as long as doing the real thing. My point can be made by assuming that I have just N stations from which we have reliably measured temperature, without error, for just one year. The number at each station is the average temperature anomaly at that station (an “anomaly” takes the actual arithmetic average and subtracts from it a constant; the constant itself is not important, and the analysis is unaffected by it).

Our “global average temperature” is to be estimated in the simplest way: by fitting a normal distribution to the N station anomalies (the actual distribution used does affect the analysis, but not the major point I wish to make). I simulate the N stations by generating numbers from a distribution with a central parameter of 0.3, a spread parameter of 5, and degrees of freedom equal to 20 (once again, the actual numbers used do not matter to the major point).

Assume there are N = 100 stations, simulate the data, and fit a normal distribution to them. One instance of the posterior distribution of the parameter estimating the global mean is pictured. The most likely value of the posterior is at the peak, which is (as it should be) near 0.3. The parameter almost surely lies between 0.1 and 0.6, since that is where most of the area under the curve is.

Global average temperature

Now let’s push the number of stations to N = 1000 and look at the same picture:

Global average temperature

We are much more certain of where the parameter lies: the peak is in about the same spot, but the variability is much smaller. Obviously, if we were to continue increasing the number of stations the uncertainty in the parameter would disappear. That is, we would have a picture which looked like a spike over the true value (here 0.3). We could then confidently announce to the world that we know the parameter which estimates global average temperature with near certainty.

Are we done? Not hardly.

Although we would know, with extremely high confidence, the value of one of the parameters of the model we used to model the global average temperature, we still would not know the global average temperature. There is a world of difference between knowing the parameter and knowing the observable global average temperature.

Here then is the picture of our uncertainty in the global average temperature, given both N = 100 and N = 1000 stations.

Global average temperature

Adding 900 more stations reduced our uncertainty in the actual temperature only slightly (and the difference between these two curves is as likely due to the different simulation runs as anything else). But even if we were to have 1 million stations, the uncertainty would never disappear. There is a wall of uncertainty we hit and cannot breach. The curves will not narrow.

The real, observable temperature is not the same as the parameter. The parameter can be known exactly, but the observable actual temperature can never be.
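The wall can be sketched with back-of-the-envelope arithmetic. Under the usual normal approximation with a flat prior, the posterior standard deviation of the mean parameter is roughly s/√N, while the posterior predictive standard deviation of a new observable is roughly s·√(1 + 1/N). This is a simplification of the full posterior, but it shows the behavior:

```python
import numpy as np

rng = np.random.default_rng(1)

def widths(n):
    # Station anomalies: central parameter 0.3, spread 5, 20 degrees of
    # freedom, as in the simulation described above (values illustrative).
    y = 0.3 + 5 * rng.standard_t(20, size=n)
    s = y.std(ddof=1)
    post_sd = s / np.sqrt(n)          # uncertainty in the mean PARAMETER
    pred_sd = s * np.sqrt(1 + 1 / n)  # uncertainty in a new OBSERVABLE
    return post_sd, pred_sd

# The parameter uncertainty collapses toward zero as stations are added;
# the predictive uncertainty never drops below the station-to-station spread.
for n in (100, 1000, 1_000_000):
    post_sd, pred_sd = widths(n)
    print(n, round(post_sd, 3), round(pred_sd, 3))
```

Even at a million stations the predictive width sits near the spread parameter of 5: that is the wall.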

The procedure followed here (showing posterior predictive distributions) should be the same for estimating “trend” in the year-to-year global average temperatures. Do not tell us of the uncertainty in the estimate of the parameter of this trend. Tell us instead what the uncertainty in the actual temperatures is.

This is the difference between predictive statistics and parameter-based statistics. Predictive statistics gives you the full uncertainty in the thing you want to know. Parameter-based statistics only tells you about one parameter in a model; and even though you know the value of that parameter with certainty, you still do not know the value of the thing you want to know. In our case, temperature. Parameters be damned! Parameters tell us about a statistical model, not about a real thing.

Update See too the posts on temperature on my Stats/Climate page.

Update See also: Global Average Temperature: An Exceedingly Brief Introduction To Bayesian Predictive Inference

Update to the Update Read the post linked to above. Mandatory.

TSA Expands Jurisdiction To Sidewalks: Where Is The Left?

“Sir? Please step over here. You need to be x-rayed.”

“What? Get outta my way. Who are you?” said the man.

“Sir, please step over to the machine. You have been selected for random scanning,” said the TSA agent.

The man did not understand or chose not to and began to walk on. Two other armed agents moved to block the man’s way.

“Are we going to have trouble with you, sir? You have been selected for random scanning,” repeated the agent.

“What are you talking about? I’m just walking down the sidewalk in front of my apartment. We’re nowhere near an airport, nor even a train station. You can’t just grab innocent people off the sidewalk and bombard them with x-rays,” said the man, increasingly bewildered.

“Sir, we are agents of the Transportation Security Administration. The law says we have to protect you in all areas of transportation. Sidewalks are public modes of transportation. We have orders to randomly scan pedestrians. It’s for your protection, sir,” said the agent, bored with offering the same explanation he had issued a hundred times before. “Besides, if you have nothing to hide, you have nothing to worry about. Do you have your papers with you?”

“What if I refuse?” asked the man.

“You wouldn’t want to do that, sir,” advised the agent, as the other two agents moved in…

Paranoid fantasy? Not hardly. According to documents retrieved by Freedom (!) of Information Act requests, the Department of Homeland Security, as reported in Forbes, “has been planning pilot programs to deploy mobile scanning units that can be set up at public events and in train stations, along with mobile x-ray vans capable of scanning pedestrians on city streets.”

Read that again. The TSA wants to conduct “covert inspection of moving subjects” on sidewalks.

One project allocated to Northeastern University and Siemens would mount backscatter x-ray scanners and video cameras on roving vans, along with other cameras on buildings and utility poles, to monitor groups of pedestrians, assess what they carried, and even track their eye movements. In another program, the researchers were asked to develop a system of long range x-ray scanning to determine what metal objects an individual might have on his or her body at distances up to thirty feet.

Anything for money, eh, Northeastern?

The Department of Homeland Security, and its sub-agency the TSA, are bureaucracies. Pournelle’s Iron Law of Bureaucracy states

that in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.

The unbreakable grasp of Pournelle’s Law guaranteed something like this would happen. Bureaucracies grow. That is what they do. They grow fastest when they have failed at their original mission and thus seek to justify their existence. If they can turn up nothing but new mothers smuggling breast milk through airport security lines, then they will search for publicity opportunities elsewhere. Sidewalks are public modes of transportation. And surely evildoers use sidewalks!

Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety. — Ben “Goddamit” Franklin


Where is the left on this? Where are the perpetually outraged progressives who would be hopping mad if a non-racially-pleasing Republican president signed off on this? Can you, my lefty friends, truly be pleased with this situation? We have nowhere heard from old Ben Franklin as we used to when there was as little as a hint that police department budgets would increase. With Mr Obama in office, dead silence.

Why is a statistician opining on this? Because the matter is entirely statistical. All these scans (see the Decision Calculator link on the left sidebar) will produce a flood of “false positives”, i.e. innocent people falsely identified as suspicious. And sophisticated, intent terrorists will in all probability never be seen until too late. These guaranteed errors cost society more than just money.
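The false-positive flood is nothing but Bayes' theorem and the base rate. The rates below are hypothetical, since real scanner performance figures are not published; the conclusion is insensitive to the exact values:

```python
# Base-rate sketch of mass pedestrian scanning. All numbers hypothetical.
prevalence = 1e-6        # fraction of scanned pedestrians who are actual threats
sensitivity = 0.99       # P(alarm | threat)
false_positive = 0.001   # P(alarm | innocent), an optimistically low rate

# Bayes' theorem: of all alarms, what fraction are actual threats?
p_alarm = sensitivity * prevalence + false_positive * (1 - prevalence)
p_threat_given_alarm = sensitivity * prevalence / p_alarm
print(p_threat_given_alarm)
```

With a threat that rare, roughly 999 of every 1000 alarms point at an innocent, even granting the scanner near-perfect sensitivity.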

And let’s never forget the leading cause of premature death in the twentieth century was not terrorists, but governments.

Reaction To The Latest Attack In Frankfurt

The spin from the military and from the Obama government regarding Army Major Nidal Hasan’s bloody Allahu Akbarization of Fort Hood said he was a troubled man, influenced only by himself.

The story from Scotland when they let Abdel Basset al-Boom-Boom al-Megrahi free was that the poor man’s manhood had shriveled beyond toleration, and that he killed because he was a troubled man, influenced only by himself.

The reason the Obama administration has canceled the planned prosecution of Abd al-Rahim al-Nashiri, the man who Allahu Akbared 17 sailors on the USS Cole in October of 2000, was because they were concerned that this troubled man, who was influenced only by himself, would not receive a fair trial from a military tribunal.

From these instances, and others similar to them, we can guess that Mr Obama, a man who never troubled to learn the pronunciation of corpsman, will say that the Kosovar who Allahu Akbared our airmen in Germany yesterday did so because he was troubled, that he was influenced only by himself.

On no account are we to be influenced by the vast accumulation of evidence and draw the troublesome conclusion that radical Islamists have it in for the USA and her citizens. That would be—dare we say it?—a racist conclusion.

I surmise that our benevolent government fears that radical Islamists would take exception to being characterized as radical Islamists, since that is a racist accusation, and to the members of our culture nothing in this world is more vile than a racist, not even a mass murderer. Of course, radical Islamists don’t give a damn about racism and (rightly) think that those who are obsessed by it are out of their minds.

But our boys in charge must believe that Islamists hate racism as much as they do and that, when confronted by it, the radical Islamists will prance about like outraged macaques while adding recruits to the cause. But if the radical Islamists can be convinced we love them and don’t hold the occasional massacre against them, they will leave us alone and be satisfied in…whatever it is that they want. What do they want?

This is appeasement, admittedly on a small scale, but appeasement all the same, a strategy that has never worked. It is therefore rational to suppose that it never will. But, as the cliché has it, hope springs eternal. Peace—for us, not them—will be given another chance.

Thus, we can look forward to more reports like those coming out of Germany.


Please also read “Lt. Gen. John Kelly, who lost son to war, says U.S. largely unaware of sacrifice” from the Washington Post.


I’m in the air today and will answer comments and emails on Friday.

Low Flow Toilets Equals No-Flow Sewers In San Francisco

If you’re an environmentalist, particularly a San Francisco version of that creature (one of the most virulent of the breed), it must have come as quite a shock for you to learn that your muck stinks just as bad as a Rush Limbaugh fan’s output. The stench from the sewers in that earth-loving city has become overwhelming, “especially during the dry summer months.”

Why? The low-flow toilets insisted upon (by force of law) by enlightened legislators are not saving the San Francisco environment as the science said they would. According to SF Gate, the near water-free commodes have forced city engineers to mix in 27 million pounds of “highly concentrated sodium hypochlorite” with the sewage “before it’s dumped into the bay.”

Again, why? The ineludible Doctrine of Unintended Consequences.

Now, it wasn’t that long ago that folks like writer Alexandra Marks thought that she could “become a better human being” by teaching us to “flush with clean consciences.” Low-flow toilets, as the science said and as the chant went, would save the environment (from what or for whom is never told).

The motive was pure, but the problem was that most low-flow toilets were crappy. Marks quoted Dave Barry:

They work fine for one type of bodily function, which, in the interest of decency, I will refer to here only by the euphemistic term ‘No. 1.’ But many of the new toilets do a very poor job of handling “acts of Congress,” if you get my drift.

Many low-flow toilets had to be flushed at least twice for acts of Congress, thus using as much water as the old-style toilets they replaced. But if governments are good at anything, it is moving muck, and they had soon passed enough laws and granted enough grants that new breeds of low-flow toilets performed their functions admirably.

Indeed, it was learned that it would even be possible, by purely mechanical means, to remove acts of Congress from toilets and shove them into the sewage system using almost no water whatsoever! What a boon for the environment!

Alas, this dream merely proved that activists could only see as far as the bottom of their bowls. The Doctrine of Unintended Consequences struck with force when it was discovered that the water which relocated the acts of Congress from toilets was also necessary to shift the Congressional output through the sewer system! Who knew!

Instead of a laminar movement of muck found with the old toilets, low-flow toilets caused stagnation. The acts of Congress left the homes of the benevolent, but when they plopped dry into the sewer, there they sat, festering and bubbling and turning into a giant petri dish. And they stank.

And still stink, hence the plan for dumping concentrated bleach into the sewers to make up for the lost water. Some of the bleach must also be used to kill critters in the drinking water, too.

In what must be a fascinating sociological experiment, the very forces of benevolence which created the demand for low-flow toilets are now pressuring politicians to eschew chemicals. “Don’t Bleach Our Bay!” is the new environmentalist cry. Activists are claiming that the bleach will cause an “environmental disaster” and is thus not “planet-friendly.” They suggest—I kid you not—using OxiClean, or its sewer equivalent, to scrub clean their effluvia.

This being a world in which politicians are driven by fear more than by conviction, those who throw the biggest tantrums usually get their way. Consequences don’t matter: what really does is how much you care. And who cares as much as an activist? Thus how long until San Francisco visitors are advised not to drink the water?

Stock Tomato Seeds! Global Warming Is Coming!

It must be a joke. The punchline is surely coming. Ha, ha! Hoarding tomato seeds! Bars on his basement windows! Hilarious! This guy really nails nuttiness. He’ll shame a few zealots, boy. But…wait a minute…I’m awfully close to the end. When is this guy going to toss in the zinger, the gotcha!, the line which says it’s all a spoof?

It never came! He was serious!

Thus was my shock when I finished Mike Tidwell’s “A climate-change activist prepares for the worst” in the Washington Post.

Tidwell tells us that he has long cared for the environment, that he did his part. But caring wasn’t enough, it was an emotion disproportionate to his soul-searing commitment. One can imagine Tidwell asking himself, “What other emotional states besides caring are available to me, such that I can show my dedication to the environment? Satisfaction? Clearly not. Worry? Too tepid. Concern? Insufficient. How about paranoia?”

“That’s it!” he must have shouted to himself. For what other emotion best explains his buying “a new set of deadbolt locks on all my doors”, a (presumably gas powered) generator, and a (yes) “starter kit to raise tomatoes and lettuce behind barred basement windows.”

Pause and re-read that. Did you notice the bars on his basement windows? Now, either he has purchased mutant tomato seeds from the Little Shop of Horrors or he has frightened himself into believing that crazed climate deniers will lay siege to his electrically powered fortress and its stock of juicy vegetables.

I know what you’re thinking, but Tidwell denies being a nut. He claims that he has taken his drastic actions because “we’re running out of time.” He says, “The proof is everywhere.”

When I re-read Tidwell, I felt like the weary cop listening to yet another citizen reporting a UFO. In the movies, the citizen senses the cop’s skepticism and, with clasped hands, pleads, “Don’t you believe me?” The cop always says, “I believe that you believe it.” The “UFO” turns out to be the porch light glinting off the wings of a moth. The citizen, if he has not believed in his mistake too long, laughs shyly and melts away.

But if he has cherished his sighting, no amount of evidence will convince him of his error. He will instead strain every possible strand of evidence to prove his UFO real. It is only a matter of time before he begins attending MUFON conferences where discussants agree that the only possible explanation for the lack of tangible evidence is (of course) conspiracy. It is a pathetic thing to see.

Tidwell saw a storm instead of a UFO, but he is certain that that storm, which knocked his power out for a few hours and prematurely thawed his meat, was sent by them. The pathos is evident:

After the August storm, I made the financially painful decision to buy the Honda generator. My solar panels, by themselves, can’t power my home. I spent $1,000 on the generator, money that would have gone into my 13-year-old son’s college fund. I’ve expanded my definition of how best to plan for his future.

Would it do any good to tell Tidwell that if the apocalypse comes his gas-powered generator, after giving glow to a light bulb or two for a week, will be useless for lack of fuel? Could he be convinced that his meager store of sun-starved tomatoes (they don’t grow well in dark basements) will not be the envy of climate refugees?

I am glad Tidwell has taken up skeet shooting for the good of his “immediate loved ones” because we could always use more advocates for Second Amendment rights. But if I were his mailman, I’d steer clear of his porch whenever there is a heat wave.


In Other News

Fellow statistician Ted Davison has created the new blog Search for Impartiality in which readers will have an interest. He begins with some frightening but apt quotations from some of the usual suspects.

And The Winner Goes To…Oscar Statistics Wrap Up

Our model was right: The King’s Speech won. In this weekend’s Oscar Statistics post, we modeled the chances of each nominated movie. We guessed that the movie most likely to win would be the one which took in about 1/4 of the Most Popular movie of the year, would have no significant roles for actresses, would be a drama, and would star a man at least 40 years old.

Of course, The King’s Speech, which shared all those traits, was the favorite for a variety of other well-known reasons, but we took none of these factors into account; our model was purely statistical.

Specifically, we did not try to predict what the best movie of the year would be, just what would win the Oscar for that category. As all know, the statuette is not awarded entirely for quality, but for political, personal, historical, equitable, and other reasons.

The original purpose of our analysis was, however, to examine quality. Was the Academy of Motion Picture Arts and Sciences a better judge of quality than the “crowd-sourced” American public? The answer, we think, is probably yes.

Here are the movies for Oscar-winning Best Picture and Highest Grossing for those years when the Oscar winner made less than 25% of the Most Popular movie (recalling that we measured box office gross by hand and with some error).

Movies in which the Best Picture made 25% or less of the Highest Grossing Picture.
Year | Best Picture | Highest Grossing Movie
1940 | Rebecca | Pinocchio
1942 | Mrs. Miniver | Bambi
1950 | All about Eve | Cinderella
1951 | An American in Paris | Quo Vadis?
1958 | Gigi | South Pacific
1967 | In the Heat of the Night | The Jungle Book
1977 | Annie Hall | Star Wars Ep. IV: A New Hope
1980 | Ordinary People | Star Wars Ep. V: The Empire Strikes Back
1981 | Chariots of Fire | Raiders of the Lost Ark
1982 | Gandhi | ET: The Extra-Terrestrial
1984 | Amadeus | Ghostbusters
2004 | Million Dollar Baby | Shrek 2
2005 | Crash | Star Wars Ep. III: Revenge of the Sith
2007 | No Country for Old Men | Spider-Man 3
2009 | The Hurt Locker | Avatar

(First a note: our sources give two different answers for Gigi; one says the movie made $7.3 million, another says double that. If the truth lies in between, then Gigi should drop off this list. Since there is some doubt, we do not account for it below.)

Except for Quo Vadis? besting An American in Paris in 1951, every other Highest Grossing Movie could be considered a cartoon, a movie that the whole family could, and probably did, go to, thus boosting the bottom line. Certainly many of the movies in the list were cartoons (hand- or computer-drawn). The rest were cartoonish.

Because of ratings, some of the Oscar-winning movies, like Ordinary People, No Country for Old Men, and The Hurt Locker, were films the whole family could not go to, which hurts them in the bottom-line comparison. Even so, there are distinct differences in quality between the two columns.

Crash was an example of re-capturing the glory of long-won battles, but surely it was better than the direct-to-film-merchandising of Star Wars Ep. III: Revenge of the Sith. In the Heat of the Night might—for all the right reasons, of course—be overrated, but it was better than the watery version of The Jungle Book.

On the other hand, were Chariots of Fire and Gandhi, which share vaguely similar sub-themes and reflect the Academy’s love of all things British, both better than Raiders of the Lost Ark and ET: The Extra-Terrestrial? Probably; maybe.

Then few older than six would argue Shrek 2 was better than Million Dollar Baby or that Ghostbusters improved upon Amadeus. And we’d have to push that age down a year when comparing Rebecca with Pinocchio, Mrs. Miniver with Bambi, and All about Eve with Cinderella.

However, it is true that these latter three Most Popular movies are good children’s films. So in effect, we are comparing the wrong things. Of course the Oscar-winning movie would be better than a movie aimed at a child. But that was the case only into the 1970s, after which the children’s movies had pretensions of being grown-up, culminating in the politically simplistic Avatar.

Now look at the 18 movies which won Oscars and were also the Highest Grossing. (This means the Oscar-winning movie had 100% of the take of the Highest Grossing movie. The next lowest percentage for an Oscar-winning movie’s take of the Highest was 85%.)

Movies in which the Best Picture made 90% or more of the Highest Grossing Picture.
Year | Best/Highest Grossing Picture
1929 | The Broadway Melody
1934 | It Happened One Night
1935 | Mutiny on the Bounty
1938 | You Can’t Take It with You
1939 | Gone with the Wind
1944 | Going My Way
1952 | The Greatest Show on Earth
1957 | The Bridge on the River Kwai
1959 | Ben-Hur
1962 | Lawrence of Arabia
1965 | The Sound of Music
1972 | The Godfather
1976 | Rocky
1979 | Kramer vs. Kramer
1988 | Rain Man
1994 | Forrest Gump
1997 | Titanic
2003 | The Lord of the Rings: The Return of the King

Two of these were R-rated: The Godfather and Rain Man, which means many sales were not to kids. Two of them involved simple minds, an increasingly popular theme: Rain Man and Forrest Gump. Only one was cartoonish: the endless orc slaughter-fest The Lord of the Rings: The Return of the King. Two starred Dustin Hoffman (when he was 42 and 51).

At least three had strong Christian themes: Going My Way, Ben-Hur, and The Sound of Music, but none since 1965. Most of the movies before 1979 are better than the movies which came after.

This is a loose assessment, of course, but since Kramer vs. Kramer the movies in this category have been goofier, for lack of a better word. It is a wild guess, and therefore likely to be wrong, but perhaps the increase in goofiness reflects the voting members of the Academy paying more attention to the bottom line than they had done previously.