William M. Briggs

Statistician to the Stars!


Observational Bayes > Parametric Bayes > Hypothesis Testing < Looking

This is a completion of the post I started two weeks ago, which shows that “predictive” or “observational” Bayes is better than classical, parametric Bayes, which is far superior to frequentist hypothesis testing, which may be worse than just looking at your data. Actually, in many circumstances, just looking at your data is all you need.

Here’s the example for the advertising.csv data found on this page.

Twenty weeks of sales data for two marketing Campaigns, A and B. Our interest is in weekly sales. Here’s a boxplot of the data.
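
For those playing along at home, something like this reproduces the picture. A minimal sketch only: I am assuming the file has columns named Sales and Campaign, which you should adjust to match the actual file.

```r
# Sketch: assumes advertising.csv has columns Sales and Campaign
adv <- read.csv("advertising.csv")
boxplot(Sales ~ Campaign, data = adv, ylab = "Weekly sales")
```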

It looks like we might be able to use normal distributions to quantify our uncertainty in weekly sales. But we must not say that “Sales are normally distributed.” Nothing in the world is “normally distributed.” Repeat that and make it part of you: nothing in the world is normally distributed. It is only our uncertainty that is given by a normal distribution.

Notice that Campaign B looks more squeezed than A. Like nearly all people who analyze data like this, we’ll ignore this non-ignorable twist—at first, until we get to observational Bayes.

Now let’s run our hypothesis test, here in the form of a linear regression (which is the same as a t-test, and is more easily made general).

            Estimate Std. Error t value Pr(>|t|)
(Intercept)      420         10      42  2.7e-33
CampaignB         19         14     1.3     0.19
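
In R this is one call; again a sketch, assuming the same column names as before.

```r
# Frequentist regression, i.e. the hypothesis test above
fit <- lm(Sales ~ Campaign, data = adv)
summary(fit)  # prints a coefficient table like the one shown
```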

Regression is this and nothing more: the modeling of the central parameter for the uncertainty in some observable, where the uncertainty is quantified by a normal distribution. Repeat that: the modeling of the central parameter for the uncertainty in some observable, where the uncertainty is quantified by a normal distribution.

There are two lines. The “(Intercept)” must (see the book for why) represent the central parameter for the normal distribution of weekly sales when in Campaign A. This is all this is, and is exactly what it is. The estimate for this central parameter, in frequentist theory, is 420. That is, given we knew we were in Campaign A, our uncertainty in weekly sales would be modeled by a normal distribution with best-guess central parameter 420 (and some spread parameter which, again like everybody else, we’ll ignore for now).

Nobody believes that the exact, precise value of this central parameter is 420. We could form the frequentist confidence interval in this parameter, which is 401 to 441. But then we remember that the only thing we can say about this interval is that either the true value of the parameter lies in this interval or it does not. We may not say that “There is a 95% chance the real value of the parameter lies in this interval.” The interval is, and is designed to be in frequentist theory, useless on its own. It only becomes meaningful if we can repeat our “experiment” an infinite number of times.
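
If you want to see the interval, it is one more line (continuing the sketch above):

```r
confint(fit, level = 0.95)  # for the (Intercept), roughly 401 to 441
```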

The test statistic we spoke of is here a version of the t-statistic (and here equals 42). The probability that, if we were to repeat the experiment an infinite number of times, we would see a larger value of this statistic in these repetitions, given the premise that this central parameter equals 0, given the data we saw, and given our premise of using normal distributions, is 2.7 x 10^-33. There is no simpler way to say it. Importantly, we are not allowed to interpret this probability if we do not imagine infinite repetitions.

Now, this p-value is less than the magic number so we, by force of will, say “This central parameter does not equal 0.” On to the next line!

The second line represents the change in the central parameter when switching from Campaign A to Campaign B. The “null” hypothesis here, as in the line above, is that this parameter equals 0 (there is also the implicit premise that the spread parameter of A equals that of B). The p-value is not publishable (it equals 0.19), so we must say, “I have failed utterly to reject the ‘null’.” Which in plain English says you must accept that this parameter equals 0.

This in effect says that our uncertainty in weekly sales is the same for either Campaign A or B. We are not allowed to say (though most would), “There is no difference in A and B.” Because of course there are differences. And that ends the frequentist hypothesis test, with the conclusion “A and B are the same.” Even though the boxplots look like they do.

We can do the classical Bayesian version of the same thing and look at the posterior distributions of the parameters, as in this picture:

The first picture says that the first parameter (the “(Intercept)”) can be any number from -infinity to +infinity, but it is most likely between 390 and 450. That is all this says. The second picture says that the second parameter can take any of an infinite number of values but that it most likely lives between -20 and 60. Indeed, the vertical line helps us quantify the probability this parameter is less than 0, which is about 9%. And thus ends the classical or parametric Bayesian analysis.
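
One way to produce pictures like these is below. A sketch only, and not the book’s code: MCMCpack’s default diffuse priors stand in for whatever priors you prefer.

```r
library(MCMCpack)
post <- MCMCregress(Sales ~ Campaign, data = adv)  # posterior draws of the parameters
plot(post)                     # density plots like the pictures described
mean(post[, "CampaignB"] < 0)  # should land near the 9% quoted above
```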

We already know everything about the data we have, so we need not attach any uncertainty to it. Our real question will be something like “What is the probability that B will be better than A in new data?” We can calculate this easily by “integrating out” the uncertainty in the unobservable parameters; the result is in this picture:

This is it: assuming just normal distributions (still also assuming equal spread parameters for both Campaigns), these are the probability distributions for values of future sales. Campaign B has a higher probability of higher sales, and vice versa. The probability that future sales of Campaign B will be larger than those of Campaign A is (from this figure) 62%. Or we could ask any other question of interest to us about sales. What is the probability that sales will be greater than 500 for A and B? Or that B will be twice as big as A? Or anything. Do not become fixated on this question and this probability.
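
The “integrating out” is nothing exotic: for each posterior draw of the parameters, simulate a new week of sales for each Campaign. Still a sketch, and still assuming equal spread parameters.

```r
sigma <- sqrt(post[, "sigma2"])  # common spread parameter, one value per draw
newA  <- rnorm(nrow(post), post[, "(Intercept)"], sigma)
newB  <- rnorm(nrow(post), post[, "(Intercept)"] + post[, "CampaignB"], sigma)
mean(newB > newA)                    # about 62% here
mean(newA > 500); mean(newB > 500)   # or any other question you like
mean(newB > 2 * newA)
```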

This is the modern, so-called predictive Bayesian approach.

Of course, the model we have so far assumed stinks because it doesn’t take into account what we observed in the actual data. First thing to change is the equal variances; second is to truncate the model to ensure no sales are less than 0. That (via JAGS; not in the book) gives us this picture:

The open circles and dark diamonds are the means of the actual and predictive data. The horizontal lines show the range of 80% of the actual data, placed at the height below which sits 80% of the predictive data. Ignore these lines if they confuse you. The predictive model is close to the real data for Campaign B but not so close for Campaign A, except at the mean. This is probably because our uncertainty in A is not best represented by a normal distribution and would be better handled by a distribution that isn’t so symmetric.
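
For the curious, the guts of such a JAGS model look something like the following. This is a sketch under the stated assumptions (a separate spread parameter per Campaign, normals truncated at 0), not the exact code behind the picture.

```r
library(rjags)
model_string <- "
model {
  for (i in 1:N) {
    # Truncated normal: no weekly sales below 0; spread depends on Campaign
    sales[i] ~ dnorm(mu[campaign[i]], tau[campaign[i]]) T(0,)
  }
  for (j in 1:2) {
    mu[j]  ~ dnorm(0, 1.0E-6)     # vague priors
    tau[j] ~ dgamma(0.001, 0.001)
    new_sales[j] ~ dnorm(mu[j], tau[j]) T(0,)  # predictive draw for a new week
  }
}"
jm <- jags.model(textConnection(model_string),
                 data = list(sales    = adv$Sales,
                             campaign = as.integer(factor(adv$Campaign)),
                             N        = nrow(adv)))
draws <- as.matrix(coda.samples(jm, "new_sales", n.iter = 10000))
```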

The probability that new B sales are larger than new A sales is 65% (from this figure). The beauty of the observational or predictive approach is that we can ask any question of the observable data we want. Like, what’s the chance new B sales are 1.5 times new A sales? Why, that’s 4%. And so on.
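
Reading those numbers off the simulated draws is one line each (continuing the sketch above, where column 2 is Campaign B):

```r
mean(draws[, "new_sales[2]"] > draws[, "new_sales[1]"])        # about 65% here
mean(draws[, "new_sales[2]"] > 1.5 * draws[, "new_sales[1]"])  # about 4%
```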

In other words, we can ask plain English questions of the data and be answered with simple probabilities. There is no “magic cutoff” probability, either. The 65% may be important to one decision maker and ignorable to another. Whether to stay with A or switch to B depends not just on this probability and this question: you can ask your own question, inputting the information relevant to you. For instance, A and B may cost different amounts, so that you have to be sure B has 1.5 times the sales of A. Any question you want can be asked, and asked simply.

We’ll try to do some more complex examples soon.

Rioting Is An Ecstatic, Spiritual Experience: Or, Structural Sin Made Me Do It

“Rioting”—which is to say, looting, rampaging, vandalizing, engaging in wanton mayhem and violence, and generally acting very badly—”can be, literally, an ecstatic spiritual experience.” So says the is-he Right Reverend Peter Price, a man who is no less than the Bishop of Bath and Wells. (Note the Bish’s use of literally. PDF of report.)

Price’s comments have been widely reported, with opinion coalescing around the idea that Price has, literally, lost his mind. But this is unfair, because, literally, Price was merely quoting another churchman, one Father Austin Smith, who made his ecstatic comments after the Toxteth riots in the 1980s. In that feast of spirituality, “468 police officers had been injured, 500 people arrested and at least 70 buildings demolished.”

Anyway, Price continued, “Something is released in the [riot] participants which takes them out of themselves as a kind of spiritual escape.” Out of themselves and into shoe shops, where “participants” gleefully stole as many “trainers” as they could lay their thieving mitts on. Which shows that “participants” are not only immoral lawbreakers but that they also have appalling taste in footwear. (This is known in statistics as a correlation.)

“The tragedy,” Price says, is that

we have a large population of young people who are desperate to escape from the constrained lives to which they feel and appear to be condemned. Where hope has been killed off and with no prospect of escape, is it surprising that their energies erupt in antisocial and violent actions? In a consumer society, is it surprising that lusting after high-status goods is seen as a way to find meaning?

There is some truth here. And it is this: there is always somebody willing to excuse the repugnant behavior of one person as the fault of the political enemies of the excuser. Price’s enemies are—wait for it—rich people, “austerity measures,” and “social tensions”.

These culpable entities even caused innocent citizens who had just “come out to see what was happening” to become “quickly caught up in the thrill of the moment.” To excuse why these bystanders engaged in theft, Price hypothesizes that they were “just picking up things that had been discarded on the streets.” And keeping them and bringing them home. Just like dear old mom did not, we hope, teach them.

Now most of Price’s report is dull, written in pseudo-academese, and contains phrases like “coveting is a big issue” and “The mobile nature of the events makes a locational analysis problematic in relation to those involved, and particularly those arrested.” It reads as if a cluster of senior clergy were sitting around the pub on a wet afternoon when one suddenly announced, “Our duty is to issue a report”—and then they actually wrote it!

Because it’s in the nature of these documents, Price could not help himself from theorizing. The real cause of the riots was something called “structural sin which recognises how people on all sides of conflicts can face moral choices that are not between what is clearly right and clearly wrong but which are necessitated by circumstances in response to situations where much has gone wrong already.” In other words, pocketing an item dropped by a fleeing looter is neither clearly right nor clearly wrong. Even lighting a shop on fire to watch it burn can be considered morally ambiguous if much has gone wrong already.

Price was not speaking gibberish, however. The idea of “structural sin” is well known and was developed inside something called “liberation theology,” which is often read as a codeword for theological Marxism (and if that isn’t a contradiction in terms, nothing is).

Structural sin is shared sin. Because of “unjust” economic and political situations, and “discriminatory lending practices,” the man who ecstatically wields the club upside the shopkeeper’s head is guilty of structural sin, but so are you who sat at home, because you contributed to the circumstances which caused the man to swing the club. Why, if it weren’t for you dutifully going to work each day and paying your taxes, these riots would never have occurred! Price says “flawed social structures” are responsible for “creating the conditions for sin to manifest itself.”

Sin seen in this way is like a gas which leaks out of individuals and seeps through a community, concentrating here and there due to the vagaries of the wind. The only recourse is to bottle sin up by reinvigorating the welfare state and making risky loans. Indeed, if “austerity” is left in place, Price sees the possibility of future riots. “Nothing is inevitable — but the auguries are not reassuring.”

A Depressing [Real Sad] Churchill Quotation—Contest

As I this morning scanned Facebook, I ran across the image below. Read it through and pause for a moment before continuing.

Churchill quote

Notice anything strange? I was so intrigued by the editor’s use of brackets that I responded with the following:

Love the quotation [words cited from source]! An insightful [cap full o' thinkin'] analysis [figurin' out] of a morally stunted [short people] heretical [believing bad stuff about God] sect [I forget this one].

Naturally, it’s time for another no-prize contest. Who can come up with the most helpful edited quotation? Your audience is degree-holding (not synonymous with educated) United States citizens. Extra points awarded if edits are in the voice of a sorority or fraternity member.

Here is my entry.

Four score and seven [many] years ago our fathers [not our real fathers; other peoples' fathers] brought forth [after thirdth] on this continent [happy land mass] a new nation, conceived [thought of] in liberty [forget this word: vote Obama], and dedicated [given] to the proposition [business deal] that all men [and women] are created equal [must pay their fair share].

Now we are engaged [living together] in a great civil war, testing whether that nation, or any nation, so conceived and so dedicated, can long endure [make it]. We are met on a great battle-field [area outside club] of that war. We have come to dedicate a portion [bit] of that field, as a final resting place for those who here gave their lives that that nation might live. It is altogether fitting [right size] and proper that we should do this.

But, in a larger sense, we can not dedicate, we can not consecrate [have sex with], we can not hallow [full of air] this ground. The brave men, living and dead, who struggled [worked hard] here, have consecrated [good grief!] it, far above our poor power to add or detract [insult]. The world will little note, nor long remember what we say here, but it can never forget what they did here. It is for us the living, rather, to be dedicated here to the unfinished [not done] work which they who fought here have thus far so nobly [lot of knowledge] advanced. It is rather for us to be here dedicated to the great task [minimum wage job] remaining before us—that from these honored [Facebook friended] dead we take increased devotion [follow on Twitter] to that cause for which they gave the last full measure of devotion—that we here highly resolve [see better] that these dead shall not have died in vain [self-centered]—that this nation, under God [or "god"], shall have a new birth of freedom–and that government of the people, by the people, for the people, shall not perish [Church place] from the earth.

Does Averaging Incorrect Data Give A Result That Is Less Incorrect? Climate Modeling

Another question from the statistics mailbag:

Dear Matt: I recently got into a discussion with a CAGW “believer” and of course the discussion turned to global average temperature (whatever that is) anomalies and that the predictions of climate catastrophe are based on computer model output. I then said, “If a computer model cannot predict regional changes, it cannot predict global changes. Averaging incorrect data does not give accurate data,” referring to the computer models. Was that a correct statement?

Although I once took statistics courses, about the only things I remember are median, mode, mean, and standard deviation so if you have time to respond to this e-mail, please do so in a ridiculously simple way that I might be able to understand.

Thanks. By the way, I like your new format.

Regards,

Chuck Lampert

Sort of Yes, Chuck. The part that’s tricky is your conditional: climate models necessarily do better at “higher” scales than at “lower” ones. But your second part is right: averaging a Messerschmidt, no matter how large, still leaves you with a Messerschmidt, if I may abuse the punchline of the old joke.

First, a “climate” model is just a model of the atmosphere. What makes it “climate” is its scale and nothing more; what we call “climate” versus what we label “weather” is really a matter of custom. So imagine a model of climate of the Earth from the view of Alpha Centauri. From that vantage the Earth is indeed a pale blue dot and its “global mean” temperature can be modeled to high accuracy, as long as we don’t try for too many decimal places. We can even ignore seasonality at this distance. Heck, I’d even believe a forecast from James Hansen for “climate” as defined this way.

But now imagine the temperature and precipitation on a scale of a city block for each hour of the day and over the entire surface. This would be incredibly complex to model and verify. Even trying to write down the computing resources required produces a dull pain in the occipital lobe. To my knowledge nobody tries this for the globe as a whole, though it is done over very small areas for limited time frames. The hope that this scale of model would be accurate or useful as a climate model matches that of a Marxist who says to himself, “Next time it’ll be different.”

Here’s the tricky part. A climate model built for large-scale climate can do well, while another built for smaller-scale climate will fare more poorly, each verification considered at the scale intended for each model. We can, as you suggest, average the small-scale model so that the resultant output is on the same scale as its coarser brother.

Now it can happen that the averaged model judged on the coarser scale will outperform itself judged on its original scale. This could be simply because the model did well on the coarse scale but poorly on the fine scale. Of course, the averaged model may also perform poorly even on the large scale. There is no way to know in advance which will be the case (it all depends on the competence of the modelers and how well the models reproduce the physics).

But, all things equal, the variance (of the verification or of the model itself) of the averaged model will be larger than the variance of the large-scale-from-birth model. That means we would have more trust in the large-scale model, or in its verification statistics (even if those stats showed the model to be poor), or both.

The old tale of the Chinese Emperor’s Nose is somewhat relevant here. Nobody in China knew its length, but they desired to have the information. Why they wanted to know is a separate question we leave to competent psychologists. Anyway, many people each contributed a guess, each knowing that his answer was probably wrong. But they figured the average of all the wrong guesses would be better than any individual guess.

Wrong. Taking the mean of nonsense, as suggested above, only produces mean nonsense. So that if the small-scale model stunk at predicting small-scale climate, taking averages of itself (via ensemble forecasting, say) and then examining the average model on the same small (not large) scale will still leave you with a Messerschmidt.
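
A toy simulation makes the point, with numbers entirely made up by me: if every guess shares the same bias, averaging washes out the noise but leaves the bias untouched.

```r
set.seed(1)
truth   <- 2                               # say the nose is 2 inches
guesses <- truth + 1 + rnorm(1e4, 0, 0.5)  # every guess biased high by 1 inch, plus noise
mean(guesses)  # about 3: the average is very precise, and precisely wrong
```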


