William M. Briggs

Statistician to the Stars!


Spanish Expedition

I have returned from Madrid, where the conference went moderately well. My part was acceptable, but I could have done a better job, which I’ll explain in a moment.

Iberia Airlines is reasonable, but the seats in steerage were even smaller than I thought. On the way there, I sat next to a lady whose head kept lolling over onto me as she slept. The trip back was better, because I was able to commandeer two seats. Plus, there was a large, boisterous group of young Portuguese men who apparently had never been to New York City before. They were in high spirits for most of the trip, which made the journey seem shorter. About an hour before landing they started to practice some English phrases which they thought would be useful for picking up American women: “Would you go out with me?”, “I like you”, and “You are a fucking sweetheart.”

My talk was simultaneously translated into Spanish, and I wish I had been more coherent and had spoken more slowly. The translator told me afterwards that I talked “rather fast.” I know I left a lot of people wondering.

The audience was mostly scientists (of all kinds) and journalists. My subject was rather technical and new, and while I do think it is a useful approach, it is not the best talk to present to non-specialists. My biggest fault was my failure to recognize and speak about the evidence that others found convincing. I could have offered a more reasonable comparison if I had done so.

I’ll write about these topics in more depth later, but briefly: people weigh heavily the fact that many different climate models agree in closely simulating past observations. There are two main, and very simple, problems with this evidence, which I could have, at the time, done a better job pointing out. For example, I could have asked this question: why are there any differences between climate models? The point being that eight climate models agreeing is not eight independent pieces of evidence. All of these models, for instance, use the same equations of motion. We should be surprised that there are any differences between them.

The second problem I did point out, but I do not think I was convincing. So far, climate models over-predict independent data: that is, they all forecast higher temperatures than are actually observed. This is for data that was not used to fit the models. This means, this can only mean, that the climate models are wrong. They might not be very wrong, but they are wrong just the same. So we should be asking: why are they wrong?

There was a press conference, conducted in Spanish. I can read Spanish much better than I can hear it, which is a fault I should work harder to correct, but it meant that I could not follow most of the comments or questions well. I was the critical representative, and a Professor Moreno was my foil. The most pertinent question to me was (something like) “Do you think it is time for new laws to be passed to combat global warming?” I said no. Professor Moreno vehemently disagreed, incorrectly using as an example the unfortunate heat wave in Spain that was responsible for a large number of deaths. Incorrect, because it is impossible to say that this particular heat wave was caused by humans (in the form of anthropogenic global warming). But the press there, like here (like everywhere), enjoyed the conflict between us, so this is what was reported.

Here, for the sake of vanity, are some links (in Spanish) for the news coverage. We were also on the Spanish national television news on the first night of the conference, but I didn’t see it because we were out. Some of these links may, of course, expire.

  1. ¿Existe el cambio climático? (“Does climate change exist?”)
  2. Estadístico de EEUU rebaja la fiabilidad de las predicciones del IPCC contra la opinión general (“U.S. statistician downgrades the reliability of IPCC predictions, against the general opinion”)
  3. Un estadístico americano pone en duda la veracidad del cambio climático (“An American statistician questions the veracity of climate change”)
  4. Un experto americano duda de las consecuencias del cambio climático (“An American expert doubts the consequences of climate change”)
  5. Evidencias apabullantes (“Overwhelming evidence”)
  6. Un debate sobre cambio climático termina a gritos en Madrid (“A debate on climate change ends in shouting in Madrid”)

Madrid itself was wonderful, and my hosts Francisco García Novo and Antonio Cembrero were absolute gentlemen, and I met many lovely people. I was introduced to several excellent restaurants and cervecerías. The food was better than I can write about—I nearly wept at the Museo del Jamón. I felt thoroughly spoiled. Dr Novo introduced me to La Grita, a subtle sherry that is a perfect foil for olives. I managed to find some in the duty-free shop, and I recommend that if you see some, snatch it up.

Come back over the next few days. By then, I hope to have written something on the agreement of climate models.

Tall men in planes

I am off to Spain today, for the conference, to present my unfinished, and unfinishable, talk. Why unfinishable? I am asking people to supply estimates for certain probabilities (see the previous post), on which there will never be agreement, nor will these estimates cease changing through time. I am somewhat disheartened by this, and would like to say something more concrete, but I am committed. So. It’s eight hours there and back, crammed into a seat made for, let us say, those of a more diminutive stature than I. There will be no more postings until Saturday, when I return, which is why I leave you with this classic column I wrote several years ago, but which is just as relevant today.




Lamentations of the Very Tall

An alternate title of this article could have been, “Short People Rejoice!” for it’s my conviction that the world is mercilessly biased in favor of tiny people. That is, probably you.

I say “probably you” because of the firm statistical grounding in the fact that it is quantifiably improbable for a random person to be tall. I’m also assuming that you, dear reader, are a random person, and therefore most likely belong to the endless, but shallow, sea of short people.

Here’s the thing: since you are probably short you are likely to be unaware of how tall people suffer, so I’m going to tell you. For reference, I am a shade over six-two, which is tall, but not professional basketball player tall. This is still taller than more than nine-tenths of the American population, however.

Life as a tall man is not all bad. It’s true I’ve developed strong forearms from beating off adoring females who lust after my tallness, but there are many more misfortunes that outweigh the unending adulation of women. Showers for one.

Shower heads come to mid-chest on me. I’ve developed a permanent stoop from years of bending over to wash my hair—and then from scrunching down to see my reflection in the mirror, typically placed navel high, so that I can comb it.

The lamentations of the tall when it comes to airplane seats are too obvious to mention. As is our inability to fit into any bathtub or fully on any bed.

I once worked in a building that required, for security reasons, a peephole to be drilled into the door. I stood guard over two workers who dickered over where to place the pencil mark that would indicate where they were going to drill. Each in turn stepped up to the door and put a dot in the spot where his eye met the door. The marks didn’t quite match, but they soon split the difference.

Ultimately, the hole was about crotch high on me. To be fair, I was in Japan and the workers were Japanese, and therefore on the not-tall side of the scale. Because I was in the military, I wasn’t entirely comfortable bending down to that degree[1]. This meant that I breached security each time I opened the door because I couldn’t see who was on the other side. Suspicious, is it not?

It was at this point that I began to believe that this discrepancy in height was not entirely genetic and that sinister motives may be behind the prejudices of the non-tall.

For example, I have to place my computer monitor on three reams of paper so that it approaches eye level, and I have to raise my chair to its maximum so that my knees aren’t in my chin, but when I do my legs won’t fit under the desk. No matter how I position myself I am in pain. I sit[2] in a factory-made cubicle-ette which, as far as I can tell, causes no difficulties for my more diminutive co-workers. This is more evidence of the extent of the conspiracy of the non-tall.

Shopping is suspiciously dreadful too. Short people can freely walk into any department store and grab something, anything, off the rack, while we tall men are stuck with places like Ed’s Big and Tall. These stores are fine if you have a waist of at least 46 inches and you have stumpy legs, but they are nearly useless otherwise.

Pants for the tall are a cruel joke. Even if they carry labels that promise lengths of 35 or more inches, we know that these labels are a lie. Yes, the legging material may stretch for yards and yards, but there is never enough space where it counts. These pants are called “short-rise” for obvious reasons. I asked a salesguy (a non-tall man, of course): do they make long-rise pants anymore? He didn’t stop laughing. Normally, I’d have my revenge by not buying anything from him, but I couldn’t buy anything from him in the first place. I could do nothing but fume.

I’m not sure how we, the tall, will be able to overcome these horrific adversities. In raw numbers we are but a small minority—a fairly imposing looking minority it’s true—but a minority just the same. Still, there is word that something can be done and I hear that we’re to discuss ideas at our next official Tall Man Meeting. Don’t bother trying to sneak in, though, because we take measurements at the door.

[1] If I had been in the Navy, I would have been used to it, of course.
[2] This was true then; it no longer is. I do not have a desk now.

Quantifying uncertainty in AGW

My friends, I need your help.

I have written a paper on quantifying the uncertainty of effects due to global warming, but the subject is too big for one person. Nevertheless, I have tried to—in one location—list all of the major areas of uncertainty, and I have attempted to quantify them as well. I would like your help in assessing my guesses. I am not at all certain that I have done an adequate or even a good job with this.

At this link is the HTML version of the paper I am giving in Spain (I used latex2html to encode this; it is not beautiful, but it is mostly functional).

At this link is the PDF version of the paper, which is far superior to the HTML. This paper, complete with typos, is about draft 0.8, so forgive the minor errors. Call me on the big ones, though.

I would like those interested to download the paper, read it, and help supply numbers for the uncertainty bounds found within. I would ask that you not do this facetiously or glibly, nor purposely underestimate the relevant probabilities. I want an open, honest, intelligent discussion of the kinds and ranges of uncertainties in the claims of effects due to global warming. For example, the words “Al Gore” should never appear in any comment. If you have no solid information to offer in a given area, please feel free to not comment on it.

The abstract for the paper is

A month does not go by without some new study appearing in a peer-reviewed journal which purports to demonstrate some ill effect that will be caused by global warming. The effects are conditional on global warming being true, which is itself not certain, and which must be categorized and bounded. Evidence for global warming is in two parts: observations and explanations of those observations, both of which must be faithful, accurate, and useful in predicting new observations. To be such, the observations have to be of the right kind, the locations and timing where and when they were taken should be ideal, and the measurement error should be negligible. The physics of our explanations (of motion, of heat, and so on) must be accurate; the algorithms used to solve and approximate the physics inside software must be good; chaos on the time scale of predictions must be unimportant; and there must be no experimenter effect. None of these categories is certain. As an exercise, bounds are estimated for their certainty and for the unconditional certainty in ill effects. Doing so shows that we are more certain than we should be.

My conclusions (which will make more sense, obviously, after you have read the paper) are

Attempting to quantify, to the level of precision given, the uncertainties in effects caused by global warming, particularly through the use of mathematical equations that imply a level of certainty which is not felt, can lead to charges that I have done nothing more than build an AGW version of the infamous Drake equation (Drake and Sobel 1992). I would not dispute that argument. I will claim that the estimates I arrived at are at least within an order of magnitude of the actual uncertainties. For example, the probability that AGW is true might not be 0.8, but it is certainly higher than 0.08.

The equations given, then, are not meant to be authoritative or complete. Their purpose is to concentrate attention on what exactly is being asked. It is too easy to conflate questions of what will happen if AGW is true with questions of whether AGW is true. And it is just as simple to confuse questions of the veracity and accuracy of observations with questions of the accuracy of the models or their components. People who work on a particular component are often aware of its boundaries and restrictions, and so are more willing to reduce the probability that this component is an adequate description of the physical world, but they are usually likely to assume that the areas with which they do not have daily familiarity are more certain than they are. Ideally, experts in each of the areas I have listed should supply a measure of uncertainty for that area alone. I would welcome a debate and discussion on this topic.

I also would not make the claim that I have accurately listed all the avenues where uncertainty arises (for example, I did not even touch on the uncertainty inherent in classical statistical models). But the ones I did list are relevant, though not necessarily of equal importance. We do have uncertainty in the observations we make and we do have uncertainty in the models of those observations. At the very least, we know empirically that we cannot predict the future perfectly. Further, the claims made about global warming’s effects are also uncertain. Taken together, then, it is indisputable that we are less certain that both global warming and its claimed effects are true than we are of either AGW or its effects alone.
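To see why that last sentence must be true, with illustrative numbers of my own (not the paper’s): suppose Pr( AGW | evidence ) = 0.8 and Pr( effects | AGW, evidence ) = 0.9. Then

Pr( AGW & effects | evidence ) = Pr( effects | AGW, evidence ) × Pr( AGW | evidence ) = 0.9 × 0.8 = 0.72

which is smaller than either probability alone. Conjoining uncertain claims can only lower, never raise, our certainty.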

Thanks everybody.

Homework #1: Answer part II

In part I, we learned that all surveys, and in fact all statistical models, are valid only conditionally on some population (or information). We went into nauseating detail about the conditional information in our own survey of people who wear thinking suppression devices (TSDs; see the original posts), so I’ll skip repeating it here.

Today, we look at the data and ignore all other questions. The first matter we have to understand is: what are probability models and statistics for? Although we use the data we just observed to fit these models, they are not for that data. We do not need to ask probability questions of the data we just observed; there is no need to. If we want the probability that all the people in our sample wore TSDs, we just look and see whether all wore them or not. The probability is 0 or 1, and is 0 or 1 for any other question we can ask about the observed data (e.g. what is the probability that half or more wore them? again, 0 or 1).

Thus, statistics are useful only for making inferences about unobserved data: usually future data, but really any data unknown to you. If you want to make statements about, or quantify uncertainty in, data you have not yet seen, then you need probability models. Some would say statistics are useful for making inferences about unobserved and unobservable parameters, but I’ll try to dissuade you from that opinion in this essay. We have to start, however, with describing what these parameters are and why so much attention is devoted to them.

Before we do, we have to return to our question, which was roughly phrased in English as “How many people wear TSDs?”, and turn it into a mathematical question. We do this by forming a probability model for the English question. If you’ve read some of my earlier posts, you might recall that we have an essentially infinite choice of models we could use. What we would like is to limit our choice to a few or, best of all, to logically deduce the exact model given some set of information that we believe true.

Here is one such statement: M1 = “The probability that somebody wears a TSD (at the locations and times specified for our exactly defined population subset) is fixed, or constant, and knowing whether one person wears a TSD gives us no information about whether any other person wears a TSD.” (Whenever you see M1, substitute the sentence “The probability…”)

Is M1 true? Almost certainly not. For example, if two people walk by our observation spot together, say a couple, it might be less likely for either to wear a TSD than it is for two separate people. Also, people (not all people, anyway) aren’t going to wear a TSD equally often at all hours, nor equally often at all locations within our subset.

But let’s suppose that M1 is true anyway. Why? Because this is what everybody else does in similar situations, which they do because it allows them to write a simple and familiar probability model for the number of people x out of n wearing TSDs. Here is the model for the data we just observed:

Pr( x = k | n, θ, M1)

This is actually just a script or shorthand for the model, which is some mathematical equation (binomial distribution), and not of real interest; however it is useful to learn how to read the script. From left to right, it is the probability that the number of people x equals some number k given we know n, something called θ, and M1 is true. This is the mathematical way of writing the English question.
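For those who like to see such things computed, here is a minimal sketch in Python (my addition, not part of the original homework) showing what the script unpacks to: under M1, x follows a binomial distribution. The value of θ below is made up purely for illustration; the whole problem is that we do not know it.

    # A sketch of Pr(x = k | n, theta, M1). Under M1, x is binomial.
    # The theta here is illustrative only; in real life it is unknown.
    from scipy.stats import binom

    n = 635      # number of people observed
    theta = 0.1  # made-up value for the unobservable parameter
    k = 58       # ask for the probability that exactly 58 wore a TSD

    print(binom.pmf(k, n, theta))  # Pr(x = 58 | n, theta, M1)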

The variable x is more shorthand meaning “number of people who wore a TSD”. Before we did our experiment, we did not know the value of x, so we say it was “random.” After we see the data we know k, the actual number of people out of the n people we saw who did wear a TSD. OK so far? We already understand what M1 is, so all that is left to explain is θ. What is it?

It is a parameter, which, if you recall previous posts, is an unobservable, unmeasurable number, but one which is necessary in order to formulate our probability model. Some people incorrectly call θ “the probability that a single person wears a TSD.” This is false, and is an example of the atrocious and confusing terminology so often used in statistics (look in any introductory text and you’ll see what I mean). θ, while giving the appearance of one, is no sort of probability at all. It would be a probability if we knew its value. But we do not: and if we did know it, we never would have bothered collecting data in the first place! Now, look carefully. θ is written on the right-hand side of the “|”, which is where we put all the stuff that we believe we know, so again it looks as if we are saying we know θ, so it looks like a probability.

But this is because the model is incomplete. Why? Remember that we don’t really need to model the observed data if that is all we are interested in. So the model we have written is only part of a model for future data. There are several pieces missing. Those pieces are: another probability model for the value of θ, a model for just the observed data, a model for the uncertainty in θ given the observed data, and the data model itself again, all of which are mathematically manipulated to produce this creature

Pr( x_new = k_new | n_new, x_old, n_old, M1 )

which is the true probability model for new data given what we observed with the old data. There is no way that I can even hope to explain this new model without resorting to some heavy mathematics. This is in part why classical statistics just stops with the fragmentary model, because it’s easier. In that tradition, people create a (non-verifiable) point estimate of θ, which means just plugging some value for θ into the probability model fragment, and then call themselves done.

Well, almost done. Good statisticians will give you some measure of uncertainty of the guess of θ, some plus or minus interval. (If you haven’t already, go back and read the post “It depends on what the meaning of mean means.”) The classical estimate used for θ is just the computed mean, the average of the past data. So the plus and minus interval will only be for the guess of the mean. In other words, just as it was in regression models, it will be too narrow and people will be overconfident when predicting new data.

All this is very confusing, so now—finally!—we can return to the data collected by those folks who turned in their homework and work through some examples.

There were 6 separate collections, which I’ll lump together with the clear knowledge that this violates the limits of our population subset (two samples were taken in foreign countries, one China and one New Jersey). This gave x = 58 and n = 635.

The traditional estimate of θ is 58/635 = 0.091, with a plus-minus interval of 0.07 to 0.12. Well, so what? Remember that our goal is to estimate the number of people who wear TSDs, so this classical estimate of θ is not of much use.
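If you want to check the arithmetic, here is a sketch (my reconstruction; I assume the familiar normal-approximation interval, which lands close to the interval just quoted, though the exact endpoints depend on which flavor of interval is used):

    # Classical point estimate of theta with the usual normal-approximation
    # plus-minus interval. An assumed reconstruction, not necessarily the
    # exact method behind the numbers in the post.
    from math import sqrt

    x, n = 58, 635
    theta_hat = x / n                           # 0.0913...
    se = sqrt(theta_hat * (1 - theta_hat) / n)  # standard error
    lo, hi = theta_hat - 1.96 * se, theta_hat + 1.96 * se
    print(theta_hat, lo, hi)  # about 0.091, 0.069, 0.114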

If we just plug in the best estimate of θ to estimate, out of 300 million (the approximate population of the U.S.A.), how many wear TSDs, we get a guess of 27.4 million with a plus-minus window of 27.39 to 27.41 million, which is a pretty tight guess! That interval is only about 20,000 people wide. This is being pretty sure of ourselves, isn’t it?

If we use the modern estimate, we get a guess of 25.5 million, with a plus-minus window of about 19.3 to 31.7 million, which is much wider and hence more realistic. The length of this interval is 12.4 million! Why is this interval so much larger? It’s because we took full account of our uncertainty in the guess of θ, which the classical plug-in guess did not (we essentially recompute a new guess for every possible value of θ and weight them by the probability that θ equals each value: but that takes some math).
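In symbols, the weighting looks like this (a sketch; the integral is the “some math” just mentioned):

Pr( x_new = k_new | n_new, x_old, n_old, M1 ) = ∫ Pr( x_new = k_new | n_new, θ, M1 ) Pr( θ | x_old, n_old, M1 ) dθ

Every possible value of θ contributes its own binomial guess, weighted by how probable that value is given the old data.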

Perhaps these numbers are too large to think about easily, so let’s do another example and ask how many people riding a car on the F train wear a TSD. The car at rush hour holds, say, 80 people. The classical guess is 7, with +/- of 3 to 13. The modern guess is also 7 with +/- of 2 to 12. Much closer to each other, right?

Well, how about all the students in a typical college? There might be about 20,000 students. The classical guess is 1830 with +/- 1750 to 1910. The modern is 1700 with +/- 1280 to 2120.

We begin to see a pattern. As the number of new people increases, the modern guess becomes a little lower than the classical one, and the uncertainty in the modern guess is realistically much larger. This begins to explain, however, why so many people are happy enough with the classical guesses: many samples of interest will be somewhat small, so all the extra work that goes into computing the modern estimate doesn’t seem worth it.

Unfortunately, that is only true because we had such a large initial data collection. If, for example, we only had Steve Hempell’s data, which was x = 1 and n = 41, and we were still interested in the F train, then the classical guess is 2 with +/- 0 to 5; the modern guess is 4 with +/- 0 to 13! The difference between the two methods is again large enough to matter.
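Here is a sketch of both calculations (again my reconstruction: I assume M1 and, for the modern version, a uniform prior on θ, which makes the predictive distribution a beta-binomial; the intervals land near, though not exactly on, the numbers quoted above):

    # Classical plug-in predictive vs. the "modern" posterior predictive
    # for n_new people. Assumes M1 and a uniform Beta(1, 1) prior on theta
    # (my assumptions), so the numbers land near, not exactly on, those
    # quoted in the post.
    from scipy.stats import binom, betabinom

    def compare(x, n, n_new, label):
        theta_hat = x / n
        # Classical: pretend theta is known to equal its point estimate.
        c_lo, c_hi = binom.interval(0.95, n_new, theta_hat)
        # Modern: average the binomial over the posterior for theta,
        # Beta(x + 1, n - x + 1), giving a beta-binomial predictive.
        m_lo, m_hi = betabinom.interval(0.95, n_new, x + 1, n - x + 1)
        print(label, "classical:", (c_lo, c_hi), "modern:", (m_lo, m_hi))

    compare(58, 635, 80, "F train, all data")    # about 3-13 vs 2-13
    compare(1, 41, 80, "F train, Hempell only")  # about 0-5 vs 0-12

Notice how the modern interval widens dramatically when the initial sample is small: that extra width is honest uncertainty about θ which the plug-in approach throws away.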

Once again, we have done a huge amount of work for a very, very simple problem. I hope you have read this far, but I would not have blamed you if you hadn’t because, I am very sorry to say, we are not done yet. Everybody who remembers M1 raise their hands? Not too many. Yes, all these guesses were conditional on M1 being true. What if it isn’t? At the least, it means that the guesses we made are off a little and that we must widen our plus-minus intervals to take into account our uncertainty in the correctness of our model.

Which I won’t do because I am, and you are probably, too fatigued. This is a very simple problem, like I said. Imagine problems with even more complicated statistics where uncertainty comes at you from every direction. There the differences between the classical and modern way are even more apparent. Here is the second answer for our homework:

  1. Too many people are far too certain about too many things

Homework #1: Answer part I

A couple of days ago I gave out homework. I asked my loyal readers to count how many people walked by them and to keep track of how many of those people wore a thinking-suppression device, like an iPod, etc. Like every teacher, my heart soared like a hawk when some of the students actually completed the task. Visit the original thread’s comments to see the “raw” data.

The project was obviously to recreate a survey of the kind which we see daily: e.g. What percent of Americans favor a carbon tax? What fraction of the voters want “change”? How many prefer Brand A? And so on.

Here is how a newspaper might present the results from our survey:

More consumers are endangering their hearing than ever before, according to new research by WMBriggs.com. Over 20% of consumers now never leave the house without an iPod or iPod-like device.

“Music is very popular,” said Dr Briggs, “And now it’s easier than ever before to listen to it.” This might help explain the rise in tinnitus reports, according to some sources. Dr So Undzo of the Send Us Money to Battle Tinnitus Foundation was quoted as saying, “Blah blah blah.” He also said, “Blah blah blah blah blah.” &c. &c.

Despite its farcical nature, this “news” report is no different from the dozens that show up on TV, the radio, and everywhere else. In order to tell a newsworthy story, it extrapolates wildly from the data at hand; it gives you no idea who collected the original data, or why (for money? for notoriety?), or how (by observation? by interview?), or of any of the statistical methods used to manipulate the data. In short: it is very nearly worthless. The only advantage a story like this has is that it can be written before any data is actually taken, saving time and money for the news organization issuing it.

But you already knew all that. So let’s talk about the real problem with statistics. Beware, however, that some of this is dull labor, requiring attention to detail, and probably too much work for too little content. But that’s how they get you: they hope you pass by quickly and say “close enough.”

We had five to six responses to the homework so far, but we’ll start with the first one, from Steve Hempell. He saw n = 41 people and counted x = 1 wearing a thinking-suppression device (TSD). He sat on a bench in a small town during spring break to watch citizens pass by.

The first thing we need to have securely in our minds is what question we want to answer with this data. The obvious one is “How many people regularly wear a TSD?” This innocent query begins our troubles.

What do we mean by “people”? All people? There are a little over 6 billion humans now. Do we want an estimate from that group? What about historical, i.e. dead, people, or those yet to be born? How far into the future or back into the past do we want to go? Are we talking of people “now”? Maybe, but we still have to define “now”: does it mean in a year or two, or just the day the survey was taken, or a few days into the future? Trivial details? Well, we’ll see. Let’s settle on the week after the survey was taken, so that our question becomes “How many people in the week after our survey was taken regularly wear a TSD?”

We’re still not done with “people”, for we haven’t decided whether it means all humans or some subset. The most common subset is “U.S. Americans” (as Miss Teen South Carolina would have phrased it). But all U.S. citizens? Presumably, infants do not wear TSDs, nor do many in nursing homes or in other incarcerations. Were infants even counted in the survey? Older people in general, experience tells us, do not often wear TSDs. As I think about this question, I find myself unable to rigorously quantify the subset of interest. If I say “All U.S. citizens” then my eventual estimate would probably be too high, given this small sample. If I say “U.S. citizens between the ages of 15 and 55” then I might do better, but the survey is of less interest.

To pick something concrete, we’ll go with “All U.S. citizens” which modifies our question to “How many U.S. citizens in the week after our survey was taken regularly wear a TSD?”

Sigh. Not done yet. We still have to tackle “regularly” and the bigger question of whether or not our sample fairly represents the population we have in mind, and that would still leave the largest, most error-prone area: what exactly is a TSD? iPods were identified, but how about cell phones or Blackberries, and on and on? Frankly, however, I am bored.

Like I said, though, boredom is the point. No one wants to invest as much time as we have on this simple survey in each survey they meet. No matter how concrete the appropriate population in a survey seems to you, it can mean something entirely different to somebody else; each person can take away their own definition. This ambiguity, while frustrating to me, is gold to marketers, pollsters, and “researchers.” So vaguely worded are surveys that readers can supply any meaning they want to the results. Although they are usually not consciously aware of it, people read surveys like they read horoscopes or psychic readings: the results always seem accurate, or to confirm people’s worst fears or hopes.

An objection might have occurred to you. “Sure, these complex surveys are ambiguous. But there are simple polls that are easy to understand. The best example is ‘Who will you vote for, Candidate A or B?’ Not much to confuse there.”

You mean, given that a poll is a prediction of ballot results, besides trusting that the pollster found a population representative of the people who will actually vote on election day? That no event between the time the poll was taken and the election will cause people to change their minds? And—pay attention here—that nobody lied to the pollster?

“Oh, too few people lie to make a difference.” Yeah? Well, I live in New York City and I like to tell the story of the exit polls taken for the presidential race between Kerry and Bush. Those polls had Kerry ahead by about 10 to 1, an unsurprising result, and one which confirmed people’s prior beliefs. The pollsters asked tons of voters and were spread throughout the city in an attempt to obtain the most representative sample they could. Not everybody would answer them, of course, which is still another problem, and one that is impossible to tackle.

But when the actual results were tallied, Kerry won by a margin of a little under 5 to 1. Sure, he still won, but the real shocker is that so many people lied to the pollsters. And why? Well, this is New York City, and in Manhattan particularly, you just cannot easily admit to being a Bush supporter (then or now). At the least, doing so invites ridicule, and who needs that? Simpler just to lie and say, “I voted for Kerry.”

We have done a lot and we still haven’t answered the question of how to handle the actual data!

Here are the answers to part I of the homework.

  1. The applicability of all surveys is conditional on a population which must be, though rarely is, rigorously defined.
  2. All surveys have significant measurement error that has nothing to do with the actual numerical data.
  3. Because of this, people are too certain when reading or interpreting the results of surveys.

In part II, if we are not already worn down, we will learn how to—finally!—handle the data.
