Earthquake Ghouls, Colleges As High Schools, The Robinson DeFazio Controversy, More

J-school Ghouls

On the radio, a female American reporter in Tokyo, unnecessarily breathless and somewhat disappointed. “I can only imagine if it were here, it would have been much worse and the, uh, the count would have been much higher.” Body count, of course.

Colleges Now Offer High School Degrees

From the—yes, really—New York Times, a story on how CUNY schools are having to teach college students what they should have learned in high school. The “tide of remedial students has now swelled so large that the university’s six community colleges — like other two-year schools across the country — are having to rethink what and how they teach, even as they reel from steep cuts in state and local aid.”

Isn’t this the same paper that is siding with the Wisconsin (and New York) union teachers, saying that these teachers need more money for the fine job they are doing? Thanks to long-time reader and contributor Ari Scwartz for bringing this to our attention.

Detroit Invaded By Hipsters

Under the We’re-not-sure-this-is-a-good-thing category, Detroit is being taken over by t-shirted, expensively shod hipsters. And why not, when you can buy a perfectly serviceable house for pocket change. Videos here.

Unintended Consequences of Obamacare

Who could have ever guessed that the you-can’t-know-what’s-in-it-until-you-pass-it Obamacare law would have provisions which actually cause health care costs to increase? Unprecedented. The Wall Street Journal is reporting on the comedic situation where parents are going to the doctor to ask for prescriptions for aspirin and other over-the-counter remedies. Why? The Obamacare law says that health care expenses drawn from flexible spending accounts can only be authorized via a doctor’s script. Result: costs increase.

Stuff Academics Like

Just when you thought the postmodern academic culture wars were over, we have a new site documenting the oddities of academics. In the same vein as Stuff White People Like, Stuff Academics Like posts strange things ivory-tower inhabitants find important. Much fun can be had in their “Guess the Fake Title” game, where they display a list of titles from genuine peer-reviewed papers, one of which is fake. A brief excerpt:

Exemplarity – The Real: Some Problems of a Realist Exemplarity Exposed through the Channel of an Aesthetic Autopoeisis [conference paper]

Tragic Closure: A Genealogy of Creative-Destructive Desire [conference paper]

The True History of His Beard: Joaquin Phoenix and the Boundaries of Cinéma Vérité [conference paper]

Trying the Law: Critical Prosecutions of the Exception [conference paper]

Thinking the Pure Unformed [conference paper]

Alan Ball’s True Blood Antics: Queering the Southern Vampire [conference paper]

Antagonistic Corpo-Real-ities [conference paper]

This list is partial, so it is unknown which, if any, is fake. Be sure not to miss the link to the Write Your Own Academic Sentence site. My entry: “The epistemology of pop culture replays (in parodic form) the ideology of the nation-state.”

Democrat Peter DeFazio Meddles With Opponent’s Kids?

Many sites (one link) are reporting on the Oregon House race of MoveOn.org-supported Democrat Peter DeFazio (leader of the House “progressive” caucus) versus Republican Art Robinson (ex-professor of chemistry and climate “skeptic”, a no-no in his corner of Oregon). The details are not clear, but Robinson accused DeFazio of conspiring to have three of Robinson’s children (he has six) booted from Oregon State University’s graduate school. Robinson writes of one of his sons:

Thus, Democrat activist David Hamby and militant feminist and chairman of the nuclear engineering department Kathryn Higley are expelling four-year Ph.D. student Joshua Robinson from OSU at the end of the current academic quarter and turning over the prompt neutron activation analysis facility Joshua built for his thesis work and all of his work in progress to Higley’s husband, Steven Reese. Reese, an instructor in the department, has stated that he will use these things for his own professional gain. Joshua’s apparatus, which he built and added to the OSU nuclear reactor with the guidance and ideas of his mentor, Michael Hartman, earned Joshua the award for best Masters of Nuclear Engineering thesis at OSU and has been widely complimented by scientists at prominent U.S. nuclear facilities.

Robinson lost to DeFazio. Oregon’s Gazette Times reports that OSU said there was “no factual basis” for Robinson’s claims. The paper also differs on the details, saying Robinson claimed not that his kids were being kicked out, but that two, not three, were “given unfair deadlines to complete their Ph.D. projects.” Which is a very different thing.

Robinson also alleges that OSU has “ostracized” faculty member Jack Higginbotham (nuclear engineering) for telling Robinson of the conspiracy. OSU was forced to issue a press release which said Robinson’s claims are “baseless and without merit.”

Anybody have more details on this?

Update Somebody linked to the low-flow toilet story with this must-see video of Rand Paul spanking the Obama administration’s “Ms. Hogan” on what “pro-choice” means. Busybody!

The Sorites Paradox Isn’t

Clearly, a guy with no hair on his head is bald. But so is a guy with just one—if and only if we define bald as “a man with little or no hair.” If the guy has one hair and we define bald to mean “a man with no hair” then the man with one hair is not bald. So let us use “a man with little or no hair” as our definition and see where that gets us.

We assume that if a man with one hair is bald (by our definition), then so is a man with just two hairs. And if a man with two hairs is bald, then so is a man with three. We can expand this: if a man has N hairs and is bald, then a man with N + 1 hairs is also bald. Thus (eventually) a man with a million (say) hairs on his head is bald, too. Which is absurd. Any man with such a mane is clearly fully flocked. Yet our derivation is error-free.

This is the Sorites, an ancient puzzle, also given with respect to grains and heaps of sand (the word is derived from the Greek for heaped up). More than a few writers on this paradox, after reaching the gotcha!, now say something like the following:

“We seem to have reached the point where we say that a man with, say, 5,000 hairs is ‘bald’, but one with just one more tiny, wee hair is not. This is nuts. Nobody can see the difference between 5,000 and 5,001 hairs. Something must be wrong with our system of logic.”

The man who says this, or anything like it, makes (at least) two mistakes. I’ve already given a hint of the first error above. There is nothing wrong with logic, but there is with the definition of bald. That word, when used in this exceedingly formal logical argument, itself becomes a formal creature. It is no longer the bald as used colloquially; it is instead like the X used in algebra. It is an abstract thing; it no longer means real baldness on real men. It means logical X-ness on fictional men.

Indeed, rewrite the Sorites to remove the pseudo-word bald and replace it with X. X now means a man with fewer than Y hairs. If the man with no hairs is X, then so is the man with one hair, and so forth. Now, at some point we either bump up against Y, in which case the man is no longer X, or Y is the limit and the man is always X except at the limit.
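To see how toothless the argument becomes once the definition is explicit, here is a trivial sketch (the threshold Y = 5,000 is invented purely for illustration):

```python
Y = 5000  # arbitrary, invented threshold: "X" means "a man with fewer than Y hairs"

def is_X(n_hairs: int) -> bool:
    """The formal predicate: true exactly when n_hairs < Y."""
    return n_hairs < Y

# Walk the induction one hair at a time. Nothing mysterious happens: the
# predicate simply flips from True to False exactly when we bump up against Y.
for n in (0, 1, 4999, 5000, 1_000_000):
    print(n, is_X(n))
# 0 True, 1 True, 4999 True, 5000 False, 1000000 False
```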

If I were to have originally written the Sorites in this algebraic form—with just Xs and Ys—there never would have been a gotcha!, we never would have questioned the foundations of logic, there would have been no paradox. That there felt like one when we use bald instead of X can only mean that we are silently augmenting our argument with hidden premises (which define bald). We figure that because these premises are unstated, or do not appear in print, they are not truly there.

One hidden premise is that the word bald to me, and to me right now, means a man with a certain shape of head and a certain lack of hair. I need not know how many hairs this man has, but I will make the judgment bald or not by what I see. Of course, we may, after my judgment, count the man’s hair and thus reach a quantification. My premises fluctuate: they are different for different times and men; or, for the same men, they change depending on what these men wear, the properties of the light, my relations to these men, or even how much I have drunk.

My premises are almost certainly different than yours. I may say bald when you do not. That our behavior is not constant or that our judgments do not agree is meaningless. Neither is it relevant—and here is the second mistake—that I cannot articulate my premises. All that I can do is to say bald or not. Quantification, as I said, can always be had after the fact. But all this will tell us, in any individual case, is that the man now in front of me has not yet reached Y, or that he has exceeded it. We will not be able to deduce Y (unless the man is willing to undergo experimentation; however, my premises might change as we add or subtract hair from our recruit).

Unacknowledged, hidden premises are the generator of many “paradoxes.” The most relevant to statistics are in (faulty) criticisms of Laplace’s Rule of Succession, which we can attack another day.

Group Differences: An Exceedingly Brief Introduction To Bayesian Predictive Inference

Read the first entry in this series. All of what follows will appear ridiculously obvious to those who have had no statistical training. Those who have must struggle.

In a recent study, a greater fraction of Whites than Blacks were found to have a trait thought desirable (or undesirable, or a trait thought worth tracking). Something caused this disparity to occur. It cannot be that nothing caused it to occur. “Chance” or “randomness” are not operative agents and thus cannot cause anything to occur. It might be that we cannot know what caused it to occur, or that we guess incorrectly about what caused it to occur. But, I repeat, something caused this difference.

If you like, substitute “Pill A” and “Pill B”, or “Study 1” and “Study 2”, etc. for White and Black.

I observed a greater fraction of Whites than Blacks possessing some trait. Given this observation, what is the probability that a greater fraction of Whites than Blacks in my study possessed this trait? It is 1, or 100%. If you do not believe this, you might be a frequentist.

What is the probability that the proportion of trait-possessing Whites is twice—or thrice, or whatever—as high as Blacks in my study? It is either 1 or 0, depending on whether the proportion of trait-possessing Whites is twice (or whatever) as high as Blacks. All I have to do is look. No models are needed, no bizarre concepts of “statistical significance.” All we need do is count. We are done: any empirical question we have about the difference (or similarities) of Whites and Blacks in our study has probability 1 or 0. It is as simple as that.

Now suppose that we will see a certain number of Whites we have not seen before; likewise Blacks (they could even be the same Whites and Blacks if we believed the thing or things that caused the trait was non-constant). We have not yet measured this new group of Whites and Blacks so that we do not know whether a greater proportion of Whites than Blacks will be found to possess the trait. Intuition suggests that since we have already observed a group in which a greater proportion of Whites than Blacks possessed the trait, the new group will display the same disparity.

We can quantify this intuition with a model. There are many—many—to choose from. The choice of which one to use is ours. All the results derived from it assume that the model we have chosen is true.

One model simply says, “In any group of Whites and Blacks, a greater proportion of Whites than Blacks will be found to possess the trait.” Conditional on this model—that is, assuming this model is true—the probability there will be a greater proportion of trait-possessing Whites than Blacks in our new group is 1, or 100%. This simple model only makes a statement about Whites possessing the trait in higher frequency than Blacks. Thus, we cannot say what is the probability the proportion of trait-possessing Whites is twice (or whatever) as high as that of Blacks in our new group.

Some models do not let you answer all possible questions.

We could create a model which dictates the probability of each possible multiple (from some set) of the White fraction relative to the Black fraction (e.g. twice, thrice, 1/2, 1/3, etc.), and then use this model to make probability statements about our new group. Since that would be difficult (and somewhat capricious), we could instead parameterize the differences in proportion.

We could use this model to answer the question, “Given this model is true, and given the observations we have made thus far, what is the probability that the parameters take a certain value?” This question is not terribly interesting and it does not answer what we really want to know, which is about the differences between Whites and Blacks in our new group. Why ask about some unobservable parameter? (The right answer is not, “Because everybody else does.”)

But given a fixed value of the parameters, we could answer the question, “Given this parameterized model is true, and given a fixed value of its parameters, and given the observations we have made thus far, what is the probability a greater fraction of Whites than Blacks will possess the trait?” This is almost what we want to know, but not quite, because it fixes the values of the unobservable parameters.

Simple mathematics allows us to answer this question for each possible value of the parameters, and then to weight the answers by the probability that the parameters take those values (these weights come from the parameter posterior distribution, which is conditional on the model being true and on the observations we have made thus far). The final number is the probability that the fraction of trait-possessing Whites is larger than that of Blacks in our new group. Which is what we wanted to know. (This is called the posterior predictive distribution.)
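As a minimal sketch of this weighting, assuming a beta-binomial model with flat priors and wholly invented counts (none of these numbers come from any real study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented observed counts (hypothetical, not from any real study).
n_white, k_white = 100, 40   # 40 of 100 Whites observed with the trait
n_black, k_black = 100, 25   # 25 of 100 Blacks observed with the trait

# Assumed model: each group's trait probability gets a flat Beta(1, 1) prior,
# so the parameter posterior is Beta(1 + k, 1 + n - k).
theta_w = rng.beta(1 + k_white, 1 + n_white - k_white, 100_000)
theta_b = rng.beta(1 + k_black, 1 + n_black - k_black, 100_000)

# Posterior predictive for a NEW group of 50 Whites and 50 Blacks: draw the
# parameters, then draw new counts given those parameters. This is the
# "weighting" step: parameter uncertainty flows into the prediction.
new_w = rng.binomial(50, theta_w)
new_b = rng.binomial(50, theta_b)

# The question we actually care about: the probability that, in the new
# group, a greater fraction of Whites than Blacks possesses the trait.
print((new_w / 50 > new_b / 50).mean())
```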

“Statistical significance” never once enters into this or any real decision. When you hear this term, it is always a dodge. It is an answer to a question nobody asks and nobody wants to know. It always assumes, as we do, the truth of a model (though it remains silent about this, hoping by this silence to convince you that no other models are possible). It tells us the probabilities of events that did not happen, and asks us to make decisions based on the probabilities of these never-happened events. If you want to be mischievous, ask a frequentist why this makes sense. Homework: Locate Jeffreys’s relevant quote.

See the first in this series to discover what to do if we suspect our model is not true.

Iowahawk Does Statistics—Properly!

Thanks to the many readers who sent in this tip.

The Iowahawk, a.k.a. David Burge, the beloved assassin of pomposity and pretension, has taken the often hysterical Paul “Global Warming Skeptics are Traitors” Krugman to task over education statistics.

It seems Krugman has taken the side of the cowardly politicians (all Democrats1) in Wisconsin. You know, the ones who scurried away to Illinois (!) when they realized they would lose a vote. We mustn’t be too harsh on these politicians, for their actions were instinctual, motivated by the same survival impulse that drives cockroaches to sprint for cover when the light comes on.

Incidentally, just like those nasty bugs, self-serving politicians cannot be eradicated by force. Poison is useless. Stomp on one and two more instantly appear. The only solution is to cut off their food supply: do not vote for them.

Anyway, Krugman, using sources known only to himself, “proved” that education outcomes were better in unionized Wisconsin than they were in non-unionized Texas. Thus, we should accede to the demands of the Wisconsin activists who dictate that more money should be taken from the working people of that State and given to them.

The only thing Krugman did right was to take on a worthy target. Too bad everything he said was false or misleading. Burge found a hilarious admission from the Times’s ombudsman Daniel Okrent (click and read all Okrent has to say):

Op-Ed columnist Paul Krugman has the disturbing habit of shaping, slicing and selectively citing numbers in a fashion that pleases his acolytes but leaves him open to substantive assaults.

Not a bad euphemism for lying, that. One wonders how the lachrymose Krugman (“O! The planet!”) responded to his colleague’s disapprobation. Okrent says, “I didn’t give Krugman…the chance to respond before writing the last two paragraphs. I decided to impersonate an opinion columnist.”

In his first post, Iowahawk did what should be done: he found the raw, relevant numbers that best compared educational success for Texas and Wisconsin. The most obvious bit of detective work—well, obvious to Burge but not to Krugman—was to recognize that the racial makeup of the two states was different. Whites, Blacks, and Hispanics do not live in anywhere near the same proportions in these states, nor do these groups score the same on standardized tests.

There are many more Whites in Wisconsin and Whites tend to score better on standardized tests than do members of the other groups. Thus, raw comparisons between the states will tend to show Wisconsin out front, which is misleading—a fancy word to say wrong. It was these wrong numbers Krugman used.

But if we use the proper numbers, broken down by race, as Burge did, we find that in each year, in nearly all subjects, in nearly every pertinent measure, Texas trumps Wisconsin. Using Krugman’s logic, we should thus fire every union teacher in Wisconsin and hire non-union ones in their place.

Wait a second! How can Wisconsin do better overall yet Texas win in every subcategory? Isn’t the overall measure just a sum of the subcategories? Texas should be the winner overall, shouldn’t it?

It was in his follow-up post that Burge made us most proud, offering an excellent definition of Simpson’s Paradox, and showing how it manifested itself in the education statistics. Simpson’s Paradox is often found in disparity or inequality studies. Indeed, it is found so often that it is practically criminal not to check for it. It is not just criminal, but is nigh treasonous. And that means Krugman is a traitor! A traitor, do you hear me! Ach! Sputter! Arr…..

Whew. Sorry about that. I don’t know what came over me. My only excuse is to say that I spent too much time reading the New York Times today.

Back to the point: Simpson’s Paradox is found when subcategories with different proportions are summed (read the material at the link for a full explanation). Since the racial makeup of the two states is so different, Simpson’s Paradox is guaranteed.
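Here is a toy illustration with invented numbers (these are not the actual Texas/Wisconsin figures): one state wins within every subgroup yet loses the aggregate, purely because the subgroups appear in different proportions.

```python
# Invented pass rates: "Texas" beats "Wisconsin" within every subgroup.
pass_rates = {
    "Texas":     {"White": 0.82, "Black": 0.62, "Hispanic": 0.60},
    "Wisconsin": {"White": 0.80, "Black": 0.58, "Hispanic": 0.56},
}
# Invented demographic shares: the subgroups come in very different proportions.
shares = {
    "Texas":     {"White": 0.40, "Black": 0.15, "Hispanic": 0.45},
    "Wisconsin": {"White": 0.85, "Black": 0.10, "Hispanic": 0.05},
}

# The aggregate is just the share-weighted sum of the subgroup rates.
for state in pass_rates:
    overall = sum(shares[state][g] * pass_rates[state][g] for g in pass_rates[state])
    print(state, round(overall, 3))
# Texas 0.691, Wisconsin 0.766: the aggregate ranking reverses every subgroup ranking.
```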

Burge also tells us the difference between the ACT and SAT, why that difference matters, and why simple state-to-state comparisons of these tests are difficult.

The only point at which Burge and I differ is his use of the term “statistical significance.” I say that it is evil, misleading, and just plain wrong to ever use. However, I thank Burge for using it, because it provides me the perfect segue for tomorrow’s column. Don’t miss it!

———————————————-

Update 1 I do not mean to imply that no or few Republican politicians behave like cockroaches; clearly, many do. I do mean to say that the actions of the Wisconsin and Indiana Democrat politicians are cowardly and bug-like. Their behavior is not akin to an outnumbered army wisely retreating so that it may fight again another day, for these politicians have already been vanquished and they know it. They are instead acting petulantly, like sore losers, cry babies, cockroaches. I meant only to speak of politicians and not citizens, and therefore apologize if any thought I was talking about them (unless you are a politician stealing towels from an Illinois Red Roof Inn, in which case I did mean you).

Global Average Temperature: An Exceedingly Brief Introduction To Bayesian Predictive Inference

Update This post is mandatory reading for those discussing global average temperature.

I mean it: exceedingly brief and given only with respect to a univariate time series, such as an operationally defined global average temperature (GAT). Let him that readeth understand.

GAT by year are observed (here, assumed without error). If we want to know the probability that the GAT in 1980 was lower than in 2010, then all we have to do is look. If the GAT in 1980 was less than the GAT in 2010, then the probability that the GAT in 1980 was lower than in 2010 is 1, or 100%. If you do not believe this, you are a frequentist.

Similarly, if you ask what is the probability that the GAT in the 2000s (2001-2010) was higher than in the 1940s (1941-1950), then all you have to do is (1) settle on an operational definition of higher, and (2) just look. One such operational definition compares the decades year by year: the warmer decade is the one that wins more of these comparisons. If the warmer years of the 2000s outnumber the warmer years of the 1940s, then the probability that the 2000s were warmer than the 1940s is 1, or 100%.

There is no model needed to answer these or similar simple questions.

If you want to ask what is the probability that the GAT increased by at least X degrees C per year from 1900 to 2000, then all you have to do is look. If the GAT increased by at least X degrees C per year from 1900 to 2000, then the probability that the GAT increased by at least X degrees C per year from 1900 to 2000 is 1, or 100%. There is no need, none whatsoever, to ask whether the observed increase of at least X degrees C per year was “statistically significant.” The term is without meaning and devoid of interest.

At this writing, the year is 2011, but the year is incomplete. I have observed GATs from at least 1900 until 2010. I want to know the probability that the GAT in 2011 (when complete) will be larger than the GAT (as measured) in 2010. I cannot observe this now, but I can still compute the probability. Here is how.

I must propose a model which relates the GAT to time. The model can be fixed, meaning it assumes that the GAT increases X degrees C a year: by which I mean it does not increase by X - 0.1, nor by X + 0.3, nor by any other number besides X. In my model, the predicted GAT in 2011 will be the GAT as it was in 2010 plus X. Conditional on this model—and on nothing else—the probability that the GAT in 2011 is larger than the GAT in 2010 is 1, or 100%. This is not necessarily the same as the probability that the eventually observed GAT in 2011 is larger than the GAT in 2010.

It is easy to see how I might adjust this fixed model by assigning the possible increase to be one of several values, each with a fixed (in advance) probability of occurring. I might also eschew fixing these increases and instead assume a parametric form for the possible increases. The most commonly used parametric form is a straight line (which has at least three parameters; there are different kinds of straight lines used in time series modeling). How do I know which kind of parametric model to use? I do not: I guess. Or I use the model that others have used because conformity is both pleasing and easy.

I choose the straight line which has, among its parameters, one indicating the central tendency of a probability distribution related to—but not identical with—the increase in GAT through time. To call this parameter the “trend” can only cause grief and misunderstanding. This parameter is not, and cannot be, identical with the observed GAT.

Bayesian statistics allows me to say what values this parameter (and all the other parameters) is likely to take. It will allow me to say that, if this model is true and given the past years’ GATs, then the probability the parameter is greater than 0 is y, or Y%. This is the parameter posterior distribution. Suppose that y = 0.9 (Y = 90%). Can I then answer the question what is the probability that the GAT in 2011 is larger than the GAT in 2010? NO. This is the only probability that means anything to me, but I cannot yet answer it. What if y = 0.999999, or however many 9s you like: can I then say what is the probability the GAT in 2011 is larger than the GAT in 2010? No, no, and no, with just as many “no”s as 9s. Again, “statistical significance” of some parameter (mistakenly called “trend”) is meaningless.

However, Bayesian statistics allows me to take the parameterized model and to weight it by each possible value of the parameters. The end result is a prediction of the possible values of the GAT in 2011, complete with a probability that each of these possible values is the true one, assuming the model is true. This is the posterior predictive distribution; it is free of all parameters and only speaks in terms of observables, here year and GAT.

I can use the posterior predictive distribution and directly ask what is the probability that the GAT in 2011 is larger than the GAT in 2010. This probability assumes the model is true (and assumes the previous values of GAT are measured without error).
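A minimal sketch of the whole chain, assuming a straight-line model with normal errors, flat priors, and made-up anomalies in place of the real GAT record:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up yearly GAT anomalies for 1900-2010 (illustration only, not real data).
years = np.arange(1900, 2011)
gat = 0.005 * (years - 1900) + rng.normal(0.0, 0.1, years.size)

# Assumed model: GAT = a + b*year + normal noise, with flat priors.
X = np.column_stack([np.ones(years.size), years - years.mean()])
beta_hat, *_ = np.linalg.lstsq(X, gat, rcond=None)
n, p = X.shape
resid = gat - X @ beta_hat
s2 = resid @ resid / (n - p)
XtX_inv = np.linalg.inv(X.T @ X)

# Parameter posterior: draw sigma^2 from its scaled inverse chi-square
# posterior, then (a, b) given sigma^2 from a normal around the fit.
ndraw = 100_000
sigma2 = (n - p) * s2 / rng.chisquare(n - p, ndraw)
L = np.linalg.cholesky(XtX_inv)
betas = beta_hat + (rng.standard_normal((ndraw, p)) @ L.T) * np.sqrt(sigma2)[:, None]

# Posterior predictive for 2011: parameter uncertainty PLUS a fresh noise draw.
x2011 = np.array([1.0, 2011 - years.mean()])
gat_2011 = betas @ x2011 + rng.normal(0.0, np.sqrt(sigma2))

# The only probability that means anything to us: that the (assumed-true)
# model puts the 2011 GAT above the observed 2010 GAT.
print((gat_2011 > gat[-1]).mean())
```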

If I have more than one model, then I will have more than one probability that the GAT in 2011 is larger than the GAT in 2010. Each probability assumes that the model that generated it is true. Which model is really true? I can only judge by external evidence. This evidence (or these premises) tell me the probability each model is true. I can then use these probabilities, and the probabilities that the GAT in 2011 is larger than the GAT in 2010, to produce a final probability that the GAT in 2011 is larger than the GAT in 2010. This probability is not conditional on the truth of any of the models.

But it still is conditional on the premise that at least one of the models in our set is true. If none of these models in our set is true—which we could only know using external evidence—then the probability that the GAT in 2011 is larger than the GAT in 2010 is likely to be wrong (it still may be right by coincidence).
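The model-averaging step itself is plain arithmetic: weight each model’s predictive probability by the probability that model is true. A sketch with invented numbers:

```python
# Invented: probabilities (from external evidence) that each of three models
# is true, and each model's probability that GAT in 2011 exceeds GAT in 2010.
model_probs = [0.5, 0.3, 0.2]
pred_probs = [0.9, 0.6, 0.4]

# Total probability: the final answer is the weighted sum of the models' answers.
p = sum(w * q for w, q in zip(model_probs, pred_probs))
print(p)  # 0.5*0.9 + 0.3*0.6 + 0.2*0.4 = 0.71
```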

I hope you can see that I can ask any question about the observables prior to 2011 and that in 2011. For example, I can ask what is the probability that the GAT in 2011 is Z degrees C higher than in 2010. Or I can ask, what is the probability that the GAT in 2011 is W degrees C higher than the average of the years 2001-2010. And so on.

This is how Richard Muller’s group should issue their statements on the GAT.

Global Average Temperature: What It Isn’t

Update See also: Global Average Temperature: An Exceedingly Brief Introduction To Bayesian Predictive Inference

Word is going round that Richard Muller is leading a group of physicists, statisticians, and climatologists to re-estimate the yearly global average temperature, from which we can say such things as this year was warmer than last but not warmer than three years ago. Muller’s project is a good idea, and his named team is certainly up to it.

The statistician on Muller’s team is David Brillinger, an expert in time series, which is just the right genre to attack the global-temperature-average problem. Dr Brillinger certainly knows what I am about to show, but many of the climatologists who have used statistics before do not. It is for their benefit that I present this brief primer on how not to display the eventual estimate. I only want to make one major point here: that the common statistical methods produce estimates that are too certain.

I do not want to provide a simulation of every aspect of the estimation project; that would take just as long as doing the real thing. My point can be made by assuming that I have just N stations from which we have reliably measured temperature, without error, for just one year. The number at each station is the average temperature anomaly at that station (an “anomaly” takes the real arithmetic average and subtracts from it a constant, which itself is not important; to be clear, the analysis is unaffected by the constant).

Our “global average temperature” is to be estimated in the simplest way: by fitting a normal distribution to the N station anomalies (the actual distribution used does affect the analysis, but not the major point I wish to make). I simulate the N stations by generating numbers with a central parameter of 0.3, a spread parameter of 5, and degrees of freedom equal to 20 (once again, the actual numbers used do not matter to the major point).

Assume there are N = 100 stations, simulate the data, and fit a normal distribution to them. One instance of the posterior distribution of the parameter estimating the global mean is pictured. The most likely value of the posterior is at the peak, which is (as it should be) near 0.3. The parameter almost surely lies between 0.1 and 0.6, since that is where most of the area under the curve is.

[Figure: posterior distribution of the global-mean parameter, N = 100 stations]

Now let’s push the number of stations to N = 1000 and look at the same picture:

[Figure: posterior distribution of the global-mean parameter, N = 1000 stations]

We are much more certain of where the parameter lies: the peak is in about the same spot, but the variability is much smaller. Obviously, if we were to continue increasing the number of stations the uncertainty in the parameter would disappear. That is, we would have a picture which looked like a spike over the true value (here 0.3). We could then confidently announce to the world that we know the parameter which estimates global average temperature with near certainty.

Are we done? Not hardly.

Although we would know, with extremely high confidence, the value of one of the parameters of the model we used to model the global average temperature, we still would not know the global average temperature. There is a world of difference between knowing the parameter and knowing the observable global average temperature.

Here then is the picture of our uncertainty in the global average temperature, given both N = 100 and N = 1000 stations.

[Figure: posterior predictive distributions of the actual temperature, N = 100 and N = 1000 stations]

Adding 900 more stations narrowed our uncertainty in the actual temperature only slightly (and here the difference in these two curves is just as likely due to the different simulations). But even if we were to have 1 million stations, the uncertainty would never disappear. There is a wall of uncertainty we hit and cannot breach. The curves will not narrow.

The real, observable temperature is not the same as the parameter. The parameter can be known exactly, but the observable actual temperature can never be.
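A minimal sketch of the simulation, using the standard flat-prior results for a normal model (reading the center/spread/degrees-of-freedom setup above as a t distribution is my assumption):

```python
import numpy as np

rng = np.random.default_rng(7)

def widths(n_stations):
    # Simulate station anomalies as described: center 0.3, spread 5, df 20.
    anoms = 0.3 + 5.0 * rng.standard_t(20, n_stations)
    s = anoms.std(ddof=1)
    # Flat-prior normal model. Standard results (scales of t distributions):
    #   posterior of the mean parameter:       t_{n-1}(xbar, s / sqrt(n))
    #   posterior predictive for a new anomaly: t_{n-1}(xbar, s * sqrt(1 + 1/n))
    return s / np.sqrt(n_stations), s * np.sqrt(1 + 1 / n_stations)

for n in (100, 1000):
    param_w, pred_w = widths(n)
    print(n, round(param_w, 2), round(pred_w, 2))
# The parameter width shrinks like 1/sqrt(n); the predictive width barely
# moves, floored by the real station-to-station variability. That is the wall.
```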

The procedure followed here (showing posterior predictive distributions) should be the same for estimating “trend” in the year-to-year global average temperatures. Do not tell us of the uncertainty in the estimate of the parameter of this trend. Tell us instead what the uncertainty in the actual temperatures is.

This is the difference between predictive statistics and parameter-based statistics. Predictive statistics gives you the full uncertainty in the thing you want to know. Parameter-based statistics only tells you about one parameter in a model; and even though you know the value of that parameter with certainty, you still do not know the value of the thing you want to know. In our case, temperature. Parameters be damned! Parameters tell us about a statistical model, not about a real thing.

Update See too the posts on temperature on my Stats/Climate page.

Update See also: Global Average Temperature: An Exceedingly Brief Introduction To Bayesian Predictive Inference

Update to the Update Read the post linked to above. Mandatory.

TSA Expands Jurisdiction To Sidewalks: Where Is The Left?

“Sir? Please step over here. You need to be x-rayed.”

“What? Get outta my way. Who are you?” said the man.

“Sir, please step over to the machine. You have been selected for random scanning,” said the TSA agent.

The man did not understand, or chose not to, and began to walk on. Two other armed agents moved to block the man’s way.

“Are we going to have trouble with you, sir? You have been selected for random scanning,” repeated the agent.

“What are you talking about? I’m just walking down the sidewalk in front of my apartment. We’re nowhere near an airport, nor even a train station. You can’t just grab innocent people off the sidewalk and bombard them with x-rays,” said the man, increasingly bewildered.

“Sir, we are agents of the Transportation Security Administration. The law says we have to protect you in all areas of transportation. Sidewalks are public modes of transportation. We have orders to randomly scan pedestrians. It’s for your protection, sir,” said the agent, bored with offering the same explanation he had issued a hundred times before. “Besides, if you have nothing to hide, you have nothing to worry about. Do you have your papers with you?”

“What if I refuse?” asked the man.

“You wouldn’t want to do that, sir,” advised the agent, as the other two agents moved in…

Paranoid fantasy? Not hardly. According to documents retrieved by Freedom (!) of Information Act requests, the Department of Homeland Security, as reported in Forbes, “has been planning pilot programs to deploy mobile scanning units that can be set up at public events and in train stations, along with mobile x-ray vans capable of scanning pedestrians on city streets.”

Read that again. The TSA wants to conduct “covert inspection of moving subjects” on sidewalks.

One project allocated to Northeastern University and Siemens would mount backscatter x-ray scanners and video cameras on roving vans, along with other cameras on buildings and utility poles, to monitor groups of pedestrians, assess what they carried, and even track their eye movements. In another program, the researchers were asked to develop a system of long range x-ray scanning to determine what metal objects an individual might have on his or her body at distances up to thirty feet.

Anything for money, eh, Northeastern?

The Department of Homeland Security, and its sub-agency the TSA, are bureaucracies. Pournelle’s Iron Law of Bureaucracy states

that in any bureaucratic organization there will be two kinds of people: those who work to further the actual goals of the organization, and those who work for the organization itself. Examples in education would be teachers who work and sacrifice to teach children, vs. union representatives who work to protect any teacher including the most incompetent. The Iron Law states that in all cases, the second type of person will always gain control of the organization, and will always write the rules under which the organization functions.

The unbreakable grasp of Pournelle’s Law guaranteed something like this would happen. Bureaucracies grow. That is what they do. They grow fastest when they have failed at their original mission and thus seek to justify their existence. If they can turn up nothing but new mothers smuggling breast milk through airport security lines, then they will search for publicity opportunities elsewhere. Sidewalks are public modes of transportation. And surely evildoers use sidewalks!

Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety. — Ben “Goddamit” Franklin


Where is the left on this? Where are the perpetually outraged progressives who would be hopping mad if a non-racially pleasing Republican president signed off on this? Can you, my lefty friends, truly be pleased with this situation? We have nowhere heard from old Ben Franklin as we used to when there was as little as a hint that police department budgets would increase. With Mr Obama in office, dead silence.

Why is a statistician opining on this? Because the matter is entirely statistical. All these scans (see the Decision Calculator link on the left sidebar) will produce a flood of “false positives”, i.e. innocent people falsely identified as suspicious. And sophisticated, intent terrorists will in all probability never be seen until too late. These guaranteed errors cost society more than just money.
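The arithmetic is worth seeing. A sketch of Bayes’ theorem with invented numbers (the prior, sensitivity, and false-alarm rate are all assumptions):

```python
# Bayes' theorem with invented numbers: even an accurate scanner, aimed at a
# population containing almost no terrorists, returns almost nothing but
# false positives.
prior = 1e-7        # assumed fraction of scanned pedestrians who are terrorists
sensitivity = 0.99  # assumed P(alarm | terrorist)
false_alarm = 0.01  # assumed P(alarm | innocent)

p_alarm = sensitivity * prior + false_alarm * (1 - prior)
p_guilty_given_alarm = sensitivity * prior / p_alarm
print(p_guilty_given_alarm)  # ~1e-5: virtually every alarm fingers an innocent
```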

And let’s never forget the leading cause of premature death in the twentieth century was not terrorists, but governments.