William M. Briggs

Statistician to the Stars!

Bernie Madoff To Join Peter Gleick’s Pacific Institute: Work-Release Program

Perhaps we should file this under Nobody Saw This Coming. Here is a (second) press release issued in the dead of night from the Pacific Institute.

Tip o’ the hat to Anthony Watts, who alerted us all to the first press release.

PACIFIC INSTITUTE BOARD OF DIRECTORS (CORRECTED) STATEMENT

The Pacific Institute is pleased to welcome Dr. Peter Gleick back to his position as president of the Institute, and a joyous greeting to Bernie Madoff, who will now be keeping the Institute’s books as part of a unique work-release program.

An independent review conducted by outside counsel on behalf of the Institute has supported what Dr. Gleick has stated publicly regarding his, uh, interaction with the Heartland Institute. Which is to say, Dr. Gleick admitted to cheating, lying, conning, conniving, scamming, manipulating, and misrepresenting himself in a manner most sleazy in what has become known as Heartlandgate. We forgive him his trespasses.

Just as we forgive Bernie Madoff, the infamous Ponzi-scheming shady operator and convicted felon who absconded with millions from dozens of innocent victims, and who has agreed to join the Pacific Institute as part of a unique work-release program.

Mr. Madoff said from his cell at the Federal Correctional Institution Butner Medium that, “I can’t wait to get my hands on the Pacific Institute’s donor list.”

“Dr. Gleick’s and Mr. Madoff’s situations illustrate what America is all about,” said Pacific Institute board member Gigi Coe. “Doing something uniquely egregious and hoping people forget about it.” Coe added that she thought Gleick and Madoff had learned their lesson and would not scam anybody again soon.

Gleick apologized publicly for his nefarious actions, which are not condoned by the Pacific Institute and run counter to the Institute’s policies and standard of ethics over its 25-year history. “We’re willing to look past all that,” said board member Dr. Robert Stephens.

The Board of Directors accepts Dr. Gleick’s apology for his lapse in judgment. “We are sincerely sorry that Peter got caught,” said board member and Berkeley Professor Michael J. Watts. We look forward to Gleick’s continuing in the Pacific Institute’s ongoing and vital mission to advance environmental protection, economic development, social equity, and fund raising; but especially the fund raising.

“That’s the area where we hope to utilize Mr. Madoff’s unique talents,” said board member Margaret Gordon. “Mr. Madoff, like Dr. Gleick, has apologized for misusing his gifts. We hope to bend all that misplaced energy into bulking up our bottom line.”

“I am desperately glad to be back and thank everyone for continuing their important work at the Pacific Institute during my absence,” said Dr. Gleick in a statement. “I am returning with a renewed focus and dedication to the ideology and fund raising that remain at the core of the Pacific Institute’s mission.”

Asked if she thought Mr. Madoff’s past crimes and Dr. Gleick’s shenanigans would damage the reputation of the Institute, board member, Nancy Pelosi supporter, and Stanford Professor Dr. Anne H. Ehrlich said, “Are you serious?”

How Paranormal Research Differs From Normal Research

The case of Cornell’s Daryl Bem is instructive. He’s an academic who has published several notable peer-reviewed articles claiming that ESP (in several different versions) is real. Trouble is, despite the prominence of the journals, and the peer review, almost none of his peers believe his results.

They publish his papers anyway because the papers meet the statistical criterion of success, which is to say the papers contain wee p-values, which are p-values less than the magic number. Bem always finds, at least in the papers he submits for publication, publishable p-values. In his latest work he touts, “all but one of the experiments yielded statistically significant results.” This is code for “p-values less than the magic number.”

This sets up a conflict in the mind of the researcher. Small p-values are thought to be the proof definitive. Yet it is clearly absurd, or at least extraordinarily unlikely, that people can read minds through time and over vast distance, or that they can, by grimacing and grunting, bend spoons using only the power of thought.
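To see how weak this proof really is, here is a minimal simulation (mine, not Bem’s protocol; the sample sizes and number of experiments are arbitrary): test pure noise over and over and count how often the p-value sneaks under the magic number.

    # Minimal sketch: repeated two-sample t-tests on pure noise.
    # There is no effect by construction; any "significance" is luck.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_experiments, n_per_group, alpha = 1000, 50, 0.05  # alpha = the magic number

    false_hits = 0
    for _ in range(n_experiments):
        a = rng.normal(size=n_per_group)   # group one: noise
        b = rng.normal(size=n_per_group)   # group two: more noise
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            false_hits += 1

    print(false_hits, "of", n_experiments, "null experiments were 'significant'")

Roughly five percent of the experiments clear the bar even though nothing is there, which is why a wee p-value by itself cannot settle an extraordinary claim.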

The obvious answer—ignore the small p-values and substitute for them a stronger form of evidence—never occurs to the skeptical researcher. Well, it couldn’t really, because the researcher has never been taught any other form of statistics. And that is the fault of people like me.

But what the researcher can do, and does, is to question Bem’s experimental protocols. He picks these protocols apart. He shows how other, non-paranormal explanations are just as, or even more, likely to have caused the results. He shows where “sensory leakage” could have crept in and masked itself as extrasensory perception.

In short, disbelieving Bem’s theory behind his statistics, the skeptical researcher picks Bem’s experiments apart. Or skeptics just ignore the statistics knowing that some other explanation besides the paranormal must exist. And all this is good.

Put it another way. The researcher reading Bem’s papers acts as a scientist should, asking himself, “What else could have caused these results?” There must be an end to this question, of course, for it is always possible that an infinite number of things could have caused a certain set of results. But there will, given the evidence available, be a finite list of plausible causes which should receive scrutiny—and which are preferable explanations over ESP.

Now wouldn’t it be nice if researchers in these “softer” fields did this routinely? Not just for extraordinary claims like Bem’s, but for all claims, especially the preposterous ones (we’re sick of these examples, I know) like the claims that exposure to the American flag turns one into a Republican, that exposure to a 4th of July parade turns one into a Republican, or that fMRIs can tell the difference between Christian and non-Christian brains.

These absurd hypotheses never receive the scrutiny Bem endures not because the claims are any more likely, but because they are more likely to match the political and emotional biases of researchers. About the fMRI they might think: Christians are different from us, aren’t they? They at least believe different things. Therefore, their brains must be wired differently, such that the poor souls were forced into believing what they do. Besides, just look at those small p-values! The results must be true.

So today a toast to alternate explanations. May they always be sought.

The Science In The Mercury Report By Florida’s DEP — Guest Post By Willie Soon

Our friend Willie Soon wrote this editorial in response to a letter from the Florida DEP, the Florida Department of Environmental Protection.

As a scientist who has spent the past ten years studying the science of mercury (Hg) and the biologically toxic form of mercury, methylmercury (MeHg), I was taken aback by the clear misuse of the phrase “good science” in a recent letter by Florida DEP’s director of the Division of Environmental Assessment and Restoration (published in the Florida Times-Union newspaper).

The director referred to FDEP’s draft report1, released May 24, which would set a strict mercury limit in Florida’s river, stream, lake, and coastal waters. After a careful examination of the draft report, however, I have come to the conclusion that it contains serious flaws such that the strict mercury limit proposed by FDEP is not scientifically defensible.

First, FDEP’s notion that mercury “pollution” in our air, water, and land is a new, man-made phenomenon is simply wrong. While FDEP cited a 2008 paper2 that reported mean mercury levels of 0.25 parts per million (or ppm) in the hair of a group of women of childbearing age (16 to 49) in the Florida Panhandle, a study of 550-year-old Alaskan mummies3 reported average hair mercury levels of 1.2 ppm for four adults and 1.44 ppm for four infants. One mummy had hair mercury levels as high as 4.6 ppm!

Even more importantly, the FDEP draft report failed to consider the 17-year-long Seychelles Islands study4, which found no harm, nor any indications of harm, from mercury in children whose mothers ate 5 to 12 servings of fish per week. In establishing the exposure risk of MeHg by fish consumption (most relevant to Floridians), the authors of this study argued that no consistent patterns of adverse associations existed between prenatal MeHg exposures and detailed neurological and behavioral testing. They concluded that despite the risk of MeHg to expectant mothers, “ocean fish consumption during pregnancy is important for the health and development of children and that the benefits are long lasting.” Indeed, the latest Centers for Disease Control data show blood mercury levels for U.S. women and children are already below EPA’s “safe” levels for mercury—the most restrictive mercury health standard in the world.

It is useful to note the FDEP draft report cited a 1972 study confirming that tuna mercury levels in the past were higher than (or at least not substantially lower than) those in tuna caught in the world’s oceans today. Although expecting to find a 9 percent to 26 percent increase in levels of MeHg, Princeton University scientists found no increase (actually, a minor decline) in fish tissue mercury levels after comparing Pacific Ocean tuna samples from 1971 and 1998. Those scientists concluded fish mercury level “is not responding to anthropogenic emissions irrespective of the mechanisms by which mercury is methylated in the oceans and accumulated in tuna.”5

Second, it is curious that the FDEP draft report failed to note that forest fires in the state of Florida alone were estimated to emit more than 4,000 lbs of mercury per year from 2002 to 2006.6 This single source of local mercury emissions is comparable to, if not significantly higher than, the mercury emitted in 2009 from all man-made mercury sources in Florida, including coal-fired power plants (which emit less than 1,500 lbs per year).

The FDEP draft report also repeatedly mentioned volcanoes as an important source of global mercury emissions but somehow fell short in conveying the full scale of this natural source of mercury. A new study7 in the January 2012 issue of the journal Geology noted a truly huge emission of mercury during the latest Permian (about 250 million years ago), an event estimated to have released about 7,600 tons of mercury per year! This is about four times larger than current estimates of the amount of man-made Hg emissions globally, and it persisted for nearly 500,000 years.

Such large natural sources of mercury explain why it is not surprising to find high levels of mercury in old samples taken before contamination by modern sources of mercury emission was possible. These high levels have been observed in the hair of Florida panthers and south Florida raccoons as well as in fish and aquatic life.

It is equally important to dispel the false impression from the FDEP draft report that mercury “pollution” in Florida’s watersheds and fishes is increasing. A note of caution from the U.S. EPA is clear: Contaminants in fish have been increasingly monitored since the 1970s, which has resulted in more advisories being issued due solely to increased sampling by the various states and “not necessarily due to increased levels or frequency of contamination.”

I would further note there is a serious flaw in FDEP’s draft report that sets a mercury limit of 1.25 parts per trillion (or 0.00000125 ppm) as the new standard for Florida’s inland and coastal waters. It is tacitly assumed by the FDEP that water mercury levels are directly related to fish tissue mercury levels. In fact, no such relationship exists, and indeed the FDEP draft report admits on page 58 that “Using the data collected for the [Florida Mercury Project], no relationship is observed when comparing total mercury in the water column to total mercury in fish tissues.”
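As a quick check of the units quoted above (a sketch only, restating the report’s own figure): one part per trillion is one millionth of a part per million, so 1.25 ppt is indeed 0.00000125 ppm.

    # Unit check: convert the proposed limit from ppt to ppm.
    # 1 ppm = 1e-6 parts per part; 1 ppt = 1e-12; so 1 ppt = 1e-6 ppm.
    limit_ppt = 1.25
    limit_ppm = limit_ppt * 1e-6   # parts per trillion -> parts per million
    print(limit_ppm)               # 1.25e-06, i.e. 0.00000125 ppm as stated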

Perhaps it is time for FDEP to reconsider the scientific basis of its mercury rule-making.

Why is the FDEP so intent on setting mercury levels below those existing in nature? Why is it so difficult for the FDEP to fully disclose or explain such publicly available information from the scientific literature to all concerned citizens of Florida? Scientific inquiry must be above political pressure and partisan advocacy. Good decisions can arise only if the scientific evidence and knowledge are examined fully, without a selective bias.

Willie Soon is an independently minded Ph.D. scientist who has been studying the biogeochemical nature of mercury in our environment and ecosystem for the past 10 years.

———————————————————————————-

1. PDF.

2. Karouna-Renier et al. (2008), Environmental Research, vol. 108, 320-326.

3. See Middaugh on pp. 53-68 of the July 24, 2002 FDA Food Advisory Committee on MeHg (link), and also Arnold and Middaugh (2004) in Use of Traditional Foods in a Healthy Diet in Alaska: Risks in Perspective (available at: link).

4. Davidson et al. (2011), Neurotoxicology, vol. 32, 711-717. Note that the evaluations and tests have also been done for the main cohort of SCDS at age 19 years.

5. Kraepiel et al. (2004), Environmental Science & Technology, vol. 38, 4048; see also Kraepiel et al. (2003), Environmental Science & Technology, vol. 37, 5551-5558.

6. Wiedinmyer and Friedli (2007), Environmental Science & Technology, vol. 41, 8092-8098.

7. Sanei et al. (2012), Geology, vol. 40, 63-66.

Barcode People From Birth — Guest Post By Faith Reader

Despotism and tyranny wear many cloaks.  Modern Western leaders are above using raw, brute power to fulfill their desires.  Instead, they wheedle and whine and the public gives in, worn out and worse for the wear.  Thankfully, these days may soon be a thing of the past, if Elizabeth Moon gets her way.  The science fiction writer told the BBC last month:

“If I were empress of the Universe I would insist on every individual having a unique ID permanently attached – a barcode if you will; an implanted chip to provide an easy, fast inexpensive way to identify individuals.”

She goes on to say what a boon it would be in wartime, when soldiers could differentiate between the opposing armies and the innocent civilians. It is a pity that she doesn’t think this through and consider the advantage that bar-coded people would offer dictators with genocide on their minds.

Moon isn’t the first to come up with the idea of tagging the population, proving yet again that a bad idea never dies. She has tapped into something that appeals to the nanny-staters, who positively drool at the prospect of having absolute power over every nook and cranny of everyone’s life. It is well known that most people are fools and will vote Republican, even if it is against their interests. Therefore, they need to be led around by the nose. “Everyone” doesn’t include those who hold the leash.

In the United States, many still cling to the idea that the people have supremacy over the government, and that the government is “of the people, by the people, and for the people.” In the last forty years (again, in the United States) there has been a reversal of who’s in charge, and the preponderance of evidence shows that the government rules the people, rather than the other way around.

It is neither the responsibility nor the obligation of the government to supervise the non-criminal behavior of the people. If people pay their taxes and strive to obey laws, then the government ought to leave them alone so they can engage in their right to “life, liberty, and the pursuit of happiness.” That tagging people would make it easier for the government not only to “identify” everyone, but also to find tax cheats and detect other criminal activity, is not a reason to implement a massive bar-coding scheme.

Recent history suggests that some politicians may resist the idea of electronic tagging. In New York State there was a flap about fingerprinting food stamp applicants. The mayor of New York City was all for it, but the governor believed that practice treated welfare applicants as criminals. Using the governor’s logic, bar-coding the public would be akin to treating them as criminals.

Still, if the Affordable Care Act passes muster with the U.S. Supreme Court, there could be a basis for opening the door to electronic health surveillance. Maybe the technology isn’t there yet, but such a smart chip could monitor not only one’s vitals, but also whether one imbibed more than 16 ounces of soda, enjoyed more than the daily quota of adult beverages, or smoked a cigarette.

Our founders recognized that such a grievous state of government surveillance and interference was possible, and they had the foresight to propose a way out when they drafted the Declaration of Independence:

“But when a long train of abuses and usurpations, pursuing invariably the same Object evinces a design to reduce them under absolute Despotism, it is their right, it is their duty, to throw off such Government, and to provide new Guards for their future security.”

The Difference Between Technical And Plain English Correlation

Since the subject has come up so often, today a note on the words correlated and correlation. They have technical definitions and plain English meanings. The two definitions overlap but they are not equivalent.

Suppose you have these two propositions: X = “Jack has an IQ of 107” and Y = “Jack makes $72,000 a year.” And you wonder, does Jack’s IQ have a bearing on his salary? Or does Jack’s salary have a bearing on his IQ? Higher incomes might imply softer lives, more leisure time, and perhaps more bodily ease for the little gray cells to flourish. So the latter question might be answered “yes.”

Problem is, we can’t answer either of these two questions without recourse to other evidence. And if we want to quantify the answers, we also have to fix our meaning of “has a bearing.” This part is simple. X “has a bearing” on Y if the probability that Y is true given X (known or assumed true for the sake of argument) differs from the probability that Y is true given that X is false. This “has a bearing” captures what we mean whether X causes Y or X is merely related to Y while perhaps not lying in the “causal path” of Y.

For instance, there might be some W that causes both X and Y simultaneously; in this case knowledge of X “has a bearing” on knowledge of Y. Or it might be that X caused A which causes B which causes C and so on right up to Y. Or this path might be reversed. But once again, knowledge of X has a bearing on our knowledge of Y, even if we know nothing directly of A, B, C, etc.
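To make “has a bearing” concrete, here is a minimal numeric sketch (the probabilities are invented purely for illustration and say nothing about IQ or salary): build a small joint table for X and Y and compare the probability of Y when X is true with the probability of Y when X is false.

    # Hypothetical joint probabilities over (X true/false, Y true/false),
    # invented purely to illustrate "has a bearing".
    joint = {
        (True, True): 0.30,
        (True, False): 0.10,
        (False, True): 0.15,
        (False, False): 0.45,
    }

    def pr_y_given(x_value):
        """Pr(Y | X = x_value) computed from the joint table."""
        numerator = joint[(x_value, True)]
        denominator = joint[(x_value, True)] + joint[(x_value, False)]
        return numerator / denominator

    print(pr_y_given(True))    # 0.75
    print(pr_y_given(False))   # 0.25

The two conditional probabilities differ, so on this (made-up) evidence knowledge of X has a bearing on Y, whether X causes Y, shares a common cause W with it, or sits somewhere along a chain leading to it.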

A classical statistician wondering whether Jack’s IQ has a bearing on his salary would probably venture forth and collect data on Jill’s IQ and salary, and likewise data from Bill and from Alice, and from Will and Wilma, and so on. This maneuver adds the additional information or evidence we required. Why do we require this? Well, what is the answer to this:

     Pr (Y | X) = ?

This is “What is the probability Y is true given (or assuming) X is true?” It has no answer in this form. If you find yourself supplying an answer, it is because you are implicitly adding extra evidence not stated in the formula. That is, you are doing something like this:

     Pr (Y | X & A) = some number between 0 and 1,

where A was mentally supplied by you. Just as it was supplied by the statistician who collected the other pairs of IQ and salaries, which also implies (this is part of the statistician’s “A”) that these pairs are relevant to Jack; it also assumes that the causal path (and our certainty in it) from X to Y is the same for all these pairs. (This sameness can be changed, as in regression say, but sameness is the first belief.)

Now imagine we make a plot of our pairs: at each observation X = “Jill has an IQ of 108” and Y = “Jill has a salary of $74,500” we make a dot at (108, 74500), and so forth. To the extent that a straight line drawn through the midst of these scattered points approximates the points themselves, the higher we say the correlation is. If all the points lined up exactly on this straight line, the correlation is “1” or exact. If the points are spread from near to far and do not look at all friendly to the line, the correlation is “0” or nearly.

This is the technical definition: if our gathering of Xs and Ys can be approximated by a straight line, they are said to be “correlated” or that the two variables have “non-zero correlation.”
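Here is a minimal sketch of that technical definition using invented IQ and salary pairs (no actual Jacks or Jills were surveyed): points that hug a straight line give a correlation near 1.

    # Technical (Pearson) correlation for invented, roughly linear pairs.
    import numpy as np

    iq     = np.array([101, 104, 107, 108, 112, 115])              # hypothetical IQs
    salary = np.array([61000, 66000, 72000, 74500, 80000, 86000])  # hypothetical salaries

    r = np.corrcoef(iq, salary)[0, 1]
    print(round(r, 3))   # close to 1: the points sit near a straight line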

Now imagine a sine wave. Here we have statements like X1 = “We are at time point 1” or X2 = “We are at time point 1.01” or whatever, with Y = “The sine at time point 1 is 0.84” and Y = “The sine at time point 1.01 is 0.85” and so forth. In this case, given the additional information on the formula of the sine, we can say that X directly causes Y to take the values it does. That is (ignoring rounding error),

     Pr (“The sine at time point 1 is 0.84” | “We are at time point 1” & S) = 1,

where S is the knowledge we have of the sine (see any trig or intro calculus book for this). But if we plotted1 a bunch of these Xs and Ys we would find the (technical) correlation between these Xs and Ys was somewhere in the vicinity of 0. This strange happenstance is because the extra evidence here purposely ignores S, the knowledge of the sine wave. It replaces S with some M, which assumes that, given X, our knowledge of Y is quantified by a normal distribution. Why ignore S? Well, just so we can replace it with M. If this seems odd, then know that in many statistical models relevant information like S is often ignored.

Anyway, we finally arrive at the most succinct definitions. Technical correlation is when a straight line approximates pairs of Xs and Ys. Plain English correlation is when knowledge of X changes the certainty we have in Y. Plain English correlation thus encapsulates technical correlation. Plain English correlation can also be called relevance, which is similar (but not identical to) technical “dependence.” About that, another day.

——————————————————————————————

1. For once, Wikipedia has some good plots of functions like the sine where we know there is causality but where the correlation is 0 or near 0; they also have the formula for technical correlation.

© 2014 William M. Briggs
