William M. Briggs

Statistician to the Stars!


Climate Paper Causes Chaos, Angst, Anger, Apoplexy! (Hacking?)

Last Wednesday, the Daily Mail told the world of the peer-reviewed paper Lord Monckton, Willie Soon, David Legates and I wrote entitled “Why models run hot: results from an irreducibly simple climate model” (the post which highlighted this will be restored soon). The article was “Is climate change really that dangerous? Predictions are ‘very greatly exaggerated’, claims study”.

  • Researchers claim global warming predictions are ‘greatly exaggerated’
  • Large climate models typically require computers to perform calculations
  • They consider factors such as animal numbers and tectonic variations
  • By comparison, a team of researchers has created a ‘simple’ model
  • It looks at levels of solar energy absorbed and reflected by Earth
  • Using this simple model, they claim current predictions are wrong
  • Once errors are corrected, global warming in response to a doubling of CO2 is around 1°C or less – a third of the predicted 3.3°C

The scientific community reacted with calm, reasoned, logical argument.

Kidding! I’m kidding. People flipped out. Less than two days after our paper was generally known, I was hacked. The posts and comments from my old WordPress account were wiped out. Thank the Lord, I had backups for most things. Although I was offline for almost five days, I’m mostly back.

Here is one of the other asinine reactions. I’ll have more later because this makes for a fascinating case study of how outrageously political science has become.

Saul Alinsky

A meager-witted unctuous twit of a “reporter” rejoicing under the unfortunate name Sylvan Lane (cruel parents) from the far-left Boston Globe was assigned to attack the authors of “Why Models Run Hot”. Lord Monckton and I are independent and Legates’s position is solid. So Lane went after Soon. He emailed asking for “information.” I offered to provide it. Lane wrote back:

I apologize if I wasn’t clear before. The kind of questions I would like to ask Dr. Soon are the same ones Science Bulletin insisted you and your colleagues answer before it published your paper. Here’s a link to its conflict of interest policy, which outlines the same type of questions any writer is required to answer before being published in the journal.

I do agree with you that these questions are best left up to him, which is why I’ve cc’d him on this email. While Science Bulletin’s conflict of interest policy is comprehensive, it doesn’t specify whether it pertains to the specific submitted study or an author’s body of work. I’ve contacted them to clarify and contacted Dr. Soon and Harvard-Smithsonian to ask them about their interpretation of the policy. Those are my only intentions.

I replied:

Allow me to doubt that “clarifying” Dr Soon’s employment status and his employer’s understanding of a journal’s publication policies are your only intentions. But if on the wee small chance they are, is it your habit to investigate the employment status of every author of every science paper, or just those papers the content of which are disconsonant (in some way) with your employer’s or your views? What a dull job that would be.

But now I come to think of it, this might be a fun line of questioning. Let me try. How much money are you getting for this work? Do you feel that this money discredits the work you’re doing? Do you feel tainted by the money? Do you feel tempted to, or will you, change what you write so that it more closely matches that of your employer? Have you had training as a scientist, or do you in other ways feel competent to judge the content of science papers like ours? If not, why are you writing about this particular paper?

You’ll of course know the fallacy of the non sequitur. If not, here’s an example. A man makes a claim X. X might be true or again it might be false. A reporter says, “I don’t like that man, therefore X cannot be true. I shall write a story about this, to the cheer and admiration of my fellow journalists.” He does so, and is feted as predicted.

What a sad tale, eh?

Anyway, if you have relevant scientific, logical, climatological, meteorological, or statistical questions, I’d be glad to help. But I’ll trade answer for answer.

Not surprisingly, the dull-minded Lane did not respond. Instead, filled with notions of his own self-importance and a nearly complete ignorance of how conflict-of-interest declarations work, the untutored Lane filed a report with his partisan political sheet: “Climate change skeptic accused of violating disclosure rules”.

I contacted Lane on Twitter (@SylvanLane: his visage reminds me of a smugger version of Pajama Boy) to let him know what a foolish and stupid thing he had done. The coward did not respond.

Absolutely nowhere in this fictional “controversy” are any questions of science asked, addressed, or even hinted at. What is that Alinsky tactic? Teach the controversy and not the idea, or whatever? So blatant was Lane’s purpose that I hope his parents, if they haven’t been forced into hiding, are at least blushing for him.

Need I point out that it doesn’t matter if any or all of us authors were racist sexist homophobe slave trader twice-convicted con artists from Pluto, none of that, in any way, would be relevant to the points we made in “Why Models Run Hot”?

Any notion of responding to Lane’s preposterous “charges” would be giving him a victory, if you can call such callow acts “victorious.” Therefore I’ll insist that if you want to talk about the paper, talk about the paper.

I Was Hacked

We’re nearly back, ladies, gentlemen, and things in between!

I have restored most of the posts and comments. I haven’t uploaded the old images, so no pictures. The site needs lots of work, tweaks, adjustments. But it’s there!

Ha ha! Thanks to those who hacked me, I was able to move servers, which I’ve wanted to do for a long time.

Look to this space for more information.

Update I was hacked shortly after our paper “Why models run hot: results from an irreducibly simple climate model” became generally known. The posts and comments from my WordPress database were wiped out. Nothing else in the database or on the site was touched.

Except…backups. Every time I asked Yahoo (my old host) about them, they temporized. How could site-wide backups disappear? From me, I can see, but from Yahoo itself? Not so easy, that.

Like I said earlier, I forgive my hackers. You’ve done me a favor (what it is, I shan’t tell you!).

The pictures that used to appear above and in posts I have. But. Inside each post is a link which no longer points to the right place. I can fix this through a far-too-painful grep session, then try to overwrite the new database, or skip it. I’m skipping it. I’ll go back and fix those posts which receive traffic, leaving the rest picture-less. Worse things can happen.

I have about 30 posts still to put back up. I’ll put most of these up over the course of a week or so. I don’t want to overwhelm subscribers with emails. Your comments to these posts are, sadly and forever, lost. But you can make them anew! I tried waiting for Yahoo to see if they could restore a snapshot of the database, but one day turned into two, into three, which today turned into a “snag” and then a soulless announcement that “more information” would be available within “24 to 48 hours.” Since this was doubtful, to say the least, I made the move.

I lost no emails nor any files. Only those two tables.

I have to fix all the little things with the theme that I had before. This will take a day or three. I’m in no rush.

More later.

Update Any WordPress.com experts out there? My newly registered blog is “wmbriggs.com”, whereas the old one was “wmbriggs.com/blog”. All the site stats and, more importantly, blog subscribers are registered under the latter. I looked around on WordPress.com but couldn’t discover a way to make these the same. Ideas? (Besides emailing their support.)

Update Now is also the time to ask for theme tweaks and minor changes.

Update If you’ve emailed me over the past five days, please email again. I lost these.

Bayesian Statistics Isn’t What You Think

A Logical Probabilist (note the large forehead) explains that the interocitor has three states.

This post is one that has been restored after the hacking. All original comments were lost.

Bayesian theory probably isn’t what you think. Most have the idea that it’s all about “prior beliefs” and “updating” probabilities, or perhaps a way of encapsulating “feelings” quantitatively. The real innovation is something much more profound. And really, when it comes down to it, Bayes’s theorem isn’t even necessary for Bayesian theory. Here’s why.

Any probability is denoted by the schematic equation $\Pr(Y \mid X)$ (all probability is conditional), which is the probability the proposition Y is true given the premise X. X may be compound, complex, or simple. Bayes’s theorem looks like this:
$$\Pr(Y \mid WX) = \frac{\Pr(W \mid YX)\,\Pr(Y \mid X)}{\Pr(W \mid X)}.$$
We start knowing or accepting the premise X, then later assume or learn W, and are able to calculate, or “update”, the probability of Y given this new information WX (read “W and X are true”). Bayes’s theorem is a way to compute $\Pr(Y \mid WX)$. But it isn’t strictly needed. We could compute $\Pr(Y \mid WX)$ directly from knowledge of W and X themselves. Sometimes the use of Bayes’s theorem can hinder.

Given X = “This machine must take one of states S1, S2, or S3” we want the probability Y = “The machine is in state S1.” The answer is 1/3. We then learn W = “The machine is malfunctioning and cannot take state S3”. The probability of Y given W and X is 1/2, as is trivial to see. Now find the result by applying Bayes’s theorem, the results of which must match. We know that $\Pr(W \mid YX)/\Pr(W \mid X) = 3/2$, because $\Pr(Y \mid X) = 1/3$. But it’s difficult at first to tell how this comes about. What exactly is $\Pr(W \mid X)$, the probability the machine malfunctions such that it cannot take state S3, given only the knowledge that it must take one of S1, S2, or S3? We argue that if the machine is going to malfunction, given the premises we have (X), the malfunctioning state is equally likely to be any of the three; thus the probability is 1/3. Then $\Pr(W \mid YX)$ must equal 1/2, but why? Given we know the machine is in state S1, and that it can take any of the three, the probability state S3 is the malfunction is 1/2, because we know the malfunctioning state cannot be S1, but can be S2 or S3. Using Bayes works, as it must, but in this case it added considerably to the burden of the calculation.
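To check the arithmetic, here is a minimal sketch (my illustration, not part of the original post) that computes $\Pr(Y \mid WX)$ both ways: directly, by striking S3 from the list of possibilities, and through Bayes’s theorem with the quantities reasoned out above.

```python
from fractions import Fraction

# Premise X: the machine must take exactly one of these states.
states = ["S1", "S2", "S3"]

# Direct route: conditioning on W = "cannot take state S3" simply removes
# S3 from the possibilities; then ask for Y = "the machine is in state S1".
possible_given_WX = [s for s in states if s != "S3"]
pr_Y_given_WX_direct = Fraction(
    sum(1 for s in possible_given_WX if s == "S1"), len(possible_given_WX)
)

# Bayes's route, using the quantities worked out in the text above.
pr_Y_given_X = Fraction(1, 3)    # Pr(Y|X): S1 is one of three possible states
pr_W_given_X = Fraction(1, 3)    # Pr(W|X): the malfunction is equally likely to strike any of the three
pr_W_given_YX = Fraction(1, 2)   # Pr(W|YX): given the machine is in S1, the bad state is S2 or S3
pr_Y_given_WX_bayes = pr_W_given_YX * pr_Y_given_X / pr_W_given_X

print(pr_Y_given_WX_direct, pr_Y_given_WX_bayes)  # both print 1/2
```

Both routes give 1/2, as they must; the direct route just takes fewer steps.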

Most scientific, which is to say empirical, propositions start with the premise that they are contingent. This knowledge is usually left tacit; it rarely (or never) appears in equations. But it could: we could compute $\Pr(Y \mid \text{Y is contingent})$, which is even quantifiable (the open interval (0,1)). We then “update” this to $\Pr(Y \mid X\ \&\ \text{Y is contingent})$, which is 1/3 as above. Bayes’s theorem is again not needed.

Of course, there are many instances in which Bayes facilitates. Without this tool we would be more than hard pressed to calculate some probabilities. But the point is the theorem can but doesn’t have to be invoked as a computational aide. The theorem is not the philosophy.

The real innovation in Bayesian philosophy, whether it is recognized or not, came with the idea that any uncertain proposition can and must be assigned a probability, not in how the probabilities are calculated. (This dictum is not always assiduously followed.) This is contrasted with frequentist theory which assigns probabilities to some unknown propositions while forbidding this assignment in others, and where the choice is ad hoc. Given premises, a Bayesian can and does put a probability on the truth of an hypothesis (which is a proposition), a frequentist cannot—at least not formally. Mistakes and misinterpretations made by users of frequentist theory are legion.

The problem with both philosophies is misdirection, the unreasonable fascination with questions nobody asks, which is to say, the peculiar preoccupation with parameters. About that, another time.

That “1-in-27 Million Chance That Earth’s Record Hot Streak Is Natural” Is Preposterous

This post is one that has been restored after the hacking. All original comments were lost.

We have met a lot of bad statistics over the years, but this one wins the Blue Ribbon With Gold Lace, Free-Beer-For-Life Award of Statistical Putrescence. Not only is the “1-in-27 Million Chance That Earth’s Record Hot Streak Is Natural” probability, as quoted by Mashable and others who ought to know better, not correct, it is so far from True that if it were to travel at a thousand miles per hour for two thousand years it would still be just as far from True as the day it started.

And this is so even if you are a leftist progressive Marxist politically correct environmentalist feminist hater of Fox News Elizabeth-Warren-For-President-button-wearing Democrat who would like nothing better than if the Temperatures Of Doom were just around the corner and all agreed that you—yes, you—should be put in charge of the World’s Affairs, so saving us all.

It is a mark, and an important one, of how political climate “science” has become that statistics like this are quoted and accepted, and, yes, even rejected, all because people desire man-made apocalyptic the-time-to-act-is-now global warming be true or false.

The reader will pardon my exasperation. Misunderstandings of probability and bad statistics now account for 83.71% (p < 0.001) of all bad science. I’ve reached the snapping point.

We’ve met this statistic, or its cousin, before, when we discussed why the then-touted “More On The 1 in 1.6 Million Heat Wave Chance”, created by somebody at the NCDC, was ridiculous. Don’t be lazy. Read that article.

Here is why all these numbers are you-ought-to-have-known-better stupid.

Do you accept that some thing or things caused each month’s or year’s temperature? If not, you’re not familiar with science and so are excused. Well, some thing or things did cause the temperature (however it is operationally defined). Measuring it does not determine those causes, it only records the observations.

Suppose it is true that these causes are analogous to reaching inside a bag of temperatures, pulling one out, and then blanketing the earth with it. Some of these temperatures are high, some low; a mixed bag, as it were. Now let Mother Nature pull out the temperatures and we observe them. What are the chances we see what we see? 100% is the only acceptable answer.

Don’t see that? If it really is true, based on our accepted model, that this year’s (or month’s or whatever) temperature is just as likely to be higher as lower than last year’s, then the probability we see the record we see is 100%. Think of a coin flip. The chance, given the obvious premises about the coin, of seeing any particular string of Hs and Ts is the same (HHHHH has the same chance as HTTHT, etc.); thus the chance we see what we see is 100% no matter what we see.
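Here is a small sketch (my own illustration, under the obvious coin premises just stated) that enumerates every string of five flips: each particular string has the same probability, and those probabilities sum to one, which is to say the chance we see some string or other is 100%.

```python
from itertools import product

n = 5
sequences = list(product("HT", repeat=n))

# Under the premises, each flip is H or T with probability 1/2, so every
# particular string of n flips has the same probability, (1/2)^n.
p_each = 0.5 ** n

print(len(sequences), p_each)    # 32 strings, each with probability 0.03125
print(len(sequences) * p_each)   # 1.0 -- the chance of seeing some string is 100%
```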

Of course, we may be interested in Hs or High Months more than Ts or Low Months. The chance, accepting our model, of so-many Hs or High Months in the record can also be calculated and will be some number. Imagine some string of High Months, then calculate the chance of seeing this many in a row. Make the string as long as you like, which makes the probability weer than wee, vanishingly small. Make the probability smaller than 1 in 27 million. Make it 1 in a billion! Nay, two billion!
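Under that same accepted-as-true model, the arithmetic for a run of High Months looks like this (again my illustration, not from the original post); pick the streak length and the probability shrinks as fast as you please.

```python
# Probability of k High Months in a row under the accepted coin-flip-like model.
def streak_probability(k: int) -> float:
    return 0.5 ** k

for k in (10, 20, 25, 31):
    print(k, streak_probability(k))
# k = 25 is already about 1 in 34 million, smaller than 1 in 27 million;
# k = 31 is smaller than 1 in 2 billion.
```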

Does that mean our model of causation is false? No! No no no no no. No. We accepted the model as true! It is therefore, for these calculations, true. As in True. That means the probability is also true, given this model. But this probability doesn’t say anything about the model, it is a consequence of the model. If you don’t like this, then you shouldn’t have accepted this model as true.

What about that? Why did you accept this preposterous model as true? Who in the wide world of human nuttiness ever claimed that a forgetful Mother Nature regularly reached into a bag of temperatures and cast it over the surface of the deep? I’ll tell you. Nobody.

But that’s just the model everybody who is foisting these silly 1-in-27 Million Chance-like statistics on us believes. Or claims to believe, else they never would have quoted these numbers. But we know they don’t believe the model! So why do they quote these numbers?

Because they want to scare you into believing, without evidence, that their alternative model, apocalyptic man-made global warming, is true.

Some thing or things caused the temperatures to take the values we observed. If we knew what these causes were, i.e. what the model was, we would state them, yes? The chance of seeing what we saw, given this model, would not be 1 in 27 million, right? Since we would know the causes, the chance of seeing what we project would be 100%. Take an apple and drop it. Given the causal model of gravity, what is the chance apple meets earth? 100%.1

Do we have a full causal model for temperature? We do not. If we did, then meteorologists and climatologists would not make mistaken forecasts. Because they often do (especially climatologists), it must be that their models are incomplete. We do not know all the causes of temperature. But because we do not, it does not—it absolutely does not with liberty bells on—mean that we do know the cause is man-made global warming.

This is the sense in which the 1 in 27 million is wrong. It’s the right answer to a question nobody asked, based on a model no sane person believes. It is utterly useless in discovering whether global warming is true or false. If this is not now obvious to you, you are lost, lost.

—————————————————————————————

1You are not being clever but obtuse by suggesting that, say, something interferes with the apple’s (and earth’s) path. That interference changes the model, it is an additional premise. The model is no longer gravity, but gravity-with-interference.
