Analysis Of A Sermon From The Secular Scientism Priest Neil deGrasse Tyson

Say Cheesy. Our man is on the far right.

It’s not that anybody of stature claims Neil deGrasse Tyson is one of our greatest minds; instead, it has been said he is a harmless entertainer. In this there is some truth. But he is also a sort of snide yet jolly secular priest, and a harmful one because the religion he preaches—scientism—is a false one. Don’t believe me: let’s let Tyson prove this in his own words, taken from his sermon “What Science Is — and How and Why It Works”.

Science distinguishes itself from all other branches of human pursuit by its power to probe and understand the behavior of nature on a level that allows us to predict with accuracy, if not control, the outcomes of events in the natural world.

This sounds like a boast, but it is only a definition. Who didn’t know the job of science? We could have also said, for example, Literature distinguishes itself from all other branches of human pursuit by its power to probe and understand the behavior of people on a level that allows us to predict with accuracy, if not control, the outcomes of events of people’s lives. Science is only one branch of knowledge.

The scientific method, which underpins these achievements, can be summarized in one sentence, which is all about objectivity:

Do whatever it takes to avoid fooling yourself into thinking something is true that is not, or that something is not true that is.

However, this “method” is also the one that applies in literature, philosophy, mathematics, theology, and so forth. Science has no special case to make in the pursuit of truth. Yet Scientist Tyson intimates that because science does a superior job at discovering scientific truths, scientific truths are superior to other truths. This is obviously false, not the least because science could not do its job without the truths from the other areas of human endeavor. Indeed, a strong case can be made that scientific truths are the least important to mankind because no scientific truth gives us any insight into life and death, the purpose and meaning of our lives, morality, and so forth.

Since [Galileo and Bacon], we would further learn not to claim knowledge of a newly discovered truth until multiple researchers, and ultimately the majority of researchers, obtain results consistent with one another.

A “newly discovered truth” is a truth regardless of how many researchers believe or can prove it. This is just Tyson writing poorly. He meant to say that we should not say new claims are true until the claims have received consistent verification. Yet since the advent of Big Science, we know that grandiose claims made by breathless press releases often receive the biggest reward. And those claims which further the Establishment (particularly government) are often claimed to be true contrary to evidence.

Science discovers objective truths…

Once an objective truth is established by these methods, it is not later found to be false. We will not be revisiting the question of whether Earth is round; whether the sun is hot; whether humans and chimps share more than 98 percent identical DNA; or whether the air we breathe is 78 percent nitrogen.

Science is not alone in discovering objective truths, and science can only discover truths about the contingent world. Also, science would be a dull field indeed were it only to catalog contingent objective truths. It doesn’t take a full-fledged “scientist” to say the sun is hot or even that the earth is round (despite the many fictions that have grown up claiming mankind only knew this latter objective truth recently). Science is therefore not a dry collection of objective truths, but an attempt at understanding the (secondary) causes of these truths. Why is the sun hot? That requires an explanation, a theory, which is subject to revision, to revisiting.

So the only times science cannot assure objective truths is on the pre-consensus frontier of research, and the only time it couldn’t was before the 17th century, when our senses — inadequate and biased — were the only tools at our disposal to inform us of what was and was not true in our world.

Once again, Scientist Tyson has confused observation with explanation. And somehow he forgets that our senses—inadequate and biased—are still the only tools we have to make sense of the data presented to us. It is only the case that we have created tools which provide our inadequate and biased senses with new data. A microscope still presents to the eye an image which must be interpreted.

Objective truths exist outside of your perception of reality, such as the value of pi; E = mc²; Earth’s rate of rotation; and that carbon dioxide and methane are greenhouse gases. These statements can be verified by anybody, at any time, and at any place. And they are true, whether or not you believe in them.

This is the product of a mind overtaxed. Not all these statements can be verified by anybody. But that carbon dioxide is a greenhouse gas, for instance, is of itself of only minor interest. How much influence it has is another question entirely. Tyson does not appear to understand the difference.

Meanwhile, personal truths are what you may hold dear, but have no real way of convincing others who disagree, except by heated argument, coercion or by force. These are the foundations of most people’s opinions…You don’t have to like gay marriage. Nobody will ever force you to gay-marry. But to create a law preventing fellow citizens from doing so is to force your personal truths on others. Political attempts to require that others share your personal truths are, in their limit, dictatorships.

I’m guessing, were a migrant to hold a knife at Scientist Tyson’s throat, his “personal truth” that murder is wrong would at least come to his mind if not his lips. Would he try to force this personal truth on his would-be slayer? That murder is wrong is not, of course, a scientific answer. Science is mute on morality and on, for instance, gay marriage. It is only a mind saturated in scientism that could say if you don’t like gay marriage, you don’t have to participate in it. It’s like saying that if you don’t like murder, you don’t have to participate in it.

Note further that in science, conformity is anathema to success.

It is here I doubled over in laughter and could not follow the remainder of the sermon to its end.

Is Using FanDuel Or DraftKings Gambling? What Is A “Game Of Chance”?



Heard about the legal troubles of FanDuel and DraftKings? These are fantasy sports companies that allow people to pick lineups (according to certain rules) for professional sporting matches and to win money based on wise picks. The New York Attorney General and others are going after these companies because they claim fantasy sports contests are gambles.

Here’s the setup in brief (go to the original sites for details): The men chosen in the lineups earn points for various activities, like running for a touchdown or getting a base hit. The fantasy game user who picked the lineup (per game or set of games) that earned the most points wins the contest. Fees are paid to enter contests, and the winner (or winners) take a cut of the pool, the remainder going to the fantasy sports company.

Is this gambling? I mean, legally speaking? What I don’t know about the law could fill a library, so I won’t attempt any legal answer. I can only give my opinion about the terms used by lawyers when considering what makes contests games of skill or (as they call it) chance.

The very useful site Legal Sports Report has an excellent article on the situation, “Analyzing FanDuel’s Statistical Arguments On Skill Vs. Chance At The New York Hearing”. The writer, Peter Hammon, said this of New York gambling laws:

1. “Contest of chance” means any contest, game, gaming scheme or gaming device in which the outcome depends in a material degree upon an element of chance, notwithstanding that skill of the contestants may also be a factor therein.

2. “Gambling.” A person engages in gambling when he stakes or risks something of value upon the outcome of a contest of chance or a future contingent event not under his control or influence, upon an agreement or understanding that he will receive something of value in the event of a certain outcome.

What Is A Game Of Chance?

The legal question is thus whether a contest depends “in a material degree upon an element of chance”. So, what is chance? There is no such thing. Chance does not exist. I understand many think it does; New York clearly believes in it. Still, it wouldn’t be the first time the law is in error.

If you haven’t already, please read the article “What Is A Game Of Chance?” which defines terms and in which I conclude a “game of chance” is a “game where the causes are unknown but where the outcomes are defined”. I’m going to assume readers here have complete knowledge of that article.

What does this definition have to do with fantasy sports? Fantasy contests would be “games of chance” if their outcomes were like those in craps, in the sense that no causes (or proxies) could be measured.

Are Fantasy Contests Games Of Chance?

First, it is clear pikers enter fantasy contests having no idea what is happening, knowing only that they could “win money”. The same happens in casinos, of course, and even bars, with drunks falling into poker games, folks who haven’t a clue how the game works but who believe riches are only a few bets away. But surely it is unfair to judge a fantasy company, or a casino, on just the ill-thought-out behavior of ignorant (I use this word in its technical sense) users. To understand the role of “chance” we thus need to look at those people who at least claim to know what is happening. And that means looking at how the games are constructed.

Users pick lineups (subject to various restrictions) hoping that the men chosen for the lineups will excel during professional sports games and thus garner more points according to the specific posted rules of the contests. Users can enter multiple lineups per contest (some enter hundreds). Unlike craps (this was the example used in the linked article on games of chance), the number of possible points is not known in advance, except that it is bounded below by 0.

A running back carrying a ball in for a touchdown causes that touchdown. Actually, his activities are some of several causes; there are various blocks and so on by other players that also contribute. A fantasy user who has the running back on his lineup can’t predict in advance all the exact causes for that touchdown, but if he has good knowledge of football, he might know which running backs are better at securing touchdowns in the games which are part of a fantasy contest.

Every contest has a list of possible men that could be picked for a lineup. This makes for a huge number of potential lineups. Each of these potential lineups, even if they are not picked by a fantasy user, would result in a score. Here is a tricky point. In craps, because we know how totals are constructed we know, in advance, that some scores are more probable than others. Are some scores more probable than others in fantasy contests? This would be true only if we could, a priori and based only on the rules of the contests (and not on “data” of past contests), discover that certain lineups of player types (quarterbacks over running backs, say) result in more points than others. Given the nature of the scoring rules, which award points by activities which themselves are contingent, I can’t see how this could be so. But if somebody could derive those probabilities, in a strictly mathematical sense, they become the baseline knowledge I’ll discuss next.

Now the scores of all potential lineups can be ranked, smallest to largest. I mean “all” as in all, i.e. even those lineups no fantasy user picked. (Forming all possible lineups is a simple problem in combinatorics.) A clear indication that fantasy users are demonstrating skill, meaning they have some understanding of the causes behind the points they are awarded, is that their scores consistently fall into the upper range of this ranking. The idea is like this: a “chance”, i.e. “no-causal-knowledge”, user chooses any of the potential lineups; each lineup, to this “chance” or “no-causal-knowledge” player, has the same probability of garnering more points in a match-up with another “no-causal-knowledge” user. But a skilled player has knowledge that many lineups are poor, which is why they aren’t picked.
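The ranking idea can be sketched with a toy pool of players. The names, point totals, and three-man roster below are invented for illustration; real contests impose positional and salary restrictions this sketch ignores.

```python
from itertools import combinations

# Hypothetical player pool with made-up fantasy points for one week.
points = {"QB_A": 24, "QB_B": 11, "RB_A": 19, "RB_B": 6,
          "RB_C": 14, "WR_A": 21, "WR_B": 3, "WR_C": 9}

LINEUP_SIZE = 3  # toy roster size

# Score every potential lineup, including ones no user picked.
all_lineups = list(combinations(points, LINEUP_SIZE))
scores = sorted(sum(points[p] for p in lu) for lu in all_lineups)

def percentile(lineup):
    """Where a lineup's score falls in the ranking of all potential lineups."""
    s = sum(points[p] for p in lineup)
    return sum(1 for x in scores if x <= s) / len(scores)

print(percentile(("QB_A", "RB_A", "WR_A")))  # a skilled pick: top of the ranking
print(percentile(("QB_B", "RB_B", "WR_B")))  # a poor pick: near the bottom
```

A user whose picks consistently land in the upper tail of this ranking is demonstrating knowledge of cause; a “no-causal-knowledge” user’s picks would scatter uniformly over it.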

Even a casual acquaintance with sports indicates most potential lineups will result in low scores. Experts, those with some knowledge of cause, should be able to beat “chance/no-causal-knowledge” users handily and often. Think of it this way (to quote myself from the linked article): take two craps players, one a novice but who knows the rules, and the other an expert who claims, falsely, that he is able to measure some of the relevant causes. Pitted against one another, each is as likely to win (more money) as the other. But if the second player truly can measure some of the relevant causes, he will beat the first fellow consistently. How consistently depends on the extent of his causal knowledge. Same thing with expert fantasy users pitted against “no-causal-knowledge” users.

What Do Others Say?

Fantasy companies have begun answering the charges about games of chance. Hammon commented, “FanDuel released data that showed about 50% of the prize money is won by 1% of the winners. In a game dominated by chance, you would expect a more equal distribution of prize money over time.” But Hammon doesn’t like this argument because the “average number of entries per week and the top 1% of winners predominantly fell in the ‘500+ entries’ category while the bottom 1% of winners all fell in the ’25 or fewer entries’ category.”

A user who entered every potential lineup would win (or at least tie) every contest, but this approach obviously requires no skill. There are ways, however, of handling multiple entries per contest in comparisons, so Hammon’s objection doesn’t have much force. For instance, consider that the expert multiple-entry user could be matched against the “no-causal-knowledge” user given the same number of entries (from the pool of all possible lineups).

Hammon said, “FanDuel looked at the performance of lineups created completely at random…and compared them to the performance of the average FanDuel user lineup across”. He doesn’t like this either:

Because the simulated lineups were selected randomly and without a salary floor, that means they would include low-priced reserve players who would not have been drafted by actual FanDuel users. Not surprisingly, the average FanDuel user lineups won most of the time. But all this proves is that even the worst FanDuel contestant is smart enough to avoid drafting reserves who don’t see playing time.

If the objection is that the “random” (there is no such thing as “random”, so he means what I meant by potential) lineups include men who could not be used by FanDuel users, then Hammon’s objection is sound. But if these men aren’t drafted because FanDuel users don’t like them, because for instance they are thought to have little athletic ability, then this indicates FanDuel users know something of cause. In other words, FanDuel users are showing skill.

Hammon has some other objections at that site, and I’ll let you read those on your own. At another site he comments on an approach taken by DraftKings in demonstrating skill. Basically, DraftKings found expert users and pitted them against average users, and discovered the experts were much better. Well, no surprise. But that some users can consistently come out on top gives terrific evidence that skill (knowledge of some cause) is required.


Fantasy sports contests are not “games of chance” in the same sense as for instance dice games are. In order to be consistently successful at fantasy games, players have to have knowledge of the sports in question to staff their lineups, over and above the knowledge that this or that potential lineup is allowed by the fantasy rules. In other words, some skill is needed to be successful.

Perfect skill in fantasy sports is not attainable. But neither is perfect skill in, say, weather forecasting attainable. Consider that physicists know a lot of the causes of tomorrow’s potential rain, but even they don’t guess right every time. Yet nobody would claim (except jokingly) weather forecasting is a game of chance.

Note: I have an interest in this subject, though not with either of the companies mentioned.

What Is A Game Of Chance?


This originally ran 4 December 2015, but since we need it for tomorrow’s crucial article on whether fantasy sports are “games of chance”, it is best to refresh our memories. This is also complementary to yesterday’s post.

What is a game of chance? There is no such thing as chance. Chance does not exist, though many think it does. Given that the term crops up repeatedly, and people do take meaning from it, we have to figure out what it is that people think the term means with respect to gambling. Best guess, as I’ll show, is that chance appears to be a synonym of mostly not predictable. “Games of chance”, therefore, could be translated as “games which are mostly not predictable”.

Take craps. On the come-out, the shooter wins with a 7 or 11. The two-dice total will come to something, possibly this 7 or 11. Is this total “chance”? Well, craps is taken by the law (and by everybody else) as a game of “chance.” But what is really meant is that the outcome of the roll is not predictable beyond the constraints that the two-dice total must fall in the range 2 to 12. This knowledge of the bounds is a form of predictability, albeit a weak one.

If the only—pay attention here: I mean this word in its literal sense—information that we have is that “There will be a game played which will display a number between 2 and 12 inclusive”, then we can quantify our uncertainty in this number, which is that each number has probability 1/11 of showing (there are 11 numbers in 2 to 12).

Craps players have more information than this. They know the total can be from 2 to 12, but they also know the various ways the total can be constructed, e.g. 1 + 1, 1 + 2, 2 + 1, …, 6 + 6. There are 36 different ways to get a total using this new information, and since some of the totals are identical, the probability is different for different totals. For example, snake eyes, 1 + 1, has 1/36 probability; there are 6 ways to get a total of 7, for a probability of 1/6.

It should be clear that these different probabilities are not a property of the dice (or the dice and table and shooter, etc.). If probability were a physical property, then it would have to be that the total of 2 has both a 1/11 physical probability and a 1/36 physical probability! How does it choose between them? Quantum mechanics? (There is a big hint here about interpreting QM, which I’ll skip today.) No, probability is a state of mind; rather, it expresses our uncertainty given specific information. Change the information, change the probability.
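The “change the information, change the probability” point can be made concrete by conditioning the same 36 outcomes on different states of knowledge. The predicates below are illustrative stand-ins for what each observer has measured.

```python
from fractions import Fraction

# The full set of ordered two-dice outcomes.
outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]

def chance_snake_eyes(known):
    """Chance of snake eyes, conditioned on what the observer has measured."""
    possible = [o for o in outcomes if known(o)]
    return Fraction(sum(o == (1, 1) for o in possible), len(possible))

print(chance_snake_eyes(lambda o: True))         # knows only the outcomes: 1/36
print(chance_snake_eyes(lambda o: o[0] == 1))    # has measured one die: 1/6
print(chance_snake_eyes(lambda o: o == (1, 1)))  # has measured every cause: 1
```

Same dice, same roll; the probability differs because the conditioning information differs.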

There are any number of physical mechanisms that cause each dice total, causes of which we are mostly or completely ignorant. We know the causes must be there, we just don’t know what they are. We do know there are many causes: imagine the bouncing rolling dice flopping around, buffeted by this and that. If we knew some of these causes for individual rolls—perhaps we could measure them in some way as the dice fly; say, by noting the walls of the table are cushier and more absorbent than usual—then we could incorporate that causal information and, again, update the probabilities of the totals. A 7 might be more or less probable depending on how the information “plays”.

Casinos base their craps payouts on their “vig” (or cut, or “percentage”) and on the probabilities calculated only using the information of how the totals are reached. If you had extra information about causes, you could use that to “beat” the system—assuming their vigorish is not too vigorous; it’s the transaction fees that always kill you. Unless your knowledge of cause is complete, you might not necessarily beat the casino for any single game, but if you have good causal knowledge, you will beat them over multiple games. It is for this reason that casinos ban contrivances that could measure causes or proxies of causes.

Nobody could, in the scenario of a casino (but in a physics lab, the situation would be different) measure all causes of a dice roll; but to win consistently, all causes don’t need to be measured, just some of them, or their proxies (things related to a cause which is measurable). It is the measurement and knowledge of cause, and not just bounds, that requires skill and turns “games of chance” into “games of skill.”

Think of it this way: take two craps players, one a novice but who knows the rules, and the other an expert who claims, falsely, that he is able to measure some of the relevant causes. Pitted against one another, each is as likely to win (more money) as the other. But if the second player truly can measure some of the relevant causes—perhaps he is a physicist with secret measurement devices which allow him to know some but not all causes—he will beat the first fellow consistently. How consistently depends on the extent of his causal knowledge.
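A toy simulation shows the effect. Here a peek at one die stands in for the physicist’s partial measurement of the causes; the bet (“total of 7 or more”) and all numbers are illustrative assumptions, not anything in the original example.

```python
import random

random.seed(1)  # reproducible toy simulation

N = 100_000
novice_correct = expert_correct = 0
for _ in range(N):
    d1, d2 = random.randint(1, 6), random.randint(1, 6)
    high = (d1 + d2) >= 7
    # The novice knows only the rules; "7 or more" covers 21 of 36
    # outcomes, so always betting high is his best fixed strategy.
    novice_correct += high
    # The "physicist" measures a proxy of the causes: a peek at one die.
    # He bets high exactly when that die shows 4 or more.
    expert_correct += ((d1 >= 4) == high)

print(novice_correct / N)  # about 21/36, or 0.58
print(expert_correct / N)  # about 27/36, or 0.75
```

The partial measurement lifts the win rate from roughly 58% to roughly 75%; more complete causal knowledge would lift it further, all the way to certainty.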

We have arrived at a better definition of “game of chance”. It is “game where the causes are unknown but where the outcomes are defined”. Once any of the causes become known, the game becomes, at least partially, a game of skill.

The Four Errors in Mann et al’s “The Likelihood of Recent Record Warmth”


Michael E. Mann and four others published the peer-reviewed paper “The Likelihood of Recent Record Warmth” in Nature: Scientific Reports (DOI: 10.1038/srep19831). I shall call the authors of this paper “Mann” for ease. Mann concludes (emphasis original):

We find that individual record years and the observed runs of record-setting temperatures were extremely unlikely to have occurred in the absence of human-caused climate change, though not nearly as unlikely as press reports have suggested. These same record temperatures were, by contrast, quite likely to have occurred in the presence of anthropogenic climate forcing.

This is confused and, in part, in error, as I show below. I am anxious people understand that Mann’s errors are in no way unique or rare; indeed, they are banal and ubiquitous. I therefore hope this article serves as a primer in how not to analyze time series.

First Error

Suppose you want to guess the height of the next person you meet when going outdoors. This value is uncertain, and so we can use probability to quantify our uncertainty in its value. Suppose as a crude approximation we used a normal distribution (it’s crude because we can only measure height to positive, finite, discrete levels and the normal allows numbers on the real line). The normal is characterized by two parameters, a location and spread. Next suppose God Himself told us that the values of these parameters were 5’5″ and 1’4″. We are thus as certain as possible in the value of these parameters. But are we as certain in the height of the next person? Can we, for instance, claim there is a 100% chance the next person will be, no matter what, 5’5″?

Obviously not. All we can say are things like this: “Given our model and God’s word on the value of the parameters, we are about 90% sure the next person’s height will be between 3’3″ and 7’7″.” (Don’t forget children are persons, too. The upper range is odd because the normal is, as advertised, crude. But it does not matter which model is used: my central argument remains.)
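Working in inches (5’5″ = 65, 1’4″ = 16), that 90% interval can be checked with the standard library; the normal model itself remains the crude assumption already granted.

```python
from statistics import NormalDist

# God-given parameters: location 65 inches (5'5"), spread 16 inches (1'4").
height = NormalDist(mu=65, sigma=16)

# Central 90% predictive interval for the next person's height.
lo, hi = height.inv_cdf(0.05), height.inv_cdf(0.95)
print(round(lo, 1), round(hi, 1))  # about 38.7 and 91.3 inches: 3'3" to 7'7"
```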

What kind of mistake would it be to claim that the next person will be for certain 5’5″? Whatever name you give this, it is the first error which pervades Mann’s paper.

The temperature values (anomalies) they use are presented as if they are certain, when in fact they are the estimates of a parameter of some probability model. Nobody knows that the temperature anomaly was precisely -0.10 in 1920 (or whatever value was claimed). Since this anomaly was the result of a probability model, to say we know it precisely is just like saying we know the exact height will be certainly 5’5″. Therefore, every temperature (or anomaly) that is used by Mann must, but does not, come equipped with a measure of its uncertainty.

We want the predictive uncertainty, as in the height example, and not the parametric uncertainty, which would only show the plus-or-minus in the model’s parameter value for temperature. In the height example, we didn’t have any uncertainty in the parameter because we received the value from on High. But if God only told us the central parameter was 5’5″ +/- 3″, then the uncertainty we have in the next height must widen to take this extra uncertainty into account. The same is true for temperatures/anomalies.
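One way to see the widening: if the location parameter is itself uncertain as a normal with spread 3″ (an assumption made here for illustration), the predictive distribution for the next height is again normal with spread √(16² + 3²), a standard normal-normal result. The widening is modest with these numbers, and grows with the parameter uncertainty.

```python
from math import sqrt
from statistics import NormalDist

# Parameters certain: predictive spread is just sigma = 16 inches.
certain = NormalDist(65, 16)

# Location known only as 65 +/- 3 (normal): the predictive distribution
# for the next height is normal with spread sqrt(16**2 + 3**2), i.e. wider.
predictive = NormalDist(65, sqrt(16**2 + 3**2))

w_certain = certain.inv_cdf(0.95) - certain.inv_cdf(0.05)
w_predictive = predictive.inv_cdf(0.95) - predictive.inv_cdf(0.05)
print(round(w_certain, 1), round(w_predictive, 1))  # the second is wider
```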

Therefore, every graph and calculation in Mann’s paper which uses the temperatures/anomalies as if they were certain is wrong. In Mann’s favor, absolutely everybody makes the same error as they do. This, however, is no excuse. An error repeated does not become a truth.

Nevertheless, I, like Mann and everybody else, will assume that this magnificent, non-ignorable, and thesis-destroying error does not exist. I will treat the temperatures/anomalies as if they are certain. This trick does not fix the other errors, which I will now show.

Second Error

You are in Las Vegas watching a craps game. On the come out, a player throws a “snake eyes” (a pair of ones). Given what we know about dice (they have six sides, one of which must show, etc.) the probability of snake eyes is 1/36. The next player (because the first crapped out) opens also with snake eyes. The probability of this is also 1/36.

Now what, given what we know of dice, is the probability of two snake eyes in a row? Well, this is 1/36 * 1/36 = 1/1296. This is a small number, about 0.0008. Because it is less than the magic number in statistics, does that mean the casino is cheating and causing the dice to come up snake eyes? Or can “chance” explain this?

First notice that in each throw, some things caused each total, i.e. various physical forces caused the dice to land the way they did. The players at the table did not know these causes. But a physicist might: he might measure the gravitational field, the spin (in three dimensions) of the dice as they left the players’ hands, the momentum given the dice by the throwers, the elasticity of the table, the friction of the tablecloth, and so forth. If the physicist could measure these forces, he would be able to predict what the dice would do. The better he knows the forces, the better he could predict. If he knew the forces precisely he could predict the outcome with certainty. (This is why Vegas bans contrivances to measure forces/causes.)

From this it follows that “chance” did not cause the dice totals. Chance is not a physical force, and since it has no ontological being, it cannot be an efficient cause. Chance is thus a product of our ignorance of forces. Chance, then, is a synonym for probability. And probability is not a cause.

This means it is improper to ask, as most do ask, “What is the chance of snake eyes?” There is no single chance: the question has no proper answer. Why? Because the chance calculated depends on the information assumed. The bare question “What is the chance” does not tell us what information to assume, therefore it cannot be answered.

To the player, who knows only the possible totals of the dice, the chance is 1/36. To the physicist who measured all the causes, it is 1. To a second physicist who could only measure partial causes, the chance would be north of 1/36, but south of 1, depending on how the measurements were probative of the dice total. And so forth.

We have two players in a row shooting snake eyes. And we have calculated, from the players’ perspective, i.e. using their knowledge, the chance of this occurring. But we could have also asked, “Given only our knowledge of dice totals etc., what are the chances of seeing two snake eyes in a row in a sequence of N tosses?” N can be 2, 3, 100, 1000, any number we like. Because N can vary, the chance calculated will vary. That leads to the natural question: what is the right N to use for the Vegas example?
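How the chance of “two snake eyes in a row somewhere in N tosses” grows with N can be computed exactly with a small recursion, conditional only on the dice-total knowledge above. The recursion tracks the probability of no run yet, split by whether the last roll was snake eyes.

```python
from fractions import Fraction

SE = Fraction(1, 36)  # snake eyes on any single roll, given dice knowledge only

def prob_two_in_a_row(n):
    """Chance of at least one run of two consecutive snake eyes in n rolls."""
    # a: no run yet, last roll not snake eyes; b: no run yet, last roll was.
    a, b = Fraction(1), Fraction(0)
    for _ in range(n):
        a, b = (a + b) * (1 - SE), a * SE
    return 1 - (a + b)

print(prob_two_in_a_row(2))                # 1/1296, the N = 2 case above
for n in (10, 100, 1000):
    print(n, float(prob_two_in_a_row(n)))  # same event, different chances
```

The same pair of snake eyes is “shocking” at N = 2 and unremarkable at N = 1000, which is why the choice of N must come from somebody’s decision, not from the sequence itself.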

The answer is: there is no right N. The N picked depends on the situation we want to consider. It depends on decisions somebody makes. What might these decisions be? Anything. To the craps player who only has $20 to risk, N will be small. To the casino, it will be large. And so on.

Why is this important? Because the length of some sequence we happen to observe is not inherently of interest in and of itself. Whatever N is, it is still the case that some thing or things caused the values of the sequence. The probabilities we calculate cannot eliminate cause. Therefore, we have to be extremely cautious in interpreting the chance of any sequence, because (a) the probabilities we calculate depend on the sequence’s length and the length of interest depends on decisions somebody makes, and (b) in no case does cause somehow disappear the larger or smaller N is.

The second error Mann makes, and an error which is duplicated far and wide, is to assume that probability has any bearing on cause. We want to know what caused the temperatures/anomalies to take the values they did. Probability is of no help in this. Yet Mann assumes because the probability of a sequence calculated conditional on one set of information is different from the probability of the same sequence calculated conditional on another set of information, that therefore the only possible cause of the sequence (or of part of it) is thus global warming. This is the fallacy of the false dichotomy. The magnitude and nature of this error is discussed next.

The fallacy of the false dichotomy in the dice example is now plain. Because the probability of the observed N = 2 sequence of snake eyes was low given the information only about dice totals, it does not follow that therefore the casino cheated. Notice that, assuming the casino did cheat, the probability of two snake eyes is high (or even 1, assuming the casino had perfect control). We cannot compare these two probabilities, 0.0008 and 1, and conclude that “chance” could not have been a cause, therefore cheating must have.

And the same is true in temperature/anomaly sequences, as we shall now see.

Third Error

Put all this another way: suppose N is a temperature/anomaly series of which a physicist knows the cause of every value. What, given the physicist’s knowledge, is the chance of this sequence? It is 1. Why? Because it is no different than the dice throws: if we know the cause, we can predict with certainty. But what if we don’t know the cause? That is an excellent question.

What is the probability of a temperature/anomaly sequence where we do not know the cause? Answer: there is none. Why? Because since all probability is conditional on the knowledge assumed, if we do not assume anything no probability can be calculated. Obviously, the sequence happened, therefore it was caused. But absent knowledge of cause, and not assuming anything else like we did arbitrarily in the height example or as was natural in the case of dice totals, we must remain silent on probability.

Suppose we assume, arbitrarily, only that anomalies can only take the values -1 to 1 in increments of 0.01. That makes 201 possible anomalies. Given only this information, what is the probability the next anomaly takes the value, say, 0? It is 1/201. Suppose in fact we observe the next anomaly to be 0, and further suppose the anomaly after that is also 0. What are the chances of two 0s in a row? In a sequence of N = 2, and given only our arbitrary assumption, it is 1/201 * 1/201 = 1/40401. This is also less than the magic number. Is it thus the case that Nature “cheated” and made two 0s in a row?

Well, yes, in the sense that Nature causes all anomalies (and assuming, as is true, that we are part of Nature). But this answer doesn’t quite capture the gist of the question. Before we come to that, assume, also arbitrarily, a different set of information: say that the uncertainty in the temperatures/anomalies is represented by a more complex probability model (our first arbitrary assumption was also a probability model). Let this more complex probability model be an autoregressive moving-average, or ARMA, model. This model has certain parameters, but assume we know what these are.

Given this ARMA, what is the probability of two 0s in a row? It will be some number. It is not of the least importance what this number is. Why? For the same reason the 1/40401 was of no interest. And it’s the same reason any probability calculated from any probability model is of no interest to answer questions of cause.
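To see that the ARMA number is “some number” and nothing more, one can simulate. The sketch below uses an AR(1) model rather than a full ARMA, and its parameters (phi, sigma) and seed are pure inventions for illustration; the frequency that comes out is conditional on those inventions and on nothing else:

```python
import random

# Estimate P(two anomalies rounding to 0.00 in a row) under an *assumed*
# AR(1) model with made-up parameters.  Whatever number results, it is
# conditional on the model and says nothing about cause.

random.seed(1)
phi, sigma = 0.6, 0.1          # invented autoregressive coefficient and noise sd
trials, hits = 100_000, 0
for _ in range(trials):
    x1 = random.gauss(0, sigma)
    x2 = phi * x1 + random.gauss(0, sigma)
    if round(x1, 2) == 0.0 and round(x2, 2) == 0.0:
        hits += 1

print(hits / trials)           # some small number; its exact value is beside the point
```

Change phi or sigma and a different “probability of the sequence” falls out, which is exactly the problem: the answer tracks the assumption, not the cause.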

Look at it this way. All probability models are silent on cause, and cause is what we want to know. But if we can’t know cause (and don’t forget we’re assuming we don’t know the cause of our temperature/anomaly sequence), we can at least quantify our uncertainty in a sequence conditional on some probability model. But since we’re assuming the probability model, the probabilities it spits out are the probabilities it spits out. They do not and cannot prove the goodness or badness of the model assumption. And they cannot be used to claim something other than “chance” is the one and only cause: that’s the fallacy of the false dichotomy.

If we assume the model we have is good, for whatever reason, then whatever probability it gives the sequence, the sequence must still have been caused, and this model wasn’t the cause. Just as in the dice example, where the probability of two snake eyes, according to our simple model, was low. That low probability did not prove, one way or the other, that the casino cheated.

Mann calls the casino not cheating the “null hypothesis”. Or rather, his “null hypothesis” is that his ARMA model (he actually created several) caused the anomaly sequence, with the false-dichotomy alternate hypothesis that global warming was the only other (partial) cause. This, we now see, is wrong. All the calculations Mann provides to show probabilities of the sequence under any assumption, whether one of his ARMA models or one of his concocted CMIP5 “all-forcing experiments”, have no bearing whatsoever on the only relevant physical question: what caused the sequence?

Fourth Error

It is true that global warming might be a partial cause of the anomaly sequence. Indeed, every working scientist assumes, as is almost a truism, that mankind has some effect on the climate. The only question is: how much? And the answer might be: only a trivial amount. Thus it might also be true that global warming, as a partial cause, is ignorable for most questions or decisions made about values of temperature.

How can we tell? Only one way. Build causal or determinative models that have global warming as a component. Then make predictions of future values of temperature. If these predictions match the observations (how to judge a match is an important question I here ignore), then we have good (but not complete) evidence that global warming is a cause. But if they do not match, we have good evidence that it isn’t.
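The matching step can be sketched in a few lines. Everything below is invented for illustration: the forecasts, the observations, and the tolerance that defines a “match” (the very question I said I would ignore):

```python
# Sketch of the prediction test described above: a causal model earns
# evidence only by matching *new* observations.  All numbers are hypothetical.

predictions = [0.30, 0.35, 0.41, 0.47]   # invented model forecasts
observed    = [0.28, 0.31, 0.29, 0.30]   # invented later observations
tolerance   = 0.05                        # arbitrary definition of a "match"

matches = [abs(p - o) <= tolerance for p, o in zip(predictions, observed)]
print(matches)                            # [True, True, False, False]
```

A run of False values, growing as the forecasts pull away from the observations, is evidence against the model's causal story, however well it fit the past.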

Predictions of global temperature from models like CMIP, which are not shown in Mann, do not match the actual values of temperature, and haven’t for a long time. We therefore have excellent evidence that we do not understand all of the causes of global temperature and that global warming as it is represented in the models is in error.

Mann’s fourth error is to show how well the global-warming-assumption model can be made to fit past data. This fit is only of minor interest, because we could also get good fit with any number of probability models, and indeed Mann shows good fit for some of these models. But we know that probability models are silent on cause, therefore model fit is not indicative of cause either.


Calculations showing “There was an X% chance of this sequence” always assume what they set out to prove, and are thus of no interest whatsoever in assessing questions of cause. A casino can ask “Given the standard assumptions about dice, what is the chance of seeing N snake eyes in a row?” if, for whatever reason, it has an interest in that question; but whatever the answer, i.e. however small that probability, it does not answer what causes the dice to land the way they do.

Consider that casinos are diligent in trying to understand cause. Dice throws are thus heavily regulated: they must hit a wall, the player may not do anything fancy with them (as pictured above), etc. When dice are old they are replaced, because wear indicates lack of symmetry and symmetry is important in cause. And so forth. It is only because casinos know that players do not know (or cannot manipulate) the causes of dice throws that they allow the game.

It is the job of physicists to understand the myriad causes of temperature sequences. Just like in the dice throw, there is not one cause, but many. And again like the dice throw, the more causes a physicist knows the better the predictions he will make. The opposite is also true: the fewer causes he knows, the worse the predictions he will make. And, given the poor performance of causal models over the last thirty years, we do not understand cause well.

The dice example differed from the temperature example because with dice there was a natural (non-causal) probability model. We don’t have that with temperature, except to say we know only the possible values of anomalies (as the example above showed). Predictions can be made using this probability model, just as predictions of dice throws can be made with their natural probability model. Physical intuition argues that temperature predictions from this simple model won’t be very good. Therefore, if prediction is our goal, and it is a good goal, other probability models may be sought in the hope that these will give better performance. But as good as those predictions might be, no probability will tell us the cause of any sequence.

Because an assumed probability model said some sequence was rare, it does not follow that the sequence was caused by whatever mechanism takes one’s fancy. You still have to do the hard work of proving the mechanism was the cause, and that it will remain a cause into the future. That is shown by making good predictions. We are not there yet. And why, if you did know cause, would you employ some cheap and known-to-be-false probability model to argue an observed sequence had low probability, conditional on assuming this probability model is true?

Lastly, please don’t forget that everything in Mann’s calculations, and in my examples after the First Error, is wrong because we do not know with certainty the values of the actual temperature/anomaly series. The probabilities we calculate for this series to take certain values can take the uncertainty we have in these past values into account, but it becomes complicated. That many don’t know how to do it is one reason the First Error is ubiquitous.


Why don’t you try publishing this in a journal?

I’ll do better. It’s a section in my upcoming book Uncertainty. Plus, since I am independent and therefore broke and not funded by big oil or big government, I cannot afford page charges. Besides, more people will read this post than will read a paper in some obscure journal.

But aren’t you worried about peer review?

Brother, this article is peer review. Somebody besides me will have to tell Mann about it, though, because Mann, brave scientist that he is, blocked me on Twitter.

Could you explain the genetic fallacy to me?

Certainly: read this.

Have you other interesting things to say on temperature and probability?

You know I do. Check these out.