Is God’s Existence Confirmed By Prophecy Via Probability?

What are the chances of that?

I came across a curious book by Marvin Bittinger, a mathematics educator, called The Faith Equation: One Mathematician’s Journey in Christianity. I’m a sucker for books like this, which seek to prove various metaphysical propositions using probability. None of them are in the least convincing, but I can’t help but be fascinated.

Why? Well, though it’s possible to be uncertain of a metaphysical proposition, just as it’s possible to be uncertain of any physical one, with physics we know we’re in the realm of the contingent, where much, most, or even all certainty is denied to us. But in metaphysics, the only satisfying argument is one which ends in truth or falsity. The indifference found in statements like “God might exist” or “God probably doesn’t exist” is dismaying. A question that important should have a definitive answer.

Which it does, incidentally. God’s “existence” is well proved via more than a dozen different arguments, all metaphysical, starting with true premises reaching valid conclusions leaving no uncertainty. But don’t let’s fight over these today, else we will get lost.

Bittinger says that once we accumulate a number of fulfilled prophecies, each of them amazingly unlikely, the probability for God’s existence must be high. What’s a prophecy? “[A] prediction of the future, typically a promise made by God through his prophets.” He adds, “If thousands of these promises are fulfilled, it is incredible evidence of the Bible’s reliability.”

This isn’t a rigorous definition. As the Catholic Encyclopedia says, “St. Paul, speaking of prophecy in 1 Corinthians 14, does not confine its meaning to predictions of future events, but includes under it Divine inspirations concerning what is secret, whether future or not.” Plus, some prophecies are conditional, “Do X else Y”; if X is done, no Y. Is the prophecy then fulfilled? Well, yes, but understanding the outcome isn’t simple. Other prophecies are highly allegorical, so to speak. Just think about the book of Revelation.

Bittinger chose nine prophecies because, he claims, they “lent themselves to estimating, or reasoning, a probability.” All of these have been fulfilled and so now, he says, “have a probability of 1.” This is true: given that these events happened, the probability that they happened is indeed 1. But he’s concerned about the probability of these events before they happened.

What could that mean? On the 13th of April the Detroit Tigers played the San Diego Padres. Tigers lost (don’t weep). The probability the Padres won is therefore 1. But what was the probability they were going to win? There is no unique answer to that question. It is ill posed. All probability is conditional on the information supplied or evidence used. What evidence is the right or correct evidence? Historical record? This season’s outcomes? Player stats?

There isn’t any “right” evidence, though there is a sense in which a best evidence exists. But learning that best evidence isn’t always possible, especially in fluid human events like baseball games. Sometimes we can know something like the best, but only in highly controlled situations. Think experiments with inclined planes or electrons.

Now given any set of evidence, a probability can be had. Not necessarily a numerical probability. If the evidence about the ball game was just this: “Them Padres are lookin’ good. And the Tigers relief maybe ain’t so hot”, there is no numerical probability possible. Yes, people can state one, but they are not doing so based on this evidence.

The first prophecy Bittinger uses is “Israel’s Messiah Will Be Born in Bethlehem” from Micah 5:2. He gives this a 1 in a million shot. How? First, he went to the trouble of figuring the number of villages in which Jesus could have been born: about 1,000. Second, he figures the chance the prophecy would have been fulfilled 700 years after the prediction was made, which has the probability, he says, of 1/(2*700), a figure he generates using something called a “time principle.” These two probabilities are multiplied to get a number which is less than 1 in a million.

He does similar things for eight other prophecies, arriving at the cumulative product of 10^-76, which is mighty small. Therefore, and considering there are many more than nine prophecies in the Bible, mathematics shows God exists.
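The arithmetic is easy to reproduce, and reproducing it shows where the trouble lies: multiply enough small guessed numbers and the product is astronomically small no matter how shaky each guess is. Here is a sketch; the Bethlehem figure follows his stated recipe, but the other eight probabilities are hypothetical placeholders chosen only so the product lands near his reported order of magnitude.

```python
# Sketch of Bittinger-style arithmetic. Only the Bethlehem figure follows
# his stated recipe; the other eight numbers are hypothetical placeholders.
from math import prod

p_bethlehem = (1 / 1000) * (1 / (2 * 700))  # villages x "time principle"
print(p_bethlehem)  # about 7.1e-7: "less than 1 in a million"

# Eight more guessed probabilities (stand-ins, not his actual estimates):
others = [1e-8, 1e-9, 1e-7, 1e-10, 1e-8, 1e-9, 1e-11, 1e-7]
cumulative = p_bethlehem * prod(others)
print(f"{cumulative:.1e}")  # on the order of 10^-76
```

The product is tiny by construction: any nine numbers each below one in a million multiply to below 10^-54. The smallness measures the guesses, not the world.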

See what I mean? Unsatisfying in the extreme. I feel for Bittinger. He and I are fellow believers, and we agree with the prophecies. But I can’t agree with his arbitrary quantification. One reason is metaphysical. If the prophecies were unconditional, in the form of “X will happen”, then given (at least arguendo) the best evidence “God said X will happen and what God says goes”, then the probability of “X will happen” equals 1, a number with which even atheists would agree.

On the other hand, to the man who does not accept the “God said, etc.” premise, the before-the-fact prophecies are uncertain. And their fulfillment does give evidence to the hypothesis God exists. But it can never be conclusive evidence because the probabilities for fulfillment are not unique. Endless possibilities for disagreement about historical events exist.

Update If you want the best of the best of these probability arguments, check out (the start of) one by Richard Swinburne: Swinburne’s P-Inductive and C-Inductive arguments (for the existence of God). Another not so great: Bayes Theorem Proves Jesus Existed And Did not Exist.

On UFOs, Salt Intake, And Heart Disease

Some not-so-savory results

Michael “State of Fear” Crichton once proposed that UFOs were responsible for global warming. Why not? After all, something caused that record amount of snow in Detroit yesterday.

Don’t get me wrong. It was global warming which caused the snow—what else?—but something had to cause the global warming first. And that, as statistics demonstrate to a very high level of “significance”, was caused by UFOs. Roy Spencer has done the work “correlating” UFO reports and the environment. The statistics say it happened. (Thanks to KA Rodgers for reminding us of this.)

The statistics do prove the association. But nobody not actually preferring tinfoil-lined hats believes UFOs could be a cause of anything. Simultaneous movement in two (or more) time series, such as the increase in UFO reports and (say) ocean temperature, is a necessary condition to prove causality. But it is not a sufficient condition. Correlation does imply causation, but it is nowhere near proving it.
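The cheapness of co-movement is easy to demonstrate. Two independent random walks, where by construction neither series causes the other, routinely show strong “correlation.” A minimal sketch (the series are simulated, standing in for UFO reports and ocean temperatures):

```python
import numpy as np

rng = np.random.default_rng(42)

def spurious_corr(n=100):
    # Two independent random walks: by construction neither causes the other.
    ufo_reports = np.cumsum(rng.normal(size=n))
    ocean_temp = np.cumsum(rng.normal(size=n))
    return np.corrcoef(ufo_reports, ocean_temp)[0, 1]

# Repeat the "experiment" many times and count impressive correlations.
corrs = np.abs([spurious_corr() for _ in range(1000)])
print(f"fraction with |r| > 0.5: {np.mean(corrs > 0.5):.2f}")
```

A sizable fraction of the pairs correlate above 0.5 despite total causal independence, which is exactly why co-movement of two trending series is necessary but nowhere near sufficient evidence of causation.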

After all, since these two series moved together, it could also be that warming ocean temperatures are releasing more UFOs into the wild (the saucers have been parked down there, some say, for a very long time; haven’t you seen John Carpenter’s The Thing?).

I apologize for the winding introduction, but it was absolutely necessary to begin with an absurd example of how plotting two or more time series together could lead to insanity. Because here comes another entry, also in the same genre. Not temperatures and ocean levels. But yet another thing the government is most anxious to control. Salt.

The new peer-reviewed article “Salt reduction in England from 2003 to 2011: its relationship to blood pressure, stroke and ischaemic heart disease mortality” by Feng He and others claims that the “reduction in salt intake is likely to be an important contributor to the falls in [blood pressure] from 2003 to 2011 in England. As a result, it would have contributed substantially to the decreases in stroke and [ischemic heart disease] mortality.” (Thanks to reader Rich for alerting us to this study.)

Feng He, like Crichton, plotted the course of several time series, but only over four separate years. The picture above shows some of these series. The data themselves were taken from different sources and measured over different people (and even over slightly different times, but let that pass). The sample sizes of the different data sets were widely different, too.

Emphasis: salt intake was measured on different people in each of the four time periods.

Of the most important series, “Stroke and IHD mortality rates were calculated as the number of stroke or IHD deaths divided by the population.” Of course, the population of England changed over this time, mostly due to immigration of people whose eating habits probably weren’t the same as the native residents’ (I’m guessing, but it’s plausible).

More emphasis: nowhere was salt intake nor heart disease nor stroke occurrence measured on any individual. All we have is four time points for several different disparate heterogeneous series. Nowhere was immigration measured. Obviously, or perhaps not obviously, many other possible causes were not measured.

There thus could be no possibility of claiming causation, nor even really hinting of it. Too many other things might have caused the decrease in deaths by IHD and stroke. And also, over those same four time periods, people in England still died. Each person that died had to die of something. Therefore if there were decreases in the rates of some diseases, such as IHD and stroke, there had to be an increase in the rates of some other disease or diseases. (I’m guessing cancer.) It is very curious we do not also see plotted these other causes. In just the same way, we can say salt was the cause of these increases.

Enter classical statistics: out pop wee p-values, which are everywhere taken as proof that whatever the authors claim is therefore true. Sure, people know p-values aren’t proof; or at least that’s what they’ll tell you. But they believe they are, whatever they say.

Since salt was measured on different people than the outcomes, there is no proof that falling salt intake by a few hundred to a thousand people means anything. After all, the first people sampled in 2003 may have been eating the same amount of salt through 2011. There is no way to know they hadn’t. This paper is thus not much different than Spencer’s plotting UFO reports and ocean temperatures, except maybe Spencer’s is better since he used the same ocean throughout.

Anyway, here’s the kicker, the authors’ final word: “Therefore, continuing and much greater efforts are needed to achieve further reductions in salt intake to prevent the maximum number of stroke and IHD deaths.”

That speaks for itself. All uncertainty vanishes. The p-values are the final proof.

They’ll be coming after your salt next.

Government Per Capita Spending: Up, Up, And Away! Or, Happy Tax Day!

Here it is, folks: the amount of Federal Government spending per capita from 1901 to the present, projected out to 2019. Population projections were taken from the Census Bureau. Everything is adjusted to 2008 dollars (since we’re interested in the shape of this curve, the base year doesn’t matter).

The sky’s the limit!

This is an update to last year’s The Most Depressing Graph: Per Capita Federal Spending Rises Alarmingly.

The bumps are wars, naturally: the big ones—WWI, WWII, the Cold War, Afghanistan/Iraq—are easily seen. Korea and Vietnam not so much, but they’re still there (obviously).

The inexorable overall increase in this graph describes the size of Leviathan’s waist.

Sure, the GDP, however that might be calculated, has increased over this same period (also in constant dollars). But for our purposes this matters little. Besides, as the second most depressing graph shows (from last year; not yet updated), government spending as a percentage of GDP has also been on its way upwards and onwards.

I claim this graph is a direct proxy for the level, size, and intrusiveness of government (especially considering spending as percentage of GDP is also increasing at about the same rate). Every dollar spent is a fraction of control. And that control is not slowing; perhaps it is even accelerating, especially when we consider the full effect of Obamacare has not yet been felt.

There is nothing Yours Truly can see, save Divine intervention, which can slow the growth of the beast. Especially since an occult feedback is present. The more control the government has, the more control it, and its citizens for the most part, demand. For example, members of the intelligentsia still pen articles like “Yes, the government should spend more each year“. Too much is never enough.

Neither political party dares cut spending, and thus control, more than by token amounts. Every proposed cut is met by bleating and whining from (some segment of) the populace. The Republicans, it’s true, might slow the acceleration of spending, just as the Democrats would increase it, but there is no evidence the increasing trend will be anything but increasing no matter who is in power.

The amount spent per capita shows the dependence the people have on government. $12,000 (2008 dollars) per person. A family—are we still allowed to use that word? or is it “bigoted”—of four sucks up, on average, $48,000. Well, that money isn’t spread evenly, of course. Those that have get more. Those that have more are those companies which do business with the government; and this isn’t solely military contractors, but universities, hospitals, TSA/NSA suppliers, and so forth.

Don’t let’s forget the deficits remain and are projected to remain. The debt therefore must increase. The closest most politicians come to fixing this problem is to acknowledge the problem might exist. The nearest most citizens come to fixing it is to say, “What debt? I want more.”

All this being true, it becomes a matter of fun to make projections. Simply lining a ruler up to the plot and continuing the line doesn’t seem far wrong. But that doesn’t account for the occult feedback. I’m guessing there will come a year which finds us at an inflection point, where the graph begins a rapid, perhaps even astronomical, increase. This is when we embrace full socialism.

Since that word is anathema to most Americans, we’ll call it something else, just as progressive historians are anxious to paint the National Socialists of Germany as non-socialists. Whatever cosmetic fix is discovered, it seems clear that unless there is outside intervention (war? true pandemic? meteor?) socialism must come.

What do you think?

The Hot Hand: Statistical Fluke Or Genuine Article?

My hand does not appear aflame.

My hand does not appear aflame.

I’ll save you hunting through the text. It’s a real thing. If you want to know why, read on. If not, you just tell ‘em W.M. Briggs sez so, which is enough to stop any argument (though perhaps not in your favor).

The hot hand fell on hard times after Tom Gilovich and pals seemed to prove, via statistics, that the appearance of hot-handed shots were “due to chance” or that the shots were really “random.” Now that makes no sense. Every shot that makes and every shot that misses is caused to do so because of some reason and that reason can’t be chance. Chance is not a thing, nor is randomness—they are not physical entities—therefore it is impossible they can cause shots to make or miss.

ESPN’s Aaron McGuire “How the hot hand rose from the ashes” quoted the central premise of the original Gilovich paper: “Each player has an ensemble of shots that vary in difficulty [depending, for example, on the distance from the basket and on defensive pressure], and each shot is randomly selected from this ensemble.”

This makes no sense. Each player, taking into consideration the swarming bodies surrounding him, causes, in the moment he takes them, the shots he takes. He does not “select” the shots from some mysterious “ensemble.” The player himself, the physics of basketballs in flight, and the actions of the other players, even the behavior of the fans (as they affect the players), cause the shot to make or miss.

Now you in the stands watching the game won’t know when the player will take his next shot, nor whether it will go in, so to you, based on the information you have, the shot is “random”, which only means unknown, which is obvious. Emphasis: the shot is not random, only your understanding whether it makes or misses.

McGuire points to a new paper by Andrew Bocskocsky and others (“The Hot Hand: A New Approach to an Old ‘Fallacy’“, pdf) which uses different statistical methods than Gilovich. According to Bocskocsky his statistics prove the hot hand is real but small.

Bocskocsky makes the same mistakes in interpretation as Gilovich, however, and talks about shots being (or not being) “independent” from one another. It is impossible that shots are causally independent. Everything that happens in a game is contingent on what happened earlier in the game. Thus earlier shots must affect latter ones. If the opposition sees a man is “on fire”, and they see that because they have seen the majority of his (difficult?) shots go on, they are likely to increase guarding him. And so forth.

We may not be able to predict to reasonably accuracy what will happen from what came before, which only means our knowledge of (some) earlier shots is irrelevant to our predictions of future ones. Saying shots are “independent” or “random” is to mix up causal language with epistemological language, confusing why shots make or miss from with the level of our uncertainty in whether future shots will make or miss. It is “future” because we already know all about the past shots.

So just what is a hot hand? How about the kind of thing Wilt Chamberlain did when he scored 100 points? Which seems to be the same kind of thing Kobe Bryant did when he once scored 81.

Perhaps, as many argue. Chamberlin’s record isn’t as impressive as it first seems, but it’s still something special. And nobody poo-poos Bryant’s. Those two observations are all the instances we need to show that a hot hand exists. These don’t prove anybody else ever had it, or ever will. But these two single statistics, or data points, prove the hot hand is a reality.

Now it is rational to suppose, given these two extreme observations, that since the hot hand certainly exists on the large scale, that it probably does on the medium or small scale. And though Yours Truly is no expert on basketball statistics, we often hear of men scoring an unusual number of points in a game. So it appears the hot hand exists at the medium scale, too.


A Twitter follower pointed me to this article some time ago, but I neglected to write down who. I apologize for that.

Truth: Logic of Probability and Statistics

Doubt truth and end up with a haircut like this.

Here, as promised, is rough, incomplete, outline-only, not-yet-finished, gist-only version of Chapter 1, Truth, for the book tentatively titled The Philosophy of Probability and Statistics (I’m also toying with The Philosophy of Science, Probability, and Statistics and This Is The Book You Were Thinking Of).

Truth is meant as an introduction or guide to truth and not a disquisition. I don’t have the space for a full justification of the realism-coherence description of truth (which is anyway obvious; and if you say it isn’t, it is), nor for a complete survey of all the alternatives and why they are wrong. I only have enough to prepare the ground for probability—a subject which, it will be no secret to regular readers, is vaster than the mathematical quantification ordinarily thought of as “scientific.”

Chapter 2, incidentally, is Logic, and contains some material that complements Truth, such as a proof that logic cannot be empirical and that our knowledge of it must be, in part, built in. So if you see something missing in Truth, it might be in Logic. Same thing for Chapter 3, Induction. I’m still wavering whether to put Causality with Induction or bust it out on its own.

These three beginning chapters (and a Preface) are the necessary foundation for understanding fully probability and statistics. They are therefore the hardest to write, especially since they must perforce be terse. I don’t want people skipping over things, so they can’t be too long; yet if they are too short, I risk giving key elements short shrift.

I’m not happy with the sections on Scientism and Faith: consider these well underdone, mere placeholders.

Statisticians (me included) receive no philosophy in their formal training, except for inconsistent occasional unanchored tidbits. This is why, for example, most repeat the false proposition “All models are wrong”, when nearly the exact opposite is true. Others claim to “use falsification all the time”, which itself is falsified. And so on. Like most people who have no education in an area, the limited knowledge statisticians do possess is thought sufficient and complete. Since this is not so, before work commences on the subject proper, I need these three (four?) chapters whose main job is to prove there is more to be known and to highlight and point to places where complete descriptions might be found.

Anyway, here you go. Unless you have something so secret you don’t want any except the NSA and me to know, please leave comments below and don’t email. That way I won’t misplace them. Don’t point out typos. Way too early for that. Oh, the footnotes, references, and index are far, far from complete.


Update Like I said, Faith was merely a sketch, but upon further reflection, I think I’ll add it to Induction, where it is much better placed (given what I want to say about belief in the unseen).

No Love Of Joy: Yet Another Author Claims Statistically Significant Temperature Change

The end is probably not nigh.

Update This originally ran 28 May 2013, but given Shaun Lovejoy’s latest effort in Climate Dynamics to square the statistical circle, it’s necessary to reissue. See the Lovejoy update at the bottom.

My Personal Consensus

I, a professional statistician, PhD certified from one of the top universities in the land—nay, the world—a man of over twenty years hard-bitten numerical experience, a published researcher in the very Journal of Climate, have determined that global temperatures have significantly declined.

You read that right: what has gone up has come back down, and significantly. Statistically significantly. Temperatures, he said again, have plunged significantly.

This is so important a scientific result that it bears repeating. And there is another reason for a recapitulation: I don’t believe that you believe me. There may be a few of you who are suspicious that old Briggs, well known for his internet hilarity, might be trying to pull a fast one. I neither josh nor jest.

Anyway, it is true. Global warming, by dint of a wee p-value, has been refuted.

Which is to say that according to my real, genuine, mathematically legitimate, scientifically fabricated scientific statistical scientific model (calculated on a computer), I was able to produce statistical significance and reject the “null” hypothesis of no cooling. Therefore there has been cooling. And since cooling is the opposite of warming, there is no more global warming. Quod ipso facto. Or something.

I was led to this result because many (many) readers alerted me to a fellow named Lord Donoughue, who asked Parliament a question which produced the answer that “the temperature rise since about 1880 is statistically significant.” Is this right?

Not according to my model. So who’s model, the Met Office’s or mine, is right?

Well, that’s the beauty of statistics. Neither model has to be right; plus, anybody can create their own.

Statistical model

Here’s the recipe. Grab, off the shelf or concoct your own with sweat and integrals, a model. The more scientific sounding the better. Walk into a party with “Autoregressive heteroscedastic GARCH process” or “Coupled GCM with Kalman-filtering cloud parameterization” on your lips and you simply cannot fail to be a hit.

Don’t despair of finding a model. They are as dollars to a bureaucracy: they are infinite! Thing is, all models, as long as they are not fully deterministic, have some uncertainty in them. This uncertainty is parameterized by a lot of knobs and switches which can be throw into any number of configurations.

Statistical “significance” works by tossing some data at your model and hoping that, via one of a multitude of mathematical incantations, one of these many parameters turns out to be associated with a wee p-value (defined as less than the magic number; only adepts know this figure, so if you don’t already have it, I cannot tell you).

If you don’t get a wee p-value the first time, you keep the model but change the incantation. There are several, which practically guarantees you’ll find joy. Statisticians call this process “hypothesis testing.” But you can think of it as providing “proof” that your hypothesis is true.

Funny thing about statistics is that you can always find a model with just the right the set of parameters so that one, in the presence of data, is associated with a wee p-value. This is why, for example, one scientist will report that chocolate is good for your ticker, while another will claim chocolate is “linked to” heart disease. Both argue from a different statistical model.

Same thing holds in global warming. One model will “confirm” there has been statistically significant cooling, another will say statistically significant warming.

Say What?

The global temperature (as measured operationally) has certainly changed since the 1800s. Something, or some things, caused it to change. It is impossible—as in impossible—that the cause was “natural random variation”, “chance” or anything like that. Chance and randomness are not causes; they are not real, not physical entities, and therefore cannot be causes.

They are instead measures of our ignorance. All physical and probability models (or their combinations) are encapsulations of our knowledge; they quantify the certainty and uncertainty that temperature takes the values it does. Models are uncertainty engines.

This includes physical and statistical models, GCMs and GARCHes. The only difference between the two is that the physical models ties our uncertainty of temperatures to knowledge of other physical processes, while statistical models wed uncertainty to mysterious math and parameterizations.

A dirty, actually filthy, open secret in statistics is that for any set of data you can always find a model which fits that data arbitrarily close. Finding “statistical significance” is as difficult as the San Francisco City Council discovering something new to ban. The only evidence weaker than hypothesis tests are raw assertions and fallacies of appeal to authority.

The exclusive, or lone, or only, or single, solitary, sole way to check whether any model is good is if it can skillfully predict new data, where “new” means as yet unknown to the model in any way—as in in any way. The reason skeptics exist is because no know model has been able to do this with temperatures past a couple of months ahead.

The Dramatic Conclusion

There isn’t a soul alive or dead who doesn’t acknowledge that temperatures have changed. Since it cannot be that the observed changes are due to “natural variation” or “chance,” that means something real and physical, possible many different real and physical things, have caused temperature to take the values it did.

If we seek to understand this physics, it’s not likely that statistics will play much of role. Thus, climate modelers have the right instinct by thinking thermodynamically. But this goes both directions. If we have a working physical model (by “working” I mean “that which makes skillful predictions”) there is no reason in the world to point to “statistical significance” to claim temperatures in this period are greater than temperatures in that period.

Why abandon the physical model and switch to statistics to claim significance when we know that any fool can find a model which is “significant”, even models which “prove” temperatures have declined? This is nonsensical as it is suspicious. Skeptics see this shift of proof and rightly speculate that the physics aren’t as solid as claimed.

If a statistical model has skillfully predicted new temperatures, and of course this is possible, then it is rational to trust the model to continue to do so (for the near horizon; who trusts a statistics model for a century hence?). But there is not a lot that can be learned from the model about the physics, unless the parameters of the model can be married to physical concepts. And if we can do that, we should be able to create skillful physical models. Good statistical models of physical processes thus work toward their own retirement.

Ready for the punch line? It is shocking and deeply perplexing why anybody would point to statistical significance to claim that temperatures have gone up, down, or wiggled about. If we really want to know whether temperatures have increased, then just look. Logic demands that if they have gone up, then they have gone up. Logic also proves that if they have gone down, then they have gone down. Statistical significance is an absurd addition to absolute certainty.

The only questions we have left are—not whether there have been changes—but why these changes occurred and what the changes will be in the future.

Lovejoy Update To show you how low climatological discourse has sunk, in the new paper in Climate Dynamics Shaun Lovejoy (a name which we are now entitled to doubt) wrote out a trivially simple model of global temperature change and after which inserted the parenthetical words “skeptics may be assured that this hypothesis will be tested and indeed quantified in the following analysis”. In published comments he also fixated on the word “deniers.” If there is anybody left who says climate science is no different than politics, raise his hand. Anybody? Anybody?

His model, which is frankly absurd, is to say the change in global temperatures is a straight linear combination of the change in “anthropogenic contributions” to temperature plus the change in “natural variability” of temperature plus the change in “measurement error” of temperature. (Hilariously, he claims measurement error is of the order +/- 0.03 degrees Celsius; yes, three-hundredths of a degree: I despair, I despair.)

His conclusion is to “reject”, at the gosh-oh-gee level of 99.9%, that the change of “anthropogenic contributions” to temperature is 0.

Can you see it? The gross error, I mean. His model assumes the changes in “anthropogenic contributions” to temperature and then he had to supply those changes via the data he used (fossil fuel use was implanted as a proxy for actual temperature change; I weep, I weep). Was there thus any chance of rejecting the data he added as “non-significant”?

Is there any proof that his model is a useful representation of the actual atmosphere? None at all. But, hey, I may be wrong. I therefore challenge Lovejoy to use his model to predict future temperatures. If it’s any good, it will be able to skillfully do so. I’m willing to bet good money it can’t.

The Somebody-Might-Get-Hurt! Fallacy

Hey, it could happen.

Word is our beneficent government, which loves us and would not see us fall into harm, is working on a design for a system of chains to anchor both citizens and illegal aliens to the earth. Why? Because gravity might reverse itself.

That, dear reader, despite its rank absurdity, is a true statement. Gravity might reverse itself. And if it does, we’d be in some pretty deep kimchee. So the government would be well justified in shackling us to the ground.

What we have is an actual possibility, a non-zero probability, of a unimaginable calamity. The ill effects of the calamity would be so awful that nobody could calculate them. Why, they’d be costlier than the entire Federal debt times two. It would be so horrific that the hosts of NPR to raise their voices.

Yet the whole thing is obviously absurd.

This is the Somebody-Might-Get-Hurt! fallacy, a.k.a. the What-About-The-Children! fallacy, a.k.a. the We’re-All-Going-To-Die fallacy, the Better-Safe-Than-Sure! fallacy. It is the only fallacy comes with an exclamation point (technically it should also be written in italics to emphasize its dire nature).

The only time this fallacy is written about soberly is when when it appears in scientific literature, where it is called the Precautionary Principle.

The old joke used to be that a sweater was defined as an article of clothing that a child put on when its mother got cold. Now it’s the same joke but “mother” has been swapped for “government.”

The problem lies in the nature of contingency. All physical events, such as gravity reversing itself, the climate spinning out of control and forcing the atmosphere to resemble an Easy-Bake oven, plastic bags tainting the water supply turning us all into three-armed mutants, dust in air causing hearts to seize up solid, and on and on, are all contingent.

Contingent physical events are not logically necessary. It is a rock-solid undefeatable fact of the universe that what happened could have happened differently, and thus that what might happen could be virtually anything. Mountains might grow legs and dance, goats might swell to terrible size and begin goring the populace, progressives might become tolerant of dissent. Anything that can be imagined might happen.

And therefore, the costs incurred from these mini-apocalypses might be astronomical, they might be incalculably large, almost infinite disruptions.

The means you can always threaten doom and use your lurid fantasy to justify almost any action that would “Save the planet!”

Because of these indisputable truths, the Somebody-Might-Get-Hurt! fallacy is an informal and not a formal fallacy, much in the way that the No True Scotsman and Slippery Slope informal fallacies are also not rigorous proofs your enemy’s argument are false. So it never does any logical good to tell the government that its latest ban is silly. They can always retort truthfully that unimaginable evils await unless they have their way.

Still, the Somebody-Might-Get-Hurt! fallacy is an informal fallacy, which means it can be answered.

When your mother used to tell you to put on a sweater or come out of the water, the natural retort was I am not cold. What that does is reject the premise used by your mom in building her threat. Or you might have been cold but were having too much fun so you said, “Oh, mom. Just five more minutes!” That rebuts the cost. You have to do the same thing with the government.

Yes, you admit, transfats might be killing more people than old age and so should be banned. But if they so deadly, where is the evidence of their effects? The probability of widespread death, given all observation, is apparently near zero. And then it’s none of the government’s business what kind of fats I want to eat.

Just like your mother, the government is not likely to buy that last argument. Everything is their business. They say. Since you are not intelligent enough to figure out for yourself the best way to live, the government, bristling with well credentialed experts, feels it must step in and do the job for you.

This is why instances where somebody invokes the Somebody-Might-Get-Hurt! fallacy turn into shouting matches. Either the argument is over the premises which drive the probability of the calamity, or its over who’s business the effects of the calamity are.

The only chance of winning against somebody beholden to the fallacy is ridicule. You won’t change your opponent’s mind, but you might convince enough others so that you outnumber your opponent.

But the smart money is on the government.