All probability is conditional and we are always interested in some proposition, call it X. We want to know “the probability of X”. Well, there is none: not ever. There is no unconditional “probability” of anything.
There is, however, Pr(X | E), where E is some evidence (data, observations, premises, surmises, whatever). Change the evidence, change the probability.
In the precautionary principle, X is some disaster or undesirable event. Now it is easy to supply some E and deduce a probability of X. If there were widespread agreement on this E, then there would be agreement on the probability of X. The converse also holds: disagreement about E guarantees disagreement about the probability. In global warming, there is great disagreement about evidence, usually because lovers of models choose to forget their creations’ flaws. Love is blind.
But the precautionary principle doesn’t quite work that way in practice. Usually the X is dire and the E is missing except to say E_c = “X is possible”, which is another way of saying X is contingent. All contingent things are logically possible; i.e. they are not impossible. With E_c, officially 0 < Pr(X | E_c) < 1, which tells us almost nothing except that X is not logically impossible and that X is not logically necessary. Weak evidence indeed.
Notice very carefully that Pr(X | E_c) is not 0.5, nor any other single number. You can’t use this interval to argue the probability “may be likely”. This is a false reading. The probability is the interval.
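That E_c pins down only an interval, never a number, can be made concrete with a toy sketch. The evidence sets and probabilities below are purely hypothetical illustrations, not anything from the post:

```python
# Toy illustration (hypothetical evidence and numbers).
# E_c = "X is possible" constrains Pr(X | E_c) only to the open interval (0, 1).
# Each stronger, hypothetical evidence set below is consistent with E_c,
# yet each yields a different probability: E_c alone picks no single number.

hypothetical_evidence = {
    "E1: strong evidence against X": 0.001,
    "E2: weak evidence either way":  0.5,
    "E3: strong evidence for X":     0.999,
}

for evidence, pr in hypothetical_evidence.items():
    # Every value lies strictly inside (0, 1), as E_c requires.
    assert 0 < pr < 1
    print(f"Pr(X | {evidence}) = {pr}")
```

Three different evidence sets, three different probabilities, all consistent with “X is possible”: the interval is the answer, not any point inside it.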
Another point of stress: if we knew or agreed upon decent E such that Pr(X|E) exceeded some decision threshold, then we would not need the precautionary principle. We’d have Pr(X|E) and could use some regular form of decision making. The precautionary principle is only invoked when such evidence is missing. Indeed, it is used to supply the missing evidence. The argument is that we don’t know E, so we don’t know the probability, but we do know E_c, thus X could happen, therefore X is sufficiently likely, thus we ought to do something. This is obviously a fallacy.
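What regular threshold decision making looks like can be sketched in a few lines. The threshold of 0.1 below is a made-up illustration, not a recommended value; the point is that the rule requires an actual number Pr(X|E) as input, which E_c never supplies:

```python
# Minimal sketch of ordinary threshold decision making (hypothetical threshold).

def should_act(pr_x_given_e, threshold=0.1):
    """Ordinary decision rule: act iff the probability clears the threshold."""
    if pr_x_given_e is None:
        # No agreed evidence E means no probability, so the rule cannot run.
        raise ValueError("No Pr(X|E) available; the rule cannot run.")
    return pr_x_given_e >= threshold

print(should_act(0.25))  # evidence in hand, probability clears threshold: act
print(should_act(0.02))  # evidence in hand, probability too low: do not act
# should_act(None) would raise: "X is possible" supplies no number to compare.
```

The fallacy is precisely the attempt to feed the rule an interval where it demands a number.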
Or the precautionary principle is used when evidence exists which shows the chance of X is very low indeed. Say E_r (for realized evidence); then Pr(X|E_r) = ε > 0, but only just. It’s then said that because this probability is greater than 0, it is sufficient, provided the doom in X is disastrous enough. That’s why yesterday at The Stream I illustrated the precautionary principle with a hostile alien invasion.
An invasion is a contingent event, so it’s logically possible. There is plenty of (non-quantitative) evidence that this chance is near zero, such as the vast distances of space and so forth. But it could happen! And, like I said, if it did, nothing short of the Apocalypse would be as bad. Thus, according to the precautionary principle, we should—even must!—act to stop it.
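The move being criticized here, a tiny ε multiplied by an arbitrarily large loss, can be put in a toy expected-loss form. All the numbers below are hypothetical:

```python
# Toy version of "tiny probability times enormous loss" (hypothetical numbers).
# However small eps = Pr(invasion | E_r) is, an expected-loss rule triggers
# once the posited loss is large enough, so the principle can demand action
# against anything not strictly impossible.

def precaution_triggers(eps, loss, cost_of_precaution):
    """Act whenever expected loss from inaction exceeds the cost of precaution."""
    return eps * loss > cost_of_precaution

eps = 1e-12   # near-zero chance of invasion (hypothetical)
cost = 1e9    # cost of a planetary defense program (hypothetical)

print(precaution_triggers(eps, loss=1e15, cost_of_precaution=cost))  # False
print(precaution_triggers(eps, loss=1e30, cost_of_precaution=cost))  # True
```

Since the doom in X can always be posited larger, the rule can be made to fire for any ε greater than zero.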
Yet alien invasions are only the start of contingent doomsday events that might destroy us all. Rocks from space, viral mutations, planetary plagues, black holes plunging into the ocean, serial volcanic eruptions, the core of the Earth spinning out of control, a Hillary presidency, rapacious nanobots, rogue humanoid robots, and on and on. Because each is possible and each would destroy mankind, no amount of protection is too much.
And then there are the troubles I mentioned yesterday: the precautionary principle applied to itself. If we can’t agree on evidence sufficient to say something about the probability of X, then the effects of protecting against X by manipulating the causes or possible causes of X are also unknown, and may be just as hazardous as, or even more hazardous than, X itself.
The solution is boring. Return to the hard work of amassing evidence such that we can agree on the evidence and compute reasonable probabilities. Tough, grueling, time-consuming labor.
Or you can run around like an addled fool and call your detractors “Deniers!” or “Troglodytes!”
Gee, maybe I was wrong about the precautionary principle after all: https://t.co/8PREh76WSZ
— William M. Briggs (@mattstat) July 22, 2015
Update: Another Twitter interaction.
I hope readers can see that.
If a “black swan” is defined as X = “an event which we know nothing about”, then we cannot find evidence E probative of it. There is no probability. Notice that “nothing” is a very strong word. Make sure you get this.
What I have seen is that some define “black swans” as events which we can characterize at least partly. For instance, X = “Destruction of the human race” (or, in vulgar terms, X = “Loss of all capital”). X gives no idea how the event comes about, thus finding evidence probative of it is a problem. Like I said above, we can go on endlessly positing different ways the world can end. All these can form E, which, taken together, make X all but certain. Ponder this.
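How piling scenarios into E drives the probability toward certainty can be shown with a toy calculation, assuming, purely for illustration, many independent scenarios each with the same small chance:

```python
# Toy illustration (hypothetical numbers): stack enough independent tiny risks
# into E and the probability that at least one doom occurs creeps toward 1.

p_each = 0.01  # hypothetical per-scenario chance of doom

for n_scenarios in (10, 100, 1000):
    # Pr(at least one doom) = 1 - Pr(no doom in any scenario)
    pr_any_doom = 1 - (1 - p_each) ** n_scenarios
    print(f"{n_scenarios:>4} scenarios: Pr(at least one) = {pr_any_doom:.4f}")
```

With a thousand such scenarios the probability is all but 1, which is exactly why the resulting “frightening probability” carries no information about what to do.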
But because we packed everything into E, and formed a frightening probability of X, we have learned nothing. Which of the elements of E should we protect against—if it’s even possible? We can’t say “all of them”, because this is silly.