# Chapter 1 Excerpt from *Uncertainty: The Soul of Probability, Modeling & Statistics*

**Necessary & Conditional Truth**

Given “*x,y,z* are natural numbers and *x>y* and *y>z*” the proposition “*x>z*” is true (I am assuming logical knowledge here, which I don’t discuss until Chapter 2). But it would be false in general to claim, “It is true that ‘*x>z*’.” After all, it might be that “*x* = 17 and *z* = 32”; if so, “*x>z*” is false. Or it might be that “*x* = 17 and *z* = 17”; then again “*x>z*” is false. Or maybe “*x* = a boatload and *z* = a humongous amount”; then “*x>z*” is undefined or unknown unless there is tacit and complete knowledge of precisely how much is a boatload and how much is a humongous amount (which is doubtful). We cannot dismiss this last example, because a great portion of human discussions of uncertainty are pitched in this way.
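The conditional nature of “*x>z*” can be sketched mechanically. Here is a minimal Python check (the bound of 50 is my arbitrary illustrative choice, not part of the argument): given the premises, the conclusion never fails; strip the premises away and the proposition has no settled truth value.

```python
# Conditional on the premises "x, y, z are natural numbers and
# x > y and y > z", the proposition "x > z" always holds.
# The range 0..50 is an arbitrary illustrative sample.
for x in range(51):
    for y in range(51):
        for z in range(51):
            if x > y and y > z:   # the premises
                assert x > z      # the conditional truth never fails

# Without the premises, "x > z" has no fixed truth value:
x, z = 17, 32
print(x > z)  # False -- the book's counterexample
```

The code does not, of course, prove transitivity for all naturals; it only exhibits the book’s point that the truth of “*x>z*” is relative to the premises supplied.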

Included in the premise “*x,y,z* are natural numbers and *x>y* and *y>z*” is not just the raw information of the proposition about numbers, but also the tacit knowledge we have of the symbol *>*, of what “natural numbers” are, and even of what “and” and “are” mean. This is so for any argument which we wish to make. Language, in whatever form, must be used. There must therefore be an understanding of and about definitions, language, and grammar in any argument if any progress is to be made. These understandings may be more or less obvious depending on the argument. It is well to point out that many fallacies (and the best jokes) are founded on equivocation, which is the intentional or unintentional trading on double or multiple meanings of words or phrases. This must be kept in mind because we often talk about how the mathematical symbols of our formulae translate to real objects, how they matter to real-life decisions. A caution not heard frequently enough: just because a statement is mathematically true does not mean that the statement has any bearing on reality. Later we talk about how the deadly sin of reification occurs when this warning is ignored.

We have an idea what it means to say of a proposition that it is true or false. This needs to be firmed up considerably. Take the proposition “a proposition cannot be both true and false simultaneously”. This proposition, as I said above, is true. That means, to our state of mind, there exists evidence which allows us to conclude this proposition is true. This evidence is in the form of thought, which is to say, other propositions, all of which include our understanding of the words and English grammar, and of phrases like “we cannot believe its contrary.” There are also present tacit (not formal) rules of logic about how we must treat and manipulate propositions. Each of these conditioning propositions or premises can in turn be true or false (i.e. known to be true or false) conditional on still other propositions, or on inductions drawn upon sense impressions and intellections. That is, we eventually must reach a point at which a proposition in front of us just *is* true. There is no other evidence for this kind of truth other than intellection. Observations and sense impressions will give partial support to most propositions, but they are never enough by themselves except for the direct impressions. I explore this later in the Chapter on Induction.

In mathematics, logic, and philosophy, the most prominent kind of proposition known to be true because induction tells us so is the axiom. Axioms are indubitable—when considered. Arguments for an axiom’s truth are made like this: given these specific instances, thus this general principle or axiom. I do not claim, and it is not true, that everybody knows every axiom. The arguments for axioms must first be considered before they are believed. A good example is the principle of non-contradiction, a proposition which we cannot know is false (though, given we are human, we can always *claim* it is false). As said, for every argument we need an understanding of its words and grammar, and, for non-contradiction specifically, maybe the plain observation of a necessarily finite number of instances of propositions that are only true or only false, observations which are consonant with the axiom, but which are none of them the full proof of the proposition: there comes a point at which we just believe and, indeed, cannot do other than *know* the truth. Another example is one of Peano’s axioms: for every natural number, if *x = y* then *y = x*. We check this through specific examples, and then move via induction to the knowledge that it is true for every number, even those we have not and, given our finiteness, cannot consider. Axioms are known to be true based on the evidence and faith that our intellects are correctly guiding us.
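The checking of specific instances can itself be mimicked in a few lines of Python (the sample of 1,000 naturals is my arbitrary choice). The passage’s point survives the code: no finite run of checks is the proof; the step from these instances to *every* natural number is induction.

```python
# Check the symmetry axiom "if x = y then y = x" on finitely
# many instances. No finite run can reach every natural number.
samples = range(1000)  # an arbitrary finite sample
for x in samples:
    for y in samples:
        if x == y:         # the antecedent
            assert y == x  # the consequent holds in every checked case

# The leap from these checked instances to "all natural numbers"
# is supplied by induction, not by computation.
```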

This leads to the concept of the truly true, really true, just-plain true, universally, absolutely, or the *necessarily* true. These are propositions, like those in mathematics, that are known to be true given a valid and sound chain of argument which leads back to indubitable axioms. It is not possible to doubt axioms or necessary truths, unless there be a misunderstanding of the words or terms or chain of proof or argument involved (and this is, of course, possible, as any teacher will affirm). Necessary truths are true even if you don’t want them to be, even if they provoke discomfort, which (again of course) they sometimes do. Peter Kreeft said: “As Aristotle showed, [all] ‘backward doubt’ terminates in two places: psychologically indubitable immediate sense experience and logically indubitable first principles such as ‘X is not non-X’ in theoretical thinking and ‘Good is to be done and evil to be avoided’ in practical thinking”.

A man in the street might look at the scratchings of a mathematical truth and doubt the theorem, but this is only because he doesn’t comprehend what all those strange symbols mean. He may even say that he “knows” the theorem is false—think of the brave soul who claims to have squared the circle. It must be stressed that this man’s error arises from his not comprehending the whole of the argument. Which of the premises of the theorem he is rejecting, and this includes tacit premises of logic and other mathematical results, is not known to us (unless the man makes this clear). The point is that if it were made plain to him what every step in the argument was, he *must* consent. If he does not, he has not comprehended at least one thing, or he has rejected at least one premise, or perhaps substituted his own unawares. This is no small point, and the failure to appreciate it has given rise to the mistaken subjective theory of probability. Understanding the *whole* of an argument is a requirement for our admitting a necessary truth (our understanding is obviously not required by the necessary truth itself!).

From this it follows that when a mathematician or physicist says something akin to, “We now know Flippenberger’s theorem is true”, his “we” does not, it most certainly does not, encompass all of humanity; it applies only to those who can and *have* followed the line of reason which appears in the proof. That another mathematician or physicist (or man in the street) who hears this statement, but whose specialty is not Flippenbergerology, conditional on trusting the first mathematician’s word, also believes Flippenberger’s theorem is true, is not making (to himself) the same argument as the theory’s proponent. He instead makes a *conditional* truth statement: to him, Flippenberger’s theorem is *conditionally* true, given the premise of accepting the word of the first mathematician or physicist. Of course, necessary truths are *also* conditional as I have just described, so the phrase “conditional truth” is imperfect, but I have not been able to discover one better to my satisfaction. *Local* or *relative* truth have their merits, but their use could encourage relativists to believe they have a point, which they do not.

Besides mathematical propositions, there are plenty of other necessary truths that we know. “I exist” is popular, and only claimed to be doubted by the insane or (paradoxically) by attention seekers. “God exists” is another: those who doubt it are like circle-squarers who have misunderstood or have not (yet) comprehended the arguments which lead to this proposition. “There are true propositions” always delights, and it too has its doubters, who claim it is true that it is false. In Chapter 2 we meet more.

There are an infinite number and an enormous variety of conditional truths that we do and can know. I don’t mean to say that there are not an infinite number of necessary truths, because I have no idea, though I believe it; I mean only that conditional truths form a vaster class of truths in everyday and scientific discourse. We met one conditional truth above in “*x>z*”. Another is, given “All Martians wear hats and George is a Martian”, then it is conditionally true that “George wears a hat.” That this truth is conditional is plain enough in cases like hat-wearing Martians. Nobody would say, in a general setting, “It’s true that Martians wear hats.” Or if he did, nobody would believe him. This disbelief would be deduced conditional on the observationally true proposition, “There are no Martians”.
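The Martian syllogism can be sketched in Python as a minimal illustration, assuming nothing beyond the book’s two premises (the names `martians` and `wears_hat` are mine): the conclusion follows conditional on the premises, whether or not any Martians exist.

```python
# Premises, taken as given rather than observed:
martians = {"George"}  # "George is a Martian"
# "All Martians wear hats" is encoded in the rule below.

def wears_hat(being):
    # Conditional on the premises, every Martian wears a hat.
    if being in martians:
        return True
    return None  # the premises are silent about non-Martians

assert wears_hat("George") is True  # conditionally true
assert wears_hat("Henri") is None   # no verdict either way
```

The design choice of returning `None` for non-Martians mirrors the point that the premises license no conclusion, true or false, outside their scope.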

We sometimes hear people claim conditional truths are necessary truths, especially in moral or political contexts. A man might say, “College professors are intolerant of dissent” and believe he is stating a necessary truth. Yet this cannot be a necessary truth, because no valid and sound chain of argument anchored to axioms can support it. But it may be an extrapolation from “All the many college professors I have observed have been intolerant of dissent”, in which case the proposition is still not a necessary truth, because (as we’ll see) observational statements like this are fallible. Hint: The man’s audience, if it be typical, might not believe the “All” in the argument means *all*, but only “many”. But that substitution does not make the proposition “Many college professors are intolerant of dissent” necessarily true, either.

Another interesting possibility is in the proposition “Some college professors are intolerant of dissent,” where *some* is defined as *at least one and potentially all*. Now if a man hears that and recalls, “I have met X, who is a college professor, and she was intolerant of dissent”, then conditional on that evidence the proposition of interest is conditionally true. Why isn’t it necessarily true? Understand first that the proposition is true for you, too, dear reader, if we take as evidence “I have met X, etc.” Just as “George wears a hat” was conditionally true on the other explicit evidence. It may be that you yourself have not met X, nor any other intolerant-of-dissent professor, but that means nothing for the epistemological status of these two propositions. But it now becomes obvious why the proposition of interest is not necessarily true: because the supporting evidence “I have met X, etc.” cannot be held up as necessarily true itself: there is no chain of sound argument leading to indubitable axioms which guarantees it is a logical necessity that college professors must be intolerant of dissent. (Even if it sometimes seems that way.)

We have to be careful because when people speak or write of truths they usually do not tell us whether they have in mind a necessary or only a conditional truth. Much grief is caused because of this.

One point which may not be obvious. A necessary truth is just true. It is not true *because* we have a proof of its truth. Any necessary truth is true *because* of something, but it makes no sense to ask what that something is for any necessary truth. Why is the principle of non-contradiction true? What is it that *makes* it true? Answer: we do not know. It just is true. How do we *know* it is true? Via a proof, by strings of deductions from accepted premises and using induction, the same way we know any proposition is true. We must ever keep separate the epistemological from the ontological. There is a constant danger of mistaking the two. Logic and probability are epistemological, and only sometimes speak of, or aim at, the ontological. Probability is always a state of the mind and not a state of the universe.

RE: “A caution not heard frequently enough: just because a statement is mathematically true does not mean that the statement has any bearing on reality. Later we talk about how the deadly sin of reification occurs when this warning is ignored.”

The entire premise of today’s book excerpt is itself guilty of the “deadly sin of reification.”

The error is the implicit presumption that if someone is doing it wrong, or has committed the ‘reification sin’, this occurs by mistake and is correctable by education. The model Briggs is implicitly using is that those applying stats are doing so with the intent to do it right.

That is, too often, wrong.

Too commonly stats are applied with willful intent to generate a very particular answer, or, to give undue credibility to something, etc. That is the sin of deception, lying.

Some clichés to that effect:

Torture the data until it confesses.

Figures don’t lie, but liars figure.

This typically occurs in one of two ways: 1) by those manipulating information to deceive others (usually to get them to buy or endorse something), or 2) by people who need to deceive themselves — very often AFTER they’ve made a decision or taken some action and need reassurance that their decision or action was best.