William M. Briggs

Statistician to the Stars!


Jerry Coyne Has A Go At Pope Francis. Shoots Self In Foot

One of the demons which torment Jerry Coyne?

Since it’s Halloween, we may as well examine the spooks, hobgoblins, and bogeymen which taunt and haunt the minds of our public undead, which is to say, our intellectuals.

Now no one of these creatures is frightening by himself. But when you gather them together and concentrate their power—say, in universities—they are like zombies. The slow kind, I mean. They move inexorably forward, infecting or killing every good idea in their path. Being undead, they are difficult to take down.

A common zombified idea (yes, zombified) is that evolution implies atheism, which is why you see such hysterical defenses of the subject whenever anybody challenges even the smallest part of the theory. The implication is false, always has been false, and is easily seen as false. Evolution is perfectly consistent with, for instance, Catholic, Jewish, and Muslim faiths, among others.

Which is why Pope Francis, addressing a meeting of the Pontifical Academy of Sciences, said the other day that “Evolution in nature is not in contrast with the notion of (divine) creation because evolution requires the creation of the beings that evolve”.

This is almost a tautology. If you had no creatures, there would be no creatures that could evolve, n’est-ce pas? But where did the stuff that makes creatures come from? And, once these creatures got here, from where did the rules of evolution arise? Good questions, those. Here’s another: how did any of the rules, i.e. “scientific laws”, arise? Another: why is there something rather than nothing?

Pope: “The beginning of the world is not the work of chaos that owes its origin to something else, but it derives directly from a supreme principle that creates out of love…The ‘Big Bang’ [first posited by a Catholic priest, incidentally], that today is considered to be the origin of the world, does not contradict the creative intervention of God, on the contrary it requires it.”

The ever-angry Jerry Coyne was repulsed by the Pope’s commonsensical statement. He said there is “no evidence for the religious alternative of divine creation”. On the contrary, there is scads of it. There is more than a plethora. That Coyne does not know of this evidence, and not only publicly admits it, but boasts of his ignorance, is scary. Boo!

Coyne says, without supporting argument, that “the Vatican’s official stance on evolution is explicitly unscientific: a combination of modern evolutionary theory and Biblical special creationism.” If he means by “special creationism” that the rules of evolution and science and the universe itself (which is here defined as all there is) were designed and created by God, then he’s right. Modern evolutionary theory is, as Coyne suggests on this interpretation, not in the least inconsistent with Catholicism.

But I think instead that Coyne is just hyperventilating as biologists unthinkingly do whenever they hear the word “creation.”

Coyne: “The recent history of Catholicism and evolution is spotty. Pope Pius XII…insisted that humans…had been bestowed by God with souls, a feature present in no other species…Adam and Eve were seen as the historical and literal ancestors of all humanity.

“Both of these features fly in the face of science. We have no evidence for souls, as biologists see our species as simply the product of naturalistic evolution from earlier species…Further, evolutionary genetics has conclusively demonstrated that we never had only two ancestors…”

Coyne, of course, holds the zombified idea of soul, and has never bothered to teach himself what the soul really is. To Coyne, a “soul” is an aetheric, ectoplasm-like substance which hides from all known scientific probes. A chilling definition. Boo!

The mundane truth is that the soul is the form of man. Animals have souls, too, because they have forms. So do plants have souls. The nature of the human soul is different, because we are rational creatures, whereas animals and plants are not. Hey, Jerry, here’s some homework for you.

Could there have been a literal Adam and Eve? Our friend Mike Flynn provides the necessary reading here: “Darwin tells us…an ape that was not quite a man gave birth to a man that was no longer quite an ape…[who] had the capacity for rational thought; that is, to reflect on sensory perceptions and abstract universal concepts.”

Later Coyne says evolution “is not a process involving chance alone, but a combination of random mutations and deterministic natural selection” and he quotes Pope Benedict who said the “universe is not the result of chance”. Now this is scary. Coyne, whose self-labeled orientation is biologist, does not understand his own subject. Boo!

Evolution cannot be a product of “random” or “chance” mutations; neither did the universe come into creation by “chance.” It is unscientific in the extreme to think that “randomness” or “chance” cause anything to happen. Is Coyne saying evolution happens by magic? Randomness and chance are measures of (our) ignorance and nothing more. That we don’t know how the various causes of evolution came about does not mean these causes don’t exist—and neither can we conclude that the system which designed how these causes would work does not exist. To claim otherwise is to embrace mysticism. How irrational. Boo!

Ready for some bone-chilling spookiness?

Coyne again: “Let’s start with the Big Bang, which, said Francis, requires the intervention of God. I’m pretty sure physicists haven’t put that factor into their equations yet, nor have I read any physicists arguing that God was an essential factor in the beginning of the universe. We know now that the universe could have originated from ‘nothing’ through purely physical processes, if you see ‘nothing’ as the ‘quantum vacuum’ of empty space.”

Scare quotes around nothing! Boo! Somehow—more magic!—nothing is defined as a “quantum vacuum” and this “nothing” causes things to happen—never mind how!

Now we scientists would call the something that is a quantum vacuum something, not nothing. From whence did that vacuum, or whatever it is that is the most basic level of existence, arise? Obviously from God, because why? Because God is existence itself. There is no creation, no movement, no change, no nothing (i.e. something) without God.

Jerry Coyne “is in a tough spot, straddling an equipoise between modern science and antiscientific medieval” woo-wooiness (yes, woo-wooiness). He wants there not to be a God so badly that he is willing to abandon rationality and embrace magical thinking. Which is fine, except he wants you to do the same. Boo!

Doc Asks Fellows To Keep Statistics Simple

Drs Howard, Fine, and Howard check the result of a statistical model.

Our friend Christos Argyropoulos (@ChristosArgyrop) pointed us to a post at a popular medical site in which Stephen Reznick asks that we “Keep statistics simple for primary care doctors.”

He can’t read the journals, because why? Because “Medical school was a four year program. The statistics course was a brief three week interlude in the midst of a tsunami of new educational material presented in a new language…While internship and residency included a regular journal club, there was little attention paid to analyzing a paper critically from a statistical mathematical viewpoint.”

Reznick has been practicing for some time and admits his “statistical analysis skills have grown rusty…When the Medical Knowledge Self Assessment syllabus arrives every other year, the statistics booklet is probably one of the last we look at because not only does it involve re-learning material but you must first re-learn a vocabulary you do not use day to day or week to week.”

What he’d like is for journals to “Let authors and reviewers say what they mean at an understandable level.”

Now I’ve taught and explained statistics to residents, docs fresh out of med school, for a long time. And few to none of them remember the statistics they were taught either. Why should they? Trying to squeeze a chi-square test among all those muscles and blood vessels they must memorize isn’t easy, and not so rewarding either.

Medical students learn why the ankle bone is connected to the ulniuous, or whatever the hell it is, and what happens when this or that artery is choked off. Useful stuff—and all to do with causality. They never learn why the chi-square does what it does. It is presented as mystery, a formula or incantation to invoke when the data take such-and-such a form. Worse, the chi-square and all other tests have nothing to do with causality.

A physician reading a journal article about some new procedure asks himself questions like, “What is the chance this would work for patients like mine?”, or “If I give my patient this drug, what are the chances he gets better?”, or “How does the cure for this disease work?” All good, practical, commonsense queries.

But classical statistics isn’t designed to answer commonsense questions. In place of clarity, we have the “null” and “alternate” hypotheses, which in the end are nothing but measures of model fit (to the data at hand and none other). Wee p-values are strewn around papers like fairy dust. What causes what cannot be discovered, but readers are invited to believe what the author believes caused the data.

I’ve beaten this drum a hundred times, but what statistical models should do is to predict what will happen, given or conditioned on the data which came before and the premises which led to the particular model used. Then, since we have a prediction, we wait for confirmatory, never-observed-before data. If the model was good, we will have skillful predictions. If not, we start over.

“But, Briggs, that way sounds like it will take longer.”

True, it will. Think of it like the engineering approach to statistics. We don’t rely on theory and subjectively chosen models to build bridges or aircraft, right? We project and test. Why should we trust our health to models which have never been put through the fire?

One benefit would be a shoring up of the uncertainty of side effects, especially the long-term side effects, of new drugs. Have you seen the list of what can go wrong when you eat one of these modern marvels? Is it only us civilians who cringe when hearing “suicide” is a definite risk of an anti-depressant? Dude. Ask your doctor if the risk of killing yourself is right for you.

What the patient wants to know is something like, “If I eat this pill, what are the chances I’ll stroke out?” The answer “Don’t worry” is insufficient. Or should be. How many medicines are released only to be recalled because a particular side effect turned out more harmful than anticipated?

“Wouldn’t your scheme be difficult to implement?”

It’s a little known but open secret that every statistical model in use logically implies a prediction of new data. All we have to do is use the models we have in that way. This would allow us to spend less time talking about model fit and more about the consequences of particular things.
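
To make the point concrete, here is a hedged sketch in R. The data, the model, and every number below are invented for illustration only; the point is the last line, which asks the fitted model the physician’s question instead of reporting a test statistic.

```r
# A minimal sketch with made-up data (no real trial): fit whatever model you
# like, then use it to answer "What is the chance a patient like mine improves?"
set.seed(1)
trial <- data.frame(treated = rbinom(200, 1, 0.5))
trial$improved <- rbinom(200, 1, ifelse(trial$treated == 1, 0.55, 0.40))

fit <- glm(improved ~ treated, data = trial, family = binomial)

# Predictive probability of improvement for a new, treated patient,
# reported in place of (or alongside) a p-value on the treatment coefficient.
predict(fit, newdata = data.frame(treated = 1), type = "response")
```

The same prediction can then be checked against fresh patients, which is the wait-for-confirmatory-data step urged above.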

“What are the chances people will switch to this method?”

Slim.

If We Are What We Sexually Desire, How About These Curious People?

Say, baby.

Gender theory in brief says we are what we sexually desire. It’s not that we have desires, but that we are these desires. They are the core of our being. They make and form us. They are our orientation.

That’s why Yours Truly is not what he appears and what his biology made him, i.e. a man, a male human being, but is instead a “heterosexual” or, in slang, “a straight.” I cannot escape from this prison or these desires even if I wanted to, which I don’t. And since this state is forced upon me without my consent, and because anyway I like it, you must respect and even celebrate this fact. I must wear my orientation as a badge. You may not judge me.

We all know the other categorizations of desire and of their increasing prominence, so we needn’t cover them. But what do we make of these people, a group with very specific sexual desires?

Denmark already has a handful of animal brothels which, according to Ice News, a site specialized in Nordic reporting, charge between $85 and $170 depending on the animal of choice.

…24 percent of the population would like freedom of movement when it comes to pursuing beasts for pleasure. In a Vice Video aptly called “Animal [Edited]” one unnamed man explains what turns him on in the animal kingdom. “I’m into human females. I’m into horse females,” he says. “I’m asexual towards rats. I’m a bit voyeuristic about dogs and women.”

…People who literally love their animals have been tied to a series of side crimes. In August, a woman in New Mexico tried to kill her roommates after they witnessed her having sex with a dog and admitting to having sex “multiple times” with both roommates’ dogs. In September, a priest who was convicted of 24 counts of pedophilia against Inuit people in Nunavut, Canada, had a bestiality record as well.

It’s little known, but bestiality is legal in several countries, mostly in Europe. Some animal “rights” groups are seeking to change these laws because they are concerned that animals are not giving “consent” to these odd encounters. Well, the animal that turned into my breakfast sausage probably wasn’t consulted about that, either. But let that pass. What matters is that the acts, legal or not, are somewhat common, in the sense that this kind of desire has been known across the centuries.

What to call these folks? Zoophilia is the technical term for the desire, but “zoophiliacs” is unwieldy. How about woofies? That has a pleasant, nonjudgemental, evocative tone.

Since gender theory insists we are our desires, then people who lust after aardvarks and wombats and the like are not people but woofies.

Do woofies have certain gifts and qualities to offer society? Are we capable of welcoming these people, guaranteeing to them a fraternal space in our communities? Often woofies wish to encounter a culture that offers them a welcoming home. Are our communities capable of providing that, accepting and valuing their sexual orientation?

Good questions, those. The reader should answer them.

Now I know that some of you will have a “yuck” response and will say that woofie desires are “unnatural.” But I’m afraid that won’t do. Because to say something is “unnatural” is to logically imply there is such a thing as human nature. It is to admit that those critics who decry “sexual orientations” as so much farcical academic tootling and who say that instead natural law should be our guide to behavior are right. Do we really want that? Accept natural law and what happens to all those other “orientations” which are also unnatural? Some deep kimchee there, brother.

You might try insisting that woofie behavior is “disgusting”. That doesn’t fly, either. The acts of many orientations are disgusting, too, and are often crippling to health. And isn’t “disgusting” a matter of personal taste?

Can you say that woofies are “perverted”? No. That is to draw an artificial line, a line which cannot be discovered by natural law but only by reference to a vote, and votes are malleable. Today we say “perverted” and next week we all walk past the pet shop window with a gleam in our eyes, only to come back, in time, to “perverted.” People are fickle.

How about man-beast “marriages”? Several people have already walked down that aisle. “Marriage” is whatever we say it is anyway, so all woofies need to recognize their civil unions is a good judge.

Zoophobes, the bigots, haven’t a leg to stand on, morally speaking. Let’s ostracize them.

The Mysticism Of Simulations: Markov Chain Monte Carlo, Sampling, And Their Alternatives

Not a simulation.

Introit

Ever heard of somebody “simulating” normal “random” or “stochastic” variables, or perhaps “drawing” from a normal or some other distribution? Such things form the backbone of many statistical methods, including bootstrapping, Gibbs sampling, Markov Chain Monte Carlo (MCMC), and several others.

Well, this way of speaking is both right and wrong—but more wrong than right. It’s wrong in the sense that it encourages magical thinking, confuses causality, and is an inefficient use of time. It’s right in the sense that, if assiduously applied, these algorithms can give reasonably accurate answers.

Way it’s said to work is that “random” or “stochastic” numbers are input into some algorithm and out pop answers to some statistical question which is not analytic, which is to say, cannot be solved by pencil and paper (or could be, but only with seemingly too much difficulty).

For example, one popular way of “generating normals” is to use what’s called a Box-Muller transformation. It starts by “generating” two “random” “independent” “uniform” numbers U1 and U2 and then calculating this creature:

Z = R \cos(\Theta) = \sqrt{-2 \ln U_1}\, \cos(2 \pi U_2),

where Z is now said to be “standard normally distributed.” Don’t worry if you don’t follow the math, though try because we need it for later. Point is that any algorithm which needs “normals” can use this procedure.
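
For the curious, here is that procedure written out in R. This is a sketch only; the particular pair of uniforms fed in is chosen arbitrarily.

```r
# Box-Muller: map a pair of numbers in (0,1) to one "standard normal" value.
box_muller <- function(u1, u2) sqrt(-2 * log(u1)) * cos(2 * pi * u2)

box_muller(0.01, 0.01)   # about 3.029, a value which reappears in the grid below
```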

Look at all those scare quotes! Yet each of them is proper and indicates an instance of magical thinking, a legacy of our (frequentist) past which imagined aleatory ghosts in the machines of nature, ghosts which even haunt modern Bayesians.

Scare quotes

First, random or stochastic means unknown, and nothing more. The outcome of a coin flip is random, i.e. unknown, because you don’t know all the causes at work upon the spinning object. It is not “random” because “chance” somehow grabs the coin, has its way with it, and then deposits the coin into your hand. Randomness and chance are not causes. They are not real objects. The outcome is determined by physical forces and that’s it.

Second, there is the unfortunate, spooky tendency in probability and statistics to assume that “randomness” somehow blesses results. Nobody knows how it works; that’s why it’s magic. Yet how can unknowingness influence anything if it isn’t an ontological cause? It can’t. Yet it is felt that if the data being input to algorithms aren’t “random” then the results aren’t legitimate. This is false, but it accounts for why simulations are so often sought.

Third, since randomness is not a cause, we cannot “generate” “random” numbers in the mystical sense implied above. We can, of course, make up numbers which are unknown to some people. I’m thinking of a number between 32 and 1400: to you, the number is random, “generated”, i.e. caused, by my feverish brain. (The number is hidden in the source code of this page, incidentally.)

Fourth, there are no such things as “uniforms”, “normals”, or any other distribution-entities. No thing in the world is “distributed uniformly” or “distributed normally” or distributed anything. Distributed-as talk is more magical thinking. To say “X is normal” is to ascribe to X a hidden power to be “normal” (or “uniform” or whatever). It is to say that magical random occult forces exist which cause X to be “normal,” that X somehow knows the values it can take and with what frequency.

This is false. The only things we are privileged to say are things like this: “Given this-and-such set of premises, the probability X takes this value equals that”, where “that” is calculated via some distribution implied by the premises. (Ignore that the probability X takes any particular value for continuous distributions is always 0.) Probability is a matter of ascribable or quantifiable uncertainty, a logical relation between accepted premises and some specified proposition, and nothing more.

Practicum

Fifth, since this is what probability is, computers cannot “generate” “random” numbers. What happens, in the context of our math above, is that programmers have created algorithms which produce numbers in the interval (0,1) (notice this does not include the end points); not in any obvious order, but with reference to some complex formula. This formula, if run long enough, will produce all the numbers in (0,1) at the resolution of the computer.

Say this is every 0.01; that is, our resolution is to the nearest hundredth. Then all the numbers 0.01, 0.02, …, 0.99 will eventually show up (many will be repeated, of course). Because they do not show up in sequence, many fool themselves into thinking the numbers are “random”, and others, wanting to hold onto the mysticism but understanding the math, call the numbers “pseudo random”, an oxymoron.

But we can sidestep all this and simply write down all the numbers in the sequence, i.e. all the numbers in (0,1)² (since we need U1 and U2) at whatever resolution we have; this might be (0.01, 0.01), (0.01, 0.02), …, (0.99, 0.99) (this is a sequence of pairs of numbers, of length 9801). We then apply the mapping of (U1, U2) to Z as given above, which produces (3.028866, 3.010924, …, 1.414971e-01).
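
In R the whole mechanical calculation is only a few lines (a sketch; the pairs are ordered with U2 varying fastest so that they match the sequence just listed).

```r
# Mechanical (grid) version: every pair in (0,1)^2 at resolution 0.01,
# pushed through the Box-Muller map. No "randomness" anywhere.
res  <- 0.01
u    <- seq(res, 1 - res, by = res)    # 0.01, 0.02, ..., 0.99
grid <- expand.grid(U2 = u, U1 = u)    # 9801 pairs, U2 varying fastest
Z    <- sqrt(-2 * log(grid$U1)) * cos(2 * pi * grid$U2)

head(Z, 2)     # 3.028866, 3.010924, as above
mean(Z < -1)   # about 0.160 at this resolution (see below); pnorm(-1) = 0.1586553
```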

What it looks like is shown in the picture up top.

The upper plot shows the mappings of (U1, U2) to Z, plotted along the index of the number pairs. If you’ve understood the math above, the oscillation, size, and sign changes are obvious. Spend a few moments with this. The bottom plot shows the empirical cumulative distribution of the mapped Z (black), overlaid by the (approximate) analytic standard normal distribution (red), i.e. the true distribution to high precision.

There is tight overlap between the two, except for a slight bump or step in the ECDF at 0, owing to the crude discretization of (U1, U2). Computers can do better than the nearest hundredth. Still, the error even at this crude level is trivial. I won’t show it, but even a resolution 5 times worse (nearest 0.05; number sequence length of 361) is more than good enough for most applications (a resolution of 0.1 is pushing it).

This picture gives a straightforward, calculate-this-function analysis, with no mysticism. But it works. If what we were after was, say, “What is the probability that Z is less than -1?”, all we have to do is ask. Simple as that. There are no epistemological difficulties with the interpretation.

The built-in analytic approximation is 0.159 (this is our comparator). With the resolution of 0.01, the direct method shows 0.160, which is close enough for most practical applications. A resolution of 0.05 gives 0.166, and 0.1 gives 0.172 (I’m ignoring that we could have shifted U1 or U2 to different start points; but you get the idea).

None of these has a plus or minus attached, though. Given our setup (starting points for U1 and U2, the mapping function), these are the answers. There is no probability attached. But we would like to have some idea of the error of the approximation. We’re cheating here, in a way, because we know the right answer (to high precision), which won’t always be the case. In order to get some notion of how far off that 0.160 is we’d have to do more pen-and-paper work, engaging in what might be a fair amount of numerical analysis. Of course, for many standard problems, just like in MCMC approaches, this could be worked out in advance.

MCMC etc.

Contrast this to the mystical approach. Just like before, we have to specify something like a resolution, which is the number of times we must “simulate” “normals” from a standard normal—which we then collect and form the estimate of the probability of less than -1, just as before. To make it fair, pick 9801, which is the length of the 0.01-resolution series.

I ran this “simulation” once and got 0.162; a second time 0.164; a third showed 0.152. There’s the first problem. Each run of the “simulation” gives different answers. Which is the right one? They all are; an unsatisfying but true answer. So what will happen if the “simulation” itself is iterated, say 5000 times, where each time we “simulate” 9801 “normals” and each time estimate the probability, keeping track of all 5000 estimates? Let’s see, because that is the usual procedure.
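
For comparison, here is a sketch of that usual procedure in R; the seed is arbitrary, and the counts 9801 and 5000 are the ones used above.

```r
# "Simulation" version: runif() supplies the (U1, U2) pairs instead of a
# fixed grid, but the identical Box-Muller map does all the work.
set.seed(31)                         # arbitrary; without it each run differs
one_run <- function(n = 9801) {
  u1 <- runif(n); u2 <- runif(n)
  mean(sqrt(-2 * log(u1)) * cos(2 * pi * u2) < -1)
}

one_run()                            # one estimate of Pr(Z < -1)
est <- replicate(5000, one_run())    # iterate the whole "simulation"
quantile(est, c(0.05, 0.50, 0.95))   # spread of the 5000 estimates
```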

Turns out 90% of the results are between 0.153 and 0.165, with a median and mean of 0.159, which equals the right answer (to the thousandth). It’s then said there’s a 90% chance the answer we’re after is between 0.153 and 0.165. This or similar intervals are used as error bounds, which are “simulated” here but (should be) calculated mechanically above. Notice that the uncertainty in the mystical approach feels greater, because the whole process is opaque and purposely vague. The numbers seem like they’re coming out of nowhere. The uncertainty is couched probabilistically, which is distracting.

It took 19 million calculations to get us this answer, incidentally, rather than the 9801 the mechanical approach required. But if we increase the resolution to 0.005 there, we also get 0.159 at a cost of just under 40,000 calculations. Of course, MCMC fans will discover short cuts and other optimizations to implement.

Why does the “simulation” approach work, though? It does (at some expense) give reasonable answers. Well, if we remove the mysticism about randomness and all that, we get this picture:

Mystical versus mechanical.

The upper two plots are the results of the “simulation”, while the bottom two are the mechanical mapping. The bottom two show the empirical cumulative distribution of U1 (U2 is identical) and the subsequent ECDF of the mapped normal distribution, as before. The bump at 0 is there, but is small.

Surprise ending!

The top left ECDF shows all the “uniforms” spit out by R’s runif() function. The only real difference between this and the ECDF of the mechanical approach is that the “simulation” is at a finer resolution (the first U happened to be 0.01031144, 6 orders of magnitude finer; the U’s here are not truly plain-English uniform as they are in the mechanical approach). The subsequent ECDF of Z is also finer. The red lines are the approximate truth, as before.

But don’t forget, the “simulation” just is the mechanical approach done more often. After all, the same Box-Muller equation is used to map the “uniforms” to the “normals”. The two approaches are therefore equivalent!

Which is now no surprise: of course they should be equivalent. We could have taken the (sorted) Us from the “simulation” as if they were the mechanical grid (U1, U2) and applied the mapping, or we could have pretended the Us from the “simulation” were “random” and then applied the mapping. Either way, same answer.

The only difference (and advantage) seems to be in the built-in error guess from the “simulation”, with its consequent fuzzy interpretation. But we could have a guess of error from the mechanical algorithm, too, either by numerical analysis means as mentioned, or even by computer approximation (one way: estimate quantities using a coarse, then fine, then finest grid and measure the rate of change of the estimates; with a little analysis thrown in, this makes a fine solution).
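
As a rough sketch of that computer-approximation route, under the assumption that the movement of the estimate under refinement is a fair proxy for the remaining error:

```r
# Re-run the mechanical calculation at successively finer resolutions and
# watch how much the estimate of Pr(Z < -1) moves.
estimate_at <- function(res) {
  u <- seq(res, 1 - res, by = res)
  g <- expand.grid(U2 = u, U1 = u)
  mean(sqrt(-2 * log(g$U1)) * cos(2 * pi * g$U2) < -1)
}

ests <- sapply(c(0.10, 0.05, 0.01), estimate_at)  # compare 0.172, 0.166, 0.160 above
diff(ests)   # the shrinking changes give a rough scale for the remaining error
```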

The benefit of the mechanical approach is the demystification of the process. It focuses the mind on the math and reminds us that probability is nothing but a numerical measure of uncertainty, not a live thing which imbues “variables” with life and which by some sorcery gives meaning and authority to results.

The Preponderance Of Evidence Criterion Is Absurd

She said, he said.

Our beneficent government, through its Department of Education’s Office of Civil Rights “sent a letter to colleges nationwide on April 4, 2011, mandating policy changes in the way schools handle sexual assault complaints, including a lowering of the burden of proof from ‘clear and convincing’ evidence to a ‘preponderance’ of evidence. Not surprisingly, there has been a marked increase in women coming forward with such complaints.”

The preponderance of evidence criterion is asinine and harmful and bound to lead to grief. Here’s why.

Suppose a woman, Miss W, instead of going to the police, shows up at one of her university’s various Offices Of Indignation¹ & Diversity and complains she was “sexually assaulted” by Mr X, a fellow student. By means of a lengthy and secretive process, Mr X is eventually called to deny the claim. He does so.

Incidentally, we may as well inject here the advice that if celibacy outside marriage were promoted at colleges, while the success rate of this program would never reach 100%, any rate above 0% solves for its dedicated individuals the sorts of problems discussed below.

Anyway, ignoring all other details, here is what we have: Miss W says Mr X did it, and Mr X denies. Using only that evidence and none other, there is to the neutral observer a 50-50 chance Mr X did the deed. Fifty-fifty does not a preponderance make; a preponderance is any amount over 50%. But since we start at 50% given she-said-he-said, it takes only the merest sliver of additional evidence to push the probability beyond 50% and into preponderance.

What might that evidence be? Anything, really. A campus Diversity Tzar might add to Miss W’s claim, “Miss W almost certainly wouldn’t have made the charge if it weren’t true”, which brings the totality of guilt probability to “almost certainly” (we cannot derive a number). Or the Tzar might say, “Most men charged with this crime are guilty”, which brings the guilt probability to “nearly certain”—as long as we supply the obvious tacit premises like “Mr X is a man and is charged with this crime.”

But this is going too far, and, depending on the university, our Tzar knows she might not be able to get away with such blanket statements. Instead she might use as evidence, “Miss W was crying, and victims of this crime often or always cry”, or “Miss W told another person about Mr X’s crime, which makes it more likely she was telling me the truth as telling more than one person, if her story is a lie, would be to compound a lie.”

Now none of these are good pieces of evidence; indeed, they are circumstantial to the highest degree. But. They are not completely irrelevant premises, either. As long as we can squeeze the weest, closest-to-epsilon additional probability from them, they are enough to push the initial 50% to something greater than 50%.
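
To see the arithmetic, suppose (numbers invented purely for illustration) one such circumstantial premise is judged just slightly more probable given guilt than given innocence, say 0.51 against 0.50. Starting from the she-said-he-said 50-50, the update is

\Pr(\mbox{guilty} \mid E) = \frac{0.5 \times 0.51}{0.5 \times 0.51 + 0.5 \times 0.50} \approx 0.505,

which, being greater than 50%, officially counts as a preponderance.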

And that is all we need to crush Mr X, for we have reached a preponderance of evidence. Of course, Mr X may counter or cancel this evidence with his own protestations, or even physical proof that he was nowhere near the scene in question, or that Miss W drunk-texted him first and asked for the services which she later claimed were “assault.” But the Tzar, having all the woes of all feminine society on her mind, is free to ignore any or all of this.

Mr X, guilty or innocent, is therefore easy to “prove” guilty using this slight standard. He can then be punished in whatever way thought appropriate by the university.

That brings up another question. Suppose you gather all the relevant evidence and decide that the chance of the zombie apocalypse is just under 50%. Or again, given reliable premises you calculate the probability that the woman who just winked at you from across the bar does not have Ebola is 49.999%. You therefore decide that since the preponderance of evidence is against both propositions, you needn’t protect yourself.

There you have it. The probability of 50% is in no way the probability to use for all yes-no decisions. Decisions have consequences and these must be taken into account. Should we wreck a man when the evidence against him amounts only to 50.001%? Too, if we use the preponderance criterion in every situation, the number of mistakes made will be great.

This is why in actual criminal courts, where the standards of evidence are in play and the accused is allowed to confront his accuser and so on, the standard is guilt beyond reasonable doubt, a sane and sober principle.

—————————————————-

¹The indignation quip came from this.
