Class 31: How Views Diverge With New Evidence

Is Trump a Russian Asset? (This question accounts for the change in banner image this week.) What happens to belief when people learn new evidence about this curious proposition? Theory says that as evidence increases, beliefs must converge. But they often diverge. Find out why!

Jaynes’s book

Permanent class page

Uncertainty & Probability Theory: The Logic of Science

Video

Links:

YouTube

Twitter (latest class at bottom of thread)

Rumble

Bitchute (often a day or so behind, for whatever reason)

Link to all Classes.

HOMEWORK: Same as last week. Read Jaynes, and see below how you will find a Research Shows paper and apply the methods you learned today.

Lecture

Once again, you must read Jaynes. Links to his book are above. I got the section wrong in the lecture: it’s 5.3 you must read today, which draws on 5.2.

The theme behind last week and this week, and really everything we do, is that we always—as in always—form our probabilities of some proposition of interest Y (mathematical or otherwise, anything at all) with respect to whatever evidence X we assume. Change the evidence, change the probability: which is why nothing has a probability. Pr(Y|X) is everything. That is the lesson of this entire course.

Pr(Y|X) is always everything. Even if X = A&B&C&…&Z, where each of these letters is some complex proposition, even mathematical, even physical, even counterfactual. Below we use a form of Pr(Y|DX), where “D” (following Jaynes) is new data. We can, if we have a mind, use Bayes’s formula to aid in computation. But we need not, if it is not convenient, because Pr(Y|DX) = Pr(Y|X’) where X’ = DX. We still want the probability of Y given all the evidence we’re considering.
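
To make the point concrete, here is a minimal sketch in Python (the toy joint distribution and all its numbers are illustrative assumptions of mine, not anything from Jaynes or the lecture): conditioning on new data D directly, and conditioning via Bayes’s formula, give the same Pr(Y|DX).

```python
# Toy sketch: everything below is already conditioned on background X.
# The joint probabilities Pr(Y & D | X) are made-up illustrative numbers.
joint = {
    (True, True):   0.08,  # Pr(Y & D | X)
    (True, False):  0.12,  # Pr(Y & not-D | X)
    (False, True):  0.20,  # Pr(not-Y & D | X)
    (False, False): 0.60,  # Pr(not-Y & not-D | X)
}

# Direct conditioning: Pr(Y | DX) = Pr(Y & D | X) / Pr(D | X).
pr_d = joint[(True, True)] + joint[(False, True)]
direct = joint[(True, True)] / pr_d

# Via Bayes: Pr(Y | DX) = Pr(Y | X) * Pr(D | YX) / Pr(D | X).
pr_y = joint[(True, True)] + joint[(True, False)]
pr_d_given_y = joint[(True, True)] / pr_y
via_bayes = pr_y * pr_d_given_y / pr_d

print(direct, via_bayes)  # both ~0.2857: the route taken does not matter
```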

This is why it would be wrong to call our efforts Bayesian. We are doing probability, full stop. Or “logical probability” if you must give a label.

Now theory has it that as various people consider a proposition, regardless of where they start, then as evidence increases (evidence which all come to know), people’s beliefs should draw together. This is the impetus behind the idea of “raising awareness”, a popular activity in an “our democracy”. The idea is that once you have heard the facts of the matter on some question, your opinion becomes the correct one.

That does indeed sometimes happen. Then again, sometimes, for an individual, Pr(Y|DX) = Pr(Y|X), which occurs when that individual judges D irrelevant (in the face of X) to Y. In that case the opinion on Y, for this individual, does not change upon learning D—while still believing X. This is, of course, generic: it could also be that D contains propositions that nullify or falsify parts or all of X. The irrelevance case is made explicit in the short derivation just below.
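
One step worth making explicit (an addition of mine, using the same Bayes form we apply below): if an individual judges Pr(D|YX) = Pr(D|$\bar{Y}$X) = p, that is, judges D equally probable whether Y is true or false, then

$$\Pr(Y|DX) = \frac{\Pr(Y|X)\,p}{\Pr(Y|X)\,p + \Pr(\bar{Y}|X)\,p} = \Pr(Y|X)$$.

The p cancels, and the opinion on Y is untouched.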

Opinions may also diverge, a lesson Jaynes first taught us. That is, once your awareness is raised, you might come to believe the unacceptable. Let’s see how. Jaynes stuck with ESP, but I’m using something politically controversial because it’s easier to see how divergence works. But I insist, as does Jaynes, that this works for any question.

Let S = “Trump is a Russian agent”, and let us put ourselves back to 2020, or even 2016. Imagine two people, A and B, who have different background evidence on S, so that Pr(S|A) = 0.1 and Pr(S|B) = 0.01. A has a minor suspicion, while B thinks it’s almost certainly not true, but still allows the possibility.

Then comes news (or data) D = “Media reports two dozen (or whatever) current and former security officials say Trump is a Russian asset.” This, of course, happened, as is seen in today’s cover picture. How did people greet this news? It depends on what they thought about D!

We want, for A and B, this:

$$\Pr(S|D*) = \Pr(S|*)\frac{\Pr(D|S*)}{\Pr(S|*)\Pr(D|S*) + \Pr(\bar{S}|*)\Pr(D|\bar{S}*)}$$,

where $\bar{S}$ is the contrary of S, here “Trump is not a Russian agent”, and the * stands for either A or B.
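
In code the formula is one line. A minimal Python sketch (the function and argument names are mine, for illustration only):

```python
def posterior(prior, pr_d_given_s, pr_d_given_not_s):
    """Pr(S|D*) from the formula above, for a person * with prior
    Pr(S|*) and likelihoods Pr(D|S*) and Pr(D|not-S*)."""
    numerator = prior * pr_d_given_s
    return numerator / (numerator + (1 - prior) * pr_d_given_not_s)
```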

Suppose Pr(D|SA) = Pr(D|SB) = 1. This means that if S is true and Trump really is a Russian agent, both A and B believe the media and security experts would report this. Remember and never forget that everything to the right of the bar “|” is assumed true. Doesn’t matter whether it is. We assume it is. That is what Pr(Y|X) is all about!

So far we have agreement. But now suppose Pr(D|not-SA) = 0.01 and Pr(D|not-SB) = 0.9. That is, A believes that if S is false, and Trump is not a Russian agent, the chance the media nevertheless reports that he is one is only 1%. Not quite 0, because he believes mistakes can happen, the topic is complex, and so forth. But B thinks that even if Trump is innocent of the charge, the media and Experts will still likely say Trump is guilty. Make sure you understand the distinctions in this paragraph before reading further.

Given all these numbers (which don’t really matter; only the relative differences are important, as you’ll see), we can plug them in and see that

$$\Pr(S|DA) = 0.1 \times \frac{1}{0.1\times 1 + 0.9 \times 0.01} \approx 0.92$$,
$$\Pr(S|DB) = 0.01 \times \frac{1}{0.01\times 1 + 0.99 \times 0.9} \approx 0.01$$.

We started with | Pr(S|A) – Pr(S|B) | = 0.09, which is not that far apart. But we ended with | Pr(S|DA) – Pr(S|DB) | ≈ 0.91, which is close to the maximum possible difference of 1. A swung to near certainty, while B scarcely budged. In other words, because of the way the evidence is treated, opinions can diverge.
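
Plugging A’s and B’s numbers into the posterior sketch above confirms the arithmetic:

```python
a = posterior(0.1, 1.0, 0.01)   # Pr(S|DA) ~ 0.917
b = posterior(0.01, 1.0, 0.9)   # Pr(S|DB) ~ 0.011
print(abs(a - b))               # ~0.91: the gap grew from 0.09
```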

That was the exact lesson of last week, too.

Subscribe or donate to support this site and its wholly independent host using credit card click here. Or use the paid subscription at Substack. Cash App: \$WilliamMBriggs. For Zelle, use my email: matt@wmbriggs.com, and please include yours so I know who to thank. BUY ME A COFFEE

2 Comments

  1. The example is unfortunate because it moves the reader’s view of what you’re on about from math to behavior – one is largely rational, the other is only indirectly so.

    Somebody wrote a book about how math is bad for physics because we tend to believe that the math prescribes reality when, at best, it sometimes describes part of it. This is the same problem: it seems obvious that a guess based on some assumptions should change if the assumptions change – and while that seems logical and would be right in Mr. Spock’s rational world, in the real world it often isn’t.

    Maybe read Festinger et al.? Or how about just his summary (reproduced on my telearb.net/node/13 site for your convenience).

  2. Johnno

    The only numbers that matter to people who fall for this are simple, and big. So the upper limit is at 99, but significantly more than 10, to fraudly demonstrate consensus, as in the popular example of “60 (SIXTY!) court rulings declared that Trump had no proof of election fraud!”

    Another popular practice is in the misinformation/disinformation scene, where fak-checkas fak-chak satire they know is satire, and is obvious to all as satire, to juice the numbers in their hersterical reports that beg the gub’mint to ACT NOW & DO SOMETHING!!!!

    https://www.zerohedge.com/political/reuters-fact-checks-babylon-bee-article-stating-allahu-akbar-has-replaced-cheerio-mate
