Philosophy

Theory confirmation and disconfirmation

Time for an incomplete mini-lesson on theory confirmation and disconfirmation.

Suppose you have a theory, or model, about how some thing works. That thing might be global warming, stock market prices, stimulating economic activity, psychic mind reading, and on and on.

There will be a set of historical data and facts available that led to the creation of your theory. You will always find it easy to look at those historical data and say to yourself, “My, those data back my theory up pretty well. I am surely right about what drives stock prices, etc. I am happy.”

Call, for ease, your theory MY_THEORY.

It is usually true that if the thing you are interested in is complicated—like the global climate system or the stock market—somebody else will have a rival theory. There may be several rival theories, but let’s look at only one. Call it RIVAL_THEORY.

The creator of RIVAL_THEORY will say to himself, “My, the historical data back my theory up pretty well, too. I am surely right about what drives stock prices, etc. I am happy and the other theory is surely wrong.”

We have a dispute. Both you and your rival are claiming correctness; however, you cannot both be right. At least one, and possibly both, of you is wrong.

As long as we are talking about historical data, experience and human nature show that the dispute is rarely settled. What happens, of course, is that the gap between the two theories actually widens, at least in the strength with which the theories are believed by the two sides.

This is because it is easy to manipulate, dismiss as irrelevant, recast, or interpret historical data so that it fits what your theory predicts. The more complex the thing of interest, the easier it is to do this, and so the more confidence people have in their theory. There is obviously much more that can be said about this, but common experience shows this is true.

What we need is a way to distinguish the accuracy of the two theories. Because the historical data won’t do, we need to look to data not yet seen, which is usually future data. That is, we need to ask for forecasts or predictions.

Here are some truths about forecasts and theories:

If MY_THEORY says X will happen and X does not happen, then MY_THEORY is wrong. It is false. MY_THEORY should be abandoned, forgotten, dismissed, disparaged, disputed, dumped. We can say that MY_THEORY has been falsified.

For example, if MY_THEORY is about global warming and it predicted X = “The global mean temperature in 2008 will be higher than in 2007”, and X did not happen, then MY_THEORY is wrong and should be abandoned.

You might say that, “Yes, MY_THEORY said X would happen and it did not. But I do not have to abandon MY_THEORY. I will just adapt it.”

This can be fine, but the adapted theory is no longer MY_THEORY. MY_THEORY is MY_THEORY. The adapted, or changed, or modified theory is different. It is NEW_THEORY and it is not MY_THEORY, no matter how slight the adaptation. And NEW_THEORY has not made any new predictions. It has merely explained historical data (X is now historical data).

It might be that RIVAL_THEORY made the same prediction about X. Then both theories are wrong. But people have a defense mechanism that they invoke in such cases. They say to themselves, “I cannot think of any other theory besides MY_THEORY and RIVAL_THEORY, therefore one of these must be correct. I will therefore still believe MY_THEORY.”

This is the What Else Could It Be? mechanism and it is pernicious. I should not have to point out that just because you, intelligent as you are, cannot think of an alternate explanation for X does not mean that one does not exist.

It might be that MY_THEORY predicted Y and Y happened. The good news is that we are now more confident that MY_THEORY is correct. But suppose it turned out that RIVAL_THEORY also predicted that Y would happen. The bad news is that you are now more confident that RIVAL_THEORY is correct, too. How can that be when the two theories are different?

It is a sad and inescapable fact that for any set of data, historical and future, there can exist an infinite number of theories that equally well explain and predict it. Unfortunately, just because MY_THEORY made a correct prediction does not imply that MY_THEORY is certainly correct: it just means that it is not certainly wrong. We must look outside this data to the constructs of our theory to say why we prefer MY_THEORY above the others. Obviously, much more can be said about this.

It is often the case that a love affair develops between MY_THEORY and its creator. Love is truly blind. The creator will not accept any evidence against MY_THEORY. He will allow the forecast for X, but when X does not happen, he will say it was not that X did not happen, but the X I predicted was different. He will say that, if you look closely, MY_THEORY actually predicted X would not happen. Since this is usually too patently false, he will probably alter tactics and say instead that it was not a fair forecast as he did not say “time in”, or this or that changed during the time we were waiting for X, or X was measured incorrectly, or something intervened and made X miss its mark, or any of a number of things. The power of invention here is stronger than you might imagine. Creators will do anything but admit what is obvious because of the passion and the belief that MY_THEORY must be true.

Some theories are more subtle and do not speak in absolutes. For example, MY_THEORY might say “There is a 90% chance that X will happen.” When X does not happen, is MY_THEORY wrong?

Notice that MY_THEORY was careful to say that X might not happen. So is MY_THEORY correct? It is neither right nor wrong at this point.

It turns out that it is impossible to falsify theories that make predictions that are probabilistic. But it is also the case that, for most things, theories that make probabilistic predictions are better than those that do not (those that just say events like X certainly will or certainly will not happen).

If it wasn’t already, it begins to get complicated at this point. In order to say anything about the correctness of MY_THEORY, we now need to have several forecasts in hand. Each of these forecasts will have a probability (that “90% chance”) attached, and we will have to use special methods to match these probabilities with the actual outcomes.
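One such method, offered here only as an illustration and not as the only choice, is the Brier score: the average squared difference between each forecast probability and the outcome, scored 1 if the event happened and 0 if it did not. A minimal sketch in Python, with invented forecasts and outcomes:

```python
# Minimal sketch of one "special method": the Brier score.
# The forecast probabilities and outcomes below are invented for illustration.

def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and outcome (0 or 1)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# MY_THEORY's stated chances that X happens in each of ten years (made up).
forecasts = [0.9, 0.9, 0.8, 0.9, 0.7, 0.9, 0.8, 0.9, 0.9, 0.8]
# What actually happened: 1 = X occurred, 0 = it did not (also made up).
outcomes  = [1,   1,   0,   1,   1,   0,   1,   1,   0,   1]

print(brier_score(forecasts, outcomes))  # 0.0 is perfect; a constant 50% forecast scores 0.25
```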

It might be the case that MY_THEORY is never that close in the sense that its forecasts were never quite right, but it might still be useful to somebody who needs to make decisions about the thing MY_THEORY predicts. To measure usefulness is even more complicated than measuring accuracy. If MY_THEORY is accurate more often or useful more often, then we have more confidence that MY_THEORY is true, without ever knowing with certainty that MY_THEORY is true.

The best thing we can do is to compare MY_THEORY to other theories, like RIVAL_THEORY, or to other theories that are much simpler in structure but are natural rivals. As mentioned above, this is because we have to remember that many theories might make the same predictions, so we have to look outside the theory to see how it fits in with what else we know. Simpler theories that make just as accurate predictions as complicated theories more often turn out to be correct (but not, obviously, always).

For example, if MY_THEORY is a theory of global warming that says there is an 80% chance that global average temperatures will increase each year, we need to find a simple, natural rival to this theory so that we can compare MY_THEORY against it. The SIMPLE_THEORY might state “there is a 50% chance that global average temperatures will increase each year.” Or LAST_YEAR’S_THEORY might state “this year’s temperatures will look like last year’s.”
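To make the comparison concrete, here is a sketch that scores all three against an invented temperature record using the same Brier score as above; LAST_YEAR’S_THEORY is softened to a 90%/10% persistence forecast purely for illustration:

```python
# Sketch: compare MY_THEORY against two simple, natural rivals on the event
# "global average temperature increased this year".  All data here are invented.

def brier_score(probs, outcomes):
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# 1 = temperature went up that year, 0 = it did not (invented record).
went_up = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]

my_theory     = [0.8] * len(went_up)   # "80% chance of an increase each year"
simple_theory = [0.5] * len(went_up)   # "50% chance each year"
# LAST_YEAR'S_THEORY: this year looks like last year (softened to 0.9/0.1),
# with a 0.5 placeholder for the first year, which has no "last year".
last_years    = [0.5] + [0.9 if prev else 0.1 for prev in went_up[:-1]]

for name, probs in [("MY_THEORY", my_theory),
                    ("SIMPLE_THEORY", simple_theory),
                    ("LAST_YEAR'S_THEORY", last_years)]:
    print(name, round(brier_score(probs, went_up), 3))  # lower is better
```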

Thus, especially in complex situations, we should always ask, when somebody is touting a theory, how well that theory makes predictions and how much better it is than its simpler, natural rivals. If the creator of the touted theory cannot answer these questions, you are wise to be suspicious of the theory and to wait until that evidence comes in.

Categories: Philosophy, Statistics


  1. The SIMPLE_THEORY could be compatible with global warming, couldn’t it? It assumes a 50% chance regarding the direction of change but does not make any predictions regarding the amount. So if the average amount of change was systematically higher in one direction than it was in the other, the result of the process in the long run could still be an increase in temperature. I have no idea whether this would be a plausible model for any physical process, but mathematically it is one possible option.

  2. Occam meet Matt. Matt meet Occam.

    It is tough to get the pro AGWers to indicate how many years of no increase in temperature it will take to disprove AGW. (aka til Hell freezes over!)
    It is tougher still to get the anti AGWers to indicate how many years of increasing temperatures it will take to prove AGW. (aka til it is as hot as hell!)

  3. Hi Briggs,

    This post may give me the opportunity to take advantage of your expertise with a problem I’ve had with Bayesian beliefs for some time.

    You state in your post that: “It is a sad and inescapable fact that for any set of data, historical and future, there can exist an infinite number of theories that equally well explain and predict it.” But would this not mean that all of our theories are falsified by the data over time? Even confirming data? Let me explain.

    Starting with Bayes’ Theorem:

    P(A|B) = P(B|A) * P(A) / P(B)

    Let: A = MY_THEORY, such that P(A) = my prior belief in MY_THEORY, say: .95 (95% confident MY_THEORY is true)

    Let: ~A = the collection of all theories (RIVAL_THEORY, SIMPLE_THEORY, OTHER_THEORY) other than MY_THEORY, say: .05 (5% confident some other theory is true), so that P(A) + P(~A) = 1.

    Let: B = some future event or prediction, say: “global temperatures will increase each year”

    Thus, for event B: P(B) = P(B|A) * P(A) + P(B|~A) * P(~A)

    Now from your quote above: P(B|~A) = 1.0, that is, I am certain that some theory other than A can also explain (will perfectly predict) the fact that temperatures will increase each year.

    Thus, Bayes Theorem becomes:

    P(A|B) = P(B|A) * P(A) / (P(B|A) * P(A) + P(B|~A) * P(~A))

    Substituting gives:

    P(A|B) = P(B|A) * P(A) / (P(B|A) * P(A) + (1) * (1 – P(A)))

    or:

    P(A|B) = P(A) * [P(B|A) / (P(A) * (P(B|A) – 1) + 1)]

    Note that the term in [brackets] must be less than 1 (I plotted [it] out for probabilities between 0 and 1) which means that any future event B will “condition” our belief in theory A downward!

    For example: if we assume a prior P(A) = .95 and if P(B|A) = .80, that is, MY_THEORY is 80% confident that global temperatures will increase each year, then the term in the brackets is:

    [(.80) / ((.95) * (.80 – 1) + 1)] = .99

    Thus, even if MY_THEORY successfully predicted the rising temperatures, the fact that I also believe with certainty that P(B|~A) = 1, means that my conditioned belief in MY_THEORY will decrease by 1 percent.

    It does not seem right that confirmation of MY_THEORY would cause me to believe in it less.

    Can you give me a clue here?
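    P.S. For reference, here is the arithmetic above as a quick Python check; the only inputs are the illustrative .95 and .80 from my example:

```python
# Quick check of the arithmetic above (illustrative numbers only).
p_A            = 0.95  # prior belief in MY_THEORY
p_B_given_A    = 0.80  # MY_THEORY: 80% chance temperatures rise
p_B_given_notA = 1.00  # my (problematic) assumption: some other theory predicts it perfectly

p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
p_A_given_B = p_B_given_A * p_A / p_B

print(round(p_A_given_B, 3))  # 0.938 < 0.95, so belief in MY_THEORY goes down
```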

  4. A quick opinion: AGW per se is not a theory (as relativity is). It is a set of simulators based on accepted physical models plus parameters (some not observable). These are updated every few years so that there is no static projection which can be compared to observations (the AGW modellers say 5 to 10 years of comparing observations to model output are insufficient). We are all familiar with claims that any observation is consistent with the models. Thus there is no falsifiability until maybe 50 years hence when we can check sea levels. Also, the models may be totally wrong and yet dangerous warming may occur, but this would be a coincidence or just curve fitting.

    Anyway, just my humble opinion.

  5. George,

    Excellent question.

    First, and most trivially, we can never speak of unconditional probabilities. We always must (even though textbooks are lazy about this) write probabilities conditional on some information. For example, to have a probability of A we must have it conditional on some information I (whatever that is).

    Thus, P(A | I) = 0.95, say.

    As you state, it is always true that P(A | I) + P(~A | I) = 1.

    You also correctly write (with me sticking in the I) that

    P(B|I) = P(B|AI) * P(A|I) + P(B|~AI) * P(~A|I)

    and then you say that P(B|~AI) = 1 because another theory besides A predicted B perfectly. Actually, writing P(B|~AI) = 1 means that B certainly follows from ~A (and I). But don’t forget that AI also perfectly predicts B, so that P(B|AI) = 1. Then

    P(B|I) = 1 * P(A|I) + 1 * P(~A|I) = 1

    which is no more than saying that B must always happen; which is the same thing as saying “Regardless whether A is true or not, B will happen.”

    Of course, when we have rival theories we usually do not lump them all together as “Not A”, we keep them separate so we can separately track their probabilities. That is, even if B follows from A, it doesn’t follow from all the theories “inside” ~A.

    Now we have to start looking at I to give us a hint why A is more probable than ~A (or any of its rivals).

    Not a complete answer, I think, but does it make sense? If not, I’ll explain in more detail.

  6. Hi Briggs,

    Thanks for the response! I agree that I was being lazy by not explicitly including conditional information “I” in the equations in my comment. I will do better.

    But I still fail to see how this changes my conclusion because I still disagree where you write: “But don’t forget that AI also perfectly predicts B, so that P(B|AI) = 1.”

    In my comment, I used as an example (where the “I” is implicit): P(B|A) = .80, that is, some number less than 1. So we are disagreeing here.

    The future is mostly uncertain. I can’t see how a (or any) theory A (and implicitly some conditional information I) can allow me to believe that I can predict some future event B with certainty, in this case: P(B|AI) = 1. I would not see it even if I had certain belief in the truth of the theory itself: P(A|I) = 1, or where I had absolute confidence that some other independent theory could explain it: P(B|~AI) = 1.

    I guess I could even believe that an event can occur that *has* no good theoretical explanation (an “unbelievable” event), since there is the logical possibility of: P(B|I) = 1 yet: P(B|AI) + P(B|~AI) << 1.

    Now, I am not a mathematician, but an engineer/programmer. My understanding of math is very intuitionist and Bayesian. I’m vaguely hoping I am using terms in a way you do not. (I.e., I am using them wrong? :-)) Perhaps some other readers of your blog are like me in this respect. So I am taking you up on your generous offer of more detail.

  7. George,

    You said:

    ‘But I still fail to see how this changes my conclusion because I still disagree where you write: “But don’t forget that AI also perfectly predicts B, so that P(B|AI) = 1.”’

    The point is that there exist infinitely many theories that predict B perfectly. Most of them will probably be perfectly ridiculous, like measuring skirt lengths over time, or perhaps nothing more than curve fitting exercises with different families of functions, or even just a really lucky stream of pseudo-random numbers. But the fact that they fit the data doesn’t make them a true theory about “how some thing works” (from the second sentence of TFA).

    The point here, I think, is exactly the most [in]famous refrain of statistics teachers everywhere: “Correlation does not equal causation.” And your math above seems to assume that the prediction (correlation) equals truth (causation). But since multiple theories get the prediction correct, we should be able to see that obviously they can’t all be true.

  8. William, it sounds like you are talking about philosophy of science.

    Thomas Kuhn is an important read in this area (Structure of Scientific Revolutions), and he would say (if he were alive) that theories have core tenets and auxiliary hypotheses, and that the theory is not necessarily falsified by a failed prediction, as the auxiliary hypothesis is discarded and the core remains.

    Science has not really proceeded on the basis of simple falsification.

    I can recommend a great book on the topic called ‘What Is This Thing Called Science?’ by Alan Chalmers.

    Very helpful….

    All that being said, I think a proper review of science history shows that the claimed certainty in the AGW debate is a load of rubbish

  9. This article cries out for a second part…more examples, more details. I found myself wanting much more.

  10. Let me take a stab at this, and risk making a fool of myself.

    This year, before next year happens, we assume that P(A|B) = P(B|A) * P(A)/P(B) = .80 * .95/P(B) = .76/P(B). Now, P(B) is unknown. That’s what MY_THEORY is an attempt to predict. But since P(A|B) cannot be more than 1, P(B) cannot be less than .76 (right?)

    Our priors have limitations. We assume P(A) = .95, and that P(B|A) = .80, and thus P(B)>.76. We might even assume P(B) = 1, but it’s just an assumption. We don’t know the future. If we did, we wouldn’t need theories about it (right?).

    A year passes and we note that B happened (once). Temperatures did increase. George points out that apparently that reduces our confidence in P(A). A paradox?

    But P(A) = P(B)*P(A|B)/P(B|A), and all those values have changed (except perhaps P(B), which is unknown). It’s a new day. We cannot remain stuck on our old priors. At least, I hope not. We have more information now.

    Those darn priors are just guesses, or assumptions if you like. That’s the big gripe I have with Bayesianism. I accept that conditioning statistics on Bayesian assumptions is useful for establishing truly probabilistic confidence intervals, and there are some pretty broad assumptions out there, like the uniform distribution, which are logical for priors. But we shouldn’t get all married to very specific assumptions, like P(A) = .95 and P(B|A) = .80. I don’t think those are tenable. We can play with the equations as if they were real values, but they are just guesses. And when we get more information, it should alter our guesses somewhat.

    Right?

  11. Alan,

    Falsification works. Cold fusion is a debunked theory because it has been falsified.

    But you are right, many theories are so mushy that definitive Popperian tests are impossible. We all know of theories that have been fashioned to predict almost anything. Adherents go so far as to dispute that their pet theories even make predictions! MY_THEORY only provides “scenarios” etc.

    Kuhn points out that anomalies are the drivers. An anomaly might not falsify a theory, but it can give it a cramp. As the anomalies mount, the theory gets leg problems and eventually can’t walk anymore. Adherents get tired of dragging it along (or else they die off, as Kuhn coldly remarked).

    We could kill all the old paradigmers and speed up Science, just as we could kill all the lawyers and improve our justice system. But it’s not necessary to get all vengeful about the matter. Time wounds all heels, and the anomalies eventually cannot be denied.

  12. Mike D. Cold fusion is experimental physics with a fairly simple mechanism. It was certainly not thrown out straight away (due to hope mostly), but after many failed attempts to replicate the original findings and with opposition from the current nuclear physics paradigm.

    Its fairly simple mechanism and experimental nature provided little scope to protect the theory with auxiliary hypotheses. The same can’t be said for AGW or any other non-experimental science.

    I think we pretty much agree, with your comments about ‘mushy’ etc, but I was just cautioning against a naive view of science, especially based on how science has worked in the past.

    From my own perspective, I think that anything that is not testable in controlled experiment is not really science, but merely mental gymnastics and so should not really be afforded any real level of confidence. (Neo-experimentalism if you know philosophy of science)

  13. Bayesian conditioning is a rational way (using Dutch book arguments) of changing our prior beliefs (not correlations) in various theories (which inevitably assume causation) based on events occurring. It is of practical interest.

    If an event B actually occurs, P(B|FACTS) = 1, then a RIVAL_THEORY that predicted the event as less likely than did MY_THEORY serves to strengthen MY_THEORY (confirmation). On the other hand, any RIVAL_THEORY (even a non-causal one based on skirt lines) that predicts the event as more likely will weaken my belief in MY_THEORY (falsification).

    For example if:

    P(MY_THEORY) = .95 (my prior belief in MY_THEORY)
    P(RIVAL_THEORY) = .05 (maximum prior belief in RIVAL_THEORY)
    P(B|MY_THEORY) = .80 (MY_THEORY predicts 80% chance of event B occurring)
    P(B|RIVAL_THEORY) = .50 (RIVAL_THEORY predicts 50% chance of event B occurring)

    From Bayes Theorem:

    P(MY_THEORY|B) = .80 * .95 / ( .80 * .95 + .50 * .05) = .97

    Thus, event B occurring conditions, and in this case slightly strengthens, my belief in MY_THEORY from .95 to .97.

    If, however:

    P(B|RIVAL_THEORY) = .95 (RIVAL_THEORY more strongly predicts event B)

    Then:

    P(MY_THEORY|B) = .80 * .95 / (.80 * .95 + .95 * .05) = .94

    Thus, event B occurring slightly weakens my belief in MY_THEORY, since RIVAL_THEORY predicted event B so strongly.

    The takeaway here is that assuming most of the global warming models strongly predict increasing temperatures, more strongly than most of the more skeptical theories, then, as the world in fact warms, the belief in these models mostly increases. Rationally.
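    For anyone who wants to play with the numbers, here are the two updates above as a small script; every probability is just the illustrative value from this comment:

```python
# The two Bayesian updates above, with the illustrative numbers from this comment.

def posterior_my_theory(prior_my, lik_my, prior_rival, lik_rival):
    # P(MY_THEORY | B) when MY_THEORY and RIVAL_THEORY are the only candidates.
    return (lik_my * prior_my) / (lik_my * prior_my + lik_rival * prior_rival)

# RIVAL_THEORY predicts B at 50%: the observation strengthens MY_THEORY.
print(round(posterior_my_theory(0.95, 0.80, 0.05, 0.50), 2))  # 0.97

# RIVAL_THEORY predicts B at 95%: the same observation weakens MY_THEORY.
print(round(posterior_my_theory(0.95, 0.80, 0.05, 0.95), 2))  # 0.94
```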

  14. It does not seem right that confirmation of MY_THEORY would cause me to believe in it less.

    I think I have the answer. If my answer is wrong, just remember it is free. ^_^

    George, your great intuition becomes a hint that something is amiss! I think, in your case, B (prediction of GW) is disconfirmation of MY_THEORY (non-GW). Yes?!

  15. George,

    Oops. You’re right, I blew it when I missed where you said P(B|AI) = 0.8, and I was lazy, too.

    Let’s think of it this way. We have two competing hypotheses A and ~A, and given I, we are claiming it’s more likely to see B under ~A (P(B|~AI) = 1) than under A (P(B|AI) = 0.8). So when we do see B, it should not be surprising that we believe ~A more and A less. Make sense so far?

    The strangeness comes in thinking that because we have a concrete hypothesis A, we also have a concrete ~A given I. Sometimes, of course, we do have a concrete ~A, but many times (given some I) we do not.

    It is a very strong claim to say P(B|~AI) = 1—no claim can be stronger—it means that B necessarily follows from ~AI: that is, it is impossible for B not to happen if ~AI is true.

    If AI is a theory of AGW, say, it would be very difficult to articulate what ~AI is and why any observation necessarily follows from it. Especially since the information (I) would tell us that the climate observations are contingent (each observation depends on the world attaining some state), and logic tells us that all contingent statements (like B) cannot be necessarily true. That is, given I, it must be the case that 0 < P(B|~AI) < 1; that is, it is false that P(B|~AI) = 1. (I was stupid by not noticing this yesterday.)

    In the case you outline, as long as 0 < P(B|~AI) < 0.8, then P(A|BI) > P(A|I); that is, seeing B increases your belief in A.
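    Here is a quick numeric check of that claim, using your illustrative numbers (nothing here beyond the arithmetic):

```python
# Posterior belief in A after seeing B, as P(B|~AI) varies.
# Uses George's illustrative numbers: P(A|I) = 0.95, P(B|AI) = 0.80.
p_A, p_B_given_A = 0.95, 0.80

for p_B_given_notA in [0.2, 0.5, 0.79, 0.8, 0.9, 1.0]:
    p_B = p_B_given_A * p_A + p_B_given_notA * (1 - p_A)
    print(p_B_given_notA, round(p_B_given_A * p_A / p_B, 3))
# Belief in A ends above the 0.95 prior exactly when P(B|~AI) < 0.80,
# and below it when P(B|~AI) > 0.80.
```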

    Mike,

    Falsification does work, but only rarely in the strict sense; exceedingly rarely, in fact. It turns out that you cannot falsify any theory (or statement) that has probability less than 1 (and greater than 0). Which is really every scientific theory we have.

    It’s true that sometimes people make blanket statements like, “Given MY_THEORY, temperatures will rise!” but the mistake is the same one I stupidly made. The statement itself is not what people really mean. They must mean something like this, “Given MY_THEORY, the probability of temperatures rising is 0.99” (or as high as you like, but less than 1).

    Then, no matter what is observed, MY_THEORY is not falsified. You cannot make the claim that MY_THEORY is “practically falsified” either, because that makes no logical sense. You can only say that “MY_THEORY is probably not true” with a probability as close to 0 as you like, but not attaining 0.

    Cold fusion, then, was not falsified. Instead, given all the evidence we have (about physics, the experiments, Pons’s and Fleischmann’s personalities, etc.), it is extremely unlikely to be true. So unlikely that it makes little sense to pursue it.

    But I can see that all of these arguments (to George and to you) call out for more explanation, as MrCPhysics asks. For the moment, let me recommend David Stove’s Scientific Irrationalism: Origins of a Postmodern Cult, a book which takes up these subjects and more. I have some of this, too, in the first chapter of my book.

    Way off topic…but I recommend clicking on Alan Grey’s website where he has a link to a whitepaper China put out on its military strategy.

  16. Re Alan Grey:

    From my own perspective, I think that anything that is not testable in controlled experiment is not really science, but merely mental gymnastics and so should not really be afforded any real level of confidence. (Neo-experimentalism if you know philosophy of science)

    Your rigid notion of science could get you into trouble if applied to practical matters. Testability can be hampered by external factors such as ethics or unavailability of spare universes. Yes, testability matters, but there may be situations where direct testing would be straightforward yet isn’t feasible for some reason. Such as when you are trying to test whether parachutes save the lives of people jumping from a plane.

  17. Maybe I’m getting carried away, but let me expand on Mike D.’s comment and point out the Bayesian reason why there is a Popperian emphasis on falsification rather than simple confirmation in science. It’s more effective.

    Recall that Mike’s argument is:

    Assuming P(A|B) = P(B|A) * P(A)/P(B) = .80 * .95 / P(B) = .76 / P(B)
    And since P(B) <= 1, P(A|B) >= .76.

    Mike then explains what this means. Essentially, he says that if a theory A has a high prior (P(A)=.95) and predicts an event B very likely to occur (P(B|A)=.8), then (as Briggs also points out in his post and comments) even if B were certain to occur regardless of the truth of theory A (P(B)=1), theory A would remain almost as believable (P(A|B) >= .76) after B happens.

    But let’s take the case where theory A strongly predicts event B will NOT happen:

    P(A) = .95 (our prior belief in theory A)
    P(B|A) = .05 (Theory A confidently predicts event B will not occur)
    P(B) = .95 (Competing theories sum to predict it almost certain)

    Now when event B occurs anyway:

    P(A|B) = .05 * .95 / .95 = .05 (we likely have falsified A)

    Falsification offers a bigger bang for the buck. And it’s why we don’t experimentally confirm parachutes save lives. Big cost, little bang. No pun intended.

  19. Assuming P(A|B) = P(B|A) * P(A)/P(B) = .80 * .95 / P(B) = .76 / P(B)…
    Mike then explains … (as Briggs also points out…) even if B were certain to occur regardless of the truth of theory A (P(B)=1), theory A would remain almost as believable (P(A|B) >= .76) after B happens.

    But George, in this case P(B) has to be ≥ 0.76! Hmmm…

  20. Dr. Briggs,

    I would like to ask about a different statement in this post:

    “Some theories are more subtle and do not speak in absolutes. For example, MY_THEORY might say “There is a 90% chance that X will happen.” When X does not happen, is MY_THEORY wrong?

    It turns out that it is impossible to falsify theories that make predictions that are probabilistic. ”

    I have explored the idea of attempting to falsify claims such as these by looking at cases where the predictions are made sufficiently frequently to make some old-fashioned statistical tests work. For example, if a theory has made thousands of daily predictions about, say, the weather, I can take all of the days when it predicted a “90% chance” of rain and measure whether or not in fact it rained 90% of the time.

    To my mind, the results are obvious if the prediction is true only 15% of the time. The difficulty lies in how to treat results that are right 85% of the time. Is this a falsification or are there some bounds where a confidence interval should be used?
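    Concretely, I have in mind something like the following rough sketch; the forecasts and outcomes are invented, and the bounds are only a crude binomial approximation:

```python
# Rough sketch of the check I have in mind: gather every day the theory said
# "90% chance of rain" and compare the stated 90% with how often it rained.
import random
from math import sqrt

random.seed(1)
n_days = 1000
stated_p = 0.90
# Pretend the theory is mis-calibrated: when it says 90%, it really rains 85% of the time.
rained = [random.random() < 0.85 for _ in range(n_days)]

observed = sum(rained) / n_days
# If the 90% claim were true, the observed frequency over n_days should fall
# within roughly stated_p +/- 2 * sqrt(p(1-p)/n) about 95% of the time.
half_width = 2 * sqrt(stated_p * (1 - stated_p) / n_days)
print("stated 0.90, observed %.3f, rough bounds (%.3f, %.3f)"
      % (observed, stated_p - half_width, stated_p + half_width))
```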

    I look forward to your thoughts.

    ~ Ivin

  21. I see a lot of confirmation bias in my fellow skeptics. They never pressure test their own side. They’re more into hanging out with people of the same belief (similar to the self-selection and social processes on DKos and Free Republic.) Any interest in new info or curiosity is restricted to new things that help their viewpoint. They don’t question their own side and subject it to penetrating analysis. Only the other side gets that.

  22. A good example is the War in Iraq. People (whether leaders or just partisans) HATE admitting wrong. However, the war had 75% support before we went in and now has 25%. Within the 25%, there are still people grasping at straws claiming that WMD got transshipped to the Bekaa valley. Bullshit. If you take over a country, crack the dictator, hang him, have all the systems and command and control…you should be able to find the people who maintained the WMD, who trucked it, etc. etc. etc. AND THEY AREN’T THERE. A GOOD study and search was done in country…and it combined multiple methods (interviews, site visits, electronic intel, documents, etc. etc.) AND THEY CAME UP WITH ANSWER NO.

    And I say all of the above, being a sterner Cold Warrior than most.

  23. Ivin,

    Your intuition is right on. What you’ve figured out is the idea of calibration. There are formal ways to account for calibration and the lack of it. I’ll try and write about this later.

    George,

    I’ll have something more about Popper up soon. Been busy trying to finish a couple of papers, sucking up my time.

  24. Some scientific theories are “confirmed” or “falsified” through some pretty precise measurements of their predictions. When Einstein published the mathematics of general relativity, he predicted that large masses would bend light by a certain amount. Effort was then undertaken at a subsequent total solar eclipse to check the apparent position of certain stars whose light passed close by the sun, and, voila, the light was bent. Einstein was thus confirmed, and Newton falsified. The same two theories clashed about the precession of the orbit of Mercury, and general relativity was again the winner, and Newton again had to concede (posthumously).

    Newton’s theory was shown to be approximately correct, but “false” under certain conditions.

    If CO2-driven AGW theory were to predict that “given the rise in CO2, temperatures in 20 years will have risen 2 +/- 0.5 C,” and then, in 20 years, temperatures had actually dropped 3 C, could we not consider the theory to then have been well and soundly falsified? How many standard errors outside of predicted bounds would real world results have to be to make an intelligent observer decide that a theory is worthless?

    Of course, CO2-driven AGW makes no such unambiguous predictions…

  25. MrCPhysics,

    When you say that Newton’s theory was proved “false”, I’ll agree with you. But if you claim it was proved false, then I won’t. Those quotation marks make all the difference.

    There is no set of observations, in the instance of the bending of light, that is impossible under Newton’s theory. They may be extraordinarily unlikely: the light bending would require, for example, masses and forces to be in odd places and in strange quantities. It is true that we might not discover, and cannot discover, those bizarre masses and forces, but there is nothing to say they logically could not exist.

    Since they might exist, it is not impossible for the light bending to take place under Newton’s mechanics, no matter how unlikely.

    Now, we have not found those masses and forces, and they are not needed under Einstein’s mechanics, thus it is more likely that Einstein’s mechanics are true.

    In order for Newton’s mechanics to be falsified—deduced to be false—we would have to prove by deduction that what we see is logically impossible under that theory.

    That is an enormous burden of proof. Like I said above, there is no stronger claim than to say something is false (or true). This burden is not met in the theories we use in physics.

  26. “In order for Newton’s mechanics to be falsified—deduced to be false—we would have to prove by deduction that what we see is logically impossible under that theory.”

    I have already had the opportunity to explain here some time ago why Newton’s theory is false (and not “false”).

    The necessary condition for Newton’s theory to be valid is an infinite speed of propagation of interactions.
    Conversely, by analysing Newton’s theory, the infinite speed of propagation of interactions can be deduced from it.

    So, to use the same words, under Newton’s theory it is logically impossible to see a finite speed of propagation of interactions.
    However, that is exactly what one sees when the speed of light is measured (it is finite), and as a bonus we learn from the Michelson experiment that this speed is a constant in every inertial frame, which confirms the right theory (special and general relativity) and falsifies Newton’s theory.

    The fact that for v/c << 1 the first-order approximation of the correct theory (relativity) reduces to the Newtonian formulation changes nothing about the fact that the latter is false.

    It only shows that for the kind of everyday speeds we meet, there is no big difference between extremely fast and infinitely fast, which explains, by the way, a posteriori why Newton could have happened on a false theory whose correct approximation formulas can be deduced trivially in a few minutes from the correct theory.
    If we lived in a world where speeds of 100 thousand km/s were a normal occurrence, then Newton’s theory would never have appeared, because it would have been immediately falsified by everyday experience, and that world’s Newton would have at once formulated the correct relativity theory.

  27. Ivin, your scenario is quite interesting because it raises questions about what the actual claims of MY_THEORY might be.

    Suppose we have no information other than the output of a black box, which gives us a P(rain_tomorrow). Do we assume that this is a Bernoulli process for which we are verifying our hypothetical P after N trials (maybe this is where Dr. Briggs wants to go with his forthcoming comments about calibration)? Perhaps all predictions of 90% should be checked independently from predictions of 25%, for example, so that the calibration is performed at all ranges.

    However, MY_THEORY of weather is usually a lot more complicated than this. It might be a fairly sophisticated black box which takes a bunch of inputs and gives you the state of the weather, say, up to 24 hours in advance.

    Now you have several possible “falsifications” to deal with. Suppose the model is perfect in a physics sense — you certainly have underdetermined outcomes even in this case just because you have incomplete inputs. So even with an “identical” finite set of inputs, there is some range of possible outputs which are consistent with physics. Here, you might hypothetically calibrate using repeated conditions and check whether the output is within some bounds. However, more likely you will not get “many” samples with repeated inputs, so you have to rely on a systematic analysis of error propagation in the model, or short of that, some kind of Monte Carlo trials.
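    As a toy illustration of the Monte Carlo idea only (the one-line “model” and the input uncertainties below are invented):

```python
# Toy Monte Carlo of input uncertainty: perturb the inputs, run the "model",
# and see what range of outputs is consistent with it.
import random

random.seed(0)

def toy_model(pressure, humidity):
    # Invented one-line stand-in for the forecast model.
    return 0.3 * pressure + 0.5 * humidity

outputs = []
for _ in range(10000):
    p = random.gauss(1.0, 0.05)  # best-guess input with assumed 5% noise
    h = random.gauss(0.6, 0.10)  # second input with larger assumed noise
    outputs.append(toy_model(p, h))

outputs.sort()
lo, hi = outputs[250], outputs[-251]  # central ~95% of the Monte Carlo runs
print("outputs consistent with the model (about 95%% of runs): %.2f to %.2f" % (lo, hi))
# An observation far outside this range counts against the model and/or the
# assumed input uncertainties; the two are hard to separate.
```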

    Of course, there is no guarantee that the physics is correct, which is also part of the claim of MY_THEORY. The model will almost certainly be falsified on some detail if it is at all complicated, so Dr. Briggs’ standard for strict falsification may be TOO strict to be meaningful for any operational model of weather, climate, etc.

    Anyway, I apologize if my post is a bit pedantic or confused. There is no easy way to satisfactorily answer these questions, so I thought I would just throw out these thoughts.

    Looking forward to Dr. Briggs’ or anyone else’s responses or further elucidation of these ideas.

    -OMS
