Theories Don’t Have Probabilities: Or, Is The Multiverse Real?

Just one of several copies.

There was in Munich last week a three-day workshop on the soul of science. According to Quanta Magazine, the conference was prompted by George Ellis and Joe Silk, who wrote a cri de coeur in Nature about defending the integrity of physics.

The crisis, as Ellis and Silk tell it, is the wildly speculative nature of modern physics theories, which they say reflects a dangerous departure from the scientific method. Many of today’s theorists — chief among them the proponents of string theory and the multiverse hypothesis — appear convinced of their ideas on the grounds that they are beautiful or logically compelling, despite the impossibility of testing them. Ellis and Silk accused these theorists of “moving the goalposts” of science and blurring the line between physics and pseudoscience. “The imprimatur of science should be awarded only to a theory that is testable,” Ellis and Silk wrote, thereby disqualifying most of the leading theories of the past 40 years. “Only then can we defend science from attack.”

Now there is much to discuss about this conference and about Ellis and Silk’s paper. But for today, let’s focus on one small item. There is this Joe Polchinski, a “staunch” string theorist, who had a paper read for him in Munich. According to the magazine:

Polchinski concludes that, considering how far away we are from the exceptionally fine grain of nature’s fundamental distance scale, we should count ourselves lucky: “String theory exists, and we have found it.” (Polchinski also used Dawid’s non-empirical arguments to calculate the Bayesian odds that the multiverse exists as 94 percent — a value that has been ridiculed by the Internet’s vocal multiverse critics.)

The critic is Peter “Not Even Wrong” Woit. Woit quotes Polchinski on this 94% calculation:

To conclude this section, I will make a quasi-Bayesian estimate of the likelihood that there is a multiverse. To establish a prior, I note that a multiverse is easy to make: it requires quantum mechanics and general relativity, and it requires that the building blocks of spacetime can exist in many metastable states. We do not know if this last is true. It is true for the building blocks of ordinary matter, and it seems to be a natural corollary to getting physics from geometry. So I will start with a prior of 50%. I will first update this with the fact that the observed cosmological constant is small. Now, if I consider only known theories, this pushes the odds of a multiverse close to 100%. But I have to allow for the possibility that the correct theory is still undiscovered, so I will be conservative and reduce the no-multiverse probability by a factor of two, to 25%. The second update is that the vacuum energy is nonzero. By the same (conservative) logic, I reduce the no-multiverse probability to 12%. The final update is the fact that our outstanding candidate for a theory of quantum gravity, string theory, most likely predicts a multiverse. But again I will be conservative and take only a factor of two. So this is my estimate for the likelihood that the multiverse exists: 94%.
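To see exactly where that 94% comes from, the quoted arithmetic can be reproduced in a few lines. This is only a reconstruction of the numbers Polchinski quotes, not his actual calculation: the “no multiverse” probability starts at 50% and is halved once per update.

```python
# Reconstruction of the arithmetic quoted above (not Polchinski's code).
# Start with a 50% "no multiverse" probability and halve it once per
# claimed update; the complement is the quoted ~94%.
p_no_multiverse = 0.5
updates = [
    "observed cosmological constant is small",
    "vacuum energy is nonzero",
    "string theory most likely predicts a multiverse",
]
for reason in updates:
    p_no_multiverse /= 2  # the "conservative" factor of two each time
    print(f"after '{reason}': P(no multiverse) = {p_no_multiverse:.4f}")

print(f"P(multiverse) = {1 - p_no_multiverse:.4f}")  # 0.9375, i.e. about 94%
```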

Whew! Everything so far was merely introductory, both for today’s meat and for discussion later. Without taking any opinion on the existence of the multiverse, let the theory, i.e. the very complex set of premises (which include a vast array of metaphysical, physical, and mathematical propositions) from which we can deduce the multiverse, be called T. T is a complex proposition, and we are interested in whether T itself is true. Why? Because we know the multiverse is true if T is: the multiverse is a deduction or theorem of T. Polchinski wants to bring in Bayesian theory to answer whether T is true. That was mistake number one.

Mistake two is this statement: “I will start with a prior of 50%.” This makes no sense. Theories do not have probabilities. And since theories are nothing but (complex) propositions, propositions do not have probabilities either. Indeed, no thing has a probability. Probabilities are measures of knowledge; therefore they have to come equipped with gauges, i.e. conditions. In other words, all probability is conditional.

Many think one natural gauge is the proposition W = “T might be true”, which is logically equivalent to W’ = “T is true or it is false”. Both of these are tautologies, which we know are true conditional on our knowledge of logic and understanding of English grammar. But it makes no sense to say, as Polchinski said, Pr(T | W) = 50%. Tautologies are non-informative. The best we can do, as I pointed out earlier, is to deduce T’s contingency, which gives it an interval probability (0,1). Of course, Polchinski may not have had the tautology in mind, but some other gauge. Call this G, which relates to some complex proposition in Polchinski’s head. Then it might be true that Pr(T|G) = 50%.

But what would this G have to look like? Well, it would have to be directly probative of T itself, which means of the propositions of which T is composed. And if Polchinski really had such a G, it is more plausible that these G-propositions would already be in T to give it support. Why withhold from T knowledge relevant to multiverses? It doesn’t make sense. But then G might have nothing probative to say about T except its contingency, as with W, in which case 0 < Pr(T|G) < 1.

According to the rules of probability, Pr(T false | G) = 1 – Pr(T|G). But what does it mean to say T is false? Just that at least one of the propositions within T is false. And if we knew that, then we would never entertain T. We would instead modify T (which really means making a brand new T) to remove or transform these troublesome propositions. If G told us which part of T was wrong, we would fix it.

Put all this another way. If all we had in contention for the multiverse was T, then T is all we have. We can’t judge its truth or falsehood because we have nothing to compare it to. T is it. It’s T or bust.

I’m sure (though I didn’t check) Polchinski’s numerical calculations are on the money, but the end result is meaningless. T has to be compared not against some internal gut reaction, because there is no such thing as subjective probability, but against the predictions T makes or against rival theories, which provide the only natural comparators. That is, Polchinski might have some alternative theory M in mind, a rival to T such that, given M, the multiverse is not a theorem of M. Now Polchinski’s G makes a little more sense.

There may be, and almost certainly are, overlapping elements of T and M, sub-propositions which they share. Nevertheless, T is not deducible from M, nor vice versa, else they would be the same theory. We’ve already seen that it makes little sense to have in G propositions which duplicate the multiverse-predicting propositions in T, and the same objection applies to M. That means G is something else. The simplest would be the “freshman” G, which is “There are two rival theories, T and M, only one of which can be true.” Therefore, Pr(T|G) = Pr(M|G) = 1/2, via the statistical syllogism. But that’s as far as we can go without additional evidence, such as observations of the multiverse (which won’t be had) or via observables deducible from T or M. Other G are possible, but it is easy enough to see, since there is no such thing as subjective probability, that we’re up against the unquantifiable. Gut feeling as a decision takes the place of probability. In other words, it’s better to go out and find proof of T or M.
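To make the “freshman” G concrete, and to see exactly where it stops, here is a minimal sketch. The prior of 1/2 follows from the statistical syllogism; everything after that requires an observable D with likelihoods deducible from T and from M, and the numbers below are entirely made up, since no such observable is on offer for the multiverse, which is the point.

```python
# The "freshman" G: two rival theories, exactly one of which is true.
# The statistical syllogism gives each a probability of 1/2.
pr_T_given_G = pr_M_given_G = 0.5

# To go further we would need an observable D with likelihoods deducible
# from T and from M. These numbers are purely hypothetical placeholders.
pr_D_given_T = 0.8
pr_D_given_M = 0.2

# Bayes's rule over the dichotomy (and if the dichotomy is false, so is
# the resulting number).
pr_T_given_DG = (pr_D_given_T * pr_T_given_G) / (
    pr_D_given_T * pr_T_given_G + pr_D_given_M * pr_M_given_G
)
print(f"Pr(T | D, G) = {pr_T_given_DG:.2f}")  # 0.80 with these made-up numbers
```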

Naturally, everything said holds for theories of any kind. Clever readers will see in the criticism of Polchinski the standard argument why hypothesis testing, whether by wee p-value or Bayes’s rule, is based on the fallacy of the false dichotomy.

More to come… Incidentally, none of what I wrote has any bearing on whether the multiverse actually exists.

38 Comments

  1. Per

    Great article. Carl Sagan made the same form of argument in the 1970s concerning the likelihood of life on other planets: assume 10 percent of stars have planets, then 10 percent of planets are suitable for life, and 10 percent of those … On and on. Never made sense even to a kid.

  2. John B()

    I just saw Terminator Genisys and while it was a ton of fun, the characters had to intentionally act in the future present in order for the future character to experience that incident in his future past.

    Then I caught the last part of X-men Days of Future Past…

    Then there was the original Multiverse story about Hezekiah (the Multiverse – it is Biblical)

  3. Psychics will have your head for saying there’s no subjective probability. The IPCC might too. That’s heresy.

    John B(): “the characters had to intentionally act in the future present in order for the future character to experience that incident in his future past.” Sounds somewhat like the pop physics version of quantum entanglement on a large scale. That’s an interesting idea, but I can see where it might slow a movie down somewhat.

  4. DAV

    no such thing as subjective probability

    One could argue there is no such thing as objective probability, which would mean a probability that is the same (or should be) no matter who calculates it. If it’s based only on known information, wouldn’t that make it subjective? Subjective doesn’t mean it’s anything you want it to be. It means it would vary depending on the observer because each observer might have different knowledge.

    Take the Monty Hall example. The probabilities for which door has the prize are different between the contestant and Monty because Monty knows which one has it. So the probability depends on who you ask and what they know — thus subjective.
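    A quick simulation makes the point concrete: conditional on what the contestant knows, the remaining door hides the prize about 2/3 of the time, while Monty, conditioning on his own knowledge, would put it at 1 or 0 for the same door. (The code below is only an illustrative sketch of that claim.)

```python
# Monty Hall sketch: estimate, given only the contestant's information,
# the probability that the prize is behind the remaining (switch) door.
import random

trials, switch_wins = 100_000, 0
for _ in range(trials):
    prize = random.randrange(3)   # Monty knows this; the contestant does not
    pick = random.randrange(3)    # contestant's initial choice
    # Monty opens a door that is neither the contestant's pick nor the prize
    opened = next(d for d in range(3) if d != pick and d != prize)
    remaining = next(d for d in range(3) if d != pick and d != opened)
    if remaining == prize:
        switch_wins += 1

# Roughly 0.667 for the contestant; Monty, who knows where the prize is,
# would assign 1 or 0 to the very same door.
print(f"Pr(prize behind remaining door | contestant's info) ≈ {switch_wins / trials:.3f}")
```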

  5. DAV

    hypothesis testing, whether by wee p-value or Bayes’s rule, is based on the fallacy of the false dichotomy.

    I would say the fallacy in Hypothesis Testing is that it doesn’t actually test your hypothesis unless the hypothesis is that X is correlated with Y. Using Hypothesis Testing is the equivalent of claiming cause merely because of correlation, and despite the constant reminder that correlation does not necessarily imply cause.

  6. With respect to multiverses and George Ellis, there was a series of posts on my blog and reposted here dealing with the philosophy of cosmology, essentially a summary of George Ellis’s paper on the subject. The post relevant to multiverses on this blog is “Philosophic Issues in Cosmology VII–Is there a multiverse?”
    at
    https://www.wmbriggs.com/post/13405/
    I’ve read Peter Woit’s critique of string theory, “Not Even Wrong”; it should be required reading for every theoretical physicist.
    The problem is that with the Standard Model all set, there are no more worlds for the theoretikers to conquer, so they turn to mathematical metaphysics, i.e. mathematical theories of the universe that cannot, either in principle (as with multiverses) or in practice (as with string theory), be empirically tested.

  7. FAH

    Concur with the title: theories don’t have probabilities. Also concur with Bob Kurland.

    In the traditional view of physicists (or perhaps in the view of traditional physicists) a theory is neither true nor false. Instead, it is either useful or not so useful. I suppose in the language of mathematics (or perhaps decision analysis) a theory might be said to have utility. Utility measures the extent to which it allows observers to predict outcomes of experiments or observations that we construct or make, some of which may have practical value in our daily lives. That “extent” is simply the part of the universe for which we currently have the ability and desire to match conceptual formulations to experiment or observation.

    The notion of having two (or more) physical theories, denoted by T and M, and wanting to decide a probability that one or the other is “true” is simply irrelevant from the viewpoint of physics. It may be of interest within the disciplines of philosophy or mathematics but not physics. In mathematics and perhaps less so in philosophy, one has the luxury of precisely defining the universe, i.e. enumerating all possible states under consideration. When all possible states are well defined, either by construction or exclusion, then a statement’s truth or falsity can in principle be determined with a consistent application of mathematical or logical principles. (Except for that darned Godel, of course.) With physical theories, a physicist always faces the fact that humanity’s current knowledge of the universe is limited by the cleverness of our observational engineering, the conceptual limits of our finite brain capacity, and our essential mortality (finite observation and thinking time). Hence the population of possible theories to conceptualize the universe is largely unknown to us at any given time. There may well be an infinity of theories necessary to explain the universe, not just T or M. (If the notion of the probability of a theory being true or false is whether someone “believes” it or not, then one is free to believe whatever one wishes.)

    A couple of examples may be useful. One could have asked in 1865 whether Maxwell’s theory of electrodynamics was “true” or not. It did unify the theories of light and electricity in an elegant set of equations that most physics students at one time or another wore on a t-shirt. In the math of the time, differential equations on continuous functions, it also predicted accurately the outcomes of a tremendous extent of experiments which were previously disjoint and poorly understood. Fifty years later, as a result of experiments stimulated by Maxwell’s work, our knowledge of the universe expanded (we knew better where and how to look) and Einstein proposed an improvement, special relativity, which unified electrodynamics and mechanics, and unified the treatment of space and time. About another fifty years passed before Feynman (and others) developed quantum electrodynamics which introduced a notion of virtual particles propagating forces with new rules for making predictions about outcomes. Maxwell’s theory is not false. Neither is it true. It is useful for predicting some set of outcomes of interest. The same is true for the others. QED is not very useful for calculating the impedance of a transformer one wants to design for a power station. Maxwell’s theory is. In Maxwell’s time the engineering cleverness and conceptual framework of human thought was not equipped to consider the notions addressed by QED.

    More germane to the multiverse discussion are what are called formulations (or theories) of quantum mechanics, one associated with Heisenberg, another with Schrodinger, and a third with Feynman. Heisenberg’s approach was algebraic in focus and was called matrix mechanics. Schrodinger’s approach was called wave mechanics and used classically motivated differential equations to describe time evolution of reality. The approaches of both Heisenberg and Schrodinger considered reality to be composed of a superposition of possible states, which evolved en masse into the future. Feynman introduced the notion that an observable of a particle at some position and time in the future is the sum over all the possible ways the particle could arrive at that future event. The notion allows the idea that the particle actually has some probability of taking each of the possible paths and could be measured to have taken any one of them at the future point. In other words, the particle in the future has multiple past paths contributing to its behavior. This is the germ of the multiverse notion. The math involved in doing Feynman’s sum (called a path integral) is challenging, but achievable for some problems.

    So, which of the theories, Heisenberg’s, Schrodinger’s, or Feynman’s, are “true”? None of them. Some are useful in some situations, some in others. Heisenberg’s is very useful to predict elemental and molecular spectra to build lasers or find out what comprises the light coming from a star. Schrodinger’s is useful in a variety of interference phenomena. Feynman’s is useful because it is intrinsically space-time geometric, i.e. relativistic, but it is very hard to calculate predictions.

    The point is that reality from a physics perspective is not a simple true or false question. Utility is the measure and it is gauged by prediction of observable outcomes. The current “multiverse” discussions may be straying from physics a little, in my view. (I remember a time when many major universities housed relativity centers in math departments, not physics departments, and black holes were considered a mathematical oddity.) Observable tests are just out of reach in lab experiments, but perhaps predicting properties of things like dark matter might work. One way of viewing the notions is in the context of what is called M-theory (it is googleable). The basic idea is that for different perspectives on the universe completely different theories may be applicable, somewhat like the different quantum approaches are applicable in different situations. Like the simple quantum theories, the different theories implied by M-theory would apply on the scales and for the observers involved in different situations, and would have clearly defined rules for transitioning between them. In this notion, the idea that there are “multiple universes” would be replaced by the notion that there are multiple theories applicable to different views of the universe. But once again, none would be “true” or “false”, only useful or not so useful. Stephen Hawking has written (with Leonard Mlodinow, 2010) a readable little book on the subject called “The Grand Design.” Hawking is somewhat optimistic that we are close to understanding. But we can recall Michelson’s statement in 1894: “…most of the grand underlying principles have been firmly established… An eminent physicist remarked that the future truths of science are to be looked for in the sixth place of decimals.” [He may have been referring to Lord Kelvin, to whom the last remark is often attributed, but it is disputed that Kelvin actually said it.]

  8. John B()

    Penny : what’s new in the world of physics
    Leonard : nothing …
    Penny : nothing?
    Leonard : well, except for string theory, nothing since the 1930s …
    you can’t prove string theory … the best you can say is: “hey, look, my idea has an internal logical consistency”
    Penny : Well I’m sure things will pick up.

  9. Gary

    John B(), Bazinga.

  10. FAH, I classify “The Grand Design” as a non-useful example of mathematical metaphysics; the math may be ok (but hidden), but the metaphysics is execrable.

  11. FAH

    Bob,
    I agree generally about the “Grand Design” book. I only said it was readable, not useful, in the sense of a physical theory. One could argue that most of the multiverse theories are not “useful” within the context of physics as I discussed above. I have no expertise in metaphysics and hazard no opinions therein. I know Hawking has some rather anti-religious (I am not sure that is the right word) writings. I seem to recall he likened belief in miraculous occurrences, heaven, hell, angels, an afterlife and the like to believing in “comforting fairy tales,” and that offends some people mightily. Where he describes the theoretical options he is accurate; when he speculates on the metaphysical, I believe he is no more an expert than any other mortal.

  12. SteveBrooklineMA

    KIRK: Mister Spock, can we get those two guards? What would you say the odds are on our getting out of here?
    SPOCK: Difficult to be precise, Captain. I should say approximately 7,824.7 to 1.
    KIRK: Difficult to be precise? 7,824 to 1?
    SPOCK: 7,824.7 to 1.
    KIRK: That’s a pretty close approximation.
    SPOCK: I endeavor to be accurate.
    KIRK: You do quite well.

    I used to think this exchange from the original Star Trek series was preposterous and funny. Now I appreciate how prescient the writers were about the future of science!

  13. DAV

    The point is that reality from a physics perspective is not a simple true or false question.

    Which is as it should be but you won’t get that impression from burble like this one from Wiki: In quantum mechanics, wave function collapse is said to occur when a wave function—initially in a superposition of several eigenstates—appears to reduce to a single eigenstate (by “observation”).

    A peculiar way to say that the outcome of the observation is unknown until it’s made, but that there is a predictable range of values it may have. From the peculiar version it sounds very much like a belief in probability as an inherent property.

    Then, too, there are those arguing string theory is more “real” than any other theory. And the reasoning is not that it predicts better but that it is elegant.

    It’s not just physicists doing this: there are those who think the weather is chaotic and that the proof of this is the models. They fail to see it’s only their model of the weather which can be shown to be chaotic. They have no way of demonstrating their model has any validity because, admittedly, it can’t predict beyond a small time interval.

    The gist of this post centers on the quoted sentence, “I will make a quasi-Bayesian estimate of the likelihood that there is a multiverse,” and the surrealistic calculation that follows.

  14. Briggs

    DAV,

    Amen to your “wave collapse” comment. In re the impossibility of subjective probability: if you have fixed premises X and a proposition of interest Y, and if a number can be had, then Pr(Y|X) is fixed, i.e. deduced, and not subjective. Of course, the choice of X and Y often is subjective.

  15. “Hypothesis Testing is the equivalent of claiming cause merely because of correlation and despite the constant reminder that correlation does not necessarily imply cause.”

    Science is an empirical discipline. All claims are based on correlation. The only reason why Newton’s Laws are assumed deductive is that you’ve observed the correlations so many times that you no longer think of them as correlations. A point Hume made a long time ago.

  16. Briggs

    Will,

    Only Hume was completely wrong about induction and about the claim that we can’t know any causes (and about many other things). On the other hand, his history of England is excellent.

  17. FAH,

    What you are expressing here is a popular-culture type of intellectualism especially promoted these days by the progressive movement. It’s called Instrumentalism, the central claim being that there is no underlying reality, or, if there is, that it is forever unknowable. This world view leads ultimately to Idealism, which has largely been rejected, although it seems to be making a come-back in this new/old way of thinking.

    We don’t have direct access to reality. But think of reality this way. Reality is an island. There is a lighthouse on the island, and we’re trying to find it. We are on a ship in a rough sea in a thick fog. The island is there, but it’s not easy to navigate to. We use the lighthouse (science) to try to work our way ever closer. But we have bad weather and currents to contend with, hence detours arise.

    For Instrumentalism to make any sort of sense, it has to be possible for there to exist multiple theories that are incommensurate (to borrow a word from Feyerabend), yet are exactly predictive in the same way. I would argue this has never been shown to be true and will never be shown to be true, because there is only one underlying reality. Hence Instrumentalism is nonsense. Now, if you have two not-very-good theories, each able to predict some aspect of reality in some way but largely failing in all other ways, you might be tempted to think in an Instrumentalist way, because you’re lumping weak theories and good theories together and treating them all as just ‘theories’.

  18. “Only Hume was completely wrong about induction and that we can’t know any causes (and about many other things). On the other hand, his history of England is excellent.”

    ‘Completely wrong’ about induction? I think you’re projecting what later philosophers of science wrote in interpreting some of Hume’s observations. What paragraph can you cite from Hume concerning induction (a word I don’t think he ever directly used) that is wrong?

    Anyway, for what it is worth, I agree with Stove (hence I suppose I agree with you), that empirical science is not just a bunch of unprovable relativist assertions. Instrumentalism is nonsense for the reasons I outlined above, and the “Problem of Induction” not really the problem it is made out to be. That puts me, like you, I suspect, in the minority.

  19. Briggs

    Will,

    Besides Stove and Donald Williams, read Louis Groarke, who perhaps has the best take. I have a chapter in my book summarizing his argument, but you’re much better off going to the source.

  20. Everyone I’ve known to be generally familiar with the subject would say that such theories are really just hypotheses. Even less so than time is math anything more than a measure. And unless you can measure something…

    There may very well be multiverses, and all mass and energy may be vibrating strings, but it does no good to assume these things are true until at least tangential experiments show us employable, safe-to-play-with theories. One thing life should teach any sports fan, anyway, is that just because a team looks good “on paper” doesn’t mean they’re going to the Super Bowl. Gotta play.

    JMJ

  21. JMJ, Amazing!!! you’ve said something I agree with wholeheartedly!

  22. Atheists and theists share one thing deeply in common. They are very uncomfortable admitting, to themselves or to each other, that they don’t know things, so they pretend to know what they don’t. In part I suspect this is a reaction, at least in the case of atheists, to theism itself. Theists deeply believe things they don’t understand and use words such as ‘faith’ as justification. Many atheists seem rather jealous of ‘faith’ (although they would never admit this given their general contempt for theists), so by way of compensation they assert greater confidence in their own beliefs than rationality can justify. Hence the sort of pretend certainty you see on display, which is the subject of this topic. Yes, of course they all say truth is subject to verification and falsification and so on, but once the platitudes are mouthed, few of them actually believe this. They are just as certain as the theists in things they don’t understand. That is how you come to be 94% certain that the Multiverse exists, in the absence of any evidence whatsoever.

  23. I’m an atheist and I do not believe anything. I know it, or I don’t. I don’t know if there are multiverses, though I’d be far less surprised by that than, say, the existence of the Abrahamic God.

    JMJ

  24. Bumble

    Concerning bayesianism and the subjective/objective distinction: both sides agree that probabilities are (a) calculated using a formula; and (b) conditional upon the information that one possesses; and are therefore subjective in the sense that different people possessing different information will form different assessments of the probability, and objective in the sense that possession of the same information should ideally lead to the same assessment. The real difference is usually understood as the issue of how one determines one’s priors. Subjectivists hold that one can do no more than work with whatever priors you have, because to change your credences by any method other than bayesian updating violates considerations of decision theory, while objectivists maintain that one can abstract away from the vagaries of individual priors by constraining them to agree with known frequencies and minimising assumed information.

    As to bayesianism committing a false dichotomy, this can be overcome by using Bayes’ rule in its comparative form, i.e. for two rival hypotheses the ratio of the posteriors is equal to the ratio of the priors multiplied by the likelihood ratio.
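    In the odds form that reads: posterior odds = prior odds × likelihood ratio. A minimal sketch with illustrative numbers follows; note that converting the final odds back into a probability, as in the last line, quietly assumes that one of the two rivals must be true, which is precisely the dichotomy at issue above.

```python
# Comparative (odds) form of Bayes's rule for two rival hypotheses H1, H2:
# posterior odds = prior odds * likelihood ratio. Numbers are illustrative.
prior_odds = 1.0      # Pr(H1) / Pr(H2): no initial preference
bayes_factor = 3.0    # Pr(data | H1) / Pr(data | H2), assumed known

posterior_odds = prior_odds * bayes_factor

# Turning odds into a probability assumes H1 and H2 exhaust the options.
pr_H1 = posterior_odds / (1 + posterior_odds)
print(f"posterior odds = {posterior_odds}, Pr(H1 | data) = {pr_H1:.2f}")  # 0.75
```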

  25. “I’m an atheist and I do not believe anything. I know it, or I don’t.”

    Being 100% certain in your belief doesn’t mean you know something. It just means your certainty in your belief has reached the level of delusion.

  26. Joy

    Will,
    It’s delusion only when you know the truth and still hold a contrary belief.
    Belief is not unhealthy; delusions often are.

  27. Gary in Erko

    “Being 100% certain in your belief doesn’t mean you know something. It just means your certainty in your belief has reached the level of delusion.”

    Beware – that’s recursive.
    Your own self-admitted delusion isn’t necessarily applicable to others.

  28. Ah yes, usually I’d assume you were a Progressive to be so clever as to turn every statement, no matter the content, into a Liar’s paradox.

  29. Michael

  29. THIS insanity has to do with one all-encompassing bias, the linchpin from which all other ideas are derived: ATHEISM.

    Now, understand that many cosmologists are agnostic or atheist for the same reason doctors are theists, and for the same reason most modern philosophers are agnostic and theologians are theists. In short, science degree scholarships have a choice. Theists already feel they have creation’s ultimate answers and thus have no interest in physics today, especially because it is populated by militant atheists like Lawrence Krauss.

    In turn atheists flock to Origin science and we, the unlucky public, must put up with their socially awkward, self-refuting worldview.

    But it’s not just about being an atheist, folks. It’s about feeling directly threatened by Fine Tuning and all the other cumulative evidence that naturalism cuts its own throat, and one day these mockers of God are gonna have to answer for their arrogant, deceitful boasts.

    You don’t proclaim an idea that ultimately destroys science, probability, and reason unless you have become demented with fear, or just plain demented. Thank God for the many who are speaking up against this group of lunatics… all of whom, surprisingly, have articles or books on the market selling atheism. These weirdos are turning science into their own cash cow, in which Bar Room Philosophy is peddled as Truth. Someone, throw these bums out – they are just embarrassing.

  30. YF

    “Besides Stove and Donald Williams…”

    Sorry to bring bad news, but Williams fails to solve the problem of induction.

    He is correct that one can infer properties of a population from a sample, but ONLY if that sample is RANDOMLY selected and considered REPRESENTATIVE. But the assumption of randomness and representativeness is just another way of saying that the population from which the samples are selected is more or less ‘uniform’ (i.e., that we are confident that the sample has been selected through an unbiased procedure). But this is simply another manifestation of the ‘uniformity of nature’ assumption that contributes to Hume’s problem.

    One may reply that we have no reason to believe that the sample drawn is biased. But this does not allow us to infer that the sample is in fact unbiased. To do so is to beg the question. We don’t know. Similarly, that we have no reason to suspect that nature is not uniform does not allow us to infer that nature is in fact uniform.

    Alas, Hume remains vindicated.

  31. YF

    Thanks for the reply. So, you’re going to have me buy your book in order to understand your objection to my critique?! Sorry, but I can’t afford it.

    Some of the key points I raised against Stove’s/Williams’ ‘solution’ to the problem of induction are covered in the following paper, which I highly recommend you read. I am happy to send you the PDF if you are unable to access it on your own. In the meantime, have fun trying to prove that the sun will likely rise tomorrow without using any inductive assumptions!

    Indurkhya, B. (1990). “Some Remarks on the Rationality of Induction.” Synthese 85(1): 95–114. doi:10.1007/BF00873196

    Abstract
    This paper begins with a rigorous critique of David Stove’s recent book The Rationality of Induction. In it, Stove produced four different proofs to refute Hume’s sceptical thesis about induction. I show that Stove’s attempts to vindicate induction are unsuccessful. Three of his proofs refute theses that are not the sceptical thesis about induction at all. Stove’s fourth proof, which uses the sampling principle to justify one particular inductive inference, makes crucial use of an unstated assumption regarding randomness. Once this assumption is made explicit, Hume’s thesis once more survives.

    The refutation of Stove’s fourth proof leads to some observations which relate Goodman’s ‘grue’ paradox with randomness of a sample. I formulate a generalized version of Goodman’s grue paradox, and argue that whenever a sample, no matter how large, is drawn from a predetermined smaller interval of a population that is distributed over a larger interval, any conclusion drawn about the characteristics of the population based on the observed characteristics of the sample is fatally vulnerable to the generalized grue paradox.

    Finally, I argue that the problem of justification of induction can be addressed successfully only from a cognitive point of view, but not from a metaphysical one. That is, we may ask whether an inductive inference is justified or not within the ‘theories’ or ‘cognitive structures’ of a subject, but not outside them. With this realization, induction is seen as a cognitive process, not unlike vision, that is useful at times, and yet has its own illusions that may make it a serious obstacle to cognition at other times.

  32. Briggs

    YF,

    Yes, buy my book. It’s all part of my get-rich-slow scheme. Or, better, since your interest is in induction, read just Chapter 4 of my book, or get Groarke’s work.

  33. YF

    Ok, now you’ve enticed me. Do you have a PDF of your Chapter 4 you could send me? I would be most grateful. Books are off my budget until my grant gets funded! In return, I will refer all of my friends and colleagues to your website.
