William M. Briggs

Statistician to the Stars!

Category: Philosophy (page 1 of 116)

The philosophy of science, empiricism, a priori reasoning, epistemology, and so on.

If We Are What We Sexually Desire, How About These Curious People?

Say, baby.

Gender theory in brief says we are what we sexually desire. It’s not that we have desires, but that we are these desires. They are the core of our being. They make and form us. They are our orientation.

That’s why Yours Truly is not what he appears and what his biology made him, i.e. a man, a male human being, but is instead a “heterosexual” or, in slang, “a straight.” I cannot escape from this prison or these desires even if I wanted to, which I don’t. And since this state is forced upon me without my consent, and because anyway I like it, you must respect and even celebrate this fact. I must wear my orientation as a badge. You may not judge me.

We all know the other categorizations of desire and of their increasing prominence, so we needn’t cover them. But what do we make of these people, a group with very specific sexual desires?

Denmark already has a handful of animal brothels which, according to Ice News, a site specialized in Nordic reporting, charge between $85 and $170 depending on the animal of choice.

…24 percent of the population would like freedom of movement when it comes to pursuing beasts for pleasure. In a Vice Video aptly called “Animal [Edited]” one unnamed man explains what turns him on in the animal kingdom. “I’m into human females. I’m into horse females,” he says. “I’m asexual towards rats. I’m a bit voyeuristic about dogs and women.”

…People who literally love their animals have been tied to a series of side crimes. In August, a woman in New Mexico tried to kill her roommates after they witnessed her having sex with a dog and admitting to having sex “multiple times” with both roommates’ dogs. In September, a priest who was convicted of 24 counts of pedophilia against Inuit people in Nunavut, Canada, had a bestiality record as well.

It’s little known, but bestiality is legal in several countries, mostly in Europe. Some animal “rights” groups are seeking to change these laws because they are concerned that animals are not giving “consent” to these odd encounters. Well, the animal that turned into my breakfast sausage probably wasn’t consulted about that, either. But let that pass. What matters is that the acts, legal or not, are somewhat common, in the sense that this kind of desire has been known across the centuries.

What to call these folks? Zoophilia is the technical term for the desire, but “zoophiliacs” is unwieldy. How about woofies? That has a pleasant, nonjudgemental, evocative tone.

Since gender theory insists we are our desires, then people who lust after aardvarks and wombats and the like are not people but woofies.

Do woofies have certain gifts and qualities to offer society? Are we capable of welcoming these people, guaranteeing to them a fraternal space in our communities? Often woofies wish to encounter a culture that offers them a welcoming home. Are our communities capable of providing that, accepting and valuing their sexual orientation?

Good questions, those. The reader should answer them.

Now I know that some of you will have a “yuck” response and will say that woofie desires are “unnatural.” But I’m afraid that won’t do. Because to say something is “unnatural” is to logically imply there is such a thing as human nature. It is to admit that those critics who decry “sexual orientations” as so much farcical academic tootling and who say that instead natural law should be our guide to behavior are right. Do we really want that? Accept natural law and what happens to all those other “orientations” which are also unnatural? Some deep kimchee there, brother.

You might try insisting that woofie behavior is “disgusting”. That doesn’t fly, either. The acts of many orientations are disgusting, too, and are often crippling to health. And isn’t “disgusting” a matter of personal taste?

Can you say that woofies are “perverted”? No. That is to draw an artificial line, a line which cannot be discovered by natural law but only by reference to a vote, and votes are malleable. Today we say “perverted”; next week we all walk past the pet shop window with a gleam in our eyes; the week after we swing back to “perverted.” People are fickle.

How about man-beast “marriages”? Several people have already walked down that aisle. “Marriage” is whatever we say it is anyway, so all woofies need to recognize their civil unions is a good judge.

Zoophobes, the bigots, haven’t a leg to stand on, morally speaking. Let’s ostracize them.

The Mysticism Of Simulations: Markov Chain Monte Carlo, Sampling, And Their Alternatives

Not a simulation.

Introit

Ever heard of somebody “simulating” normal “random” or “stochastic” variables, or perhaps “drawing” from a normal or some other distribution? Such things form the backbone of many statistical methods, including bootstrapping, Gibbs sampling, Markov Chain Monte Carlo (MCMC), and several others.

Well, it’s both right and wrong—but more wrong than right. It’s wrong in the sense that it encourages magical thinking, confuses causality, and is an inefficient use of time. It’s right in the sense that, if assiduously applied, these algorithms can give reasonably accurate answers.

Way it’s said to work is that “random” or “stochastic” numbers are input into some algorithm and out pops answers to some statistical question which is not analytic, which, that is, cannot be solved by pencil and paper (or could be, but only at seemingly too great a difficulty).

For example, one popular way of “generating normals” is to use what’s called a Box-Muller transformation. It starts by “generating” two “random” “independent” “uniform” numbers U1 and U2 and then calculating this creature:

Z = R \cos(\Theta) =\sqrt{-2 \ln U_1} \cos(2 \pi U_2) ,

where Z is now said to be “standard normally distributed.” Don’t worry if you don’t follow the math, though try because we need it for later. Point is that any algorithm which needs “normals” can use this procedure.
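To make the mapping concrete, here is a minimal sketch in R; the function name box_muller is mine, purely for illustration:

    # A sketch of the Box-Muller mapping described above: take two numbers in
    # (0,1) and map them to a single "standard normal" value.
    box_muller <- function(u1, u2) {
      sqrt(-2 * log(u1)) * cos(2 * pi * u2)   # log() is the natural log in R
    }

    box_muller(0.5, 0.25)   # the pair (0.5, 0.25) maps to about 0, since cos(pi/2) = 0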

Look at all those scare quotes! Yet each of them is proper and indicates an instance of magical thinking, a legacy of our (frequentist) past which imagined aleatory ghosts in the machines of nature, ghosts which even haunt modern Bayesians.

Scare quotes

First, random or stochastic means unknown, and nothing more. The outcome of a coin flip is random, i.e. unknown, because you don’t know all the causes at work upon the spinning object. It is not “random” because “chance” somehow grabs the coin, has its way with it, and then deposits the coin into your hand. Randomness and chance are not causes. They are not real objects. The outcome is determined by physical forces and that’s it.

Second, there is the unfortunate, spooky tendency in probability and statistics to assume that “randomness” somehow blesses results. Nobody knows how it works; that’s why it’s magic. Yet how can unknowingness influence anything if it isn’t an ontological cause? It can’t. Yet it is felt that if the data being input to algorithms aren’t “random” then the results aren’t legitimate. This is false, but it accounts for why simulations are so often sought.

Third, since randomness is not a cause, we cannot “generate” “random” numbers in the mystical sense implied above. We can, of course, make up numbers which are unknown to some people. I’m thinking of a number between 32 and 1400: to you, the number is random, “generated”, i.e. caused, by my feverish brain. (The number is hidden in the source code of this page, incidentally.)

Fourth, there are no such things as “uniforms”, “normals”, or any other distribution-entities. No thing in the world is “distributed uniformly” or “distributed normally” or distributed anything. Distributed-as talk is more magical thinking. To say “X is normal” is to ascribe to X a hidden power to be “normal” (or “uniform” or whatever). It is to say that magical random occult forces exist which cause X to be “normal,” that X somehow knows the values it can take and with what frequency.

This is false. The only thing we are privileged to say is something like this: “Given this-and-such set of premises, the probability X takes this value equals that”, where “that” is calculated via some distribution implied by the premises. (Ignore that the probability X takes any value for continuous distributions is always 0.) Probability is a matter of ascribable or quantifiable uncertainty, a logical relation between accepted premises and some specified proposition, and nothing more.

Practicum

Fifth, since this is what probability is, computers cannot “generate” “random” numbers. What happens, in the context of our math above, is that programmers have created algorithms which create numbers in the interval (0,1) (notice this does not include the end points), not in any obvious order, but by reference to some complex formula. This formula, if run long enough, will produce all the numbers in (0,1) at the resolution of the computer.

Say this is every 0.01; that is, our resolution is to the nearest hundredth. Then all the numbers 0.01, 0.02, …, 0.99 will eventually show up (many will be repeated, of course). Because they do not show up in sequence, many fool themselves into thinking the numbers are “random”, and others, wanting to hold onto the mysticism but understanding the math, call the numbers “pseudo random”, an oxymoron.
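To see how un-mysterious this is, here is a toy sketch in R of one such formula, a simple multiplicative congruential recursion. This is only an illustration of the idea, not the generator R actually uses:

    # Park-Miller-style recursion: each value is completely determined by the last.
    lcg <- function(n, seed = 1) {
      a <- 16807; m <- 2^31 - 1        # the classic "minimal standard" constants
      x <- numeric(n)
      state <- seed
      for (i in seq_len(n)) {
        state <- (a * state) %% m
        x[i] <- state / m              # scale into (0, 1)
      }
      x
    }

    lcg(5)   # the same five numbers every time seed = 1; run long enough, the
             # formula walks through its whole set of values in (0, 1)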

But we can sidestep all this and simply write down all the numbers in the sequence, i.e. all the numbers in (0,1)^2 (since we need U1 and U2) at whatever resolution we have; this might be (0.01, 0.01), (0.01, 0.02), …, (0.99, 0.99) (this is a sequence of pairs of numbers, of length 9801). We then apply the mapping of (U1, U2) to Z as given above, which produces (3.028866, 3.010924, …, 1.414971e-01).
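In R the whole mechanical construction takes a few lines; the variable names are mine, and the pairs are ordered to match the sequence just described:

    # Every pair (U1, U2) on the 0.01 grid, with U2 varying fastest, then the
    # Box-Muller mapping applied to each pair.
    u <- seq(0.01, 0.99, by = 0.01)
    grid <- expand.grid(U2 = u, U1 = u)                   # 99 x 99 = 9801 pairs
    z <- sqrt(-2 * log(grid$U1)) * cos(2 * pi * grid$U2)

    length(z)    # 9801 mapped values
    head(z, 2)   # 3.028866, 3.010924, ...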

What it looks like is shown in the picture up top.

The upper plot shows the mappings of (U1, U2) to Z, along the index of the number pairs. If you’ve understood the math above, the oscillation, size, and sign changes are obvious. Spend a few moments with this. The bottom plot shows the empirical cumulative distribution of the mapped Z (black), overlaid by the (approximate) analytic standard normal distribution (red), i.e. the true distribution to high precision.

There is tight overlap between the two, except for a slight bump or step in the ECDF at 0, owing to the crude discretization of (U1, U2). Computers can do better than the nearest hundredth. Still, the error even at this crude level is trivial. I won’t show it, but even a resolution 5 times worse (nearest 0.05; number sequence length of 361) is more than good enough for most applications (a resolution of 0.1 is pushing it).

This picture gives a straightforward, calculate-this-function analysis, with no mysticism. But it works. If what we were after was, say, “What is the probability that Z is less than -1?”, all we have to do is ask. Simple as that. There are no epistemological difficulties with the interpretation.

The built-in analytic approximation is 0.159 (this is our comparator). With the resolution of 0.01, the direct method shows 0.160, which is close enough for most practical applications. A resolution of 0.05 gives 0.166, and 0.1 gives 0.172 (I’m ignoring that we could have shifted U1 or U2 to different start points; but you get the idea).
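Those numbers can be checked with a short sketch in R, rebuilding the 0.01 grid as above; pnorm() supplies the analytic comparator:

    # Direct estimate of Pr(Z < -1) from the 0.01-resolution grid, versus the
    # built-in analytic approximation.
    u <- seq(0.01, 0.99, by = 0.01)
    g <- expand.grid(U2 = u, U1 = u)
    z <- sqrt(-2 * log(g$U1)) * cos(2 * pi * g$U2)

    mean(z < -1)   # about 0.160 at this resolution
    pnorm(-1)      # about 0.159, the comparator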

None of these have plus-or-minuses attached, though. Given our setup (starting points for U1 and U2, the mapping function), these are the answers. There is no probability attached. But we would like to have some idea of the error of the approximation. We’re cheating here, in a way, because we know the right answer (to a high degree), which won’t always be the case. In order to get some notion of how far off that 0.160 is we’d have to do more pen-and-paper work, engaging in what might be a fair amount of numerical analysis. Of course, for many standard problems, just like in MCMC approaches, this could be worked out in advance.

MCMC etc.

Contrast this to the mystical approach. Just like before, we have to specify something like a resolution, which is the number of times we must “simulate” “normals” from a standard normal—which we then collect and form the estimate of the probability of less than -1, just as before. To make it fair, pick 9801, which is the length of the 0.01-resolution series.

I ran this “simulation” once and got 0.162; a second time 0.164; a third showed 0.152. There’s the first problem. Each run of the “simulation” gives different answers. Which is the right one? They all are; a non-satisfying but true answer. So what will happen if the “simulation” itself is iterated, say 5000 times, where each time we “simulate” 9801 “normals” and each time estimate the probability, keeping track of all 5000 estimates? Let’s see, because that is the usual procedure.
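Here is a hedged sketch of that procedure in R. rnorm() stands in for “simulating” the normals (one could just as well map runif() pairs through the Box-Muller formula), and the seed is arbitrary, chosen only so the sketch is repeatable:

    # Estimate Pr(Z < -1) from 9801 "simulated normals", then repeat the whole
    # exercise 5000 times, keeping every estimate.
    set.seed(1)
    estimates <- replicate(5000, mean(rnorm(9801) < -1))

    quantile(estimates, c(0.05, 0.95))   # central 90% of the estimates
    median(estimates)                    # settles near 0.159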

Turns out 90% of the results are between 0.153 and 0.165, with a median and mean of 0.159, which equals the right answer (to the thousandth). It’s then said there’s a 90% chance the answer we’re after is between 0.153 and 0.165. This or similar intervals are used as error bounds, which are “simulated” here but (should be) calculated mechanically above. Notice that the uncertainty in the mystical approach feels greater, because the whole process is opaque and purposely vague. The numbers seem like they’re coming out of nowhere. The uncertainty is couched probabilistically, which is distracting.

It took some 49 million calculations to get us this answer, incidentally, rather than the 9801 the mechanical approach required. But if we increased the resolution there to 0.005, we also get 0.159 at a cost of just under 40,000 calculations. Of course, MCMC fans will discover short cuts and other optimizations to implement.

Why does the “simulation” approach work, though? It does (at some expense) give reasonable answers. Well, if we remove the mysticism about randomness and all that, we get this picture:

Mystical versus mechanical.

The upper two plots are the results of the “simulation”, while the bottom two are the mechanical mapping. The bottom two show the empirical cumulative distribution of U1 (U2 is identical) and the subsequent ECDF of the mapped normal distribution, as before. The bump at 0 is there, but is small.

Surprise ending!

The top left ECDF shows all the “uniforms” spit out by R’s runif() function. The only real difference between this and the ECDF of the mechanical approach is that the “simulation” is at a finer resolution (the first U happened to be 0.01031144, 6 orders of magnitude finer; the U’s here are not truly plain-English uniform as they are in the mechanical approach). The subsequent ECDF of Z is also finer. The red lines are the approximate truth, as before.

But don’t forget, the “simulation” just is the mechanical approach done more often. After all, the same Box-Muller equation is used to map the “uniforms” to the “normals”. The two approaches are therefore equivalent!

Which is now no surprise: of course they should be equivalent. We could have taken the (sorted) Us from the “simulation” as if they were the mechanical grid (U1, U2) and applied the mapping, or we could have pretended the Us from the “simulation” were “random” and then applied the mapping. Either way, same answer.

The only difference (and advantage) seems to be in the built-in error guess from the “simulation”, with its consequent fuzzy interpretation. But we could have a guess of error from the mechanical algorithm, too, either by numerical analysis means as mentioned, or even by computer approximation (one way: estimate quantities using a coarse, then fine, then finest grid and measure the rate of change of the estimates; with a little analysis thrown in, this makes a fine solution).
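Here is one way to sketch that computer approximation in R; the particular resolutions are my own choices, for illustration:

    # Direct estimate of Pr(Z < -1) at a coarse, finer, and finest grid; how
    # quickly the answers settle down is a rough guide to the approximation error.
    estimate_at <- function(res) {
      u <- seq(res, 1 - res, by = res)
      g <- expand.grid(U2 = u, U1 = u)
      z <- sqrt(-2 * log(g$U1)) * cos(2 * pi * g$U2)
      mean(z < -1)
    }

    sapply(c(0.05, 0.01, 0.005), estimate_at)   # roughly 0.166, 0.160, 0.159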

The benefit of the mechanical approach is the demystification of the process. It focuses the mind on the math and reminds us that probability is nothing but a numerical measure of uncertainty, not a live thing which imbues “variables” with life and which by some sorcery gives meaning and authority to results.

Summary Against Modern Thought: There Is No Accident In God

This may be proved in three ways. The first…

See the first post in this series for an explanation and guide of our tour of Summa Contra Gentiles. All posts are under the category SAMT.

Previous post.

An easier (relatively speaking, considering you have been reading along thus far) article today, proving God does not look like His popular depictions, i.e. that He has no extraneous properties.

Chapter 23: There Is No Accident In God

1 FROM this truth it follows of necessity that nothing can accrue to God besides His essence, nor anything be accidentally in Him.i

2 For existence itself cannot participate in something that is not of its essence; although that which exists can participate in something else. Because nothing is more formal or more simple than existence. Hence existence itself can participate in nothing. Now the divine substance is existence itself.[1] Therefore He has nothing that is not of His substance. Therefore no accident can be in Him.ii

3 Moreover. Whatever is in a thing accidentally, has a cause of being there: since it is added to the essence of that in which it is. Therefore if anything is in God accidentally, this must be through some cause. Consequently the cause of the accident is either the divine substance itself, or something else. If it is something else, this other thing must act on the divine substance; since nothing introduces a form whether substantial or accidental, into some recipient, unless in some way it act upon that recipient: because to act is nothing but to make something to be actual, and it is this by a form.

Wherefore God will be passive and movable to some agent: which is against what has been decided above.[2] If, on the other hand, the divine substance itself is the cause of the accident that is in it, then it is impossible for it to be its cause as receiving it, since then the same thing in the same respect would make itself to be in act. Therefore, if there is an accident in God, it follows that He receives that accident in one respect, and causes it in another, even as bodies receive their proper accidents through the nature of their matter, and cause them through their form: so that God, therefore, will be composite, the contrary of which has been proved above.[3]iii

4 Again. Every subject of an accident is compared thereto as potentiality to act: because an accident is a kind of form making a thing to exist actually according to accidental existence. But there is no potentiality in God, as shown above.[4] Therefore there can be no accident in Him.iv

5 Moreover. Everything in which something is accidentally is in some way changeable as to its nature: since an accident, by its very nature, may be in a thing or not in it. Therefore if God has something that becomes Him accidentally, it follows that He is changeable: the contrary of which has been proved above.[5]v

6 Further. Everything that has an accident in itself, is not whatever it has in itself, because an accident is not of the essence of its subject.vi But God is whatever He has in Himself. Therefore no accident is in God. The middle proposition is proved as follows. A thing is always to be found more excellently in the cause than in the effect. But God is the cause of all things. Therefore whatever is in Him, is found in Him in the most perfect way. Now that which is most perfectly becoming to a thing, is that thing itself: because it is more perfectly one than when one thing is united to another substantially as form is united to matter: which union again is more perfect than when one thing is in another accidentally. It follows therefore that God is whatever He has.vii

7 Again. Substance is not dependent upon accident, although accident depends on substance. Now that which is not dependent upon another, can sometimes be found without it.[6] Therefore some substance can be found without an accident: and this seemingly is most becoming to a supremely simple substance, such as the divine substance.[7] Therefore the divine substance is altogether without accidents.viii

9 …Having established this truth we are able to refute certain erroneous statements in the law of the Saracens to the effect that the divine essence has certain forms added thereto.ix

—————————————————————————

iThis follows, probably obviously, from God’s essence being His existence (last week). Think of it like this: if existence = essence, then there’s no room for accidents. What can be an “accident”, i.e. an unessential property, of existence itself?

ii“that which exists can participate in something else.” You exist, and accidentally (in this sense) have characteristics that other human beings might or might not have. None of these accidents change your essence, which is that of a rational being. The rest follows simply from point 1.

iiiA lot of words, which I trust you read carefully. If there was an accident in God, it must have been caused. By what? The only possibility is God. But that would make Him composite, and we have already proved He is not made of parts, and (as we’ll need next) He has no potentiality. Thus this proof is pretty simple.

ivYou remember act versus potential, I trust? That it takes a cause, i.e. something actual, to turn a potentiality into an actuality? The rest follows.

vThe footnote is, as it often is, to Chapter 13, which proves God is the Unchanging Changer slash Unmoved Mover. Not too different than proof 3, then.

viThe middle term is “an accident is not of the essence of its subject.”

viiThe rest is really proving existence and essence are one in God again, though in a roundabout way. A cause is more than its effect, and a cause cannot give what it doesn’t have. A good analogy I once read is that the cause of the water becoming red is red dye, but the red dye will necessarily be redder (or no less red) than the water. The rest follows, but it is admittedly a bit of a tangle.

viiiThis obviously follows from the material above; it doesn’t survive on its own. But note what simple means: without accidents or parts, without potentiality. It is not a synonym of “less” or the like.

ixI left this in only to prove what we already know. That disputes are ever with us.

[1] Ch. xxii
[2] Ch. xiii.
[3] Ch. xviii.
[4] Ch. xvi.
[5] Ch. xiii.
[6] Cf. ch. xiii: Again, if any two things . . . p. 28.
[7] Ch. xviii.
[8] v. 4.

We Don’t Know Anything

Degrees for everybody.

The Appeal to Authority is not a formal fallacy, but an “informal” one, a fancy way of admitting that arguments in the form of “Because I said so” are often valid and sound. If these arguments were always a fallacy, there’d be no use asking potential employees for their resumes, no point in asking, “What are my chances, doc?”, really no reason to ask anybody anything about which you are uncertain.

On the other hand, the argument becomes a fallacy routinely in the hands of the media and politicians. Surf over to Slate (I won’t link), tune in to NPR, or listen to Debbie Wasserman Schultz speak on nearly any subject for examples.

So much is common knowledge. And I think fallacious instances of “Because I said so” are on the increase. This is because of many reasons—the usual suspects: scientism, ideology, political correctness, privilege, insularity, etc.—but one occasion for sin, a certain form of the fallacy, is not well known.

This is the form “We now know…”, usually put in service of some sociological, educational, psychological, or other loose science, like the effects of deadly rampant out-of-control tipping-point global warming.

Just like its father, the “We now know…” form of the argument from authority is sometimes valid and sound. A journalist might write, “We now know the neutrino has mass…” and cite some press release put out by some university. The journalist will be right, because in this case (you’ll have to trust me) the claim is true. But the “we” part is risible. The problem is not just that the reporter himself boasts indirectly of an expertise he does not have and has not earned, but that he encourages the same flippant behavior in his audience. And the audience, duly flattered, makes itself part of the “we”. “We now know” is then on everybody’s lips.

For many propositions from the hard sciences, as said, this is mostly harmless, because the “We now know…” won’t be fallacious. The problem is that the knowledge comes cheap and is thus subject to easy misinterpretation and incorrect extrapolation. This is because complex scientific propositions are usually highly conditional, filled with technical premises and other presuppositions, and these rarely make it to the popular level. People go off half cocked, as it were.

Actual hard scientists, in their own fields of competence, rarely fall into the trap, not taking anybody’s word for anything which they can prove for themselves. And so knowledge in the fields manned by rigorous technicians increases. But since nobody bats 1.000 and not every claim can be personally checked, the occasional error slips by.

No, the real problem, as usual, comes from fields which make fewer demands on their practitioners, and fewer still to none on their popular audiences. It’s going to be a man of some mental training who bothers to seek out and to read anything about neutrinos. But sociological claims and the like are available to one and all. Indeed, they are hard to escape, like (bad) music in restaurants.

The problem starts at the “top”. Here’s a typical example, the paper “Taking a Long View on What We Now Know about Social and Environmental Accountability and Reporting” in the Electronic Journal of Radical Organisation Theory. The paper is filled with “We now know…” propositions which are at best only sketchily supported, and others that are only wild surmises. Results from papers like this are fed to students and the public, and those who take joy in that most vague of notions “sustainability”, will uncritically add the propositions to the list of things “We now know…”

You can’t really blame the students, the dears, at least not fully. The serious fault is with inexpert experts, a large and growing class, a growth given impetus by the swelling of higher education. More people earning a “degree” means more professors, and since the gifts of intelligence are varied, this means a necessary expansion in “degrees” which require less effort (from both parties). It is in these fields the “We now know…” is mainly found. Compounding the problem is that the students who carry these “We now knows…” feel that their beliefs have been certified by their degrees.

The solution would thus appear to be a return to (or increase in, since it still partially exists) some idea of educational elitism, the idea that some forms of knowledge are better or more important than others. But given our insatiable craving for Equality, I don’t see it happening.

The Problem Of Grue Isn’t; Or, A Gruesome Non-Paradox About Induction

This emerald does not appear to be green, nor grue. Maybe Goodman was right!

Skepticism about induction happens only among academic philosophers, and only in print. Tell an induction skeptic to take a long walk off a short dock or hint that his health insurance will be cancelled and you will find an immediate and angry convert to Realism.

Some philosophers come to their skepticism about induction from puzzles which they are unable to solve and reason that, since they cannot solve the puzzles, it’s a good bet to side with skepticism. Well, in some ways this is natural.

A classic puzzle is Nelson Goodman’s “grue”. Goes like this. Grue is a predicate, like green or blue, but with a built-in ad hoc time component. Objects are grue if they are green and observed before 21 October 1978 or blue and observed after that date. A green grape observed 20 October 1978 and a blue bonnet observed 22 October 1978 are grue. But if you saw the green grape yesterday, or remember the blue bonnet from 1976, then neither are grue. The definition changes with the arbitrary date.

So imagine it’s before the Date and you’ve seen or heard of only green emeralds. Induction says future, or rather all unobserved, emeralds will also be green. But since it’s before the Date, these emeralds are also grue, thus induction also says all unobserved emeralds will be grue. Finally comes yesterday—and lo!—a green and not a blue emerald appears, thus not a grue emerald. Induction, which told us it should be grue, is broken!

There have been several exposures of the grue fallacy before, and up until the other day (another date!) I had thought David Stove’s in his Rationality of Induction was best. But I now cast my vote for Louis Groarke’s in his An Aristotelian Account of Induction. He calls belief in Goodman’s fallacy “an adamant will to doubt rather than an evidence-based example of a deep problem with induction” and likens it to the fallacy of the false question (e.g. “Have you stopped cheating on your taxes yet?”).

Groarke says (p. 65):

The proposition, “emeralds are grue,” [if true] can be unpacked into three separate claims: emeralds are green before time t (proposition 1); emeralds are blue after time t (proposition 2); and emeralds turn from green to blue at time t (proposition 3). Goodman illegitimately translates support for proposition 1 into support for proposition 2 and proposition 3. But the fact that we have evidence in support of proposition 1 does not give us any evidence in support of all three propositions taken together.

What does the arbitrary time have to do with the essential composition of an emerald? Not much; or rather, nothing. The reason we expect (via induction) unobserved emeralds to be green is we expect that whatever is causing emeralds to be green will remain the same. That is, the essence of what it is to be an emerald is unchanging, and that is what induction is: the understanding of this essence, and awareness of cause.

Groarke emphasizes that the time we observe something is not a fact about the object, but a fact about us. And what is part of us is not part of the object. Plus, the only evidence anybody has, at this point in time, is that all observed emeralds have been green. We even have a chemical explanation for why this is so, which paradox enthusiasts must ignore. Thus “there is absolutely no evidence that any emeralds are blue if observed after time t.”

Two things Groarke doesn’t mention. First is that, in real life, the arbitrary time t is ever receding into the future. I picked an obviously absurd date above; it’s absurd because we have all seen green emeralds but no blue ones up to today, which is well past 1978. The ad hoc date highlights the manufactured quality of the so-called paradox. When, exactly, should we use a grue-like predicate for anything?

Secondly, nobody not in search of reasons to be skeptical would have ever thought to apply a predicate like grue to anything. It is entirely artificial. If you doubt that, consider that you can substitute any other predicate after the arbitrary date. It doesn’t have to be blue. Try salty, hot, tall, or fast. An emerald that is green up until t then fast? That’s ridiculous! Yes, it is.

After showing the paradox isn’t, Groarke goes on to explain the possible reasons why the paradox has been so eagerly embraced. Cartesian corrosion. That bottomless skepticism which dear old Descartes introduced in the hope of finding a bedrock of certainty. There isn’t space here to prove that, but anybody who has read deeply in epistemology will understand what that means.

Update For a glimpse of how much angst the “problem” of grue has created, try this (or a similar) search. Also note the New & Improved title.

© 2014 William M. Briggs
