
Category: Philosophy

The philosophy of science, empiricism, a priori reasoning, epistemology, and so on.

October 21, 2008 | 11 Comments

Science is decided by committee

Scientists still do not appear to understand sufficiently that all earth sciences must contribute evidence toward unveiling the state of our planet in earlier times, and that the truth of the matter can only be reached by combining all this evidence. . . It is only by combining the information furnished by all the earth sciences that we can hope to determine ‘truth’ here, that is to say, to find the picture that sets out all the known facts in the best arrangement and that therefore has the highest degree of probability. Further, we have to be prepared always for the possibility that each new discovery, no matter what science furnishes it, may modify the conclusions we draw.

—Alfred Wegener.

We have all heard Wegener’s sad story. How all of “science” aligned against him and his bizarre, ridiculous, obviously false theory of continental drift. What happened, more or less, and certainly not formally, was that about 100 years ago all geologists got together and voted that Wegener had lost his mind. But, of course, and in fact, it was they who had, and from Wegener arose the fascinating study of plate tectonics.

Then there is the Rene Blondlot saga. All of “science” aligned against him, too, and his weird, silly, sad, and pathetic theory of n-rays. What happened was that about 100 years ago all physicists got together and came to the consensus that poor Blondlot had lost his mind. And, of course, he had. From Blondlot came the cautionary tale of how easy it is to fool yourself, even if you happen to be a very smart man. There are no n-rays.

I don’t want to dwell on the point here, but there is no such thing as science. There are things we know and things we don’t. There are more things we think are true, and many more we think are false. And that’s it. But for the purposes of this essay, I’ll, like everybody else, use the word but leave it vague and undefined.

Now, for every Wegener, there is at least one Blondlot and certainly hordes of nameless others, each touting their own personalized, probably false theories-of-everything. What this means is that when some person touts a theory which “science” denies, it is more likely that the theory is false than that it is true. Thus, it is usually rational, for example, to seek the opinion of Dr Smith of State U. on Joe Jones’s new theory of zero-point energy. That is, an appeal to the consensus is rational.

The opinion of a great many learned persons concentrated in one place is a good filter of nonsense and falsity. But this filter is too often applied indiscriminately and too assiduously, and it often blocks truth, particularly if the truth is new and different, or if it runs against a vogue that has taken a tight, but temporary, grip on the academic masses.

Max Planck: “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

Thus, in with the new, but only after those with the old are out. It’s often not as bleak as Planck painted it, of course. Some fields, especially when they’re young and unossified, grow by leaps and bounds, and each new idea, no matter how trivial or valid, is celebrated. It’s only after a field has had time to metastasize, that is, be formally recognized by its own separate department—complete with chairs (endowed, naturally), meetings, and new journals—that the filter becomes fully functional.

Academic freedom and its opposite

It might, then, not surprise you to learn that as a professor at a university, the locus of emanations and endless chanting about academic freedom, you cannot teach what you want. You cannot even study what you want. You can still think what you like, but, as we all know by now, you cannot always speak or write it.

What I mean by this is that peer review is not confined to accepting or rejecting journal articles. The sword is also wielded inside departments. Courses, for example, are decided upon by a committee. Many committees, actually. There is a departmental one (or more than one), then usually one at the “school”, or group-department level. There is sometimes another beyond that at the university-wide level.

Each level has to vet and approve any new course so that, among other things, it fits in among other courses, that the material is aligned with the consensus, and so on. These are reasonable goals, but the constrictions lead to tremendous inertia.

The amount of innovation (in teaching method or material) allowed in a course is proportional to its difficulty. Thus, very advanced courses—seminars,1 usually taught just to graduate students and other professors—are wide open. You can teach exactly what you like, require what you like, depart on any tangent. There is little consensus about what to teach or how.

But in 101-type classes, your behavior is strictly prescribed. The book2 is decided for you, the lesson plan is decided for you, and in some places even the quizzes, homeworks, and exams are decided for you. Again, usually this is not entirely bad. The closer the field is to being driven by logic or is empirically verifiable, the more likely that the basics in that field are known, and that the optimal order and method to teach the introductory concepts have been hammered out. So, for example, all physics students should learn that F = ma, “pre-calculus” students must know that ln(exp(x)) = x, and all chemistry students should know in what way a proton, neutron, and electron are different. Which is to say, there is a consensus about what is known and what is best about the fundamentals. This obviously works against you when the fundamentals have recently changed,3 or the fundamentals are in dispute.

A within-department consensus also always exists to ensure that the work professors do is limited. Most do not feel that it is a burden to toe the line; after all, most professors are hired to work on a specific sub-sub-area within a field, and it is this area in which they enjoy working. Academics attend meetings in their specialties and sub-specialties, where they discover which areas of work are popular. This group-think can lead to success of the kind we have certainly seen in many fields, but it also tends to narrow the scope of new work. We have all heard somebody tell us “You’re working on that? Nobody is interested in that.” And we have taken its meaning: work on something popular or your tenure or promotion will be more difficult.

The trend towards specialization has a built-in positive feedback. The more people work in a narrow field, the narrower that area becomes, or the more likely an area splits into two or more areas which also constrict. Again, not always bad, as this can lead to rapid progress, but it is clearly not the best model for all people or all fields. If you are hired to do radiative cloud modeling, for example, you are not encouraged to dabble in your neighbor’s boundary layer fluid flow problem. You certainly would receive odd looks if you were to suddenly discover an interest in, say, difference equations or philosophy. You might furtively work in these areas that are “not yours”, and you might even publish in them, but you will not receive any credit for doing so, and as I said above, papers published in other areas might even work against you: “She’s not focused” is a commonly heard phrase. Which is to say, broad curiosity is not rewarded; potentially stultifying specialization is.

On being wrong

The closer a field of study is to politics or to any area which involves human behavior, the more the consensus acts to keep people in line rather than to promote innovation. Non-consensus ideas are not welcome. Professors holding verboten thoughts are not hired, or if they are found out, they are let go, or they even leave voluntarily, tired of the process.

Naturally, the more a field agrees on what is actually true, the more strongly its consensus deserves to be sought. The problem—as you might have guessed—is that people in these human-centered fields often feel, as people in more physical fields do not, that they are in the grip of enlightenment, and so they always advocate the consensus stridently. The reasons for this are obvious and well known. Because people in areas which involve humans are prone to ill-informed zealousness, the solution seems to be that they should all be taught, and consistently reminded, that they might be wrong. This is the reason, after all, that, on average, people involved in physical areas are humbler: they have seen and verified their failures, and they have seen and acknowledged that their predictions sometimes are a bust.

Not all who work in physical areas are so lucky as to face correction. Today, there are at least two fields in which predictions are being made that either cannot be verified or cannot be verified until quite a lot of time has passed: string theory and climatology. The best these two fields can say is “Observations we have seen are consistent with our theory.” A true, or mostly true, statement. But, and I need hardly point this out, the observations can be equally, or even more, consistent with different theories, even theories which make opposite predictions. This is why making predictions is more important than explaining what we have already seen.

In fields where making predictions is more difficult, again, the human-centered or influenced ones, the local consensus is stronger, and people in those fields look more to the past to find observations which support their views. Evidence is picked over, and the best—in the sense of most agreeable—is kept, the rest discarded or explained away. The more a field is in the grip of explanation, the stronger the consensus will be, and of course the greater the chance that there will be splinter consensuses.

This is contrasted with fields in which (verifiable) prediction is king. There may be—there certainly are—splinter groups, but people can and do swear allegiance to more than one group. The consensus in these groups is more fluid and more likely to change on short notice. If there are many factions—explanations for a phenomenon—the first from which arises a correct prediction is the one that gains the most support. If that explanation can continue to make verifiable predictions, then eventually the explanation is accepted and becomes part of the consensus.

Everybody who agrees with me, raise their hands

So far we have seen that the consensus can work both for and against what is true. This should not be surprising. Research is done by people, and people have foibles. The process, on the whole, and especially in areas which do not involve human behavior, appears to be working. It is a clunky system, but it has shown results and still has promise.

The system breaks, as it always has, when people fall in love with an idea because that idea fits in with other deeply held beliefs, or when people simply want the idea to be true. When these like-minded people form a group and then a consensus, progress is halted, or even set back. These people need more experience with failure—that is, with acknowledging failure. I have no clear idea how to do this.

Naturally everything in this essay is subject to dozens of caveats and exceptions to the rule. The general theme sticks, however: people are generally too sure of themselves.


1Incidentally, these seminar courses are often taught “off the books” by the professor, meaning they do not always count towards the official teaching load. Credit for students taking seminars is usually limited, too.

2The difference between the 101-level books used in these courses is driven more by economics and fad than by fact or material.

3This happened in physics about 60-70 years ago, and is happening in statistics now.

October 16, 2008 | 45 Comments

Bad news for Bonobo

It turns out—shockingly, to some correct-thinking academics—that the bonobo ape is just as bloodthirsty as the rest of the higher primates. Yes, it’s true.

Bonobos, a sex- and peace-loving species of ape often held up as an exemplar for human emulation, like to hunt, kill, and eat other primates. Researchers first learned this by looking at bonobo poop, which contained more than just the expected half-digested berry seeds.

After the spoor had given up its secrets, researchers put a tail on some bonobos and discovered the truth: the apes hunt in packs, which is obviously more efficacious than hunting singly. Their prey, after all, is fast and wary.

Now, this wouldn’t be in the least interesting (or even surprising) except for a curious development in the Enlightened world (Europe, of course): Spain will grant human rights to apes.

Some of you will hail this special instance of Progressive thinking; but before you cheer let me remind you of a fact. Logically, you cannot have a right without entailing a responsibility. What this at least means is that if you grant “human” rights to apes, you must also ensure they own up to their “human” responsibilities.

Thus, if a certain ape were to, say, steal a banana from a fellow ape, he would be guilty of theft, and so must be held accountable and punished. If a gorilla were to be so bold as to take more than one mate, he must be prosecuted for polygamy. If a monkey, as monkeys sometimes do, kills a conspecific, then that monkey must pay the price (not the death penalty—that would be inhuman—but perhaps life in prison).

The immediate consequence is obvious. Spain will require an enormous number of translators so that, when a primate is brought to court, he can be made to understand the charges against him.

Meanwhile, in another Spanish-speaking country, Ecuador, people have just voted in a new constitution which—wait for it—grants human rights to Mother Nature. Some of the language from that constitution:

The State will apply precaution and restriction measures in all the activities that can lead to the extinction of species, the destruction of the ecosystems or the permanent alteration of the natural cycles.

The introduction of organisms and organic and inorganic material that can alter in a definitive way the national genetic patrimony is prohibited.

This can be read to mean, for example, that no more will farmers be allowed to breed their stock in an intelligent manner, nor will they apparently be allowed to use fertilizer. Tough luck for the farmers—and for the people who have to eat their food.

But to concentrate on the negatives that will befall some people misses the main point: this language snippet delineates certain rights for Mother Nature. And just like with the apes, you cannot have rights without responsibilities.

This obviously means that the next time a flood washes away some property, Mother Nature must be held accountable. When lightning kills a cat, a penalty must follow. If a person is killed in a storm… something must be done!

But I can’t bear to think of it. Because, as everybody knows, “You don’t fool with Mother Nature.”

(Thanks to SooperDave for finding the Mother Nature Clip!)

October 11, 2008 | 7 Comments

Looks like an own goal to me

Friend of humanity, meteorologist, and philosopher Tom Hamill reminds us of this clip:

Which reminds me of this clip, one of the very few songs whose lyrics I have managed to memorize:

October 9, 2008 | 21 Comments

Why probability isn’t relative frequency: redux

(Pretend, if you have, that you haven’t read my first weak attempt. I’m still working on this, but this gives you the rough idea, and I didn’t want to leave a loose end. I’m hoping the damn book is done in a week. There might be some Latex markup I forgot to remove. I should note that I am more than half writing this for other (classical) professor types who will understand where to go and what some implied arguments mean. I never spend much time on this topic in class; students are ready to believe anything I tell them anyway. )

For frequentists, probability is defined to be the frequency with which an event happens in the limit of “experiments” where that event can happen; that is, given that you run a number of “experiments” approaching infinity, the ratio of those experiments in which the event happens to the total number of experiments is defined to be the probability that the event will happen. This obviously cannot tell you what the probability is for your well-defined, possibly unique, event happening now, but can only give you probabilities in the limit, after an infinite amount of time has elapsed for all those experiments to take place. Frequentists obviously never speak about propositions of unique events, because in that theory there can be no unique events. Because of the reliance on limiting sequences, frequentists can never know, with certainty, the value of any probability.
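In symbols (my own shorthand for this standard definition, not a quotation from any particular frequentist text): if n is the number of experiments and n_A is the number of them in which the event A happens, then

    Pr(A) = \lim_{n \to \infty} \frac{n_A}{n},

and the frequentist takes this limit, should it exist, to be the probability of A.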

There is a confusion here that can be readily fixed. Some very simple math shows that if the probability of A is some number p, and it’s physically possible to give A many chances to occur, the relative frequency with which A does occur will approach the number p as the number of chances grows to infinity. This fact—that the relative frequency sometimes approaches p—is what led people to the backward conclusion that probability is relative frequency.
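If you want to watch that convergence for yourself, here is a minimal sketch in Python (my own illustration, not anything from the book; the fair die, the choice of face, and the toss counts are all assumptions made for the example):

    import random

    # Assume a fair six-sided die; track how often "the die shows 6" happens
    # and compare the relative frequency against the assumed p = 1/6.
    random.seed(1)

    p = 1 / 6  # the probability we assume for the event A = "the die shows 6"

    for n in (100, 10_000, 1_000_000):
        hits = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
        print(f"n = {n:>9,}  relative frequency = {hits / n:.4f}  (p = {p:.4f})")

Run it and the relative frequency creeps toward 1/6 as n grows, which is exactly the convergence described above—not a definition of the probability.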

Logical probabilists say that sometimes we can deduce probability, and both logical probabilists and frequentists agree that we can use the relative frequency (of data) to help guess something about that probability if it cannot be deduced1. We have already seen that in some problems we can deduce what the probability is (the dice throwing argument above is a good example). In cases like this, we do not need to use any data, so to speak, to help us learn what the probability is. Other times, of course, we cannot deduce the probability and so use data (and other evidence) to help us. But this does not make the (limiting sequence of that) data the probability.

To say that probability is relative frequency means something like this. We have, say, observed some number of die rolls which we will use to inform us about the probability of future rolls. According to the relative frequency philosophy, those die rolls we have seen are embedded in an infinite sequence of die rolls. Now, we have only seen a finite number of them so far, so this means that most of the rolls are set to occur in the future. When and under what conditions will they take place? How will those as-yet-to-happen rolls influence the actual probability? Remember: these events have not yet happened, but the totality of them defines the probability. This is a very odd belief to say the least.

If you still love relative frequency, it’s still worse than it seems, even for the seemingly simple example of the die toss. What exactly defines the toss, what explicit reference do we use so that, if we believe in relative frequency, we can define the limiting sequence?2. Tossing just this die? Any die? And how shall it be tossed? What will be the temperature, dew point, wind speed, gravitational field, how much spin, how high, how far, for what surface hardness, what position of the sun and orientation of the Earth’s magnetic field, and on and on to an infinite list of exact circumstances, none of them having any particular claim to being the right reference set over any other.

You might be getting the idea that every event is unique, not just in die tossing, but for everything that happens—every physical thing that happens does so under very specific, unique circumstances. Thus, nothing can have a limiting relative frequency; there are no reference classes. Logical probability, on the other hand, is not a matter of physics but of information. We can make logical probability statements because we supply the exact conditioning evidence (the premises); once those are in place, the probability follows. We do not have to include every possible condition (though we can, of course, be as explicit as we wish). The goal of logical probability is to provide conditional information.

The confusion between probability and relative frequency was helped along because people first got interested in frequentist probability by asking questions about gambling and biology. The man who initiated much of modern statistics, Ronald Aylmer Fisher3, was also a biologist who asked questions like “Which breed of peas produces larger crops?” Both gambling and biological trials are situations where the relative frequencies of the events, like dice rolls or ratios of crop yields, can very quickly approach the actual probabilities. For example, drawing a heart out of a standard poker deck has logical probability 1 in 4, and simple experiments show that the relative frequency of hearts drawn quickly approaches this. Try it at home and see.
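Or try it at the keyboard. Another minimal sketch (the deck construction and the number of draws are my own illustrative choices, not the book’s):

    import random

    # Draw a card at random from a 52-card deck, note whether it is a heart,
    # replace it, and repeat. The logical probability is 13/52 = 1/4, and the
    # relative frequency should settle near that value.
    random.seed(1)

    suits = ["hearts", "diamonds", "clubs", "spades"]
    deck = [(rank, suit) for suit in suits for rank in range(1, 14)]  # 52 cards

    draws = 100_000
    hearts = sum(1 for _ in range(draws) if random.choice(deck)[1] == "hearts")
    print(f"relative frequency of hearts after {draws:,} draws: {hearts / draws:.4f}")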

Since people were focused on gambling and biology, they did not realize that some arguments that have a logical probability do not equal their relative frequency (of being true). To see this, let’s examine one argument in closer detail. This one is from Sto1983, Sto1973 (we’ll explore this argument again in Chapter 15):

Bob is a winged horse
Bob is a horse

The conclusion given the premise has logical probability 1, but has no relative frequency because there are no experiments in which we can collect winged horses named Bob (and then count how many of them are horses). This example, which might appear contrived, is anything but. There are many, many other arguments like this; they are called counterfactual arguments, meaning they start with a premise that we know to be false. Counterfactual arguments are everywhere. At the time I am writing, a current political example is “If Barack Obama did not get the Democrat nomination for president, then Hillary Clinton would have.” A sad one, “If the Detroit Lions had made the playoffs last year, then they would have lost their first playoff game.” Many others start with “If only I had…” We often make decisions based on these arguments, and so we often have need of probability for them. This topic is discussed in more detail in Chapter 15.

There are also many arguments in which the premise is not false and yet there does not, or cannot, exist any relative frequency of the conclusion being true; however, a discussion of these brings us further than we want to go in this book.4

Haj1997 gives fifteen—count ’em—fifteen reasons why frequentism fails, and he references an article with fifteen more, most of which are beyond what we can look at in this book. As he says in that paper, “To philosophers or philosophically inclined scientists, the demise of frequentism is familiar”. But word of its demise has not yet spread to the statistical community, which tenaciously holds on to the old beliefs. Even statisticians who follow the modern way carry around frequentist baggage, simply because, to become a statistician, you are required to learn the relative frequency way first before you can move on.

These detailed explanations of frequentist peculiarities are to prepare you for some of the odd methods and the even odder interpretations of these methods that have arisen out of frequentist probability theory over the past ~ 100 years. We will meet these methods later in this book, and you will certainly meet them when reading results produced by other people. You will be well equipped, once you finish reading this book, to understand common claims made with classical statistics, and you will be able to understand its limitations.

(One of the homework problems associated with this section)
Extra: A current theme in statistics is that we should design our procedures in the modern way but such that they have good relative frequency properties. That is, we should pick a procedure for the problem in front of us that is not necessarily optimal for that problem, but that when this procedure is applied to similar problems the relative frequency of solutions across the problems will be optimal. Show why this argument is wrong.

1The guess is usually about a parameter and not the probability; we’ll learn more about this later.

2The book Coo2002 examines this particular problem in detail.

3While an incredibly bright man, Fisher showed that all of us are imperfect when he repeatedly touted a ridiculously dull idea: eugenics. He figured that you could breed the idiocy out of people by selectively culling the less desirable. Since Fisher also has a strong claim on the title Father of Modern Genetics, many other intellectuals at the time—all with advanced degrees, all highly educated—agreed with him about eugenics.

4For more information see Chapter 10 of Sto1983.