
Category: Philosophy

The philosophy of science, empiricism, a priori reasoning, epistemology, and so on.

August 19, 2008 | 10 Comments

Roger Kimball’s Challenge

The famous writer Roger Kimball has issued a challenge:

Name the silliest argument to be offered by a serious academic in the last 25 years and to be taken up and be gravely masticated by the larger world of intellectual debate.

A leading contender is Global Warming. Kimball’s own entry is Francis Fukuyama’s “end of history” thesis.

Kimball’s rules of the contest:

I’ll collect proposals for the next week or two and then announce the winner. (The decision, from which there is no appeal, will be determined by a committee staffed, overseen, and operated entirely by me.)

This contest reminds me of the one held by philosopher David Stove, who sought to find The World’s Worst Argument. The winning entry was submitted by Stove himself:

We can know things only: as they are related to us; under our forms of perception and understanding; insofar as they fall under our conceptual schemes, etc. So, we cannot know things as they are in themselves.

By “worst”, according to Jim Franklin, his literary executor (Stove is dead) and student, Stove “meant that it had to be extremely bad logically and also it had to be very widely believed.” (Quote is near the end of the link.)

We’ll discuss Stove’s worst argument another time, but the thing to notice now is this. Since he was so familiar with bad arguments of every stripe, Stove capered to the finish line. No other entry had a chance. It is probably the same with Kimball: he knows too many appalling arguments, so it will be difficult to beat him.

Readers of this blog might also nominate “Global warming” for Kimball’s contest, but the category is ambiguous; it lacks specificity. Of course there is global warming, and mankind is certainly responsible for some of it (think of thermometers placed in urban settings that have grown in population through time). The entry has to be clarified before it has a chance. I don’t think Al Gore will collect this prize.

It will be difficult, therefore, to beat the “End of History” nonsense. This is Fukuyama’s thesis, first promulgated at the end of the Cold War, that “The end of history as such” has been reached; that we have realized “the evolution and the universalization of Western liberal democracy as the final form of human government”; and that “the ideal will govern the material world in the long run.”

(As Russia is clipping Georgia, I realize I should have written “First Cold War” in the previous paragraph.)

As Stove himself (Kimball has edited a volume of his writings) said:

[T]he mixture which Fukuyama expects to freeze history forever–a combination of Enlightenment values with the free market–is actually one of the most explosive mixtures known to man. Fukuyama thinks that nothing will ever happen again because a mixture like that of petrol, air, and lighted matches is widespread, and spreading wider. Well, Woodrow Wilson thought the same; but it is an odd world view, to say the least.

It’s a strong contender, this silly argument, and will likely win. But we shouldn’t acquiesce without a fight. Here is my entry (well, it’s a modification of my entry; I clicked “submit” too quickly):

Moral Equivalence

This is the thesis that all ideas are ethically commutable. Moral equivalence often goes by the terms “Diversity” and “Multiculturalism.”

Diversity, as in “we value diversity in our student body.” One major ivy-league university, for example, states that it “is committed to extending its legacy recruiting a heterogeneous faculty, student body and staff; fostering a climate that doesn’t just tolerate differences but treasures them [etc.]” You cannot now find a university that isn’t constantly and loudly devoted to diversity.

However, we can be sure that by this they do not—and should not—mean intellectual diversity. This should be obvious. For if we merely wanted to increase intellectual diversity, we would create classes and recruit subject matter experts in “How to Murder”, “Advanced Pedophilia”, “Creative Robbery”, “Marxist Theory”, or similar idiocies. You often hear conservatives ask to increase intellectual diversity on campuses; conservatives are arguing poorly, because they really mean they want to increase conservative thought.

Diversity, then, cannot mean intellectual diversity. Therefore, to “increase diversity” usually means to “without regard to merit, forcibly manipulate the ratios of student/faculty races so that they match an (unstated) specific goal.” Of course, this implies quotas, which is to say, legalized discrimination based on race. Incidentally, statistically speaking, it is nearly impossible to achieve “diversity” without resort to forced quotas—I’ll talk about this another time.

Multiculturalism is just as bizarre. I have often thought it would be instructive to set up a “Multiculturalism Booth” at a college fair. Participants would take part in common rituals of many different cultures. For example, there would be the stoning of homosexuals, the honorable murder of raped women, the clitorectomy ring toss, a foot race whereby the losers are killed and eaten, and so on. Naturally, the booth’s staff would be equipped with native costumes and pamphlets describing the history and cultural relevance of each topic. This is meant to be educational, after all. At the end of the day, those who survived would be given a survey asking their opinion on the importance of multiculturalism.

How many participants would finally admit that all cultures are not equal, that some are better than others?

August 8, 2008 | 12 Comments

The B.S. octopus

Jonathan Bate, of Standpoint, recently wrote an essay “The wrong idea of a university”:

It used to work like this. Dr Bloggs, the brilliant scholar who had solved the problem of the variant quartos of Christopher Marlowe’s Dr Faustus, was one of the most boring teachers on God’s Earth. Mr Nobbs, who never got around to finishing his PhD on the image of the sea in English Literature, let alone publishing any academic articles, was an awe-inspiring teacher: he had read everything and could instil in his students a passion for the subject that would stay with them all their lives. All the Head of the English Department had to do was give Nobbs a heavy teaching load, which delighted both him and the students, and Bloggs a light one, which also delighted the students and gave him more time alone with his textual collations. The department was a happy place.

But then along came the RAE. Bloggs’s work was just the stuff to bring the department the money that came with a five-star rating. Nobbs, to the distress of the students, was pensioned off as “non-returnable”. The next generation of academics learnt the lesson. They finished their PhDs and started up new journals in which to get their work published. They developed more and more specialised areas of expertise. (The RAE is the Research Assessment Exercise, an attempt to quantify academic quality in England.)

As proof that university politics have not changed in the 100 years since William James wrote his now-famous essay “The PhD Octopus”, there is this quote:

Some years ago, we had at our Harvard Graduate School a very brilliant student of Philosophy, who, after leaving us and supporting himself by literary labor for three years, received an appointment to teach English Literature at a sister-institution of learning. The governors of this institution, however, had no sooner communicated the appointment than they made the awful discovery that they had enrolled upon their staff a person who was unprovided with the Ph.D. degree. The man in question had been satisfied to work at Philosophy for her own sweet (or bitter) sake, and had disdained to consider that an academic bauble should be his reward.

His appointment had thus been made under a misunderstanding. He was not the proper man; and there was nothing to do but inform him of the fact. It was notified to him by his new President that his appointment must be revoked, or that a Harvard doctor’s degree must forthwith be procured. (Be sure to read the rest of this essay.)

It will probably always be thus at universities: no PhD, no respect. Good thing nobody told Einstein, however, who had his miracle year long before he actually had the credentials for it. Good thing, too, that nobody told the editors of the journals where he submitted his papers, nor did anyone notify the readers of those journals of Einstein’s lack of bona fides.

Actually, in areas like physics, chemistry, math, and so on, lack of credentials is still not a barrier for authors to gain consideration. Anybody is free to submit a paper, and due to the blinded, or even double-blinded, refereeing policies at these journals, the paper will get something like a fair hearing. If the paper is published, still nobody will know that the person who wrote it lacks certification (unless, of course, somebody knows the person).

Incidentally, medicine is an exception to this rule. Every paper in a medical journal lists the authors’ credentials after their names. Papers are festooned with MDs, MPHs, DDSs, EdDs, DOs, PhDs, and every other possible combination of letters. I have even seen some with the lowly BS. I have been unable to find any other field—I have looked in English, sociology, history, and so on—that maintains this silly practice. The Appeal to Authority remains strong in medicine. Too, these authors like to see their degrees displayed prominently. Well, who didn’t know that physicians have large egos? (When medical co-authors ask me for my letters, I tell them “HS”, which I almost got away with once until a journal editor caught it. HS = High School.)

Although anybody is free to submit papers regardless of their formal education, few to none will actually work for a university as a professor without the actual blessing. This is so well known as to be unexceptional. It is not usually the professors in the department employing a teacher sans PhD who care. I have known two exceptional men who were welcomed, more than welcomed, in their departments in spite of their lack of letters. It is usually the administration who insist. They see the spreadsheet before them with a blank column by the man’s name and they balk, unable and unwilling to grant the title “professor”, regardless of the teacher’s ability, unless that column can be filled in. But this has been the way of the world for at least the last century.

What is more pernicious is that the desire for credentials has spread to nearly every area of society. People used to be able to get jobs with nothing more than high school educations. While it’s probably true that the content of a high school education nowadays is less than it used to be, it is still sufficient to allow somebody to, for example, be an assistant manager at a Jamba Juice. That company, we learn from Monster.com, is soliciting applications for the position, advising applicants that a “Bachelor’s” degree is preferred. Do you really need a BS or BA to learn how to prepare and pour a smoothie? Like many job postings, this one merely says “degree wanted” and is indifferent to the field of study, which is proof that the “degree” is not a necessity.

It is true that, generally, more knowledge is better than less, and that colleges attempt to give students more. But it is not clear that what colleges attempt to teach is the sort of knowledge that is useful to being a manager at Jamba Juice. Nor is it even close to true that the only or best or ideal way to gain knowledge is by attending college, especially to acquire job-specific knowledge.

A typical answer from students about why they are attending college is “to get a degree.” Note carefully that this is not the same as “to learn all about biology” or physics, or English literature, or whatever. Or to learn how to be a better citizen or lead an examined life or become, as the hackneyed phrase has it, “well rounded.” Some will say they are at college to “get an education”, which is synonymous with “get a degree”, because, as I hope you know, education is not equivalent to knowledge.

Getting a degree, and not necessarily gaining knowledge, is a rational thing for students to do. This is because they know, as we have just seen, that employers explicitly require “degrees.” It is true that some employers also require field-specific knowledge, but this is not stressed strongly, or at all, for entry-level candidates. Businesses will teach people what they need to know to do their jobs once they get there, except in certain highly technical areas, where some competency with computers and extensive numeracy are expected. Students can gain these skills in college, but they could just as easily have attained them at a trade school in half the time at half the cost. But more and more, non-technical, non-complex jobs require Bachelor’s degrees, mainly because, well, because businesses have convinced themselves “degrees” are needed.

On the whole, employers—and civilians, too—view a Bachelor’s “degree” as something magical, imbuing its holder with special powers—but not necessarily special knowledge. You’ll have heard stories of some person, wholly competent in her job, who is paid a low salary because she has not yet attained her “degree.” Once she comes by it, she is immediately given a raise, because people with “degrees” of course rate a higher salary. Or you might know of another person who everybody agrees should be promoted and given extra responsibility, but, sorry, no degree, so the promotion cannot be given. Everybody is heartily sorry for it, of course, but what can they do? It’s a degree we’re talking about, after all.

It is true that knowing whether a person has advanced educational credentials helps predict whether they will be able to accomplish some task. But it is not wholly predictive, and not even mostly predictive. People are fooling themselves by weighing the evidence of letters after a name too strongly. It has also been observed that the more education a person has, the less likely that person is to admit a mistake or ignorance on any subject.

A host of “experts” have exploded into public life over the past twenty or thirty years. There is a credentialed expert for any subject imaginable, ready to be dragged out and placed on television to say why this or that is so. Businesses regularly host expensive consultants with “MBAs” from “good schools” to tell them how to do their jobs. Government routinely taps academia to justify or give blessing to what it wants to do. The letters after the name of the expert are enough for most people to accept what is uttered unquestioningly. Having somebody make decisions for you is also comforting and easy. Objecting to what an expert says, unless the dissident is at least as credentialed as the expert, is seen as distasteful, and in some cases even immoral (see two posts back). There is much more to say on this subject, but for now we can note that the old rule that the best argument wins has been lost.

I don’t think anything can stop or reverse this trend of the increasing hunger for degrees. Jobs that used to require nothing except intelligence now require a Bachelor’s. Some of them prefer a Master’s. Soon, a PhD will be the minimum. But we should all remember the words of Frank Mundus, the famous (uncredentialed) shark hunter whose life partly formed the basis for the fisherman Quint character in the book and movie Jaws. Some “PhDs” once took exception to a belief he espoused about a certain behavior in sharks. Mundus was proved right and the “experts” wrong. In reply he said, “A PhD don’t mean shit.”

Amen, brother.

By the way, in case you were nervous, your author (me) has a PhD in statistics from Cornell, so you know I know what I’m talking about.

July 21, 2008 | 7 Comments

A Rhetorical Question

A tip of the hat to Dennis Dutton’s Arts and Letters Daily where this article from the Chronicle of Higher Education appeared. Incidentally, since the Chronicle bought out A&LD, there have been a lot more links to Chronicle stories, which inevitably means some weaker stories get linked.

Like this one by Russell Jacoby, entitled “Gone, and Being Forgotten: Why are some of the greatest thinkers being expelled from their disciplines?” He starts:

How is it that Freud is not taught in psychology departments, Marx is not taught in economics, and Hegel is hardly taught in philosophy? Instead these masters of Western thought are taught in fields far from their own. Nowadays Freud is found in literature departments, Marx in film studies, and Hegel in German.

Jacoby seems to think that these three were “leading historical thinkers” and he laments their absence from college syllabi. He sneers, “Psychology without Freud, economics without Marx, philosophy without Hegel: For disciplinary cheerleaders, this confirms intellectual progress. The cloudy old thinkers have made way for new scientific researchers.”

Well, yes.

No psychologist, after reading Frederick Crews’s devastating critiques, can take Freud seriously; no economist who has at least a passing knowledge of history can but laugh at Marx; and no philosopher familiar with David Stove’s quip that Hegel was a philosopher “of the kind who would quickly starve to death if [his] food supply depended on [his] ability to argue” would spend much time with our German friend.

The reason, then, that these gentlemen are not taught in their respective fields is simple: their theories have been discredited and so should not be taught—except insofar as they are historically interesting. For example, it is a fascinating sociological question to ask why people were so eager to believe in Freud’s manufactured experimental “evidence”, or why so many who believed in communal “ownership” of property were so willing to slaughter their fellows in the name of utopia. These three “thinkers” were “leaders” in the sense that they won many converts based upon faulty, “cloudy” reasoning, but not because their ideas were any good.

In the field of physics, for example, Newton is still taught in mechanics, even though his ideas have been superseded by relativity and quantum mechanics, because Newtonian mechanics are more than reasonable approximations to everyday phenomena. Too, physics texts do not spend much time on failed, obviously false theories, because the purpose of these books is to teach what is true and useful. Physics books are not meant to be works of history. There is no such analogy for Jacoby’s crew. The theories of Freud and company are not approximately true or even useful, but just plain false or intentionally misleading. Their names can, of course, show up in the relevant psychology, economics, and philosophy courses as examples of “what not to do”, but that’s about it.

Jacoby’s question, then, is rhetorical. They are not taught because they should not be taught.

That this is recognized means that there is some hope in academia. But it is significant that Jacoby found Freud being taught in literature courses, Marx in “film studies” programs, and Hegel in German language classes. Obviously, the people who teach these subjects will, on average, know less about psychology, economics, and philosophy than their peers in those fields. Meaning, naturally, that they will be less well equipped to know which ideas are bad and which good. They will more likely act on what they hope or wish to be true. They will have a better chance of being seduced by easy theories.

Bad ideas never die. They just shift departments.

July 1, 2008 | 9 Comments

Wired’s theory: the end of theory

Chris Anderson, over at Wired magazine, has written an article called The End of Theory: The Data Deluge Makes the Scientific Method Obsolete.

Anderson, whose thesis is that we no longer need to think because computers filled with petabytes of data will do that for us, doesn’t appear to be arguing seriously—he’s merely jerking people’s chains to see if he can get a rise out of them. It worked in my case.

Most of the paper was written, I am supposing, with the assistance of Google’s PR department. For example:

Google’s founding philosophy is that we don’t know why this page is better than that one: If the statistics of incoming links say it is, that’s good enough. No semantic or causal analysis is required.

He also quotes Peter Norvig, Google’s research director, who said, “All models are wrong, and increasingly you can succeed without them.”

Lastly,

The scientific method is built around testable hypotheses…. The models are then tested, and experiments confirm or falsify theoretical models of how the world works…. But faced with massive data, this approach to science – hypothesize, model, test – is becoming obsolete.

Part of what is wrong with this argument is a simple misconception of what the word “model” means. Google’s use of page links as indicators of popularity is a model. Somebody thought of it, tested it, found it made reasonable predictions (as judged by us visitors who repeatedly return to Google because we find its link suggestions useful), and thus became ensconced as the backbone of its rating model. It did not spring into existence simply by collecting a massive amount of data. A human still had to interact with that data and make sense of it.
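To make that concrete, here is a minimal sketch, in Python, of what a “popularity from links” rule looks like once somebody writes it down. The link graph and the scoring rule are invented for the illustration; this is not Google’s actual algorithm, only an example of the kind of scheme a human has to choose.

```python
# A toy "popularity" model: score each page by how many other pages link to it.
# The link graph is made up for this example.
from collections import Counter

links = {
    "page_a": ["page_b", "page_c"],
    "page_b": ["page_c"],
    "page_c": ["page_a"],
    "page_d": ["page_a", "page_c"],
}

# The model itself: a page's score is its count of incoming links.
incoming = Counter(target for targets in links.values() for target in targets)

for page, score in incoming.most_common():
    print(page, score)   # page_c 3, page_a 2, page_b 1
```

However elaborate the real thing, it is the same in kind: a rule somebody proposed, tested against what users found useful, and kept.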

Norvig’s statement, which is false, is typical of the sort of hyperbole commonly found among computer scientists. Whatever they are currently working on is just what is needed to save the world. For example, probability theory was relabeled “fuzzy logic” when computer scientists discovered that some things are more certain than others, and nonlinear regression was re-cast as mysterious “neural networks,” which aren’t merely “fit” with data, as happens in statistical models; instead, they learn (cue the spooky music).
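In the same spirit, here is a minimal sketch of what a tiny neural network’s “learning” amounts to: choosing parameters to minimize squared error, which is to say, fitting a nonlinear regression. The data, network size, and step size below are my inventions for the illustration.

```python
# A one-hidden-layer network "learns" by gradient descent on squared error,
# i.e., it is fit to the data like any other nonlinear regression.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x) + 0.1 * rng.normal(size=x.shape)   # noisy nonlinear relationship

hidden = 10
W1 = rng.normal(size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(size=(hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    h = np.tanh(x @ W1 + b1)        # hidden-layer outputs
    y_hat = h @ W2 + b2             # network predictions
    err = y_hat - y
    # Backpropagation is just the chain rule applied to the squared-error loss.
    grad_pre = (err @ W2.T) * (1 - h**2)
    W2 -= lr * (h.T @ err) / len(x)
    b2 -= lr * err.mean(axis=0)
    W1 -= lr * (x.T @ grad_pre) / len(x)
    b1 -= lr * grad_pre.mean(axis=0)

print("mean squared error after fitting:", float((err**2).mean()))
```

There is nothing spooky in there: a functional form, a fitting criterion, and an optimization routine, the same ingredients as any regression.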

I will admit, though, that their marketing department is the best among the sciences. “Fuzzy logic” is absolutely a cool-sounding name, one which beats the hell out of anything other fields have come up with. But maybe they do their job too well, because computer scientists often fall into the trap of believing their own press. They seem to believe, along with most civilians, that because a prediction is made by a computer it is somehow better than if some guy made it. They are always forgetting that some guy had to first tell the computer what to say.

Telling the computer what to say, my dear readers, is called—drum roll—modeling. In other words, you cannot mix together data to find unknown relationships without creating some sort of scheme or algorithm, which are just fancy names for models.

Very well—there will always be models and some will be useful. But blind reliance on “sophisticated and powerful” algorithms is certain to lead to trouble. These models are based upon classical statistical methods, like correlation (not always linear), and it is easy to show that finding spurious relationships becomes all but certain as the data grow, particularly as the number of variables grows. The number of these false signals grows at a fast clip. In other words, the more data you have, the easier it becomes to fool yourself.
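A small simulation, a sketch with numbers I made up for the purpose (the sample size, the variable counts, the 5% cutoff), shows how quickly the false signals pile up. The variables are pure noise, so every “significant” correlation found is spurious.

```python
# Count "significant" correlations among variables that are, by construction,
# unrelated: roughly 5% of all pairs clear the bar by chance, so the number
# of false signals grows with the number of pairs.
import numpy as np
from itertools import combinations
from scipy import stats

rng = np.random.default_rng(1)
n_obs = 100   # observations per variable

for n_vars in (10, 50, 100):
    data = rng.normal(size=(n_obs, n_vars))   # pure noise
    false_hits = 0
    for i, j in combinations(range(n_vars), 2):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if p < 0.05:
            false_hits += 1
    n_pairs = n_vars * (n_vars - 1) // 2
    print(f"{n_vars} variables, {n_pairs} pairs: "
          f"{false_hits} spurious 'significant' correlations")
```

With 100 variables there are 4,950 pairs to test, and about five percent of them will pass by chance alone; add more variables and the count of spurious findings only climbs.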

Modern statistical methods, no matter how clever the algorithm, will not bring salvation either. The simple fact is that increasing the size of the data increases the chance of making a mistake. No matter what, then, a human will always have to judge the result, not only in and of itself, but also how it fits in with what is known in other areas.

Incidentally, Anderson begins his article with the hackneyed, and false, paraphrase from George Box: “All models are wrong, but some are useful.” It is easy to see that this statement is false. Suppose I give you only this evidence: I will throw a die which has six sides, exactly one of which is labeled ‘6’. Given that evidence, the probability that I see a ‘6’ is 1/6. That probability is a model of the outcome. Further, it is the correct model.
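Written out as a formula (the notation is mine), with E standing for the evidence that the die has six sides and exactly one of them is labeled ‘6’:

$$
\Pr(\text{see a `6'} \mid E) = \frac{1}{6}.
$$

That conditional probability is deduced from E and nothing else; it is the model of the outcome, and it is correct.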