Category: Philosophy

The philosophy of science, empiricism, a priori reasoning, epistemology, and so on.

March 3, 2009 | 39 Comments

You have no choice but to read this

Your decisions are not your own

Our gut instinct, our experience, is that we make the decisions to move, to think, to eat, to steal, to lie, to punch and kick. We have constructed the entire edifice of our civilisation on this idea. But science says this free will is a delusion. According to the world’s best neuroscientists, we are brain-machines. Our brains create the sense that somewhere within them is the “you” that makes decisions. But it is an illusion; there is no ghost in the machine. What does this mean for our sense of self? And for our morality – can we prosecute people for acts over which they had no conscious control?

Or so says London Times writer Michael Brooks. Free Will is one of his 13 Unsolved scientific puzzles (linked at the indispensable Arts & Letters Daily).

The most interesting part of this story is that the Times linked back to a previous article on the question, first published in 1877:

That brilliant speaker, Professor Tyndall, lecturing at Birmingham the other day, adopted frankly the theory of necessity, and in the name of conscience dismissed free-will henceforward from all civilized society. To the obvious remonstrance of the murderer against his punishment, that we hang him for what he could not help, the Professor answers in these words,—“We entertain no malice against you, but simply with a view to our own safety and purification we are determined that you, and such as you, shall not enjoy liberty of evil action in our midst…You offend, because you cannot help offending.”

Professor Tyndall was merely stating consequences that follow from the deduction that if all things are material, and we are material, and that since all material effects have causes, all of our actions are thus materially determined. That is, we cannot have free will. We act as we do because we cannot help but act as we do.

Not much has changed since Tyndall held forth. Now we have C. M. Fisher in the journal Medical Hypotheses wondering how free will can be possible since we have neurons, which he has cleverly determined are material, and since they are material etc. He says, “This has implications for voluntary behavior and the doctrine of free will.” Doctrine?

How do neurons cause things to happen without will? A common view is provided by John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences in Germany. Haynes used a magnetic phrenology machine to show that areas of the brain “light up” before people say they are aware of making a decision. Therefore free will might be an illusion.

Other scientists have also discovered material neurons: for example, Princeton’s Joshua Greene and Jonathan Cohen (quoted here):

…the law’s intuitive support is ultimately grounded in a metaphysically overambitious, libertarian notion of free will that is threatened by determinism and, more pointedly, by forthcoming cognitive neuroscience…. The net effect of this influx of scientific information will be a rejection of free will as it is ordinarily conceived, with important ramifications for the law…[And on crime] demonstrating that there is a brain basis for adolescents’ misdeeds allows us to blame adolescents’ brains instead of the adolescents themselves.

You get the drift by now: when people do bad things, it’s not their fault. They were made to do evil by their selfish genes (as Dawkins would say) or by their oblivious neurons (as many scientists say).

Thus, when I say C.M. Fisher is a fool, and when I suggest that Joshua Greene and Jonathan Cohen have unnatural relations with horses, it’s not my fault. I cannot help but write those words, just as Greene and Cohen cannot help hanging around stables.

I couldn’t even stop myself if I wanted to. Worse, there is no “I” to stop. The “I” that is typing is just a mass of tissue following a predetermined path.

Since it is not the perpetrator’s fault for raping your daughter (he had no choice), these Enlightened folks conclude we cannot punish them. It would be the same as punishing a cloud for dropping unwanted rain.

Every modern argument against free will reaches that same “progressive” conclusion: bad people should not be punished. Only our old friend Professor Tyndall was wise enough to see the flaw in that ridiculous argument. The criminal does evil because he cannot help himself, but since there is no free will, “We punish, because we cannot help but punish.” Everybody does what they do because they have no choice.

If there is no free will, we cannot change our behavior to take account of the enlightened non-punishment idea. No one can change their behavior to account for anything. Every action is set. There is no way to stand outside of our genes/neurons to direct them in an enlightened manner to do our bidding. Even college professors, even those at Princeton, are stuck in an eternal rut.

That we cannot see how there could be free will, given the implications of certain theories and beliefs, only means that some or all of those theories and beliefs are wrong or incomplete. It does not mean that the observation of free will is in error.

This is not the place to say why free will is possible (please don’t mention the study of movement in discrete units). Free will is obvious. And it is even obvious to those people who say there is no free will.

If you disagree with me, pause for a moment and consider that “I” cannot help saying what I am saying, so there is no use for you to tell me I am wrong.

February 25, 2009 | 30 Comments

Peer review

Here is how peer review roughly works.

An author sends a paper to a journal. An editor nearly always sends the paper on to two or more referees. The referees read the paper with varying degrees of closeness, and then send a written recommendation to the editor saying “publish” or “do not publish.” The editor can either accept or ignore the referees’ recommendations.

The paper is then either published, sent back to the author for revision, or rejected.

If the paper is rejected, the author will usually submit it to another journal, where the peer review process begins anew. This cycle continues until either the paper is published somewhere (the most typical outcome) or the author tires and quits.

Here are two false statements:

(A) All peer-reviewed, published papers are correct in their findings.

(B) All papers that have been rejected1 by peer review are incorrect in their findings.

These statements are also false if you add “in/by the most prestigious journals” to them. (A) and (B) are false in every field, too, including, of course, climatology.

A climatology activist might argue, “Given what I know about science, this peer-reviewed paper contains correct findings.” This is not a valid argument because (A) is false: the climatology paper might have findings which are false.

If the activist instead argued, “Given what I know, this peer-reviewed paper probably contains correct findings” he will have come to a rational, inductive conclusion.

But a working climatologist (gastroenterologist, chemist, etc., etc.) will most likely argue, “Given my experience, this peer-reviewed paper has a non-zero chance to contain correct findings.” Which is nothing more than an acknowledgment that (A) is false.

The “non-zero chance” will be modified to suit his knowledge of the journal and the authors of the paper. For some papers, the chance of correct findings will be judged high, but for most papers, the chance of correct findings will be judged middling, and for a few it will be judged low as a worm’s belly.

Here is a sampling of evidence for that claim.

(1) Rothwell and Martyn (abstract and paper) examined referees’ reports from a prominent neuroscience journal and found that referee agreement was about 50%. That is, there is no consensus in neurology.

(2) No formal study (that I am aware of) has done the same for climatology, but personal experience suggests it is similar there. That is, there is at least one published paper on which the referees do not agree (at what is considered the best journal, Journal of Climate).

(3) Pharmacologist David Horrobin has written a commentary on peer-review in which he argues that the process has actually slowed down research in some fields. He also agrees with my summary:

Peer review is central to the organization of modern science. The peer-review process for submitted manuscripts is a crucial determinant of what sees the light of day in a particular journal. Fortunately, it is less effective in blocking publication completely; there are so many journals that most even modestly competent studies will be published provided that the authors are determined enough. The publication might not be in a prestigious journal, but at least it will get into print.

(4) I have just received an email “Invitation to a Symposium on Peer Reviewing” which, in part, reads:

Only 8% members of the Scientific Research Society agreed that “peer review works well as it is”. (Chubin and Hackett, 1990; p.192).

“A recent U.S. Supreme Court decision and an analysis of the peer review system substantiate complaints about this fundamental aspect of scientific research.” (Horrobin, 2001)

Horrobin concludes that peer review “is a non-validated charade whose processes generate results little better than does chance.” (Horrobin, 2001). This has been statistically proven and reported by an increasing number of journal editors.

But, “Peer Review is one of the sacred pillars of the scientific edifice” (Goodstein, 2000), it is a necessary condition in quality assurance for Scientific/Engineering publications, and “Peer Review is central to the organization of modern science…why not apply scientific [and engineering] methods to the peer review process” (Horrobin, 2001).

This is the purpose of the International Symposium on Peer Reviewing: ISPR, being organized in the context of The 3rd International Conference on Knowledge Generation, Communication and Management: KGCM 2009, which will be held on July 10-13, 2009, in Orlando, Florida, USA.


(5) Then there is the Sokal Hoax, where a physicist sent a paper full of gibberish to a preeminent social science journal to see if it would be published. It was. Sokal was careful to play to the preconceptions of the journal’s editors to gain acceptance. The lesson is the oldest: people—even scientists!—easily believe what they want to.

(6) John Ioannidis and colleagues make a similar case in their article “Why Current Publication Practices May Distort Science.” The authors liken acceptance of papers in journals to winning bids in auctions: sometimes the winner pays too much and the results aren’t worth as much as everybody thinks. A review of the article here.

(7) UPDATE. Then there is arXiv, the repository of non-peer-reviewed “preprints” (papers not yet printed in a journal). arXiv is an acknowledgment by physicists, and lately mathematicians and even climatologists, that it’s better to take your findings directly to your audience and bypass the slow and error-prone refereeing process.

(8) It is easy to get a paper into print when the subject is “hot”, or when you are friends with the editor or he owes you a favor, or your findings shame the editor’s enemies, or through a mistake, or by laziness of the referees, or in a journal with a reputation for sloppiness. In most fields, there are at least 100 monthly/quarterly journals. Thus it is exceedingly rare for a paper not to find a home, no matter how appalling or ridiculous its conclusions.

Update, 4 March: Reader Jack Mosevich reminds us to see this article at Climate Audit, a real-life example of politically correct refereeing.

The listing of these facts is solely to prove that (A) and (B) are false, and that peer review is a crude sifter of truth.

Thus, when an activist or inactivist points to a peer-reviewed paper and says, “See!”, he should not be surprised when his audience is unpersuaded. He should never argue that some finding must be true because it came from a peer-reviewed paper.

This web page has also tracked several peer-reviewed, published papers that are crap. Examples here, here, here, and here (more are coming).


1Incidentally, I have only had one methods paper rejected; all others I wrote were accepted to the first journal I sent them to. Nearly every collaborative paper I co-wrote has also been accepted eventually. I am an Associate editor at Monthly Weather Review, and have been a referee more times than I can remember, both for journals and grants.

I mention these things to show that I am familiar with the process and that I am not a disgruntled author seeking to impugn a system that has treated him unfairly. To the contrary, I have been lucky, and have had a better experience than most.

February 23, 2009 | 25 Comments

What appeal to authority means and what it doesn’t

This article is meant to be the first in a small series of demonstrations of how to argue, and how not to argue, for or against climate activism. The level of argumentation on the web long ago passed subbasement (people have been calling each other “Hitler” for over two years), but worse are the misuses and misunderstandings of logic. People throw around terms like ad hominem and appeal to authority constantly, without understanding what they are saying.

My hope is that when you see an abuse of the type outlined, you simply cut and paste the link to these pages. This will save us all a lot of time and unnecessary typing. I hope.


A prominent climate inactivist forwarded me a document in which he argued against some of the more catastrophic claims said to be due to global warming.

At the beginning of his piece he implied that James Hansen, who is the best known climate activist, should not be trusted because Hansen only had training as an astrophysicist and not as a climatologist.

This is a poor argument because the author of the piece was not himself a climatologist. If you must be an official climatologist before being allowed to comment on climatology—a position that is logically valid given that you can satisfactorily define “climatologist”—then just about everybody, activist and inactivist, must shut up. Including the author of that piece—and almost certainly, including you.


Mr Activist: this means that you would not be allowed to say anything whatsoever about global warming except to repeat what you have been told by an official climatologist. You would be allowed to say “Mr Climatologist says B” and nothing else.

(“B” can be any statement or proposition about climatology only—it cannot be about politics or health or biology or anything else.)

Mr Activist would not be allowed to say “Mr Climatologist says B, and you’re a fool not to believe it.” Pause and understand this. The reason is that he is not qualified to say what is and what is not foolish, because he is not a climatologist.

If an official climatologist says “B, and people would be fools not to believe it,” then you can repeat that statement. But you cannot adorn it, nor comment further, nor say anything else. You can repeat what you are told by your betters and then you must keep quiet.

The only exception to this logical rule is if everybody, activist and inactivist, agreed on the additional premise, “People that do not believe what official climatologists say are fools.” Then you could logically say, “Mr Climatologist says B, and you’re a fool not to believe it.”

But almost certainly, everybody would not agree on that premise. Let’s see why.


The prominent inactivist made a logical mistake by arguing “Because Hansen is not a climatologist, his statements on climatology are false.” This is only valid if only climatologists make true statements about climatology and if non-climatologists always make false statements about climatology. Is that true?

Obviously not. Plenty of climatologists have made statements about climatology that turned out false in fact and in theory. And plenty of non-climatologists have made statements about climatology that turned out true in fact and in theory.

It necessarily follows that anybody is allowed to say anything they want about statements of fact or theory in climatology. This includes both activists and inactivists. We can now see that neither side can accuse the other of making a logical mistake merely by talking about climatology.

So stop arguing about this point!


Suppose Hansen is an official climatologist and he makes the statement, H = “The global average temperature in 2010 will be at least half a degree hotter than in 2009.” (Whenever we see ‘H’, we must remember that it stands for “The global average…”)

Suppose an activist then says, “Hansen, an expert, says H. Therefore, because Hansen is an expert, H is true.” This argument is invalid; the activist has made a mistake. H cannot be true merely because Hansen said so. The logical error made is called “appealing to authority.”

Most know of this mistake and avoid obvious instances of it. But see below.

Suppose a second activist said, “Hansen, an expert, says H. Therefore, because Hansen is an expert, H is likely to be true.” This is not a mistake and is a rational thing to say. This is because experts making statements like H are often, but not always, right. Therefore it is rational to suppose that the expert is likely to be right again.

The statement made by the second activist is an appeal to authority, too, but a sound, inductive one.

Mr Inactivist: it does you no credit to accuse non-climatologists of being irrational if they are making arguments of the second type. It is often wise to appeal to authority like this, and is what we all do when we enter an aircraft, trusting the pilot to get us safely to our destination.


Most will agree that Hansen meets the definition of climatologist, but then so does your author, and so do several people (like Dr Lindzen) who do not always agree with what Hansen says.

Now we have trouble. For if only official climatologists can make true statements about climatology, and if two (or more) official climatologists make contradictory statements about climatology, then we have a logical contradiction if the premise “Only climatologists make true statements about climatology” is true. We have already seen it is false, so we are safe.

But suppose Mr Activist says, “Most climatologists say B. Therefore B is true.” This is the same logical error: appeal to authority in the deductive sense.

Let the second activist say “Most climatologists say B. Therefore B is likely to be true.” This is a perfectly rational thing to say.

Even more, the first premise appears to be true in fact: Most climatologists do agree on most statements B about climatology. Therefore, it is rational for people, and their close cousins politicians, to say to themselves “B is likely to be true.”

Mr Inactivist: your only appeal, if you believe B to be false, is to marshal arguments against B. You must not call B a “hoax” or use other disparaging terms as you risk being guilty of appealing strictly to your own authorities (however, there is more to say here, but we’ll save this for another time).

Mr Activist: Because you have reasoned B is more likely to be true, you cannot say “Therefore, everybody must believe B.” To do so is to make the same appeal-to-authority logical error. You must also not express amazement that anybody dare disagree with B for the same reason, and because you must remember that the inactivist has consulted his own authority or set of facts and is arguing inductively just as you are.


Mr Activist, you must not say that Mr Inactivist’s authorities are ineligible because they do not agree with your authorities. This is the same logical error: appeal to authority once more.

And it is a foolish thing to say because you risk defining “expert” solely as “somebody who agrees with what I want.” That is not a logical error, but it is asinine.


Finally, Mr Activist, you must understand Mr Inactivist is making an inductive, and therefore rational, appeal to authority, when he argues “Yes, most climatologists say B, but I believe they are mistaken because these other climatologists claim to have shown where the first are in error.”

Thus, if Mr Inactivist says, “Therefore, B is likely to be false”, then he has said a rational thing. But if he has said, “Therefore, B is certainly false”, then he has made the same error and you can call him on it.


Update: I leave as an exercise how arguments about “peer review” and “consensus” fit into these appeal-to-authority arguments. After you have said something about “peer review”, read this article.

February 10, 2009 | 31 Comments

Theory confirmation and disconfirmation

Time for an incomplete mini-lesson on theory confirmation and disconfirmation.

Suppose you have a theory, or model, about how some thing works. That thing might be global warming, stock market prices, stimulating economic activity, psychic mind reading, and on and on.

There will be available a set of historical data and facts that lead to the creation of your theory. You will always find it an easy process to look at those historical data and say to yourself, “My, those data back my theory up pretty well. I am surely right about what drives stock prices, etc. I am happy.”

Call, for ease, your theory MY_THEORY.

It is usually true that if the thing you are interested in is complicated—like the global climate system or the stock market—somebody else will have a rival theory. There may be several rival theories, but let’s look at only one. Call it RIVAL_THEORY.

The creator of RIVAL_THEORY will say to himself, “My, the historical data back my theory up pretty well, too. I am surely right about what drives stock prices, etc. I am happy and the other theory is surely wrong.”

We have a dispute. Both you and your rival are claiming correctness; however, you cannot both be right. At least one, and possibly both, of you is wrong.

As long as we are talking about historical data, experience and human nature show that the dispute is rarely settled. What happens, of course, is that the gap between the two theories actually widens, at least in the strength with which the theories are believed by the two sides.

This is because it is easy to manipulate, dismiss as irrelevant, recast, or interpret historical data so that it fits what your theory predicts. The more complex the thing of interest, the easier it is to do this, and so the more confidence people have in their theory. There is obviously much more that can be said about this, but common experience shows this is true.

What we need is a way to distinguish the accuracy of the two theories. Because the historical data won’t do, we need to look to data not yet seen, which is usually future data. That is, we need to ask for forecasts or predictions.

Here are some truths about forecasts and theories:

If MY_THEORY says X will happen and X does not happen, then MY_THEORY is wrong. It is false. MY_THEORY should be abandoned, forgotten, dismissed, disparaged, disputed, dumped. We can say that MY_THEORY has been falsified.

For example, if MY_THEORY is about global warming and it predicted X = “The global mean temperature in 2008 will be higher than in 2007”, then, since 2008 turned out cooler than 2007, MY_THEORY is wrong and should be abandoned.

You might say that, “Yes, MY_THEORY said X would happen and it did not. But I do not have to abandon MY_THEORY. I will just adapt it.”

This can be fine, but the adapted theory is no longer MY_THEORY. MY_THEORY is MY_THEORY. The adapted, or changed, or modified theory is different. It is NEW_THEORY and it is not MY_THEORY, no matter how slight the adaptation. And NEW_THEORY has not made any new predictions. It has merely explained historical data (X is now historical data).

It might be that RIVAL_THEORY theory made the same prediction about X. Then both theories are wrong. But people have a defense mechanism that they invoke in such cases. They say to themselves, “I cannot think of any other theory besides MY_THEORY and RIVAL_THEORY, therefore one of these must be correct. I will therefore still believe MY_THEORY.”

This is the What Else Could It Be? mechanism and it is pernicious. I should not have to point out that because you, intelligent as you are, cannot think of an alternate explanation for X does not mean that an alternate explanation does not exist.

It might be that MY_THEORY predicted Y and Y happened. The good news is that we are now more confident that MY_THEORY is correct. But suppose it turned out that RIVAL_THEORY also predicted that Y would happen. The bad news is that you are now more confident that RIVAL_THEORY is correct, too. How can that be when the two theories are different?

It is a sad and inescapable fact that for any set of data, historical and future, there can exist an infinite number of theories that equally well explain and predict it. Unfortunately, just because MY_THEORY made a correct prediction does not imply that MY_THEORY is certainly correct: it just means that it is not certainly wrong. We must look outside this data to the constructs of our theory to say why we prefer MY_THEORY above the others. Obviously, much more can be said about this.

It is often the case that a love affair develops between MY_THEORY and its creator. Love is truly blind. The creator will not accept any evidence against MY_THEORY. He will allow the forecast for X, but when X does not happen, he will say it was not that X did not happen, but the X I predicted was different. He will say that, if you look closely, MY_THEORY actually predicted X would not happen. Since this is usually too patently false, he will probably alter tactics and say instead that it was not a fair forecast as he did not say “time in”, or this or that changed during the time we were waiting for X, or X was measured incorrectly, or something intervened and made X miss its mark, or any of a number of things. The power of invention here is stronger than you might imagine. Creators will do anything but admit what is obvious because of the passion and the belief that MY_THEORY must be true.

Some theories are more subtle and do not speak in absolutes. For example, MY_THEORY might say “There is a 90% chance that X will happen.” When X does not happen, is MY_THEORY wrong?

Notice that MY_THEORY was careful to say that X might not happen. So is MY_THEORY correct? It is neither right nor wrong at this point.

It turns out that it is impossible to falsify theories that make probabilistic predictions. But it is also the case that, for most things, theories that make probabilistic predictions are better than those that do not (those that just say events like X certainly will or certainly will not happen).

If it wasn’t already, it gets complicated at this point. In order to say anything about the correctness of MY_THEORY, we now need to have several forecasts in hand. Each of these forecasts will have a probability (that “90% chance”) attached, and we will have to use special methods to match these probabilities with the actual outcomes.
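Those “special methods” can be illustrated. One standard tool (my example; no particular method is named above) is the Brier score: the mean squared difference between each forecast probability and the 0/1 outcome, with lower scores better. A minimal sketch in Python, using made-up forecasts and outcomes:

```python
# Brier score: mean squared difference between forecast probabilities
# and what actually happened (1 = the event occurred, 0 = it did not).
# Lower is better; a perfect forecaster scores 0.

def brier_score(forecast_probs, outcomes):
    """Average of (p - o)^2 over all forecast/outcome pairs."""
    assert len(forecast_probs) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecast_probs, outcomes)) / len(outcomes)

# Hypothetical record: MY_THEORY issued "90% chance of X" five times;
# X happened on only three of those occasions.
probs = [0.9, 0.9, 0.9, 0.9, 0.9]
happened = [1, 1, 1, 0, 0]

print(brier_score(probs, happened))  # about 0.33
```

No single miss falsifies the theory, but the accumulated score lets us say how well its stated probabilities matched reality.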

It might be the case that MY_THEORY is never that close in the sense that its forecasts were never quite right, but it might still be useful to somebody who needs to make decisions about the thing MY_THEORY predicts. To measure usefulness is even more complicated than measuring accuracy. If MY_THEORY is accurate more often or useful more often, then we have more confidence that MY_THEORY is true, without ever knowing with certainty that MY_THEORY is true.

The best thing we can do is to compare MY_THEORY to other theories, like RIVAL_THEORY, or to theories that are much simpler in structure but are natural rivals. As mentioned above, this is because we have to remember that many theories might make the same predictions, so we have to look outside a theory to see how it fits in with what else we know. Simpler theories that make predictions just as accurate as those of complicated theories more often turn out to be correct (but not, obviously, always).

For example, if MY_THEORY is a theory of global warming that says there is an 80% chance that global average temperatures will increase each year, we need to find a simple, natural rival against which to compare it. SIMPLE_THEORY might state “there is a 50% chance that global average temperatures will increase each year.” Or LAST_YEAR’S_THEORY might state “this year’s temperatures will look like last year’s.”
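That comparison can be made concrete. One way (again my illustration, not a method prescribed above) is to compute both theories’ Brier scores on the same record and form a skill score: positive skill means MY_THEORY beats the simple rival. A sketch with an invented ten-year record:

```python
# Compare MY_THEORY ("80% chance temperatures rise each year") against
# SIMPLE_THEORY ("50% chance") on a hypothetical 10-year record.
# Skill = 1 - (my_score / rival_score); positive means MY_THEORY won.

def brier(probs, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(outcomes)

# Invented record: temperatures rose in 6 of 10 years.
rose = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]

my_theory = [0.8] * len(rose)   # always says 80%
simple    = [0.5] * len(rose)   # the natural "coin-flip" rival

skill = 1 - brier(my_theory, rose) / brier(simple, rose)
print(f"skill vs. SIMPLE_THEORY: {skill:+.2f}")
```

With these invented numbers the always-80% forecast actually comes out worse than the coin flip (negative skill): confidently predicting warming 80% of the time is punished when warming occurs in only 6 years of 10. That is exactly the kind of check this comparison against a simple rival provides.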

Thus, especially in complex situations, we should always ask, when somebody is touting a theory, how well that theory makes predictions and how much better it is than its simpler, natural rivals. If the creator of the touted theory cannot answer these questions, you are wise to be suspicious of that theory and to wait until the evidence comes in.