
Category: Philosophy

The philosophy of science, empiricism, a priori reasoning, epistemology, and so on.

April 30, 2010 | 15 Comments

Is Experimental Economics Irrational?

Everybody knows there’s no such thing as money. So how come everybody acts like it’s real?

In particular, why do economists and other similar creatures find the lack of “rationality” curious when reviewing the transactions, and game-theory simulations of transactions, between real people?

There do exist bits of shiny metal, certain organic byproducts, and slips of paper that are called “money.” But these objects have no intrinsic value. Money is a concept, not a thing. It is a proxy for agreements between people, a mechanism to ease the trading of things that do have value.

Like I said, everybody knows this. So why has it been so difficult—why did it take so long—to see the logical consequences of this truth? Why, that is, is there such consternation over the lack of “rationality” when it comes to theories of money?

Take the example of buying a six-pack of beer from a bodega in New York City. Not some homeopathic brew like Coors Light. Real beer, like Brooklyn Brewery’s IPA.

The Korean lady in charge can announce that the beer is “Regular price $10; Today 10% off” or she can say “Regular price $8, plus New York City health tax surcharge of $1.” (This example is not fictional: NYC is always pegging up its sin taxes.)

Which would you prefer? According to classic economic and game theories, you’re not supposed to have a preference. Any deviation from indifference is considered “irrational” because, either way, you’re out nine bucks. Either way gets you the six-pack.
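The claimed indifference is just arithmetic, which a couple of lines can verify (a minimal sketch in Python, with prices in cents to keep the arithmetic exact):

```python
# Two framings of the same six-pack, in cents to avoid floating-point fuss.
discounted = 1000 - 1000 // 10  # "Regular price $10; today 10% off"
taxed = 800 + 100               # "Regular price $8, plus $1 city tax"

assert discounted == taxed == 900  # either way, you're out nine bucks
print(discounted, taxed)  # → 900 900
```

Classical theory stops at that assert; the rest of the argument is about what the assert leaves out.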

But nobody buys just a six-pack. You don’t “buy” anything, actually. You make an agreement between you and the shopkeeper; or, more realistically, between you and your crew of family and friends against the shopkeeper and her organization.

Your contract negotiations are short; much is agreed upon before you walk into the store. When our lady offers the beer for $1 less than usual, there is at least the appearance that she is giving a gift. What I receive is the beer, plus the good feeling that I am being treated nicely.

As all marketers know, I am negotiating for both the beer and the experience of buying it. Managers who have negotiated with unions know that the experience counts, too. What often becomes a sticking point in these, obviously more formal, negotiations is not money but respect.

A union will sometimes accept fewer dollars in return for more autonomy, or better toilets, or anything that awards its members more esteem from the suits. This makes sense—it is rational—because the non-economist union members know that money isn’t everything.

Lack of respect—between me and my rapacious government—is why I would be in a bad mood after shelling out eight bucks for beer plus yet another one for tax.

That extra dollar might be so irksome that I am willing to take a car to Jersey—New Jersey!—to give my $8 to a different family-owned organization. I’d pay more money for this, but I’d receive the additional experience of being able to shop for beer and wine simultaneously; an experience New York State forbids. (Oh, yes.)

Over the past decade, the field of “experimental economics” has grown fat. It’s the same old economics, but married to the more practical mathematics of game theory, with the addition of college students corralled into prisoner’s dilemmas.

It is from this field that economists are finally accepting that money isn’t real. Only they don’t put it that way. They say, in wonderment, that “man is not rational”—by which they mean that we don’t function as if money were real.

They carry out various simulations and discover, like our example above, that the “optimal” solution is often neglected for an “irrational” one. But “optimal” means quantitatively optimal given that money is real.

We’ll have to talk more about this, but these experiments suffer from an irremovable fault. Since money is not real, and since our economic transactions are really just negotiations and agreements, then the experimenters can never remove themselves from the experiment. They are just as much a part of it as the volunteers are.

Those volunteers will, of course, react differently to different experimenters. The hope is that the differences in oddities, irascibilities, quirks, and other weirdnesses of these volunteer-experimenter interactions will even out somehow. This is a matter of faith, and misplaced faith at that.

Even when they recognize this, it will be difficult to shake economists loose from money. It’s so quantitative! It can be p-valued, plotted and pie-charted, set into percentages. Mostly, it can be modeled with soothing mathematics.

But how do you quantify my willingness to drive to Jersey to avoid a tax, or a student’s distraction due to the “teacher pants” the experimental economist wears to the game?

You cannot. So—once more: everybody all together now—in the end, the conclusions will be too certain, too sure.

———————————-

The idea of this post came from Karl Sigmund’s interesting review of Herbert Gintis’s The Bounds of Reason: Game Theory and the Unification of the Behavioral Sciences. Linked on—where else?—Arts & Letters Daily.

April 16, 2010 | 37 Comments

Randomness is a Matter of Information

How many pads of paper do I have on my desk right now? How many books are on my shelves this minute?

You don’t know the answer to either of these questions, just as you don’t know whether the Tigers will beat the Marlins tonight, whether IBM’s stock price will be higher at the close of today’s bell, or whether the spin of an outermost electron in my pet ytterbium atom is up or down.

You might be able to guess—predict—correctly any or all of these things, but you do not know them. There is some information available that allows you to quantify your uncertainty.

For example, you know that I can have no pads, or one, or two, or some other discrete number, certainly well below infinity. The number is more likely to be small rather than large. And since we have never heard of a universal physical law of “Pads on Desks”, this is about as good as you can do.

In a severe abuse of physics language, we can say that, to you, there are exactly no pads, exactly one pad, exactly two pads, and so forth, where each possibility exists in a superposition until…what? Right: until you look.

I know how many pads of paper I have because, according to my sophisticated measurement, there are three. And now, according to your new information, the probability that there are three has collapsed to one. Given this observation—and accepting that the observation is without error, and granting our mental stability—the event is no longer random, but known.

The point of this tedious introduction is to prove to you that “randomness” merely means “unknown.” Probability, and its brother randomness, are measures of information. What can be random to you can be known to me.

An event could be random to both of us, but that does not mean that we have identical information that would lead us to quantify our probabilities identically. For example, the exact number of books I have on my shelf is unknown to me and you: the event is random to both of us. But I have different information because I can see the number of shelves and can gauge their crowdedness, whereas you cannot.
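A sketch of the asymmetry, with every number invented for illustration: say you know only that the count lies somewhere from 0 to 2000, while I can see five shelves that each hold, I’d estimate, 30 to 60 books.

```python
import random

random.seed(1)

# Your information: the count is some integer from 0 to 2000, nothing more.
your_possibilities = range(0, 2001)  # spread evenly over all of them

# My information: five shelves, each holding (I'd guess) 30 to 60 books.
def my_plausible_count():
    return sum(random.randint(30, 60) for _ in range(5))

counts = [my_plausible_count() for _ in range(10_000)]
# My uncertainty is concentrated between 150 and 300 books; yours is not.
print(min(counts), max(counts))  # always within 150..300
```

Same shelf, same books: the probabilities differ because the information does.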

A marble dropping in a roulette wheel is random to most of us. But not to all. Given the initial conditions—speed of the wheel, spin and force on the marble, where the marble was released, the equations of motion, and so forth—where the marble rests can be predicted exactly. In other words, random to thee but not to me.

I am happy to say that Antonio Acin, of the Institute of Photonic Sciences in Barcelona, agrees with me. On NPR, he said, “If you are able to compute the initial position and the speed of the ball, and you have a perfect model for the roulette, then you can predict where the ball is going to finish — with certainty.” (My aunt Kayla sent me this story.)

The story continues: “[Acin] says everything that appears random in our world may just appear random because of lack of knowledge.” Amen, brother Antonio.
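A toy model makes “random to thee but not to me” concrete. Assume, purely for illustration, a ball released at a known angle and speed that slows at a constant rate on a 37-pocket wheel; the resting pocket is then a deterministic function of the initial conditions:

```python
import math

POCKETS = 37  # a European wheel

def final_pocket(theta0, omega0, decel):
    """Pocket where the ball stops, given release angle theta0 (radians),
    release speed omega0 (rad/s), and constant deceleration decel (rad/s^2).
    Under constant deceleration the ball travels omega0**2 / (2 * decel)
    radians before stopping."""
    total_angle = theta0 + omega0 ** 2 / (2 * decel)
    return int((total_angle % (2 * math.pi)) / (2 * math.pi) * POCKETS)

# Same initial conditions, same pocket, every time: nothing random remains.
assert final_pocket(0.0, 12.0, 0.8) == final_pocket(0.0, 12.0, 0.8)

# But a small error in the measured speed sends the ball elsewhere, which
# is why the wheel stays "random" to anyone who cannot measure precisely.
print(final_pocket(0.0, 12.0, 0.8), final_pocket(0.0, 12.1, 0.8))
```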

A Geiger counter measures ionizing radiation, such as might occur in a lump of uranium. That decay is said to be “random”, because we do not have precise information on the state of the lump: we don’t know where each atom is, where the protons and so forth are placed, etc. Thus, we cannot predict the exact times of the clicks on the counter.

But there’s a problem. “You can’t be certain that the box the counter is in doesn’t have a mechanical flaw…” In other words, information might exist that allows the clicks to be semi-predictable, in just the same way as the number of books on my shelves is to me but not to you.
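Operationally, “random to us” means we fall back on a statistical model. Decay clicks are conventionally modeled as a Poisson process, in which the gaps between clicks follow an exponential distribution; the rate below is invented for the sketch:

```python
import random

random.seed(42)

RATE = 2.0  # invented: an average of two clicks per second

# Lacking the lump's microstate, the best description of the gap until
# the next click is an exponential distribution with mean 1 / RATE.
gaps = [random.expovariate(RATE) for _ in range(100_000)]

mean_gap = sum(gaps) / len(gaps)
print(round(mean_gap, 2))  # close to 1 / RATE = 0.5 seconds
```

The model predicts the long-run rate, never the time of the next click: exactly the difference between knowing a lot and knowing everything.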

So Acin and a colleague cobbled together ytterbium atoms to produce “true” randomness, by which they mean the results of an electron being “up” or “down” cannot be predicted skillfully using any information.

In their experiment, the information on the ytterbium atoms’ quantum (which means discrete!) state is not humanly accessible, so we can never do better than always guessing “up”1.

It is misleading to say that they are “generating” randomness—you cannot generate “unknowness.” Instead, they have found a way to block information. Information is what separates the predictable from the unpredictable.

The difference is crucial: failing to appreciate it accounts for much of the nonsense written about randomness and discrete mechanics.

—————————————————————————————————-

1Brain teaser for advanced readers. Acin’s experiment generates an “up” or “down”, each occurring half the time unpredictably. Why is guessing “up” every time better than switching guesses between “up” and “down”?

Update This is what happens when you write these things at 5 in the morning. The teaser is misspecified. It should read:

Acin’s experiment generates an “up” or “down”, each occurring as they may. When is guessing “up” (or “down” as the case might be) every time better than switching guesses between “up” and “down”?

You will see that I idiotically gave away the answer in my original, badly worded version.
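The corrected teaser submits nicely to simulation. This sketch (all numbers invented) pits the constant guess against an alternating one: when “up” and “down” are equally likely the strategies tie, and when one side is more frequent, constantly guessing that side wins:

```python
import random

random.seed(0)

def accuracy(p_up, strategy, n=100_000):
    """Fraction of correct guesses against n spins that come up "up"
    with probability p_up; strategy(i) gives the i-th guess."""
    hits = 0
    for i in range(n):
        spin = "up" if random.random() < p_up else "down"
        hits += (strategy(i) == spin)
    return hits / n

always_up = lambda i: "up"
alternate = lambda i: "up" if i % 2 == 0 else "down"

# Equally likely sides: every strategy hovers around 0.50.
print(accuracy(0.5, always_up), accuracy(0.5, alternate))
# "Up" 70% of the time: the constant guess nears 0.70; alternating stays near 0.50.
print(accuracy(0.7, always_up), accuracy(0.7, alternate))
```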

April 8, 2010 | 16 Comments

Peer Review and Proof

Suppose that I today, below, in this post, prove to you a certain mathematics theorem, say, “The Symmetry of Individual Constants.”

Is that theorem true?

Before you answer, consider that you probably aren’t a professional mathematician or logician. Since you are not, you might not understand each step in the proof. Or, if you are a mathematician but inexperienced in the particular field, you might not understand the shorthand or the “givens” routinely used.

Also, and most importantly, the proof has not been “peer reviewed.” Which is to say, it has not been vetted by others who do understand the history and shorthand of this particular field.

Given all that, I repeat the question: is that theorem true?

Of course it is. This was not a trick question. I said—my main premise was—that I proved the theorem. A proof means that the truth of a theorem has been demonstrated. Quod erat demonstrandum, and so forth.

Specifically, it does not matter one whit whether you believe the result, nor does it matter whether scores of people believe it. Even stronger, it does not matter whether experts in the field know of, approve of, or have reviewed the result.

We operate under a simple tautology: either my proof is valid or it is not. Whether that proof has been vetted is irrelevant. The theorem is true or not. Moreover, it has always been true or not. It was even true before I discovered a proof of its validity. Human agency does not—and cannot—change the truth status of the theorem.

Obviously, then, where a valid proof appeared is irrelevant to the proof’s validity. Another way to say this is that it is a logical fallacy to claim, “Because this result was not peer reviewed and published in an academic journal, it is false.”

This fallacy is routinely used by academics who cannot refute a distasteful result. “If So-and-So is so confident in his work, let him publish it!”

Our friend Steve McIntyre hears this fallacy a lot. McIntyre has published on his blog many true statements about the climate that are disliked or are unwanted by mainstream climate scientists. Instead of trying to answer, or to refute, McIntyre’s results, these global warming advocates dismiss them because they have not been peer reviewed and published.

Unfortunately, this fallacy—as shockingly obvious as it is—is usually accepted as proof of refutation. That is, it is thought that a refutation has been demonstrated merely because a credentialed scientist cast an aspersion.

Journalists—God love ’em—are particularly eager to swallow this faulty line of reasoning. I have yet to see a case where a reporter rebuts, “Yes, McIntyre’s work hasn’t been published. But how is it wrong?” That is, of course, the only question worth asking.

This fallacy does not just find a home in climate theory: it is warmly accepted everywhere. Its familiarity is proof that scientists have egos just as large as Hollywood stars.

Have you heard of Grigory Perelman, the bushy eyebrow-bearing Russian mathematician who has been turning down prizes?

In 2002, Perelman wrote up a valid proof of the Poincaré conjecture, probably the most lusted after unsolved problem in mathematics. Briefly, imagine a two-dimensional surface on which is placed a loop of string. If you can shrink that loop continuously until it reaches a point such that the loop nowhere becomes kinked or stuck on a hole on the surface, then that 2D surface is really the same as a sphere; albeit a sphere that might be pulled, crushed, and twisted out of shape.

The Poincaré conjecture says that this same loop-shrinking is true for three-dimensional spaces. And what’s the most familiar 3D space? Well, Space. The universe. Not for the first time, Poincaré had said something deep about life, the universe, and everything.

Point is, the non-conformist Perelman submitted his results to the non-peer reviewed, non-journal Arxiv.org. This is a place where scientists (you have to be recommended to win publishing privileges) place their thoughts in advance of, or in lieu of, peer review. It’s a place to publish not-so-polished, non-popular thoughts in the hope of winning First to Discover status or to avoid the tediousness of peer review.

It turns out that a group of Chinese mathematicians wanted in on Perelman’s success. They re-wrote Perelman’s results and published them in a peer-reviewed, academically approved journal. They also strongly hinted (in a 2003 Science article) that theirs was the real proof—so that credit properly belonged to them—because their proof had been formally published.

The incident is non-technically retold in the New Yorker, where it now seems that the Chinese group has recognized its use of the not-published fallacy. They are now claiming “they couldn’t follow” Perelman’s shorthand, even though other mathematicians could. Well, academia was always a blood-stained enterprise.

Anyway, let’s get to work. How many instances of the not-published fallacy can we find?

March 30, 2010 | 38 Comments

What Do Philosophers Believe? Survey: Part II

Read Part I.

  1. Epistemic justification: internalism or externalism? 43% went with externalism, about 26% with internalism. It is impossible to write about this without sounding goofy or repetitive. Briefly, internalism means you know why you know something—and you know why you know why you know something. Externalism means the justification for what you know can lie outside you, in whatever exterior thing caused your belief. That straight? Let’s go with externalism because “how” questions are impossible to answer. That is, you can’t answer how something that is necessarily true can be true, but you can know it is true.
  2. External world: idealism, skepticism, or non-skeptical realism? Idealism is the belief that everything that exists is just thought; that there is no external reality, but there are minds that think things, and that thought is existence. If you remember anything about Bishop Berkeley, then you will remember idealism. About 4% of philosophers ignore Dr Johnson’s sore toe and still hold this view.

    Skepticism is what I call an academic belief (we meet another shortly): only academics pretend to believe these. I say “pretend” because nobody really holds these beliefs, and if they say they do they are lying or insane. Skepticism, here, is the belief that we cannot know that anything exists. 5% of the academic philosophers actually said they agreed with this. But how did these 5% even know they were answering a survey?

    Realism is the belief that an external world, independent of human belief, exists. I am happy to report that about 82% agreed with this.

  3. Free will: compatibilism, libertarianism, or no free will? “No free will” is another of those academic beliefs (see the survey notes for term definitions). That is, nobody, even those who hold that we have no free will, actually believes we have no free will. I think most of the astonishing 12% who say there is no free will do so because they have not found a way to reconcile determinism (each effect has a cause) with free will. They reason that the universe is marching mechanically along, each effect becoming a cause for another effect, and thus they cannot find room for willed actions. But in doing this, these folks have forgotten an even more fundamental philosophical argument: just because you can’t think of an explanation for a thing, doesn’t mean the explanation doesn’t exist.

    You’ll also notice that those who argue against free will tend to do so to excuse people’s bad behavior. “The criminal must be let go, judge, because he had no free will!” They forget, when they make that asinine argument, that the judge can counter, “I have no choice but to incarcerate your mascot.”

    Compatibilism (59%) is the idea that free will is compatible with determinism; a secular Calvinism, if you like. Libertarianism (14%) in this sense means acting with a free will in the absence of determinism. It’s not clear how—nobody knows how—determinism can sometimes shut itself off to allow free will, but libertarians believe that it can. I lean towards a mix of Searle (the mind is not a computer) and Penrose (quantum mechanics does not help you free yourself from determinism), here; that is, towards libertarianism.

    No, I cannot explain how we have free will in the face of determinism. But that does not mean that free will doesn’t exist, because I have knowledge that it does. Think of it this way: a Berkeley High School graduate will take a car ride and know he gets from A to B without having a clue how the engine works. Yet he knows he got from A to B. So traveling by car is true, but how it is true, he doesn’t know.

  4. God: theism or atheism? 73% went with atheism, about 15% with theism. This, of course, is the question. What’s most interesting about it, academically speaking, is that arguments for atheism usually consist of arguments against theism.

    I mean, a philosopher will triumphantly announce a new line of thought which, say, invalidates the ontological argument (a popular attempt at proving God’s existence), and then say, to himself only, “Well, that’s that. Since I can’t accept any of the arguments that prove God’s existence, then God must not exist. Plus, look at my fancy new smart phone, built on the laws of science. Also, I don’t need God to explain life. Of course, most of my colleagues are atheists, and I don’t want to seem a bore.” He thus sides with atheism.

    But, I need hardly add, that an argument showing the invalidity of the ontological argument is not logically equivalent to disproving God’s existence. Philosophers know that, of course, which is why they keep their reasoning to themselves.

    I have no new arguments to offer on this question, but consider this. Since all our knowledge is built on faith (see yesterday’s point #1), it is not inconceivable to suppose that knowledge of the question must necessarily rest on faith. (The Christian religion believes that.) Naturally, I can offer no proof.

If there is still interest, there are more great questions left. Like “Logic: classical or non-classical?” Classical, of course.

Read Part I.

It’s national Pass On The Briggs month here at wmbriggs.com. If your interpretation of this phrase is on the generous side, email a link of this page to a friend who hasn’t been here before. The best kind of friend is one who has need of a statistician and who has a lot of money.