May 12, 2008 | 8 Comments

Stats 101: Chapter 2

Chapter 2 is now ready for downloading—it can be found at this link.

This chapter is all about basic probability, with an emphasis on understanding and not on mechanics. Because of this, many details are eliminated which are usually found in standard books. If you already know combinatorial probability (taught in every introductory class), you will probably worry your favorite distribution is missing (“What, no Poisson? No negative binomial? No This One or That One?”). I leave these out for good reason.

In the whole book, I only teach two distributions, the binomial and the normal. I hammer home how these are used to quantify uncertainty in observable statements. Once people firmly understand these principles, they will be able to understand other distributions when they meet them.
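To give a taste of how the binomial quantifies uncertainty in an observable statement, here is a minimal sketch; the coin-flip numbers are purely illustrative and are not taken from the book:

```python
from math import comb

# Binomial distribution: the probability of exactly k successes in n
# independent trials, each with success probability p.
def binomial_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

# For example, the probability of exactly 7 heads in 10 flips of a fair coin:
print(round(binomial_pmf(7, 10, 0.5), 4))  # 0.1172
```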

Besides, the biggest problem I have found is that people, while they may be able to memorize half a dozen distributions or formulas, do not understand the true purpose of probability distributions. There is also no good reason to do calculations by hand now that computers are ubiquitous.

Comments are welcome. The homework section (like in every other chapter) is unfinished. I will be adding more homework as time goes on, especially after I discover what areas are still confusing to people.

Once again, the book chapter can be downloaded here.

May 8, 2008 | 17 Comments

The Sean Bell shooting and probability

Yesterday, there were several protests in New York City. The participants were “outraged” over the recent acquittal of two black cops and one Lebanese cop who shot and killed Sean Bell, who was black.

Much was made about the fact that the three cops shot at Bell’s car 50 times. This number was touted repeatedly by some as evidence that the cops had used excessive force.

Let’s look at this from the probabilistic viewpoint. It turns out that when a cop fires his weapon at a person, he only hits his target about 30% of the time. Anybody who has ever fired a weapon before, especially in an altercation, will know that this is a pretty good rate, but of course not good enough to guarantee that just one shot will be enough to stop a target.

So about how many times must a cop fire so that he is at least 99.9% sure of hitting his target?

Well, if he fired just once, he has a 30% chance of hitting, or a 70% chance of missing. If he fired twice, what is the chance of hitting at least once? Hitting at least once can happen in three ways: hitting with the first bullet and missing with the second; missing with the first and hitting with the second; or hitting with both. The only other possibility is missing on both. The probability of all these scenarios together is 1 (something has to happen). So the chance of hitting at least once is 1 minus the chance of missing both. Or 1 - (0.7)(0.7) = 1 - 0.49 = 0.51.

This means that only firing two shots gives the officer a 50/50 chance of hitting his target. Not very good odds. He must fire more times to increase them.

It turns out that the same formula can be used for any number of shots. The probability of hitting at least once in three shots is 1 - (0.7)^3 = 1 - 0.343 = 0.66 (rounding). The probability of hitting at least once in n shots is then 1 - (0.7)^n.
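The arithmetic above is easy to check with a few lines of code (a sketch, assuming the 30% single-shot hit rate quoted earlier):

```python
# Chance of hitting at least once in n shots, assuming each shot
# independently misses with probability 0.7.
def p_hit_at_least_once(n, p_miss=0.7):
    return 1 - p_miss**n

print(round(p_hit_at_least_once(1), 2))  # 0.3
print(round(p_hit_at_least_once(2), 2))  # 0.51
print(round(p_hit_at_least_once(3), 2))  # 0.66
```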

We want 1 - (0.7)^n to be at least 0.999. Or, written mathematically, 1 - (0.7)^n > 0.999. Now we have to recall high school algebra and solve for n. Subtract 1 from both sides and multiply through by -1, remembering that this flips the inequality, which gives (0.7)^n < 0.001.

Now the hard part. If you don’t remember, just take my word for it, but now we use logarithms. Taking logs of both sides gives n log(0.7) < log(0.001). Since log(0.7) is negative, dividing by it flips the inequality once more, so n > log(0.001)/log(0.7) = 20 (rounding up to the nearest shot).
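The same answer drops out numerically, either from the logarithm formula or by brute force (a quick sketch using the same 70% miss probability):

```python
import math

p_miss = 0.7
target = 0.999

# Via logarithms: n > log(0.001) / log(0.7); round up to a whole shot.
n_log = math.ceil(math.log(1 - target) / math.log(p_miss))

# Brute force: count up until the hit probability clears 99.9%.
n_brute = 1
while 1 - p_miss**n_brute < target:
    n_brute += 1

print(n_log, n_brute)  # 20 20
```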

That’s right. In order for the cop to be pretty sure of hitting his target (and therefore ensuring his target does not hit him), a cop has to shoot at least 20 times.

Thus, given that three cops were firing, 50 total shots does not seem that unusual.

Note: one cop shot 31 times, one 11, and the other 8. Of course, the above analysis ignores all external evidence, such as how the probability of hitting decreases when aiming at a moving target, awareness by one cop of shots fired by another, whether the cops were well motivated, etc.

May 3, 2008 | 23 Comments

Stats 101: Chapter 1

UPDATE: If you downloaded the chapter before 6 am on 4 May, please download another copy. An older version contained fonts that were not available on all computers, causing it to look like random gibberish when opened. It now just looks like gibberish.

I’ve been laying aside a lot of other work, and instead finishing some books I’ve started. The most important one is (working title only) Stats 601, a professional explanation of logical probability and statistics (I mean the modifier to apply to both fields). But nearly as useful will be Stats 101, the same sort of book, but designed for a (guided or self-taught) introductory course in modern probability and statistics.

I’m about 60% of the way through 101, but no chapter except the first is ready for public viewing. I’m not saying Chapter 1 is done, but it is mostly done.

I’d post the whole thing, but it’s not easy to do so because of the equations. Those of you who use Linux will know of latex2html, which is a fine enough utility, but since it turns all equations into images, documents don’t always end up looking especially beautiful or easy to work with.

So below is a tiny excerpt, with all of Chapter 1 available at this link. All questions, suggestions for clarifications, or queries about the homework questions are welcome.


1. Certainty & Uncertainty

There are some things we know with certainty. These things are true or false given some evidence or just because they are obviously true or false. There are many more things about which we are uncertain. These things are more or less probable given some evidence. And there are still more things of which nobody can ever quantify the uncertainty. These things are nonsensical or

First I want to prove to you there are things that are true, but which cannot be proved to be true, and which are true based on no evidence. Suppose some statement A is true (A might be shorthand for “I am a citizen of Planet Earth”; writing just ‘A’ is easier than writing the entire statement; the statement is everything between the quotation marks). Also suppose some statement B is true (B might be “Some people are frightfully boring”). Then this statement: “A and B are true”, is true, right? But also true is the statement “B and A are true”. We were allowed to reverse the letters A and B and the joint statement stayed true. Why? Why doesn’t switching make the new statement false? Nobody knows. It is just assumed that switching the letters is valid and does not change the truth of the statement. The operation of switching does not change the truth of statements like this, but nobody will ever be able to prove or explain why switching has this property. If you like, you can say we take it on faith.

That there are certain statements which are assumed true based on no evidence will not be surprising to you if you have ever studied mathematics. The basis of all mathematics rests on beliefs which are assumed to be true but cannot be proved to be true. These beliefs are called axioms. Axioms are the base; theorems, lemmas, and proofs are the bricks which build upon the base using rules (like the switching statements rule) that are also assumed true. The axioms and basic rules cannot, and can never, be proved to be true. Another way to say this is, “We hold these truths to be self-evident.”

Here is one of the axioms of arithmetic: For all natural numbers x and y, if x = y, then y = x. Obviously true, right? It is just like our switching statements rule above. There is no way to prove this axiom is valid. From this axiom and a couple of others, plus acceptance of some manipulation rules, all of mathematics arises. There are other axioms (two, actually) that define probability. Here, due to Cox (1961), is one of those axioms: The probability of a statement on given evidence determines the probability of its contradictory on the same evidence. I’ll explain these terms as we go.
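In its familiar numerical form, this axiom says that the probability of a statement fixes the probability of its contradictory (a toy sketch; the 0.3 is an arbitrary illustrative value):

```python
# If Pr(A | evidence) = 0.3, the axiom pins down the probability of
# the contradictory: Pr(not-A | evidence) = 1 - Pr(A | evidence).
p_A = 0.3
p_not_A = 1 - p_A
print(p_not_A)  # 0.7
```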

It is the job of logic, probability, and statistics to quantify the amount of certainty any given statement has. An example of a statement which might interest us: “This new drug improves memory in Alzheimer patients by at least ten percent.” How probable is it that that statement is true given some specific evidence, perhaps in the form of a clinical trial? Another statement: “This stock will increase in price by at least two dollars within the next thirty days.” Another: “Marketing campaign B will result in more sales than campaign A.” In order to specify how probable these statements are, we need evidence, which usually comes in the form of data. Manipulating data to provide coherent evidence is why we need statistics.

Manipulating data, while extremely important, is in some sense only mechanical. We must always keep in mind that our goal is to make sense of the world and to quantify the uncertainty we have in given problems. So we will hold off on playing with data for several chapters until we understand exactly what probability really means.

2. Logic

We start with simple logic. Here is a classical logical argument,
slightly reworked:

All statistics books are boring.
Stats 101 is a statistics book.
-------------------------------
Therefore, Stats 101 is boring.

The structure of this argument can be broken down as follows. The two statements above the horizontal line are called premises; they are our evidence for the statement below the line, which is the conclusion. We can use the words “premises” and “evidence” interchangeably. We want to know the probability that the conclusion is true given these two premises. Given the evidence listed, it is 1 (probability is a number between, and including, 0 and 1). The conclusion is true given these premises. Another way to say this is the conclusion is entailed by the premises (or evidence).

You are no doubt tempted to say that the probability of the conclusion is not 1, that is, that the conclusion is not certain, because, you say to yourself, statistics is nothing if not fun. But that would be missing the point. You are not free to add to the evidence (premises) given. You must assess the probability of the conclusion given only the evidence provided.

This argument is important because it shows you that there are things we can know to be true given certain evidence. Another way to say this, which is commonly used in statistics, is that the conclusion is true conditional on certain evidence.

(To read the rest, Chapter 1 is available at this link.)

April 28, 2008 | 20 Comments

Hitting or Pitching. Which wins more games?

By Tim Murray and William Briggs

You obviously need to score runs to win baseball games, and buying better hitters does this for a team. But you also need to keep your opponent from scoring too many runs, and buying better pitchers does this. Good, error-free, fielding, all other things being equal, will also help a team keep the runs scored against it low. Most teams cannot afford to buy both the best batters and the best hurlers, so they have to make decisions.

You’re the newly appointed manager for your favorite team. The roster is nearly made out, and you find you have money for one more player. You can buy a hitter to improve your team’s overall batting average (BA) or you can acquire a pitcher to lower your team’s earned run average (ERA). What do you do?

We decided to try to answer this question by looking at the complete data from the 2001 to the 2007 seasons for all teams in Major League Baseball. For each team, the number of regular season Wins, batting average, earned run average, number of errors, League (American or National), and total payroll were collected. We also counted the total runs scored for and allowed by each team, but since these statistics were so closely connected with batting average and earned run average, we don’t consider them further.

Payroll is obviously used to buy the players teams consider the best, though, as fans know to their grief, these purchases do not always work out. If winning more games were simply a matter of increasing the payroll, the New York Yankees would win every World Series. Thankfully, then, money isn’t everything.

But it is something. This picture shows the payroll by the number of wins, with each team receiving its own color (since this is for seven years, each team appears seven times on this, and all other, plots). The team to the far right in blue are the Yankees, far exceeding any other team in money spent. The club next to them in red are the Boston Red Sox. There is a huge difference in the amount of money spent between teams. The 2006 Florida Marlins spent the least at about $15 million but won a respectable 78 games. They were followed closely by Tampa Bay, which in 2001 spent about $20 million, only rising to $24 million by 2007. Their wins were steady at around 66.

[Figure: wins by payroll]

A horizontal line has been drawn in at 90 games to show that there is still an enormous range of team payrolls for clubs winning at least this impressive number of games. For example, the 2001 Oakland A’s spent only about $34 million to capture 102 games. They increased the payroll a mere $6 million the next year and won 103 games. Oakland, as documented in the book Moneyball by Michael Lewis, didn’t really drop much below 90 games until last year, winning only 76 games while spending the most they ever had, nearly $80 million.

While spending a lot does not guarantee winning the most games in any year, it does help. The Yankees, for example, never dropped below 94 games (in 2007). Boston was the second biggest spender, and it has helped them win at least 82 games a year. However, most teams cannot spend nearly as much as these two. Other teams must be grateful that money isn’t everything.

This second picture explains why money can’t necessarily buy happiness. Each of the three predictive statistics, BA, ERA, and Errors, is plotted against Payroll. A statistical (“nonparametric”) regression line is drawn on each to give a rough, semi-qualitative idea of the relationship of the variables. The signals go in the expected direction: larger payrolls mean, on average, higher BAs, lower ERAs, and lower numbers of Errors. But none of the signals are very strong.

[Figure: BA, ERA, and Errors by payroll]

To explain what we mean by that, pick any level of payroll, say $100 million. Then look at the scatter around that number (the points below and above the solid line). With BA, the scatter is just about as wide as the range of team batting averages in the data, which are .240 to .292. The same is true for both ERAs and Errors. Still, there is a general weak trend: spending more money does, very crudely, buy you a better team.

But not much better. For example, if you wanted to spend enough to be 90% sure of upping your team’s batting average 5 points (from the median of .268 to .273), you’d have to shell out an extra $50 million (this is after controlling for League, Errors, and team ERA). That’s a huge increase in team salaries. Even worse, the players you buy would have to have extraordinarily high batting averages to bring the entire team’s average 5 points higher. It’s the same story for ERA and Errors. The point is that predicting what players will do, paying more money for those you consider better, and getting that performance after you buy them is not just a tricky business, but an almost impossible one.

This still doesn’t answer what is better, in the sense of predicting more wins: hitting or pitching. Take a look at this picture:

[Figure: BA, ERA, and Errors frequency by League]

This shows fancy, souped-up, “histograms” (called density estimates) for the frequency of BA, ERA, and Errors by League. Higher areas on the graph, like a regular histogram, mean that number is more likely. For example, the most likely value of ERA for teams in the National League is just over 4.0.
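A density estimate of this sort can be sketched in a few lines: each data point contributes a small Gaussian bump, and the bumps are summed. The ERA values and bandwidth below are hypothetical, not the article’s data:

```python
import math

def kde(x, data, bandwidth):
    # Gaussian kernel density estimate evaluated at x: one bell curve
    # per data point, summed and normalized.
    norm = len(data) * bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((x - d) / bandwidth) ** 2) for d in data) / norm

eras = [3.87, 4.04, 4.21, 4.37, 4.55, 4.74, 5.01]  # hypothetical team ERAs
print(round(kde(4.3, eras, 0.3), 3))  # high: near the bulk of the data
print(round(kde(6.0, eras, 0.3), 3))  # low: far from the data
```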

It’s clear from these pictures that the American League teams have on average higher ERAs and BAs than do clubs in the National League. Obviously, the designated hitter rule for the American League accounts for most, if not all, of this difference. There doesn’t seem to be any real differences in Errors between the two Leagues, which makes sense. The League differences between ERA and BA have to be accounted for when answering our main question.

This next series of pictures shows there is even more complexity. The first is a plot, separated by League, of each team’s BA by ERA. There is some weak evidence that as ERA increases, BA drops, especially in the American (A) League, perhaps another remnant of the designated hitter effect. But this isn’t a very strong indicator.

[Figure: BA by ERA by League]

This next picture shows some stronger relationships. The top two panels, again separated by League, are plots of ERA (on the vertical axis) by Errors (on the horizontal axis): as ERA increases, so do numbers of Errors. Similarly for BA: as numbers of Errors increase, the batting averages of teams tend to decrease. All this evidence means that when a team is bad, it tends to be bad in all three dimensions, and when it is good, it tends to be good in all three dimensions. This is no surprise, of course, but we do have to control for these factors when answering our question.

[Figure: BA and ERA by Errors, by League]

We finally come to our main question, which we answer with a complicated statistical model, one which accounts for all the evidence we have so far demonstrated. The type of model we use accounts for the fact that the number of Wins is a discrete number, by which we mean the total Wins can be 97 or 98, say, but they cannot be 97.4. In technical terms, it is called a quasi-Poisson generalized linear model, a fancy phrase that means that the model is very like a linear regression model, about which you may have heard, but with some twists and extra knobs that allow us to control for our interacting factors and discrete response.
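To give a feel for what such a model does, without reproducing the fit, here is a minimal log-link sketch. The coefficients are made up for illustration and are not the article’s fitted values:

```python
import math

# A Poisson-family GLM with a log link predicts mean wins as
# exp(linear predictor). These coefficients are hypothetical.
b0, b_ba, b_era = 3.0, 6.0, -0.15

def predicted_wins(ba, era):
    return math.exp(b0 + b_ba * ba + b_era * era)

print(round(predicted_wins(0.266, 4.37)))  # a mid-range team
print(round(predicted_wins(0.272, 4.37)))  # better hitting -> more wins
print(round(predicted_wins(0.266, 4.04)))  # better pitching -> more wins
```

The log link guarantees the predicted mean is positive, which suits a count response like Wins; the quasi-Poisson twist mentioned above additionally lets the variance float free of the mean.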

The answer lies in these complicated-looking pictures. Let’s work through them slowly. First, only look at the top picture, which is the modeled, or predicted number of wins by various batting averages.

[Figure: Predicted wins]

There are two sets of three curves. The brownish is for the National League, and the blueish for the American. Now, in order to predict how many wins a team will have, we have to supply four things: their expected BA, ERA, number of errors, and League. That’s a lot of different numbers, so to simplify somewhat, we will fix the number of Errors at the median observed figure, which is 104. (Changing Errors barely changes the results.)

We still have to plug in a BA, ERA, and League in order to predict the number of wins. We first start by plugging in the BA over the range of observed values, but we still have to supply an ERA. In fact, we supply three different ERAs: the observed median, and first and third quartiles, which are: 4.04, 4.37, and 4.74. For the American League, these are the three blue curves: the top one corresponds to the lowest ERA of 4.04, the middle for the value of 4.37, and the bottom for the highest value of 4.74. To be clear: each point on these curves is the result of four variables: a BA, an ERA, a number of Errors, and a League. From these four variables, we predict the number of wins, which varies as the four variables do.

All of these curves sweep upwards, implying the obvious: higher BAs lead to more predicted Wins, regardless of ERA or League. At the lowest BAs, differences in ERA are the largest in the American League. This means that if your team is hitting very poorly, small variations in pitching account for large changes in the number of games won. To make sure you see this, focus on the very left-most points of the graph, where the BAs are the smallest. Then look at the three blue curves (American League): their three left-most points are widely separated. Moving from a team ERA of 4.74 to 4.04 increases the number of games won from 61 to 78, or 17 more a season, which is of course a lot. But when a team is batting well, while differences in ERA are still important, they are not as influential. These are the right-most blue points on the figure: notice how at the largest BAs, the three curves (again representing different ERAs) are very close together. If a team in the American League is batting very well, improvements in pitching do not account for very many more games won.

That is so for the American League, but perhaps surprisingly not for the National, where the opposite occurs. Differences in ERA are more important for high batting averages, but not as important for low ones: better pitching becomes more crucial as the team bats better. The brown curves spread out more for high BAs, and are tighter at low BAs.

Now let’s look at the bottom picture. This is the same sort of thing, but for the range of ERAs at three fixed levels of BA: .259, .266, and .272. The top curves are the highest BA, and the bottom curves the lowest. Looking first at the American League, we can see that when the team ERA is low, differences in BA do not account for much. In fact, when team ERAs are the lowest, improvements in batting in the American League make almost no difference at all! When team ERAs are high, changes in BA mean larger differences in numbers of games won: the spread between the blue lines increases as ERA increases.

Again, the situation is opposite for the National League: when the team ERA is low, changes in BA are more important than when team ERAs are high. In this league, when team ERAs are low, good batting can make a big difference in numbers of games won. But when ERAs are high, improvements in batting do not change the number of games won very much.

Once more, we point out that we can draw each of these three curves again for different numbers of Errors. We did so, but found that the differences between those curves and the ones we displayed were minimal, but not negligible: for example, adding a whopping 40 errors onto a team that ordinarily only commits 80, on average only costs them 2 games a season. Higher BAs or ERAs can mitigate this somewhat, from losing 2 games to only losing about 1 extra game a season. So while Errors are important, they are far from decisive factors in an overall season.

So what should you do?

Look again at the two plots. In the BA plot, the highest number of predicted wins, for a BA of .292 at the ERA of 4.04 (the lowest pictured), is about 104 games for National League teams, and about 100 for American League clubs. But the highest number of predicted wins, looking at the ERA plot, for teams with the lowest ERA of 3.13 and the BA of .272 (the highest pictured), is about 111 games for the National League and 107 games for the American. Conversely, back in the BA plot, those teams with the lowest BAs of .240 and high ERAs of 4.74 won only about 61 games in the American League and 67 in the National. While, in the ERA plot, teams with the worst ERAs of 5.71 and lowest BAs of .259 won only about 56 games in the American and 62 in the National.

Clearly, then, pitching is more important than batting overall: more games on average will be won by those clubs who have the lower ERAs than those teams with the higher BAs.

But that isn’t necessarily the answer to our question. Remember that you only have money for one more player. Should you recruit or trade for a better pitcher or batter? It depends on what kind of team you have now. Our team right now has a certain ERA, BA, and expected number of Errors, so what do we do? The final answer is in this last picture.

[Figure: Effects of ERA and BA]

This shows improvement, in either ERA (decreasing) or BA (increasing), on the bottom axis. The other axis shows, for each “unit” of improvement (0.05 for ERA, 0.001 for BA), the additional games won. These are the same, in essence, as the plots above, but they show the data in a different fashion (the same colors still represent the two leagues). The way this figure works is that you pick a certain point, say a BA of .266 or an ERA of 4.34 (which is the same point on the graph), then move one “unit” to the right on the horizontal axis and pick off the number of additional games won.

No matter where we are on the graph, ERA easily wins this race, in the sense that buying a better pitcher to improve the ERA wins more games than buying a better batter to improve the BA. This is true for either league. (These pictures are also concocted using the median values of ERA, BA, and Errors, as mentioned above: do not worry if you don’t understand this; the results do not change for the other values.)

So spend your money on the pitcher.

Tim Murray is a student at Central Michigan University. William Briggs is a statistician in New York City.