William M. Briggs

Statistician to the Stars!


Four Chords Is All You Need: The Limited Nature of Pop Music

Coming tomorrow: the infamous two-envelope problem, solved! More mathematical constructivism. But today, as it’s Sunday, something light and airy…and non-taxing.

A “comedy rock” group which bills itself as the Axis of Awesome has independently discovered the Musical Badness measure. Recall, the Musical Badness measure says repetitiveness makes for poor music. This may be repetitiveness within a song, or even across a genre.

The Axis of Awesome have researched assiduously and found that the most popular pop music has only ever employed four chords, and no others. Just four, and the same four in each song; perhaps, but not likely, occurring in a different order.

About two dozen of these “hits” are sung in the following video (which, I must warn you, uses bad language twice). Beatles fans will want to pay attention at the 2:43′ mark.



“That’s all it takes to be a star” indeed!

One thing that makes pop music difficult to appreciate—a.k.a. bad—is that so much of it sounds the same. “Not true, Briggs, you fool!” you retort. “I don’t know about your tin ear, but I can clearly tell the difference between one song and the next.”

Well, as a matter of fact, so can I. But what is it that makes one song distinct from another? Given the research of the Axis of Awesome, it can only be two things: the lyrics and the voice of the singer. That is, the distinctiveness of that voice.

It may well be that the ear, when hearing yet one more four-chord-progression song, is so hungry for something new that it, in concert with the brain, inflates the significance of the singer’s voice. The song becomes that singer’s voice, as long as the lyrics are catchy.

This might be why pop songs always sound off except when sung by the original voice. The songs even sound false when sung by the same singer but heard live, if you first came to know the song from a studio recording; or the opposite, if you first heard it live.

Note that this is “heard” live and not “witnessed” live: being there in person obviously affects the experience.

This curiosity does not affect music from Mozart, say, or Bach. In “classical” music, we are rewarded with complexity and richness and so our minds are directed towards the music itself, and not to anything extraneous.

Like a video, or gossip about the band, or memories of what you were doing at the time you heard the song. This can explain why the pop music of our teen years sounds good, but as we age newer songs sound progressively worse.

We are told constantly that the mark of good science is replication. A discovery that cannot be duplicated is suspect. We are right, therefore, to ask whether the Axis of Awesome’s research has been verified. It has.

Several years ago, the ground was laid on this subject by Rob Paravonian, another comedian, who noticed that much of pop music merely duplicated Pachelbel’s Canon in D. His seminal paper was entitled “Pachelbel Rant,” which can be viewed here. Anxious readers can skip to the 2:18′ mark, which is where the meat begins.

Another founder, Space City Marc, building on the Axis of Awesome, has given us a classification of the four-chord song, which he calls the “Six Four One Five: Sensitive Female Chord Progression (SFCP).”

It’s any chord progression that starts with the minor six (vi) and then moves to the major four (IV), the major one (I), and the major five (V). Ideally, it would then repeat. As an example, an SFCP starting on A minor would be Am-F-C-G (that is, vi-IV-I-V in the key of C major).
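For the curious, here is a minimal sketch of my own (not anything from Space City Marc or the Axis of Awesome) that transposes the vi-IV-I-V recipe into any major key. Note names are spelled with sharps only, so flat keys come out with enharmonic chord names.

    # A toy transposer for the vi-IV-I-V "Sensitive Female Chord Progression".
    # Illustration only; note names are spelled with sharps.
    NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of scale degrees 1..7

    def sfcp(key: str) -> list[str]:
        """Return the vi-IV-I-V progression for the given major key."""
        root = NOTES.index(key)

        def note(degree: int) -> str:
            return NOTES[(root + MAJOR_STEPS[degree - 1]) % 12]

        return [note(6) + "m", note(4), note(1), note(5)]  # vi is minor; the rest are major

    print(sfcp("C"))  # ['Am', 'F', 'C', 'G']
    print(sfcp("G"))  # ['Em', 'C', 'G', 'D']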

Audio examples of the SFCP can be found in the fundamental paper “Striking a chord.” Space City Marc is clearly anxious to be “non-judgmental”, however, and takes pains to tell us that repetitiveness isn’t bad because, well, it’s used so often.

More seriously, we have philosopher Roger Scruton, who said,

Countless pop songs give us permutations of the same stock phrases, diatonic or pentatonic, but kept together not by any intrinsic power of adhesion but only by a plodding rhythmical backing and banal sequence of chords. This example from Ozzy Osbourne illustrates what I have in mind: no point in copyrighting this tune, though no point in suing for breach of copyright either.

The Osbourne tune may be found linked in Scruton’s “Soul Music”.

Scruton has lots to say, and warns us that

there is growing, within popular music, another kind of practice altogether, one in which the movement is no longer contained in the musical line but exported to a place outside it, to a center of pulsation which demands not that you listen but that you submit.

Hat tip to Dvorak Uncensored.



St Petersburg Paradox; Games and Statistical Decisions; RIP David Blackwell

David Blackwell, who died two weeks ago, was one of the first mainstream statisticians to “go Bayesian.” And for that and his unique skill in clearly explaining difficult ideas, we owe him plenty.

Blackwell handed in his slide rule at the grand age of 91. A good run!

He worked on cool problems. From his Times obituary, “His fascination with game theory, for example, prompted him to investigate the mathematics of bluffing and to develop a theory on the optimal moment for an advancing duelist to open fire.”

If that isn’t slick—and useful!—I don’t know what is. Of course it’s useful; because it doesn’t have to be two guys facing off with pistols, it can be two tank columns facing off with depleted uranium rounds.

One of the big reasons statisticians started the switch to Bayesian theory, or at least accorded it respect, is that it is aptly suited to decision theory, which Blackwell (with Girshick) explicated neatly in their to-be-read book Theory of Games and Statistical Decisions. I encourage you to buy this book: you can pick up a copy for as little as three bucks.

A classic decision analysis problem of the sort Blackwell examined is this.

St Petersburg Paradox

The estimable Daniel Bernoulli gave us this problem, one of the first creations of decision theory. You have to pay a certain amount of money to play the following game:

A pot starts out with one dollar. A coin is then tossed. If a head shows, the amount in the pot is doubled and the coin is tossed again; if a tail shows, the game is over and you win whatever is in the pot. The flipping continues, the pot doubling with each head, until that first tail appears. How much should you pay to play?

Suppose you pay ten bucks and the coin shows a tail the very first throw. You win the dollar in the pot, but it costs you a bundle. You won’t make any money unless a tail waits until at least the fifth throw.

The standard solution begins by introducing the idea of expected value. This is usually a misnomer, because the “expected” value is often one that you do not expect, or is even impossible. Its formal definition is this: the sum of every value that can happen, each multiplied by the probability that it happens.

For example, the expected value of a die roll is:

    EV = (1/6)*1 + (1/6)*2 + (1/6)*3 + (1/6)*4 + (1/6)*5 + (1/6)*6 = 3.5,

where 1/6 is the probability of seeing any particular face. This says we “expect” to see 3.5, which is impossible. The dodge we introduce is to turn the die roll into a game that can be played “indefinitely.” Suppose you win one dollar for every spot that shows. Then, for example, if a 5 shows you win $5.

If you were to play the die game “indefinitely” the average amount won per game would converge to 3.5, and seeing an average of 3.5 is certainly possible. For instance, you win $6 on the first roll and $1 on the second, for an average of $3.5 per roll. However, expected value is the average after you play a number of games that approaches infinity.
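If you don’t trust the arithmetic, a quick way to see this convergence is to simulate the die game. The sketch below is my own illustration; it just averages a pile of simulated rolls.

    # Simulate the one-dollar-per-spot die game and watch the running average
    # approach the expected value of 3.5 (which no single roll can ever pay).
    import random

    random.seed(1)
    for n in (100, 10_000, 1_000_000):
        winnings = sum(random.randint(1, 6) for _ in range(n))
        print(f"average after {n:>9,} rolls: {winnings / n:.3f}")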

We can now apply expected value to the St Petersburg game:

    EV = (1/2)*1 + (1/4)*2 + (1/8)*4 + … = infinity.

There’s a 1/2 chance of winning $1, a 1/2 * 1/2 = 1/4 chance of winning $2 (we see a tail on the second throw), a 1/2 * 1/2 * 1/2 = 1/8 chance of winning $4 (a tail on the third throw), and so on. Each term in the sum is exactly 1/2, so those of you who have had a “pre-calculus” course will quickly see that this sum approaches infinity.

Yes, that’s right. The “expected” amount you win is infinite. Therefore, this being true, you should be willing to pay any finite sum to play! If you’re convinced, please email me your credit card number and we’ll have a go.
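You can convince yourself of the oddness of an infinite expectation with a simulation. This is a sketch of my own, not part of the classical treatment: unlike the die game above, the average payout refuses to settle down.

    # Simulate the St Petersburg game: the pot starts at $1, doubles on every
    # head, and the first tail ends the game and pays out the pot.
    import random

    def play() -> int:
        pot = 1
        while random.random() < 0.5:  # head with probability 1/2
            pot *= 2
        return pot

    random.seed(2)
    for n in (1_000, 100_000, 1_000_000):
        avg = sum(play() for _ in range(n)) / n
        print(f"average payout over {n:>9,} games: ${avg:,.2f}")
    # The averages tend to drift upward as n grows, rather than settling
    # near any fixed value.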

The classical solution to this “paradox” is to assume that your valuation of money is different than its face value. For example, if you already have a million, adding 10 bucks is trivial. But if you have nothing, then 10 bucks is the difference between eating and going hungry. Thus, the more you have, the less more is worth.

Through calculus, you can use a down-weighted money function to give less value to the absurdly high possibilities in the St Petersburg game. So instead of treating at face value the $2^100 (roughly 10^30 dollars) you would win if a tail didn’t show until the 100th toss, an event with chance 1/2^100 ≈ 8 x 10^-31, you say that amount is worth only a vanishingly small fraction of it.

Whatever down-weighting function is used (usually some form of log(money)), calculus can supply the result, which is that the expected value becomes finite. The results are usually in the single-dollars range; that is, the calculus typically shows the expected value to be anywhere from $2 to $10, which is the amount you should be willing to pay.
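As a rough check on that claim, here is a sketch using the log down-weighting the post mentions (the particular utility function is my assumption): compute the expected utility of the game and convert it back into dollars.

    # Expected log-utility of the St Petersburg game, converted back into a
    # dollar figure (the "certainty equivalent"). With log down-weighting the
    # fair price comes out near the bottom of the $2-$10 range.
    import math

    expected_utility = sum(0.5 ** k * math.log(2 ** (k - 1)) for k in range(1, 200))
    fair_price = math.exp(expected_utility)
    print(f"fair price under log down-weighting: ${fair_price:.2f}")  # about $2.00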

The real solution is to assume what is true: the amount of money is not infinite! Using only physically realizable finite banks, we know the pot can never exceed some fixed amount.

If that amount is, say, $1 billion, then the number of flips can never exceed 30. The expected value, ignoring down-weighting, of 30 flips is only 30 * $0.50 = $15, since each term in the sum contributes exactly fifty cents. And we can, if we like, even include the down-weighting! (Even $1 trillion gives only a maximum of 40 tosses, with expected value $20!)
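A short sketch of my own verifies the arithmetic: truncate the series at the largest pot the bank can cover, and each surviving term contributes exactly fifty cents.

    # Expected value of the St Petersburg game when the pot can never exceed
    # the bank, so the series is truncated at the last affordable payoff.
    def capped_ev(bank: float) -> float:
        ev, k = 0.0, 1
        while 2 ** (k - 1) <= bank:      # a tail on toss k pays 2^(k-1) dollars
            ev += 2 ** (k - 1) / 2 ** k  # ...and each such term is worth $0.50
            k += 1
        return ev

    print(capped_ev(1e9))   # 15.0  (30 affordable tosses)
    print(capped_ev(1e12))  # 20.0  (40 affordable tosses)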

Thus, the St Petersburg “paradox”, like all paradoxes, never was. It was a figment of our creation, a puzzle concocted with premises we knew were false.

More on finitism, or mathematical constructivism, here.



How to Fool Yourself—And Others—With Statistics

See the news box to the left. I wrote this long ago and never used it. I do not love it. But since I am so busy, I haven’t the time to write something new. Feel free to disparage.

Remember how much you hated your college statistics course? It made little sense. It was confusing, even nonsensical. It was an endless stream of meaningless, hard-to-remember formulas.

All that is true—it was awful—but you were wrong to hate it. Because it has been a balm and a boon to mankind, especially to researchers in need of a paper. Publish or perish rules academia, and no other tool has been as useful in generating papers as statistics has.

Statistics is so powerful that it can create positive results in nearly any situation, including those in which it shouldn’t. For example, this week we read in the newspaper that “statistics show mineral X” is good for you, only to read next week that “statistics show” it isn’t. How can statistics be used to simultaneously prove and disprove the same theory? Easy.

But first note that I am talking about statistics as she is practiced by the unwary or unscrupulous. Statisticians themselves, as everybody knows, are the most conscientious and honest bunch of people on the planet.

How to prove your theory

Step 1: Start with a theory or hypothesis you want to be true.

Step 2: Gather data that might be related to that theory; more is better.

Step 3: Choose a probability model for that data. Remember the “bell-shaped curve”? That’s a model, one of hundreds at your disposal.

Step 4: These models have knobs called parameters which are tuned—via complex mathematics—so that the model fits.

Step 5: Now it gets tricky. Pick a test from that set of formulae you were made to memorize. This test must say how your theory relates to the model’s parameters. For example, you might declare, “If my theory is true, then this certain knob cannot be set to zero.” The test then calculates a statistic, which is some mathematical function of your data.

You then calculate the probability of seeing a statistic as large as, or larger than, the one you just calculated, given that the relevant knob is set to zero. That is, the test says how unusual the observed statistic would be if the knob really were at zero, and if the model you picked is correct.

You might dimly recall that the result of this calculation is called a p-value. Its true definition is so difficult to remember that nobody can remember it. What people do remember is that a small one—less than 0.05—is good.

If that level is reached, you’re allowed to declare statistical significance. This is not the same as saying your theory is true, but nobody remembers that, either. Significance is vaguely meaningful only if both the model and the test used are true and optimal. It gives no indication of the truth or falsity of any theory.

Statistical significance is easy to find in nearly any set of data. Remember that we can choose our model. If the first doesn’t give joy, pick another and it might. And we can keep going until one does.

We also must pick a test. If the first doesn’t offer “significance”, you can try more until you find one that does. Better, each test can be tried for each model.

If that sounds like too much work, there’s a trick. Due to a quirk in statistical theory, for any model and any test, statistical “significance” is guaranteed as long as you collect enough data. Once the sample size reaches a critical level, small p-values practically rain from the data.
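To see the big-sample effect in action, here is a toy simulation of my own: two groups whose means differ by a hundredth of a standard deviation, compared with a plain z-test (my choice of test for the illustration).

    # A trivially small difference between two groups becomes "significant"
    # once the sample is big enough, even though the effect never changes.
    import math
    import random

    def two_sample_p(a, b):
        """Two-sided z-test p-value for a difference in means (fine for large n)."""
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        z = (ma - mb) / math.sqrt(va / na + vb / nb)
        return math.erfc(abs(z) / math.sqrt(2))

    random.seed(3)
    for n in (100, 10_000, 1_000_000):
        a = [random.gauss(0.00, 1) for _ in range(n)]
        b = [random.gauss(0.01, 1) for _ in range(n)]  # 1/100 of a standard deviation
        print(f"n = {n:>9,}: p = {two_sample_p(a, b):.4g}")
    # The underlying difference is fixed and tiny; only the sample size grows,
    # yet the p-value eventually collapses below 0.05.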

But if you’re impatient, you can try subgroup analysis. This is where you pick your way through the data, keeping only what’s pretty, trying various tests and models until such a time as you find a small p-value.
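And here is a sketch of the subgroup trick (again my own toy, not any particular study): slice pure noise into enough arbitrary subgroups and, more often than not, at least one of them obliges with a small p-value.

    # Pure noise, sliced into 20 arbitrary "subgroups" and tested for a nonzero
    # mean; with 20 looks, the chance that at least one p-value dips below 0.05
    # is 1 - 0.95**20, roughly 64 percent.
    import math
    import random

    def one_sample_p(x):
        """Two-sided z-test p-value for 'the mean is zero'."""
        n, m = len(x), sum(x) / len(x)
        s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
        z = m / (s / math.sqrt(n))
        return math.erfc(abs(z) / math.sqrt(2))

    random.seed(4)
    data = [random.gauss(0, 1) for _ in range(2000)]  # no effect anywhere, by construction
    subgroups = [data[i::20] for i in range(20)]
    best = min(one_sample_p(g) for g in subgroups)
    print(f"smallest of 20 subgroup p-values: {best:.3f}")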

The lesson is that it takes a dull researcher not to be able to find statistical “significance” somewhere in his data.

Boston Scientific

About two years ago the Wall Street Journal (registration required) investigated the statistical practices of Boston Scientific, which had just introduced a new stent called the Taxus Liberte.

Boston Scientific did the proper study to show the stent worked, but analyzed their data using an unfamiliar test that gave them a p-value of 0.049, which counts as statistically significant.

The WSJ re-examined the data, but used different tests (with the same model). Their tests gave p-values from 0.051 to about 0.054, which are, by custom, not statistically significant.

Real money is involved, because if “significance” isn’t reached, Boston Scientific can’t sell their stents. But the WSJ is quibbling, because there is no real-life difference between 0.049 and 0.051. P-values do not answer the only question of interest: does the stent work?

The moral of the story

No theory should be believed because a statistical model reached “significance” on a set of already-observed data. What makes a theory useful is that it can predict accurately never-before-observed data.

Statistics can be used for these predictions, but it almost never is.

I think predictions are avoided on the principle that when ignorance is bliss, ’tis folly to know that your theory can’t be published.

Incidentally, we statisticians have heard every version of “liars figure”, “damned lies”, etc., so you’ll pardon me for not chuckling when, in response, you whip out your Disraeli.

Update If you thought this post was bad, you might try watching this video (I can think of at least two good reasons to): A Strange Tale About Probability.



US Military Fatality Statistics: Have Suicides Increased?

All statistics were gathered from this DOD site. 2010 numbers were current as of 5 June this year; they were not part of the plots below.

In 1983, the year in which yours truly entered into servitude with his Uncle Sam, the United States military fatality rate was 0.1%. That is, roughly 1 out of every 1000 service members handed in their helmets early. This rate dropped rapidly, reaching a low in 2000, when about 1 out of every 10,000 died in uniform.

That rate, as is unfortunately to be expected, has risen since the start of the last two wars. There was also an upward blip during the First Gulf War, as shown in this picture (from 1980 to 2009):

[Figure: US Military Fatalities, overall rate, 1980–2009]

Interestingly, the rate for the last two years, even though we are engaged in two wars, is less than it was at the start of the 1980s.

The 2010 data include only deaths through 5 June, but the rate so far is 3 out of every 10,000, and is projected to be lower than in 2009.

The distinct causes of death are more interesting, as shown in this picture:

[Figure: US Military Fatalities by type of death]

Accident fatalities have dropped dramatically, resurging during the surge, then tempering back to their low levels, presumably after the new recruits were trained.

Naturally, the fatality rate due to hostilities increased by a bunch with the start of the wars. But just look at that rapid fall off after the surge in 2007! Fatalities in 2008 and 2009 were about 2 out of every 10,000, a halving from the previous year.

You might have heard other reports from on high, but as this picture shows quite dramatically, the surge worked.

The homicide rate of the United States—as a whole—is about 5.4 per 100,000, a rate which has held somewhat steady over the last decade. But the homicide rate for the military has been—and still is—lower than in the population.

In 2009, the military rate, while high, was still lower than in the population. And, so far in 2010, it looks to be returning to a level of about 2 per 100,000.

Deaths due to illness also decreased until the wars, as might be expected. Well, maybe not as expected. The drop from 1980 to 2000 is itself interesting and noteworthy. The rate, even in 2009, is still lower than the US population. (But this might be due to age: military members are relatively young and the young do not die from disease at the same rate as the old, of course. I do not have time to compile age-matched statistics to compare.)

The most unfortunate, and non-ignorable, statistic is the suicide fatality rate, or “Self Induced Fatalities”, to put it in government-speak. It might not seem odd that this rate has increased during the current wars, but then why the increase in the mid-1990s? After all, the First Gulf War was already over four years before that rate peaked. I don’t have an answer.

The good news is that the current rate, as of 5 June, looks to be coming in at half last year’s rate. The other item of significant note is that the current high rate (as of 2009) puts the military on roughly equal terms with the population as a whole. That is, for most years, the suicide rate for service members is lower than in the population.

Finally, the fatality rate due to terrorism is low. The Marine barracks bombing in Lebanon sticks out and cannot be ignored, but apparently the terrorist attack of Maj. Nidal Malik Hasan, who shot up Fort Hood in the name of a belief which shall remain nameless, did not count.

Hasan murdered 13 in what all but members of the current administration call a terrorist attack. Yet the official statistics say “0” died in 2009 due to terrorism. Surely an oversight?

