Book review

The limits of statistics: black swans and randomness

The author of Fooled by Randomness and The Black Swan, Nassim Nicholas Taleb, has penned the essay “The Fourth Quadrant: A Map of the Limits of Statistics” over at Edge.org (which I discovered via the indispensable Arts & Letters Daily).

Taleb’s central thesis and mine are nearly the same: “Statistics can fool you.” Or “People underestimate the probability of extreme events,” which is another way of saying that people are too sure of themselves. He blames the current Wall Street crisis on people misusing and misunderstanding probability and statistics:

This masquerade does not seem to come from statisticians—but from the commoditized, “me-too” users of the products. Professional statisticians can be remarkably introspective and self-critical. Recently, the American Statistical Association had a special panel session on the “black swan” concept at the annual Joint Statistical Meeting in Denver last August. They insistently made a distinction between the “statisticians” (those who deal with the subject itself and design the tools and methods) and those in other fields who pick up statistical tools from textbooks without really understanding them. For them it is a problem with statistical education and half-baked expertise. Alas, this category of blind users includes regulators and risk managers, whom I accuse of creating more risk than they reduce.

I wouldn’t go so far as Taleb: the masquerade often comes from classical statistics and statisticians, too. Many of the statistical methods taught to non-statisticians had their origin in the early and middle parts of the 20th century, before there was access to computers. In those days, it was rational to make gross approximations, assume uncertainty could always be quantified by normal distributions, and treat everything as linear. These simplifications allowed people to solve problems by hand. And, really, there was no other way to get an answer without them.

But everything is now different. The math is new, our understanding of what probability is has evolved, and everybody knows what computers can do. So, naturally, what we teach has changed to keep pace, right?

Not even close to right. Except for the modest introduction of computers to read in canned data sets, classes haven’t changed one bit. The old gross approximations still hold absolute sway. The programs on those computers are nothing more than implementations of the old routines that people did by hand; many professors still require their students to compute statistics by hand, just to make sure the results match what the computer spits out.

It’s rare to find an ex-student of a statistics course who didn’t hate it (“You’re a statican [sic]? I always hated statistics!” they say brightly). But it’s just as rare to find a person who took, in the distant past, one or two such courses and doesn’t fancy himself an expert (I can’t even count the number of medical journal editors who have told me my new methods were wrong). People get the idea that if they can figure out how to run the software, then they know all they need to.

Taleb makes the point that these users of packages necessarily take too limited a view of uncertainty. They seek out data that confirm their beliefs (this obviously is not confined to probability problems), fit standard distributions to them, and make pronouncements that dramatically underestimate the probability of rare events.
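To make that underestimation concrete, here is a minimal sketch (my illustration, not Taleb’s): fit a normal distribution, textbook-style, to data that are secretly heavy-tailed, and compare the tail probabilities.

```python
# A minimal sketch (my illustration, not Taleb's): fit a normal
# distribution to heavy-tailed data and watch it underestimate
# the probability of an extreme event.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Pretend these are daily returns; in truth they are Student-t
# with 3 degrees of freedom, i.e. heavy-tailed.
returns = stats.t.rvs(df=3, size=10_000, random_state=rng)

# The textbook move: fit a normal by matching mean and spread.
mu, sigma = returns.mean(), returns.std(ddof=1)

threshold = mu - 6 * sigma  # a severe, rare loss

p_normal = stats.norm.cdf(threshold, loc=mu, scale=sigma)
p_true = stats.t.cdf(threshold, df=3)

print(f"P(extreme loss) under the fitted normal: {p_normal:.1e}")
print(f"P(extreme loss) under the true t(3):     {p_true:.1e}")
# The fitted normal reports odds several orders of magnitude
# smaller than the heavy-tailed truth.
```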

Many times rare events cause little trouble (the probability that you walk on a particular blade of grass is very low, but when that happens, nothing happens), but sometimes they wreak havoc of the kind happening now with Lehman Brothers, AIG, WaMu, and on and on. Here, Taleb starts to mix up estimating probabilities (the “inverse problem”) with risk in his “Four Quadrants” metaphor. The two areas are separate: estimating the probability of an event is independent of what will happen if that event obtains. There are ways to marry the two areas in what is called Decision Analysis.
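For readers who have not seen it, a toy sketch of that marriage (all numbers invented): weight each event’s consequence by its probability, and the rare event can still dominate the answer.

```python
# A toy decision-analysis sketch (all numbers invented): combine
# each event's probability with its consequence.
events = {
    # event: (probability, dollar loss if the event obtains)
    "no upset":   (0.989, 0.0),
    "small loss": (0.010, 1e5),
    "black swan": (0.001, 1e9),
}

expected_loss = sum(p * loss for p, loss in events.values())
print(f"Expected loss: ${expected_loss:,.0f}")
# The black swan, at probability 0.001, contributes $1,000,000
# of the roughly $1,001,000 total: the rare event dominates.
```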

That is a minor criticism, though. I appreciate Taleb’s empirical attempt at creating a list of easy-, hard-, and difficult-to-estimate events along with their monetary consequences should the events happen (I have been trying to build such a list myself). Easy-to-estimate/small-consequence events (to Taleb) are simple bets, medical decisions, and so on. Hard-to-estimate/medium-consequence events are climatological upsets, insurance, and economics. Difficult-to-estimate/extreme-consequence events are societal upsets due to pandemics, leveraged portfolios, and other complex financial instruments. Taleb’s bias towards market events is obvious (he used to be a trader).

A difficulty with Taleb is that he writes poorly. His ideas are jumbled together, and it often appears that he was in such a hurry to gets the words on the page that he left half of them in his head. This is true of his books, too. His ideas are worth reading, though you have to put in some effort to understand him.

I don’t agree with some of his notions. He is overly swayed by “fractal power laws”. My experience is that people often see power laws where there are none. Power laws, and other fractal math, give appealing, pretty pictures that are too psychologically persuasive.
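To show how easily the eye is fooled, a small simulation of my own (not Taleb’s): lognormal data contain no power-law tail at all, yet their log-log survival plot traces a convincingly straight line.

```python
# A small simulation (mine, not Taleb's): lognormal data have no
# power-law tail, yet the log-log survival plot looks straight
# over a wide range, exactly the picture that fools the eye.
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=2.0, size=50_000)

# Empirical survival function P(X > x).
xs = np.sort(x)
surv = 1.0 - np.arange(1, len(xs) + 1) / len(xs)

# Fit a straight line to the middle of the log-log plot.
mask = (xs > 1) & (surv > 1e-3)
slope, _ = np.polyfit(np.log(xs[mask]), np.log(surv[mask]), deg=1)
print(f"Apparent power-law exponent: {slope:.2f}")
# A tidy-looking exponent appears even though, by construction,
# there is no power law here at all.
```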

That is a minor quibble, though. My major problem is philosophical. Taleb often states that “black swans”, i.e. extremely rare events of great consequence, are impossible to predict. Then he faults people, like Ben Bernanke, for failing to predict them. Well, you can’t predict what is impossible to predict, no? Taleb must understand this, because he often comes back to the theme that people underestimate the uncertainty of complex events. Knowing this, people should “expect the unexpected”, a phrase which is not meant glibly, but is a warning to “increase the area in the tails” of the probability distributions that are used to quantify uncertainty in events.
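One concrete reading of that warning, sketched under my own assumptions (this is not Taleb’s fractal method): replace the normal with a fatter-tailed Student-t of the same location and scale, and see how much more probability the tails carry.

```python
# One concrete reading of "increase the area in the tails" (my
# sketch, not Taleb's method): compare tail areas of a standard
# normal and a Student-t(3) with the same location and scale.
from scipy import stats

threshold = 4.0  # a "four sigma" event under the normal

p_norm = stats.norm.sf(threshold)   # ~3.2e-05
p_t3 = stats.t.sf(threshold, df=3)  # ~1.4e-02

print(f"P(X > 4) under the normal: {p_norm:.1e}")
print(f"P(X > 4) under t(3):       {p_t3:.1e}")
print(f"The t(3) tail holds about {p_t3 / p_norm:.0f} times more probability.")
```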

He claims to have invented ways of doing this using his fractal magic. Well, maybe he has. At the least, he’ll surely get rich charging good money to teach how his system works.

20 replies

  1. “His ideas are jumbled together, and it often appears that he was in such a hurry to gets the words on the page that he left half of them in his head.” And didn’t proof-read presumably.

    Sorry. Irresistible.

    Rich

  2. The black swan event in this case was a 10% decline in home prices. How was this so unexpected? Any trader knows that a parabolic rise in price ends in tears eventually. Witness the crash in oil and commodities, which are 30-60% off the highs.

    And if you are leveraged 50:1 almost any decline wipes you out. What were these guys thinking?

    I have little understanding of fractals and such but am thinking this subject is akin to the LTP and Hurst processes discussed over at CA recently, which I’ve only partially grasped. You are always easy to read and it’d be nice if you wrote more on this someday.

  3. Hi –

    I’ve seen Taleb’s stuff before, and I think he has hit on a more fundamental problem than merely statistics and ignorance.

    It’s vastly more a problem of understanding. When I take a new partner under my wing and work with them on understanding what they are supposed to be doing, I spend absolutely no time talking about econometric tests, the form of equations, or any other technical aspects of their jobs. Rather, we spend two days talking about nothing but model specifications, during which I try to disabuse them of the notion that the greatest thing about regression analysis is being able to data-mine large data sets (in the sense of “hmmm, that didn’t give me a good r^2 or DW result, let me try this… or this… or this…” until you have a lovely equation that means absolutely nothing).

    When doing industrial work, I don’t let them estimate anything until they have worked with input-output tables first and understood what interdependencies are and why it’s important to understand these fundamentally.

    This is what Taleb means when he says people don’t understand statistics. Statistics is a tool to get where you want to go, but far too many view it as an end in itself, and the rest view it as a way of manipulating raw data to justify what they wanted to do to begin with.

    Further, being able to quantify relationships and results doesn’t mean that you are beginning to understand them, let alone able to quantify anything like the risk involved.

    (I had a long example here but WordPress thinks it’s too spammy… sigh)

    Taleb does have problems in getting his ideas out, but his ideas are worth working out.

    To sum up: knowledge without understanding may be more dangerous to modern societies and economies than any other challenge since the Great Depression. That’s where Taleb’s work is a good start.

  4. The events at Lehman, etc. do not require better modeling of infrequent finance events. They are very much what one would expect based on a good reading of Brealey and Myers and an Econ 101 book. Maybe a few papers on agency effects and public choice theory.

    And Lehman is not a disaster. Nor is Fannie going broke a disaster. It was paper wealth, mostly. Taxpayers being on the hook for subordinated debt-holders at Fannie…that is a disaster.

  5. TCO:
    Would you care to expand on your last sentence? I am particularly interested in the words “not a disaster”.

  6. “Professional statisticians can be remarkably introspective and self-critical.”

    They’re just good at hiding it.

  7. I agree that Taleb is a rambling writer and therefore a more difficult read than he ought to be. But he is correct in his essential theme that our estimates of risk (defined as uncertainty of outcome) are systematically too small. The current failure of financial risk models is a good example; even Greenspan noted it.
    Taleb’s contiguous theme of the strength of confirmation bias is also important. It is confirmation bias PLUS underestimating risk that makes for a deadly combination. The AGW/climate issue is a powerful example of this.

    As for fractals, it is true that he has fallen in love with them, perhaps to a fault, but he does a service when he calls attention to them as a characteristic feature of complex systems. Too bad the climate community is neither trained in this area nor willing to learn about it. Witness, for example, the work of physicists Nicola Scafetta and Bruce West, who showed that the fluctuation spectra of solar activity and the earth’s climate have the identical fractal power law, making a prima facie case that the two are linked complex systems. This linkage is further demonstrated empirically by the clear correlation between solar activity and global average temperature. Yet the IPCC and most of the climate community continue to maintain that the effect of the sun on global climate is negligible because computer climate models do not predict it. And it’s too late for them to back down, much to the detriment of their credibility.

  8. Briggs

    I am not a statistician. However, I have been required to study statistics in my Physics degree and my MBA. Having discovered your web site, how I wish my lecturers were more like you in presentation.

    I particularly like this statement in your piece:

    “The two areas are separate: estimating the probability of an event is independent of what will happen if that event obtains”

    How true. In my career as an Environmental Manager for Resources projects I have often found myself arguing with regulators who think they are experts on risk. In the late 1990s, in desperation, I defined types of Environmental Risk (I don’t claim to be the first to use the terms, but I haven’t found anyone else to substantiate a claim to have used them earlier than I did). My terms were:

    Primary Risk – the probability that an event could happen (e.g. an accident on an oil well)
    Secondary Risk – the probability that damage will occur at the accident site (e.g. an oil spill)
    Tertiary Risk – the probability that damage will occur at a location remote from the accident site (e.g. an oil slick migrates across the ocean to a reef system)
    Quaternary Risk – the probability of recovery at an impacted site.

    All too often the regulators took the Primary Risk as the Tertiary Risk and could not (at least in my experience) ever understand the concept of Quaternary Risk.
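    To see how the four levels chain together, here is a minimal sketch (all probabilities invented for illustration):

```python
# A sketch of how the four risk levels chain together
# (all probabilities invented for illustration).
p_primary = 0.01     # accident on the oil well
p_secondary = 0.50   # damage at the site, given an accident
p_tertiary = 0.10    # slick reaches the reef, given site damage
p_recovery = 0.80    # reef recovers, given it was impacted

p_reef_impacted = p_primary * p_secondary * p_tertiary
p_lasting_damage = p_reef_impacted * (1 - p_recovery)

print(f"P(reef impacted):       {p_reef_impacted:.5f}")   # 0.00050
print(f"P(lasting reef damage): {p_lasting_damage:.5f}")  # 0.00010
# Reading the Primary Risk (0.01) as if it were the Tertiary
# Risk overstates the remote-site danger by a factor of 20 here.
```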

    Thank you for your web site – I am a regular visitor learning much from your postings.

    PS The Black Swan is our State Emblem down here.

  9. Harry,

    Thanks for the kind words.

    Logic was dealt a hard blow, self-inflicted at that, upon the discovery of your black swans. It is only now starting to recover.

  10. Briggs

    That is a hard blow for logic – it is well over 200 years since the Black Swan was discovered.

    I note Taleb is quoted by you as follows:

    “black swans”, i.e. extremely rare events of great consequence, are impossible to predict

    My football team here is the Swans, named for the Swan River upon which the Black Swans were (supposedly) first found. Methinks Taleb is right: the Swans have won precious few grand finals in their time in the competition and are impossible to predict for a win. They are not favourites for this weekend’s Grand Final. I think the probability of them coming second is of the order of 0.65–0.7, much to my late Great Uncle’s chagrin (he was an “original Swan” in the team).
    I am predicting no event of great consequence for my team this weekend.

  11. Coincidentally, I just finished listening to the 14 1/2 hour audiobook “The Black Swan”. It was given to me by an Emory medical student.

    You are right that Taleb’s writing is a bit jumbled, because more than once I had to re-listen to portions of his book. Even at that, it is a good one.

    I, too, would like to learn his methods of mitigating risk of Black Swans on investment portfolios.

    My sister is holding a Lehman Bros. bond bought over five years ago when it was A rated. Any suggestion that the loss will just be a paper loss is mistaken. The interest payments have ceased, and there are questions as to how much of the principal can be recovered. The losses are real.

  12. Estimating the probability of extreme events based on statistical distributions assumes that the phenomena involved are stochastic. Some phenomena are, I suppose, but the eventual financial collapse of highly leveraged investment banks is more or less a given. The probability is 100 percent. Chance has nothing to do with it, any more than chance is to blame when your printer breaks the day after the warranty period passes.

    Murphy did not propose a theory; he stated a Law. There are, in fact, a great many things we can have certainty about. Death and taxes are the classic cases, but a great many more are out there.

  13. Briggs–
    Some of the oldest methods are still useful for specific applications. We do still need to calibrate instruments; the good instruments have relatively linear responses over their range of operation. (Or you can transform the response to look linear in a transform domain.)

    There are still many physical processes that can be usefully described using linear approximations. (e.g. Lift vs. angle of attack on a wing at small angle of attack– which is the useful region of operation.)

    The old fashioned methods still permit people to get useful consistent answers without arguing about approaches. Some methods are useful for trouble shooting, process control and what-not. Some are good for experiment design.

    So, of course these older methods need to be taught. (BTW, the approach to experimental design is, whenever possible, to take data so you can be confident you measured what you wanted to measure at the necessary precision, without having to tease out the answer using “fancy” statistics. This isn’t always possible, but when it is, it’s a blessing.)
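    As a minimal example of the kind of old-fashioned method meant here (instrument and readings invented): a straight-line calibration fitted by ordinary least squares, then inverted to correct raw readings.

```python
# A straight-line calibration by ordinary least squares
# (instrument and readings invented for illustration).
import numpy as np

reference = np.array([0.0, 10.0, 20.0, 30.0, 40.0])  # known inputs
reading   = np.array([0.3, 10.1, 20.4, 29.8, 40.2])  # raw outputs

# Fit reading = gain * reference + offset.
gain, offset = np.polyfit(reference, reading, deg=1)
print(f"gain = {gain:.4f}, offset = {offset:.4f}")

def calibrate(raw):
    """Invert the fit to correct a future raw reading."""
    return (raw - offset) / gain

print(f"corrected 20.4 -> {calibrate(20.4):.2f}")
```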

  14. Lucia,

    Assuming linearity is often OK, as you suggest, especially for physical processes. But it’s often, or even nearly always, wrong for human “processes”.

    Anyway, linearity is a small problem. Much worse is assuming that the uncertainty in most things can be quantified using normal distributions. Not only is that obviously wrong on its face; worse, statistics is geared up only to say something about unobservable parameters, so in the end everybody walks away too sure of themselves.

  15. I thought that Decision Analysis was the point of the article’s “Map”.

    My understanding from Taleb’s book is that being unprepared for the black swan is the behavior that gets punished consistently. In that context I took it that he wasn’t critical because Bernanke didn’t foresee the event. Rather, the Feds are the very experts who should be helping us prepare for the black swan. Instead, they seemed to be pretending the inevitable wouldn’t happen.

    Boy Scout motto: “Be prepared”.

  16. “Almost no one expected what was coming. It’s not fair to blame us for not predicting the unthinkable.” (Daniel H. Mudd, former chief executive, Fannie Mae)

    From this article:
    http://www.nytimes.com/2008/10/05/business/05fannie.html?pagewanted=1&_r=1&hp

    Many of us must ‘think the unthinkable’ and design to those points. Highly unlikely events with enormously significant consequences are the normal focus points in some industries.

    I’m certain the Senate’s reviews of the recent occurrence of a highly unlikely event will conclude that everyone should focus on the enormously significant consequences in the future.

    BTW, the article says this about Mr. Mudd:
    “When the mortgage giant Fannie Mae recruited Daniel H. Mudd, he told a friend he wanted to work for an altruistic business.”

    And then says this:
    “Mr. Mudd collected more than $10 million in his first four years at Fannie.”

    That’s the kind of altruistic business I want to tap into.

  17. You said, “He is overly swayed by ‘fractal power laws’. My experience is that people often see power laws where there are none.” Have you read Benoit Mandelbrot’s book “The Misbehavior of Markets”? Another on the same topic is Per Bak’s “How Nature Works: The Science of Self-Organized Criticality”. I wonder what you have to say about the differences Mandelbrot discusses between evaluating risk based on a random walk model as opposed to a power law model.
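    A small simulation of the contrast in question (my construction, not Mandelbrot’s): compare extreme one-step moves under Gaussian increments with those under power-law (Pareto-tailed) increments of matched scale.

```python
# A small simulation (my construction, not Mandelbrot's): compare
# extreme one-step moves under Gaussian increments versus
# power-law (Pareto-tailed) increments of matched scale.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000

gaussian = rng.standard_normal(n)
# Symmetric steps with a power-law tail (exponent ~3), rescaled
# to the same standard deviation as the Gaussian steps.
pareto = rng.pareto(3.0, size=n) * rng.choice([-1.0, 1.0], size=n)
pareto *= gaussian.std() / pareto.std()

for name, steps in [("gaussian", gaussian), ("power law", pareto)]:
    n_extreme = int((np.abs(steps) > 4 * steps.std()).sum())
    print(f"{name:>9}: worst step = {steps.min():7.2f}, "
          f"steps beyond 4 sd = {n_extreme}")
# Under the Gaussian model a 4-sd move almost never happens;
# under the power-law model it shows up dozens of times.
```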
