
Taleb’s Curious Views On Probability — Part III: Ergodicity & All That

Read Part I, Part II

The word ergodic has a technical definition in probability. Without going into the mathematical details (which are fine, except possibly when applied), a “sequence” is defined as a run of measurements of some observable. A sub-sequence is a portion of the sequence.

Here is where belief that probability is ontic causes trouble. First, no real sequence is of infinite length, thus no sub-sequence can be infinite. The observations are measurements, as said, of real things, say, stock prices. The measurements do not possess any properties beyond those in the things themselves, i.e. prices of stocks. The measurements do not have a mean in the sense of a parameter from a probability model; of course, arithmetic averages can be calculated from any observed sequence. But the measurements do not possess any parameter from any probability distribution that may be used to represent uncertainty in them. The measurements do not possess probability. This we learned in Part I.

With me?

Ergodic, or ergodicity, is the property that any sub-sequence of the measurements possesses the same probability characteristics as the entire sequence, or as other sub-sequences. Since no real sequence possesses any probability characteristics in any ontic sense, the term is of no use in reality, however useful it might be in imagining infinite sequences of mathematical objects.
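To see what the term buys you inside the mathematics, and only there, here is a minimal sketch in Python. Everything in it is an assumption: the stationary model, the parameter values, the simulation itself. For a sequence generated from such a model, sub-sequence averages agree with the whole-sequence average; that agreement is a property of the model, not of any real measurements.

```python
# A minimal sketch of "ergodic" inside a probability model: for a stationary,
# ergodic process, time averages over long sub-sequences agree with the
# average over the whole sequence and with the model's expectation. Every
# number here is simulated from an assumed model; nothing is real data.
import numpy as np

rng = np.random.default_rng(42)

# Assumed model M: a stationary AR(1) process x_t = phi * x_{t-1} + e_t,
# with e_t ~ Normal(0, 1); its theoretical mean is 0.
phi, n = 0.5, 100_000
e = rng.normal(size=n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

print("whole-sequence mean:", x.mean())
print("first-half mean:    ", x[: n // 2].mean())
print("second-half mean:   ", x[n // 2 :].mean())
# The three averages are close to one another and to 0 -- but only because
# the sequence came from the model. A finite run of real stock prices carries
# no such guarantee; the ergodicity lives in M, not in the measurements.
```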

We might find some use for ergodicity, rescue it as it were, in the following way. A set of assumptions M, i.e. a model, is used to make predictions of a sequence up to some point t. After t, we might amend these assumptions, say to Mt, and make new predictions. Why the change at t? Only because some new assumption (or observation, etc.) has impinged upon your mind.

Example: Use M for stock price y; at time t, the stock splits, and so M is amended to Mt to incorporate knowledge of the split. If M ever changes (because your assumptions, premises, etc. do), however often, through time, in practice we do not have ergodicity. In this sense, ergodicity is just like probability in being purely epistemic. But since we know we changed M, we don’t need to label that change “ergodic activity at time t”.
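Here is a minimal sketch of that split example, with made-up prices and a deliberately naive prediction rule standing in for M; the 2-for-1 split, its timing, and the rule are all assumptions for illustration, not anybody's trading model.

```python
# A toy sketch of amending assumptions M to Mt at a (hypothetical) stock split.
# M is the naive rule "predict tomorrow's price as today's price". Mt is the
# same rule plus the new premise "quotes after time t are post-split".
prices = [100.0, 102.0, 101.0, 103.0,   # pre-split quotes (made up)
          51.5, 52.0, 51.0]             # post-split quotes (2-for-1 at index 4)
split_at, split_factor = 4, 2.0

def predict_M(today):
    # Model M: no knowledge of the split.
    return today

def predict_Mt(today, index):
    # Model Mt: same rule, but the split premise puts the prediction for the
    # first post-split quote on the post-split scale.
    if index == split_at - 1:
        return today / split_factor
    return today

for i in range(len(prices) - 1):
    print(f"t={i}: M predicts {predict_M(prices[i]):6.1f}, "
          f"Mt predicts {predict_Mt(prices[i], i):6.1f}, "
          f"actual {prices[i + 1]:6.1f}")
# M's prediction at the split is off by a factor of about two; Mt's is not.
# Nothing "ergodic" happened in the world -- we only changed our premises.
```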

Make sense?

Of course, since real sequences do not possess, in the ontic sense, ergodicity, there is no point in going and looking for it. You cannot find what doesn’t exist. For real sequences, you are always welcome to change your assumptions at any time. In this sense, it is you who creates the practical break in ergodicity when you change M, which is how you know it’s there.

How do you know to change M? How indeed! That is ever the problem. There is no universal solution, save discovering the causes of y (which for stock prices isn’t going to happen).

Back to Taleb. His use of the term appears to assume the mathematical definition, which says probability exists; e.g. he says things like “detect when ergodicity is violated”. This is not only Taleb, of course, but most users of probability models. The error is common. It is why Taleb’s examples about ergodicity aren’t quite coherent. But it’s not his fault.

Switch to our last topic, repetition of exposure. This allows Taleb to run back to the precautionary principle he loves so well.

Taleb writes:

If one claimed that there is “statistical evidence that the plane is safe”, with a 98% confidence level (statistics are meaningless without such confidence), and acted on it, practically no experienced pilot would be alive today. In my war with the Monsanto machine, the advocates of genetically modified organisms (transgenics) kept countering me with benefit analyses (which were often bogus and doctored up), not tail risk analyses for repeated exposures.

Only frequentist statistics need confidence (and all readers of Uncertainty know the frequentist theory fails on multiple fronts, and is useful nowhere). Predictive probability does not.

It is true, and obvious, that if there is a risk in an act, repeating the act increases the overall risk.
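To make the arithmetic plain, here is a small sketch. The per-act probability of harm p and the independence of the repetitions are assumptions I supply for illustration; they are not properties of any real act.

```python
# The obvious arithmetic of repeated exposure: assuming a fixed per-act
# probability of harm p and independent repetitions (both assumptions), the
# probability of at least one harm in n acts is 1 - (1 - p)**n, which grows
# with n.
def cumulative_risk(p: float, n: int) -> float:
    """P(at least one harm in n independent acts), given per-act risk p."""
    return 1.0 - (1.0 - p) ** n

for p in (1e-3, 1e-4):              # illustrative per-act risks
    for n in (1, 100, 10_000):      # number of repetitions
        print(f"p={p:<8} n={n:<6} cumulative risk = {cumulative_risk(p, n):.4f}")
# Even a "small" p pushes the cumulative risk toward 1 as n grows -- which is
# Taleb's point about repeated exposure, and also why everything hangs on
# whether you have any grounds for assigning a p in the first place.
```

The same arithmetic is what turns the “small amounts of damage” premise below into a medium to high probability of S under repeated exposure.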

What risk is there in, say, eating a GMO BLT? I have no idea, and neither does Taleb. There are well known benefits, though, as there always are when bacon is involved. Even if I knew of a risk, it may be that the cumulative benefits outweigh the cumulative risks. But I know of no risks save that “GMOs might hurt me”.

That statement is actually a tautology: it is equivalent to “GMOs might hurt me and they might not hurt me.” It is therefore of no use as the assumption to a model of S = “GMOs will hurt me”. Tautologies never add information; they are like multiplying by 1. S does not have a probability without assumptions.

I might, as Taleb likes to do in the precautionary principle (review!), use different assumptions, say, “Monsanto’s lawyers are jerks and their GMOs cause, when the circumstances are in place, small amounts of damage when eaten.” With that, we can form a medium to high probability that S is true, especially upon repeated exposure (it would be certainty, and not only high probability, except for that “circumstances” condition).

Now Monsanto’s lawyers are jerks. Suing because Monsanto’s DNA wanders via natural pollination into some poor innocent farmer’s field is evil and shouldn’t be allowed. But from these truths it does not follow that Monsanto’s GMOs cause harm. You need more than suspicions that they might cause harm, because “might” is a tautology.

It’s enough for Taleb, because he wants you to consider not only the harm that GMOs (or global warming) will cause you, but the harm they will cause all of humanity, plus its pet parakeets. Yet he offers (as far as I can see) nothing more than the tautology as evidence for S, and however many times you multiply a tautology, it is still a tautology in the end. A thousand “might harms” is still one “might or might not harm”.
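Here is the same arithmetic run with the tautology as the only premise. The independence assumption and the illustrative numbers in the second example are mine; the point is that the tautology alone fixes nothing beyond the trivial bounds.

```python
# Why multiplying a tautology gets you nowhere: "GMOs might hurt me" fixes only
# the trivial bounds 0 <= p <= 1 on the per-exposure probability of harm.
# Pushing those bounds through n independent exposures (independence is itself
# an assumption) still yields the trivial interval [0, 1].
def repeated_exposure_bounds(p_low: float, p_high: float, n: int):
    """Bounds on P(at least one harm in n exposures), given bounds on p."""
    return 1.0 - (1.0 - p_low) ** n, 1.0 - (1.0 - p_high) ** n

# With the tautology alone, p is known only to lie in [0, 1]:
print(repeated_exposure_bounds(0.0, 1.0, 1_000))    # -> (0.0, 1.0)

# With a substantive premise, say "per-exposure risk is between 1e-6 and 1e-4"
# (purely illustrative numbers), the bounds become informative and grow with n:
print(repeated_exposure_bounds(1e-6, 1e-4, 1_000))  # -> (~0.001, ~0.095)
```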

If you are determined to prove GMOs cause harm, you need to demonstrate how. And even then you still haven’t demonstrated whether the benefits outweigh those harms. There will be no one-size-fits-all decision there.

5 replies

  1. My objection to GMOs (in the human-engineered sense; not what happens naturally over time) is that the practice fiddles with, profits from, and monopolizes the seeds that God freely gave to humanity.

  2. The Precautionary Principle is valid because new ways of doing things have in the past not always worked as advertised, for a large number of reasons (causes).

    Even if you cannot compute the probability that GMOs in particular are bad for you, you therefore still want to proceed with caution. More so because you do not know the cause and effect relations that exist for GMO foods.

    Secondly, would Taleb’s position not be that people do not use probability models with fat tails as much as they should?

  3. GMO, though, is nothing more than fancy breeding. In fact, depending on circumstances, GMO breeding is better than ‘natural’, since the process is more controlled. On the other hand, people are generally not aware that biological life possesses great inherent variability and is designed not to lose information. Besides, the ‘natural’ function of viruses is to ensure that the inherent variability isn’t lost, by spreading the information throughout all biological organisms.

  4. If one is really serious about GMOs, one should put a stop to all animal breeding and natural breeding of different food types (apples, for example). What nonsense!

  5. My only knowledge of ergodicity is that Henri Poincaré proved in his Recurrence Theorem that certain systems will, after a long but finite time, return to a state very close to the initial state. So, what does that do to the Second Law?
