William M. Briggs

Statistician to the Stars!


Pascal’s Pensées, A Tour: V

Since our walk through Summa Contra Gentiles is going so well, why not do the same with Pascal’s sketchbook on what we can now call Thinking Thursdays. We’ll use the Dutton Edition, freely available at Project Gutenberg. (I’m removing that edition’s footnotes.)

Previous post.

Now that the hacking is all sorted out, we’re back to our regularly scheduled program. Thinking Thursdays!

14 When a natural discourse paints a passion or an effect, one feels within oneself the truth of what one reads, which was there before, although one did not know it. Hence one is inclined to love him who makes us feel it, for he has not shown us his own riches, but ours. And thus this benefit renders him pleasing to us, besides that such community of intellect as we have with him necessarily inclines the heart to love.

Notes This is so, but then so is this: “For the time will come when they will not endure sound doctrine; but after their own lusts shall they heap to themselves teachers, having itching ears; And they shall turn away their ears from the truth, and shall be turned unto fables” (King James). Also see this translation: “For the time will come when people will not tolerate sound doctrine but, following their own desires and insatiable curiosity, will accumulate teachers and will stop listening to the truth and will be diverted to myths.”

15 Eloquence, which persuades by sweetness, not by authority; as a tyrant, not as a king.

16 Eloquence is an art of saying things in such a way—(1) that those to whom we speak may listen to them without pain and with pleasure; (2) that they feel themselves interested, so that self-love leads them more willingly to reflection upon it.

It consists, then, in a correspondence which we seek to establish between the head and the heart of those to whom we speak on the one hand, and, on the other, between the thoughts and the expressions which we employ. This assumes that we have studied well the heart of man so as to know all its powers, and then to find the just proportions of the discourse which we wish to adapt to them. We must put ourselves in the place of those who are to hear us, and make trial on our own heart of the turn which we give to our discourse in order to see whether one is made for the other, and whether we can assure ourselves that the hearer will be, as it were, forced to surrender. We ought to restrict ourselves, so far as possible, to the simple and natural, and not to magnify that which is little, or belittle that which is great. It is not enough that a thing be beautiful; it must be suitable to the subject, and there must be in it nothing of excess or defect.

Notes Point (2) does not say something good about listeners. While it’s true the speaker has a duty to ease pain, the listener is not excused labor. If he is, we’re back to tickling ears. Anyway, it’s clear that when Pascal said, “We ought to restrict ourselves, so far as possible, to the simple and natural, and not to magnify that which is little, or belittle that which is great” he proved that he would not have been a hit on the Internet. I’m also reminded of the late philosopher David Stove’s lament “You or I might perhaps be excused if we sometimes toyed with solipsism, especially when we reflect on the utter failure of our writings to produce the smallest effect in the alleged external world.” From “Epistemology and the Ishmael Effect.”

A reminder that we’re skipping some points, like 17, which states rivers are moving roads.

18 When we do not know the truth of a thing, it is of advantage that there should exist a common error which determines the mind of man, as, for example, the moon, to which is attributed the change of seasons, the progress of diseases, etc. For the chief malady of man is restless curiosity about things which he cannot understand; and it is not so bad for him to be in error as to be curious to no purpose.

The manner in which Epictetus, Montaigne, and Salomon de Tultie wrote, is the most usual, the most suggestive, the most remembered, and the oftenest quoted; because it is entirely composed of thoughts born from the common talk of life. As when we speak of the common error which exists among men that the moon is the cause of everything, we never fail to say that Salomon de Tultie says that when we do not know the truth of a thing, it is of advantage that there should exist a common error, etc.; which is the thought above.

Notes Montaigne would have made a great blogger. Epictetus, who did not publish, would have been hired by either a faithless university or some White House administration and then, at some point, abandoned when he went one quip too far. Incidentally, Salomon de Tultie is Pascal’s nom de plume, and Salomon is the French form of Solomon. But what about that curious “it is of advantage that there should exist a common error”? The analogy I see is that every ship has only one captain. It is better sailing when all are in one accord (whether openly or not) than to have many hands pointing in different directions. This doesn’t preserve all ships from foundering, but it does most. And morale is better.

What Might Pope Francis’s Upcoming Encyclical Look Like?

For the love of money is the root of all evil: which while some coveted after, they have erred from the faith, and pierced themselves through with many sorrows.

So said St Paul in his first letter to Timothy, and human history is loaded with evidence confirming this view. Latterly, I say, money has been replaced in part by Theory. Pope Francis thinks it is Inequality. Which, he said, is the “fruit of the law of competitiveness that means strongest survive over the weak”, which is the “logic of exploitation” and “waste”.

Or so he said in Italian to a group in Milan, his words translated by Vatican Insider. There is thus the very real danger here and elsewhere of missing nuances and even of incorrect wordings. So let’s tread carefully.

It is necessary, if we really want to solve problems and not get lost in sophistry, to get to the root of all evil which is inequity. To do this there are some priority decisions to be made: renouncing the absolute autonomy of markets and financial speculation and acting first on the structural causes of inequity.

Obviously, or at least I hope obviously, you cannot push the “strongest survive over the weak” metaphor too far. Neither “inequality.” If there were absolute equality, where the weak and strong are as one, there would be no Pope and no right or wrong ideas. Neither could there be politicians in charge to renounce absolute autonomy of markets or of anything else.

Incidentally, we mustn’t form a USA-centric view of the Pope’s words. Here, for instance, the markets are very much tied to government; the executives of one are the executives of the other. Market leaders assist (if I may be allowed the euphemism) the government in fashioning laws and regulations to their mutual benefit.

The Pope is interested in the kind of inequality that causes some of the world to go hungry. “[T]he number one concern must be for the actual person, how many people lack food on a daily basis and have stopped thinking about life, about family and social relationships, just fighting to survive?” And here comes the kicker:

“Despite the proliferation of different organizations and the international community on nutrition, the ‘paradox’ of John Paul II still stands: ‘There is food for everyone, but not everyone can eat’, while at the same time the excessive consumption and waste of food and the use of it for other means is there before our eyes.”

Despite? Is that the right word? But he’s right about waste. The amount of food we toss out would have scandalized our ancestors. My maternal grandfather was fond of saying, and of enforcing, “Take what you want, but eat what you take.”

In a different venue (also translated), Pope Francis said that humans should think of themselves as lords but not masters of creation. This strikes me as accurate. In charge but restrained by natural law. The danger to those who slaver or fume over the Pope’s environmental words lies in thinking our environmental policy must consist in jumping from wanton disregard to unthinking worship. We dearly love a false dichotomy.

A Christian who does not protect Creation, who does not let it grow, is a Christian who does not care about the work of God, that work that was born from the love of God for us. And this is the first response to the first creation: protect creation, make it grow.

And from the Milan speech (with choppy translation grammar):

The earth is entrusted to us so it may be a mother to us, capable of sustaining each one of us. Once, I heard a beautiful thing: the earth is not a legacy that we have received from our parents rather it is on loan to us from our children, so that we safeguard it, nurture it and carry it forward for them. The earth is generous will never leave those who custody it lacking. The earth, which is the mother for all, demands our respect and non-violence or worse the arrogance the masters. We have to pass it on to our children improved, guarded, because it was a loan that they have given to us.

You have to read your own (right or left) political desires into this to have any policy of consequence flow from it. No definite directives can be implied from the Pope’s words. One cannot, for instance, argue that thus a carbon tax must follow. Neither can you say (which nobody does say) you can do whatever you want.

But many think or hope they can “leverage” the Pope to further their politics. Even now “eco-ambassadors” are flowing in great numbers to Rome to have a photo-op (secular blessing) because they are sure the Pope’s upcoming encyclical can be used by them as a bludgeon. They want in on what they are sure will be a good thing. We’ll see.

How Good Is That Model? Scoring Rules For Forecasts: Part I


Part I of III

All probability (which is to say, statistical) models have a predictive sense; indeed, they are only really useful in that sense. We don’t need models to tell us what happened. Our eyes can do that. Formal hypothesis testing, i.e. chasing after statistical “significance”, leads to great nonsense and is the cause of many interpretational errors. We need models to quantify the uncertainty of what has not yet been measured or made known to us. Throughout this series I take models in that sense (as all should).

Which is this. A model—a set of premises—is used to make predictions about some observable Y, a proposition. For example, a climate model might predict what the (operationally defined) global mean surface temperature will be at some time, and Y is the proposition “The temperature was observed at the time to be y”. What I have to say applies to all probability models of observable events. But I’ll use temperature as a running example because of its familiarity.

If a model said “The temperature at the time will be x” but the temperature was really y, then the model has been falsified. The model is not true. Something is wrong with the model. The model said x would occur but y did. The model is falsified because it implied x would happen with certainty. Now the model may have hit at every time up to this point, and it may continue hitting forever after, but it missed this time, and all it takes is one miss for a model to be falsified.

Incidentally, any falsified model must be tossed out. By which I mean that it must be replaced with something new. If any of the premises in a model are changed, even the smallest, least consequential one, then strictly the old model becomes a new one.

But nobody throws out models for small mistakes. If our model predicted accurately every time point but one we’d be thrilled. And we’d be happy if “most of the time” our forecasts weren’t “too far off.” What gives? Since we don’t reject models which fail a single time or are not “too far off”, there must be hidden or tacit premises to the model. What can these look like?

Fuzz. A blurring that takes crystalline predictions and adds uncertainty to them, so that when we hear “The temperature will be x” we do not take the words at their literal meaning, and instead replace them with “The temperature will be about x”, where “about” is happily left vague. And this is not a problem because not all probability is (or should be!) quantifiable. This fuzz, quantified or not, saves the model from being falsified. Indeed, no probability model can ever be falsified unless that model becomes (at some point) dogmatic and says “X cannot happen” and we subsequently observe X.

Whether the fuzzy premises—yes, I know about fuzzy logic, the rediscovery and relabeling of classic probability, keeping all the old mistakes and adding in a few new ones—are put there by the model issuer or you is mostly irrelevant (unless you’re seeking whom to blame for model failure). The premises are there and keep the models from suffering fatal epistemological blows.

Since the models aren’t falsified, how do we judge how good they are? The best and most basic principle is how useful the models were to those who relied upon them. This means a good model to one person can be a poor one to another. A farmer may only care whether temperature predictions were accurate at distinguishing days below freezing, whereas the logistics manager of a factory cares about exact values for use in ordering heating oil. An environmentalist may only care that the forecast is one of doom while being utterly indifferent (or even hostile) to the actual outcome, so that he can sell his wares. The answer to “What makes a good model” is thus “it depends.”
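To make “it depends” concrete, here is a minimal sketch (the forecasts and observations are invented for illustration) that scores the same set of temperature forecasts two ways: a frost hit rate for the farmer and a mean absolute error for the logistics manager.

```python
# Hypothetical forecasts and observations (deg C), scored two different ways.
forecasts = [2.0, -1.0, 4.0, 0.5, -3.0]
observed  = [1.0, -2.0, 6.0, -0.5, -2.5]

# The farmer only cares whether "below freezing" was called correctly.
frost_hits = sum((f < 0) == (o < 0) for f, o in zip(forecasts, observed))
frost_score = frost_hits / len(forecasts)          # 0.80 here

# The logistics manager cares about the size of the error in degrees.
mae = sum(abs(f - o) for f, o in zip(forecasts, observed)) / len(forecasts)

print(f"Frost hit rate: {frost_score:.2f}")
print(f"Mean absolute error: {mae:.2f} deg C")     # 1.10 here
```

The same forecasts can score well on one measure and poorly on the other; neither number is “the” goodness of the model.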

Of course, since many decisions fall into broad categories we can say a little more. But in so saying, we must always remember that goodness depends on actual use.

Consider the beautiful game of petanque, wherein manly steel balls are thrown towards a target. Distance to the target is the measure of success. The throw may be thought of as a model forecast of 0 (always 0) and the observation the distance to the target. Forecast goodness is taken as that distance. Linear distance, or its average over the course of many forecasts, is thus a common measure of goodness. But only for those whose decisions are a linear function of the forecast. This is not the farmer seeking frost protection. Mean error (difference between forecast and observation) probably isn’t generally useful. One forecast error of -100 and another of +100 average to 0, which is highly misleading—but only to those who didn’t use the forecasts!
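A two-line sketch of that last point, using the pair of errors from the text:

```python
# Forecast-minus-observation errors from the example above.
errors = [-100, +100]

mean_error = sum(errors) / len(errors)                        # 0: looks perfect
mean_abs_error = sum(abs(e) for e in errors) / len(errors)    # 100: the real story

print(mean_error, mean_abs_error)   # -> 0.0 100.0
```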

You can easily imagine other functions of error as goodness measures. But since our mathematical imagination is fecund, and since there are an infinite number of functions, there will be no end to these analyses, a situation which at least provides us with an endless source of bickering. So it might be helpful to have other criteria to narrow our gaze. We also need ways to handle the fuzz, especially when it has been formally quantified. That’s to come.

Update Due to various scheduling this-and-thats, Part II of this series will run on Friday. Part III will run either Monday or Tuesday.

Temperature Grids, Interpolation, And Over-Certainty

A reader writes:

I am a fairly new reader of your blog, coming from WattsUpWithThat and reading with delight and frustration your thoughts on statistics and climate. I have a question in this regard, as I am trying to figure out a thing or two about my local climate and weather.

Recently we were told that Norway (my home) was 2-something degrees warmer in 2014 than the “normal”. I asked our Met office how they calculated this, and the reply I got was that they take all stations available and smear them across a 1×1 km grid through some kind of interpolation in order to get full coverage of mainland Norway. And thus they can do an average.

Now, I realise that whatever we do there is no such physical thing as a mean temperature for Norway. But say that I would like to calculate an average of the available data, would it not be more appropriate to just do the average of all stations without the interpolation?

I would really appreciate it if you would give an opinion on this.

Anders Valland

Interpolation is a source of great over-certainty. Here’s why.

You can operationally define a temperature “mean” as the numerical average of a collection of fixed, unchanging stations. The utility of this is somewhat ambiguous, particularly for large areas, but adding a bunch of numbers together poses no theoretical difficulty.

The problem comes when the stations change—say an instrument is swapped—or when old stations are dropped and new ones added. That necessarily implies the operational definition has changed. Which is also fine, as long as it is remembered that you cannot directly compare the old and new definition. Nobody remembers, though. Apples are assigned Orange designations. It’s all fruit, right? So what the heck.
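A toy illustration with made-up station values: swap a single station and the operationally defined “mean” shifts, even though nothing about the weather has changed.

```python
# Hypothetical station readings (deg C). The "mean" is defined by which
# stations enter the average; change the set and you have a new definition.
old_stations = {"A": 1.2, "B": 0.4, "C": 2.1}
new_stations = {"A": 1.2, "B": 0.4, "D": 3.0}   # C dropped, D added

old_mean = sum(old_stations.values()) / len(old_stations)   # ~1.23
new_mean = sum(new_stations.values()) / len(new_stations)   # ~1.53

# The 0.3-degree "warming" is an artifact of swapping station C for D,
# not a statement about the same quantity measured twice.
print(round(old_mean, 2), round(new_mean, 2))
```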

It gets worse with interpolation. This is when a group of stations, perhaps changing, are fed as input to a probability model, and that model is used to predict what the temperature was at locations other than the stations. Now, if it worked like that, I mean if actual predictions were made, then interpolation would be fine. But it doesn’t, not usually.

There are levels of uncertainty with these probability models, which we can broadly classify into two kinds. The first is that internal to the model itself, which is called the parametric uncertainty. Parameters tie observations to the model. If you can remember the “betas” of regression, these are they. Nearly all statistical methods are obsessively focused on these parameters, which don’t exist and can’t be seen. Nobody except statisticians care about parameters. When the model reports uncertainty, it’s usually the uncertainty of these parameters.

The second and more important level of uncertainty is that of the prediction itself. What you want to know is the uncertainty of the actual guess. This uncertainty is always, necessarily always, larger than the parametric uncertainty. It’s hard to say without knowing the details of the models, but my experience is that, for interpolation models, prediction uncertainty is 2 to 8 times as large as the parametric uncertainty. This is an enormous difference.
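A small sketch of that gap, using ordinary least squares on made-up data. The two standard errors below are the textbook ones for simple regression; the prediction standard error carries an extra “+1” under the square root and so is always the larger of the two.

```python
import numpy as np

# Fake "station" data: a linear trend plus noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 + 0.5 * x + rng.normal(scale=1.5, size=x.size)

n = x.size
Sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx
b0 = y.mean() - b1 * x.mean()
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))      # residual standard deviation

x0 = 5.0                                       # location we "interpolate" to
leverage = 1.0 / n + (x0 - x.mean()) ** 2 / Sxx
se_mean = s * np.sqrt(leverage)                # parametric uncertainty
se_pred = s * np.sqrt(1.0 + leverage)          # prediction uncertainty (larger)

print(f"parametric SE: {se_mean:.2f}, prediction SE: {se_pred:.2f}")
```

With these particular numbers the prediction standard error comes out several times the parametric one, which is the kind of gap described above.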

If the interpolation is used to make a prediction, it must be accompanied by a measure of uncertainty. If not, toss it out. Anybody can make a guess of what the temperature was. To be of practical use, the prediction must state its uncertainty. And that means prediction and not parametric uncertainty. Almost always, however, it’s the latter you see.

You have to be careful because parametric uncertainty will be spoken of as if it is prediction uncertainty. Why? Because of sloppiness. Prediction uncertainty is so rare that most practitioners don’t know the difference. In order to discover which kind of uncertainty you’re dealing with, you have to look into the guts of the calculations, which are frequently unavailable. Caution is warranted.

The uncertainty is needed to judge how likely that claimed “2 degrees warmer” is. If the actual prediction with prediction uncertainty is “90% chance of 2 +/- 6” (the uncertainty needn’t be symmetric, of course; and there’s no reason in the world to fixate on 95%), then there is little confidence any warming took place.
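As a back-of-envelope illustration (assuming, purely for the arithmetic, a normal predictive distribution whose central 90% interval is 2 +/- 6), the probability that the change is greater than zero works out to only about 0.71:

```python
from math import erf, sqrt

# Assumed for illustration: normal predictive distribution, 90% interval 2 +/- 6.
mean, half_width, z90 = 2.0, 6.0, 1.645
sigma = half_width / z90                        # implied predictive SD, ~3.65

def normal_cdf(z):
    return 0.5 * (1 + erf(z / sqrt(2)))

p_warming = 1 - normal_cdf((0 - mean) / sigma)  # P(change > 0)
print(f"Probability of any warming: {p_warming:.2f}")   # ~0.71
```

Seventy-one percent is a long way from the near-certainty a bare “2 degrees warmer” headline suggests.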

But watch out for parametric uncertainty masquerading as the real thing. It happens everywhere and frequently.
