William M. Briggs

Statistician to the Stars!


Decline Of Participation In Religious Rituals With Improved Sanitation

All the world’s religions, in bacterial form.

Answer me this. Earl at the end of the bar, on his sixth or seventh, tells listeners just what’s wrong with America’s science policy. His words receive knowing nods from all. Does this action constitute peer review?

Whatever it is, it can’t be any worse than the peer review which loosed “Midichlorians—the biomeme hypothesis: is there a microbial component to religious rituals?” on the world. It is an official paper from Alexander Panchin and two others in Biology Direct, which I suppose is a sort of bargain-basement outlet for academics to publish in.

The headline above is a prediction directly from that paper, a paper so preposterous that it’s difficult to pin down just what went wrong and when. I don’t mean that it is hard to see the mistakes in the paper itself, which are glaring enough, for all love. No: the important question is how this paper, how even this journal and the folks who contribute to it, can exist and find an audience.

Perhaps it can be put down to the now critical levels of the politicization of science combined with the expansion team syndrome. More on that in a moment. First the paper.

It’s Panchin’s idea that certain bugs which we have in our guts make us crazy enough to be religious, and that if only there were a little more Lysol in the world, there would be fewer or no believers.

Panchin uses the standard academic trick of citing bunches of semi-related papers, which give the appearance that his argument has both heft and merit. He tosses in a few television mystery-show clues, like “‘Holy springs’ and ‘holy water’ have been found to contain numerous microorganisms, including strains that are pathogenic to humans”. Then this:

We hypothesize that certain aspects of religious behavior observed in human society could be influenced by microbial host control and that the transmission of some religious rituals could be regarded as a simultaneous transmission of both ideas (memes) and organisms. We call this a “biomeme” hypothesis.

Now “memes” are one of the dumbest ideas to emerge from twentieth-century academia. So part of the current problem is that dumb ideas aren’t dying. Capital-S science is supposed to be “self correcting”, but you’d never guess it from the number of undead theories walking about.

Anyway, our intrepid authors say some mind-altering, religion-inducing microbes make their hosts (us) go to mass, or others to temple, and still more to take up posts as Chief Diversity Officers at universities, just so that the hosts will be able to pass on the bugs to other folks. Very clever of the microbes, no? But that’s evolution for you. You never know what it’ll do next.

Okay, so it’s far-fetched. But so’s relativity—and don’t even get started on quantum mechanics. Screwiness therefore isn’t necessarily a theory killer. But lack of consonance with the real world is. So what evidence have the authors? What actual observations have they to lend even a scintilla of credence to their theory?

None.

Not one drop. The paper is pure speculation from start to finish, and in the mode of bad Star Trek fan fiction at that.

So how did this curiosity (and others like it) become part of Science? That universities are now at least as devoted to politics as they are to scholarly pursuits is so well known it needs no further comment here. But the politics of describing religion as some sort of disease or deficiency is juicy and hot, so works like this are increasingly prevalent. Call them Moonacies, a cross between lunacies and Chris Mooney, a writer who makes a living selling books to progressives who want to believe their superiority is genetic.

Factor number two, which is not independent of number one, is expansion team syndrome. The number of universities and other organizations which feed and house “researchers” continues to grow, because why? Because Science! We’re repeatedly told, and everybody believes, that if only we all knew more Science, then the ideal society would finally be created. Funding for personnel grows. Problem is, the talent pool of the able remains fixed, so the available slots are filled with the not-as-brilliant. Besides, we’re all scientists now!

New journals are continuously created for the overflow, and they’re quickly filled with articles like this one giving the impression things of importance are happening. Not un-coincidentally, these outlets contain greater proportions of papers which excite the press (no hard burden). And so here we are.

Comments On Dawid’s Prequential Probability

Murray takes the role of a prequential Nature.

Phil Dawid is a brilliant mathematical statistician who introduced (in 1984) the theory of prequential probability1 to describe a new-ish way of doing statistics. We ought to understand this theory. I’ll give the philosophy and leave out most of the mathematics, which are not crucial.

We have a series of past data, x = (x1, x2, …, xn) for some observable of interest. This x can be quite a general proposition, but for our purposes suppose its numerical representation can only take the values 0 or 1. Maybe xi = “The maximum temperature on day i exceeds W °C”, etc. The x can also have “helper” propositions, such as yi = “The amount of cloud cover on day i is Z%”, but we can ignore all these.

Dawid says, “One of the main purposes of statistical analysis is to make forecasts for the future” (emphasis original) using probability. (Its only other purpose, incidentally, is explanation: see this for the difference.)

The x come at us sequentially, and the probability forecast for time n+1 Dawid writes as Pr(xn+1 | xn). “Prequential” comes from “probability forecasting with sequential prediction.” He cites meteorological forecasts as a typical example.

This notation suffers a small flaw: it doesn’t show the model, i.e. the list of probative premises of x which must be assumed or deduced in order to make a probability forecast. So write pn+1 = Pr(xn+1 | xn, M) instead, where M are these premises. The notation shows that each new piece of data is used to inform future forecasts.

How good is M at predicting x? The “weak prequential principle” is that M should be judged only on the pi and xi, i.e. only on how good the forecasts are. This is not in the least controversial. What counts as “good” sometimes is. There has to be some measure of closeness between the predictions and outcomes. People have invented all manner of scores, but (it can be shown) the only ones that should be used are so-called “proper scores”. These are scores which require pn+1 to be given conditional on just the M and old data and nothing else. This isn’t especially onerous, but it does leave out measures like R^2 and many others.
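For the curious, here is a toy sketch in Python (all numbers invented) of one such proper score, the Brier score, applied to a string of sequential 0/1 forecasts. Dawid does not mandate this particular score; it is simply the easiest proper score to show.

```python
# Toy sketch (invented numbers): score sequential probability forecasts
# p_i of binary outcomes x_i with the Brier score, which is a proper score.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((p - x) ** 2 for p, x in zip(forecasts, outcomes)) / len(outcomes)

# Each forecast is conditional only on M and the data seen so far.
p = [0.7, 0.2, 0.9, 0.4, 0.6]   # p_i = Pr(x_i = 1 | x_1, ..., x_{i-1}, M)
x = [1,   0,   1,   1,   0]     # observed outcomes

print(brier_score(p, x))        # lower is better; 0 is a perfect forecaster
```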

Part of understanding scoring is calibration. Calibration has more than one dimension, but since we have picked a simple problem, consider only two. Mean calibration is when the average of the pi equaled (past tense) the average of the xi. Frequency calibration is when, whenever pi = q, x = 1 q*100% of the time. Now since x can only equal 0 or 1, frequency calibration is impossible for any M which produces non-extreme probabilities. That is, the first pi that does not equal 0 or 1 dooms the frequency calibration of M.
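And a toy check of both kinds of calibration on an invented set of forecasts; the grouping by forecast value is only for illustration and is not anything prescribed by Dawid.

```python
# Toy sketch (invented numbers): mean calibration compares the average
# forecast with the average outcome; frequency calibration asks whether,
# among occasions when p_i = q, x = 1 about q*100% of the time.

from collections import defaultdict

p = [0.7, 0.2, 0.9, 0.4, 0.6, 0.7, 0.2, 0.9]
x = [1,   0,   1,   1,   0,   1,   0,   1]

mean_cal_gap = sum(p) / len(p) - sum(x) / len(x)
print("mean calibration gap:", mean_cal_gap)   # 0 would mean mean-calibrated

# Group outcomes by the forecast value actually issued.
by_forecast = defaultdict(list)
for pi, xi in zip(p, x):
    by_forecast[pi].append(xi)

for q, xs in sorted(by_forecast.items()):
    print(f"forecast {q}: observed frequency {sum(xs) / len(xs):.2f} over {len(xs)} cases")
```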

Ceteris paribus, fully calibrated models are better than non-calibrated ones (this can be proven; they’ll have better proper scores; see Schervish). Dawid (1984) only considers mean calibration, and in a limiting way; I mean mathematical limits, as the number of forecasts and data head out to infinity. This is where things get sketchy. For our simple problem, calibration is possible finitely. But since the x are given by “Nature” (as Dawid labels the causal force creating the x), we’ll never get to infinity. So it doesn’t help to talk of forecasts that have not yet been made.

And then Dawid appears to believe that, out at infinity, competing mean-calibrated models (he calls them probability forecasting systems) are indistinguishable. “[I]n just those cases where we cannot choose empirically between several forecasting systems, it turns out we have no need to do so!” This isn’t so, finitely or infinitely, because two different models which have the same degree of mean calibration can have different levels of frequency calibration. So there is still room to choose.

Dawid also complicates his analysis by speaking as if Nature is “generating” the x from some probability distribution, and that a good model is one which discovers this Nature’s “true” distribution. (Or, inversely, he says Nature “colludes” in the distribution picked by the forecaster.) This is the “strong prequential principle”, which I believe does not hold. Nature doesn’t “generate” anything. Something causes each xi. And that is true even in the one situation where our best knowledge is only probabilistic, i.e. the very small. In that case, we can actually deduce the probability distributions of quantum x in accord with all our evidence. But, still, Nature is not “generating” x willy nilly by “drawing” values from these distributions. Something we-know-not-what is causing the x. It is our knowledge of the causes that is necessarily incomplete.

For the forecaster, that means, in every instance and for any x, the true “probability distribution” is the one that takes only extreme probabilities, i.e. the best model is one which predicts without error (each pi would be 0 or 1 and the model would automatically be frequency and mean calibrated). In other words, the best model is to discover the cause of each xi.

Dawid also has a technical definition of the “prequential probability” of an “event”, which is a game-theoretic-like construction that need not detain us, because of our recognition that the true probability of any event is 0 or 1.

Overall

That models should be judged ultimately by the predictions they make, and not exterior criteria (which unfortunately includes political considerations, and even p-values), is surely desirable but rarely implemented (how many sociological models are used to make predictions in the sense above?). But which proper score does one use? Well, that depends on exterior information; or, rather, on evidence which is related to the model and to its use. Calibration, in all its dimensions, is scandalously underused.

Notice that in Pr(xn+1 | xn, M) the model remains fixed and only our knowledge of more data increases. In real modeling, models are tweaked, adjusted, improved, or abandoned and replaced wholesale, meaning the premises (and deductions from the same) which comprise M change in time. So this notation is inadequate. Every time M changes, M is different, a banality which is not always remembered. It means model goodness judgments must begin anew for every change.

A true model is one that generates extreme probabilities (0 or 1), i.e. one that identifies the causes, or gives the “tightest” probabilities deduced from the given (restricted by nature) premises, as in quantum mechanics. Thus the ultimate comparison is always against perfect (possible) knowledge. Since we are humble, we know perfection is mostly unattainable, thus we reach for simpler comparisons, and gauge model success by its success over simple guesses. This is the idea of skill (see this).

Reminder: probability is a measure of information, an epistemology. It is not the language of causality, or ontology.

—————————————————————————–

Thanks to Stephen Senn for asking me to comment on this.

1The two papers to read are: Dawid, 1984, “Present position and potential developments: some personal views: statistical theory: the prequential approach”, JRSS A, 147, part 2, 278–292; and Dawid and Vovk, 1999, “Prequential probability: principles and properties”, Bernoulli, 5(1), 125–162.

Explanation Vs Prediction

The IPCC, hard at work on another forecast.

Introduction

There isn’t as much space between explanation and prediction as you’d think; both are had from the same elements of the problem at hand.

Here’s how it all works. I’ll illustrate a statistical (or probability) model, though there really is no such thing; which is to say, there is no difference in meaning or interpretation between a probability and a physical or other kind of mathematical model. There is a practical difference: probability models express uncertainty natively, while (oftentimes) physical models do not mention it, though it is there, lurking below the equations.

Let’s use regression, because it is ubiquitous and easy. But remember, everything said goes for all other models, probability or physical. Plus, I’m discussing how things should work, not how they’re actually done (which is very often badly; not your models, Dear Reader: of course, not yours).

We start by wanting to quantify the uncertainty in some observable y, and believe we have collected some “variables” x which are probative of y. Suppose y is (some operationally defined) global average temperature. The x may be anything we like: CO2 levels, population size, solar insolation, grant dollars awarded, whatever. The choice is entirely up to us.

Now regression, like any model, has a certain form. It says the central parameter of the normal distribution representing uncertainty in y is a linear function of the x (y and x may be plural, i.e. vectors). This model structure is almost never deduced (in the strict sense of the word) but is assumed as a premise. This is not necessarily a bad thing. All models have a list of premises which describe the structure of the model. Indeed, that is what being a model means.

Another set of premises are the data we observe. Premises? Yes, sir: premises. The x we pick and then observe take the form of propositions, e.g. “The CO2 observed at time 1 was c1”, “The CO2 observed at time 2 was c2”, etc.

Observed data are premises because it is we who pick them. Data are not Heaven sent. They are chosen and characterized by us. Yes, the amount of—let us call it—cherishing that takes place over data is astonishing. Skip it. Data are premises, no different in character than other assumptions.

Explanation

Here is what explanation is (read: should be). Given the model building premises (which specify, here, the regression) and the observed data (both y and x), we specify some proposition of interest about y and then specify propositions about the (already observed) x. Explanation is how much the probability of the proposition about y (call it Y) changes.

That’s too telegraphic, so here’s an example. Pick a level for each of the observed x: “The CO2 observed is c1”, “The population is p”, “The grant dollars is g”, etc. Then compute the probability Y is true given this x and given the model and other observed data premises.

Step two: pick another level for each of the x. This may be exactly the same everywhere, except for just one component, say, “The CO2 observed is c2“. Recompute the probability of Y, given the new x and other premises.

Step three: compare how much the probability of Y (given the stated premises) changed. If not at all, then, given the other values of x and the model and data premises, CO2 has little, and maybe even nothing, to do with y.

Of course, there are other values of the other x that might be important, in conjunction with CO2 and y, so we can’t dismiss CO2 yet. We have a lot of hard work to do to step through how all the other x and how this x (CO2) change this proposition (Y) about y. And then there are other propositions of y that might be of more interest. CO2 might be important for them. Who knows?
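To make the recipe concrete, here is a toy sketch in Python of steps one through three. Every number and variable is invented, and the plug-in normal is only a stand-in for the full predictive distribution a careful analyst would compute; nothing here comes from real climate data.

```python
# Toy sketch (invented data): hold every predictor fixed, change only CO2,
# and see how much Pr(Y) moves, where Y = "temperature exceeds 15 C".

import numpy as np
from scipy.stats import norm

# Invented past data (premises): temperature y, predictors CO2 and population.
co2 = np.array([350., 360., 370., 380., 390., 400.])
pop = np.array([5.0, 5.4, 5.6, 6.1, 6.2, 6.6])         # billions
y   = np.array([14.1, 14.3, 14.2, 14.5, 14.6, 14.8])   # degrees C

# Regression premise: the central parameter of the normal is linear in the x.
X = np.column_stack([np.ones_like(co2), co2, pop])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma = np.sqrt(np.mean((y - X @ beta) ** 2))   # plug-in spread (a simplification)

def pr_Y(co2_level, pop_level, threshold=15.0):
    """Pr(Y | this x, model and data premises), with Y = 'y exceeds threshold'."""
    mu = beta @ np.array([1.0, co2_level, pop_level])
    return 1 - norm.cdf(threshold, loc=mu, scale=sigma)

# Steps one and two: same population, two CO2 levels. Step three: compare.
p1 = pr_Y(400., 6.5)
p2 = pr_Y(450., 6.5)
print(p1, p2, p2 - p1)   # how much the probability of Y changed
```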

Hey, how much change in the probability of any Y is “enough”? I have no idea. It depends. It depends on what you want to use the model for, what decisions you want to make with it, what costs await incorrect decisions, what rewards await correct ones, all of which might be unquantifiable. There is and should be NO preset level which says “Probability changes by at least p are ‘important’ explanations.” Lord forbid it.

A word about causality: none. There is no causality in a regression model. It is a model of how changing CO2 changes our UNCERTAINTY in various propositions of y, and NOT in changes in y itself.1

Explanation is brutal hard labor.

Prediction

Here is what prediction is (should be). Same as explanation. Except we wait to see whether Y is true or false. The (conditional) prediction gave us its probability, and we can compare this probability to the eventual truth or falsity of Y to see how good the model is (using proper scores).

Details. We have the previous observed y and x, and the model premises. We condition on these and then suppose new x (call them w) and ask what is the probability of new propositions about y (call them Z). Notationally, Pr( Z | w,y,x,M), where M are the model form premises. These probabilities are compared against the eventual observations, i.e. the truth or falsity of each Z.
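A toy sketch (invented numbers again) of scoring such conditional predictions once the truth of each Z is known, this time with the logarithmic score, another proper score.

```python
# Toy sketch (invented numbers): score conditional predictions
# Pr(Z | w, y, x, M) against what actually happened, using the
# logarithmic score; smaller is better.

import math

pred_probs = [0.8, 0.3, 0.6]   # Pr(Z_j is true | w_j, y, x, M)
z_observed = [1,   0,   1]     # eventual truth (1) or falsity (0) of each Z_j

neg_log_score = -sum(
    math.log(p) if z == 1 else math.log(1 - p)
    for p, z in zip(pred_probs, z_observed)
) / len(z_observed)

print(neg_log_score)
```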

“Close” predictions means good models. “Distant” ones mean bad models. There are formal ways of defining these terms, of course. But what we’d hate is if any measure of distance became standard. The best scores to use are those tied intimately with the decisions made with the models.

And there is also the idea of skill. The simplest regression is a “null x”, i.e. no x. All that remains are the premises which say the uncertainty in y is represented by some normal distribution (where the central parameter is not a function of anything). Now if your expert model, loaded with x, cannot beat this naive or null model, your model has no skill. Skill is thus a relative measure.

For time series models, such as GCMs, one natural “null” model is the null regression, which is also called “climate” (akin to long-term averages, but taking into account the full uncertainty of these averages). Another is “persistence”, which is the causal-like model yt+1 = yt + fuzz. Again, sophisticated models which cannot “beat” persistence have no skill and should not be used. Like GCMs.
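A toy sketch of the skill computation on an invented series; the squared-error skill score below is one common convention, not the only one.

```python
# Toy sketch (invented series): skill as relative improvement over a naive
# reference. Persistence forecasts tomorrow's value as today's value;
# here skill = 1 - MSE_model / MSE_persistence, so skill <= 0 means the
# fancy model cannot beat persistence and should not be used.

import numpy as np

y = np.array([14.1, 14.4, 14.2, 14.6, 14.5, 14.9, 14.7])          # observed series
model_forecasts = np.array([14.2, 14.3, 14.4, 14.5, 14.6, 14.7])  # forecasts of y[1:]

persistence = y[:-1]   # forecast y[t+1] with y[t]
target = y[1:]

mse_model = np.mean((model_forecasts - target) ** 2)
mse_persist = np.mean((persistence - target) ** 2)

print("skill:", 1 - mse_model / mse_persist)
```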

More…

This is only a sketch. Books have been written on these subjects. I’ve compressed them all into 1,100 words.

———————————————————————————-

1Simple causal model: y = x. It says y will be the value of x, that x makes y what it is. But even these models, though written mathematically like causality, are not treated that way. Fuzz is added to them mentally. So that if x = 7 and y = 9, the model won’t be abandoned.

Summary Against Modern Thought: God Has No Passive Potentiality

This may be proved in three ways. The first…

See the first post in this series for an explanation and guide of our tour of Summa Contra Gentiles. All posts are under the category SAMT.

Previous post.

We started with in-depth proofs that there must exist, for anything to change, an Unchanging Changer, an Unmoved Mover. We call this “entity” God. Why? To merely label this Primary Force (to speak physically) “God” felt like cheating. Why does a physical force have to be called God? Isn’t that topping it high? That’s because we don’t yet know that it is the logical implications of the foregoing proof which insist this force is God. So far, we know the force must be eternal, i.e. outside of time. Today, we see that it must be without potentiality. Still not enough to come to God, as He is usually understood—but we have many chapters to go! Today’s proofs are so succinct and clear they need little annotation.

Chapter 16: That in God there is no passive potentiality

1 NOW if God is eternal, it follows of necessity that He is not in potentiality.i

2 For everything in whose substance there is an admixture of potentiality, is possibly non-existent as regards whatever it has of potentiality, for that which may possibly be may possibly not be. Now God in Himself cannot not be, since He is eternal. Therefore in God there is no potentiality to be.ii

3 Again. Although that which is sometimes potential and sometimes actual, is in point of time potential before being actual, nevertheless actuality is simply before potentiality: because potentiality does not bring itself into actuality, but needs to be brought into actuality by something actual. Therefore whatever is in any way potential has something previous to it. Now God is the first being and the first cause, as stated above.[1] Therefore in Him there is no admixture of potentiality.iii

4 Again. That which of itself must necessarily be, can nowise be possibly, since what of itself must be necessarily, has no cause, whereas whatever can be possibly, has a cause, as proved above.[2]iv Now God, in Himself, must necessarily be. Therefore nowise can He be possibly. Therefore no potentiality is to be found in His essence.

5 Again. Everything acts according as it is actual. Wherefore that which is not wholly actual acts, not by its whole self, but by part of itself. Now that which does not act by its whole self is not the first agent, since it acts by participation of something and not by its essence. Therefore the first agent, which is God, has no admixture of potentiality, but is pure act.v

6 Moreover. Just as it is natural that a thing should act in so far as it is actual, so is it natural for it to be passive in so far as it is in potentiality, for movement is the act of that which is in potentiality.[3] Now God is altogether impassible and immovable, as stated above.[4] Therefore in Him there is no potentiality, namely that which is passive.

7 Further. We notice in the world something that passes from potentiality to actuality. Now it does not reduce itself from potentiality to actuality, because that which is potential is not yet, wherefore neither can it act. Therefore it must be preceded by something else whereby it can be brought from potentiality to actuality. And if this again passes from potentiality to actuality, it must be preceded by something else, whereby it can be brought from potentiality to actuality. But we cannot go on thus to infinity. Therefore we must come to something that is wholly actual and nowise potential. And this we call God.vi

————————————————————————–

iRecall that to be in potentiality means possessing the capability of change, but as was proved over the course of many weeks, God does not change. He is the Unmoved Mover.

iiThis metaphysical truth is a hammer. Note very carefully that we move from this truth to God. We do not start with belief. Speaking very loosely, God is a theorem. And I only mention this to counter the frequent, and really quite ridiculous, charge that our knowledge of God is entirely “made up” (of beliefs).

iiiAin’t that a lovely point? Remember: it is not the potential of you being in Cleveland that actually moves you there. Something actual must do that. Actualities fulfill potentialities.

ivLinger over this one, dear reader. What is necessary must be.

vThis follows from God being unchanging.

viI adore these kinds of proofs. Once you understand what an infinite regression truly implies, understanding dawns brightly. The “base” of all must be actual and not in potential. Must be. St Thomas calls this “base” God. We still haven’t felt why he does this, but we’re getting closer.

Next installment.

[1] Ch. xiii.
[2] Ch. xv.
[3] 3 Phys. i. 6.
[4] Ch. xiii.

Gibbon (And O’Brian) On Too Many Lawyers

This is Gibbon, quoted in Patrick O’Brian’s The Reverse of the Medal by the character Dr Stephen Maturin, who then speaks:

‘”It is dangerous to entrust the conduct of nations to men who have learned from their profession to consider reason as the instrument of dispute, and to interpret the laws according to the dictates of private interest; and the mischief has been felt, even in countries where the practice of the bar may deserve to be considered as a liberal occupation.”

‘He thought—and he was a very intelligent man, of prodigious reading—that the fall of the Empire was caused at least in part by the prevalence of lawyers. Men who are accustomed over a long series of years to supposing that whatever can somehow be squared with the law is right—or if not right then allowable—are not useful members of society; and when they reach positions of power in the state they are noxious. They are people for whom ethics can be summed up by the collected statutes.’

Gibbon would have agreed that “lawyers” include regulators and modern-day bureaucrats (many of whom are trained lawyers). The Authoritarian (these days read: progressive, leftist) believes that the law and morality are one, an ancient and diseased fallacy as ineradicable and as harmful as rats. This is why she seeks to enlarge the law to encompass all manner of activity, and of thought. Her well-known slogan is “Whatever is not mandatory is forbidden!”

Help me. What group is it that constantly, loudly, nervously, and boorishly insists, at every opportunity, on their collective rationality and reason?

Skip it. Nobody needs another lesson on the left’s zeal for shackling, but what is less known is how progressive policy drives excesses on the right. Men who understand that the law is everything, and who know no other morals, will push that law to its extreme. This causes a natural reaction and encourages a greater tightening of the bonds. The process is iterative and ends only when the knots become so burdensome that life is strangled.

The law does not forbid a man from maximizing short-term profit by firing large swaths of employees who only yesterday he called “family.” It is natural to pity the dispossessed and to despise the (family) man, but the inclination to force the State (with the help of lawyers) to punish the man causes more harm than good.

The man is punished, but he feels aggrieved more than shamed, and thus seeks (with the help of lawyers) to further test the limits of the law, which causes more excess. And so on.

Through it all the State is seen as Arbiter, the Supreme Entity. This belief is encouraged by both sides. But people forget the State is made of people, especially those people who falsely believe in the equality of law and morality. Like lawyers.

Solution? A fundamental change in how we view the world. How do we bring it about? Don’t know. Blog posts? Your ideas?


© 2014 William M. Briggs
