William M. Briggs

Statistician to the Stars!


Decline Of Participation In Religious Rituals With Improved Sanitation

All the world’s religions, in bacterial form.

Answer me this. Earl at the end of the bar, on his sixth or seventh, tells listeners just what’s wrong with America’s science policy. His words receive knowing nods from all. Does this action constitute peer review?

Whatever it is, it can’t be any worse than the peer review which loosed “Midichlorians—the biomeme hypothesis: is there a microbial component to religious rituals?” on the world. It is an official paper from Alexander Panchin and two others in Biology Direct, which I suppose is a sort of bargain-basement outlet for academics to publish in.

The headline above is a prediction directly from that paper, a paper so preposterous that it’s difficult to pin down just what went wrong and when. I don’t mean that it is hard to see the mistakes in the paper itself, which are glaring enough, for all love. No: the important question is how this paper, how even this journal and the folks who contribute to it, can exist and find an audience.

Perhaps it can be put down to the now critical levels of the politicization of science combined with the expansion team syndrome. More on that in a moment. First the paper.

It’s Panchin’s idea that certain bugs which we have in our guts make us crazy enough to be religious, and that if only there were a little more Lysol in the world, there would be fewer or no believers.

Panchin uses the standard academic trick of citing bunches of semi-related papers, which give the appearance that his argument has both heft and merit. He tosses in a few television mystery-show clues, like “‘Holy springs’ and ‘holy water’ have been found to contain numerous microorganisms, including strains that are pathogenic to humans”. Then this:

We hypothesize that certain aspects of religious behavior observed in human society could be influenced by microbial host control and that the transmission of some religious rituals could be regarded as a simultaneous transmission of both ideas (memes) and organisms. We call this a “biomeme” hypothesis.

Now “memes” are one of the dumbest ideas to emerge from twentieth-century academia. So part of the current problem is that dumb ideas aren’t dying. Capital-S Science is supposed to be “self-correcting”, but you’d never guess it from the number of undead theories walking about.

Anyway, our intrepid authors say some mind-altering, religion-inducing microbes make their hosts (us) go to mass, or others to temple, and still more to take up posts as Chief Diversity Officers at universities, just so that the hosts will be able to pass on the bugs to other folks. Very clever of the microbes, no? But that’s evolution for you. You never know what it’ll do next.

Okay, so it’s far-fetched. But so’s relativity—and don’t even get started on quantum mechanics. Screwiness therefore isn’t necessarily a theory killer. But lack of consonance with the real world is. So what evidence have the authors? What actual observations have they to lend even a scintilla of credence to their theory?

None.

Not one drop. The paper is pure speculation from start to finish, and in the mode of bad Star Trek fan fiction at that.

So how did this curiosity (and others like it) become part of Science? That universities are now at least as devoted to politics as they are to scholarly pursuits is so well known it needs no further comment here. But the politics of describing religion as some sort of disease or deficiency is juicy and hot, so works like this are increasingly prevalent. Call them Moonacies, a cross between lunacies and Chris Mooney, a writer who makes a living selling books to progressives who want to believe their superiority is genetic.

Factor number two, which is not independent of number one, is expansion team syndrome. The number of universities and other organizations which feed and house “researchers” continues to grow, because why? Because Science! We’re repeatedly told, and everybody believes, that if only we all knew more Science, then the ideal society would finally be created. Funding for personnel grows. Problem is, the talent pool of the able remains fixed, so the available slots are filled with the not-as-brilliant. Besides, we’re all scientists now!

New journals are continuously created for the overflow, and they’re quickly filled with articles like this one, giving the impression that things of importance are happening. Not un-coincidentally, these outlets contain greater proportions of papers which excite the press (no hard burden). And so here we are.

Comments On Dawid’s Prequential Probability

Murray takes the role of a prequential Nature.

Phil Dawid is a brilliant mathematical statistician who introduced (in 1984) the theory of prequential probability1 to describe a new-ish way of doing statistics. We ought to understand this theory. I’ll give the philosophy and leave out most of the mathematics, which are not crucial.

We have a series of past data, x = (x_1, x_2, …, x_n) for some observable of interest. This x can be quite a general proposition, but for our purposes suppose its numerical representation can only take the values 0 or 1. Maybe x_i = “The maximum temperature on day i exceeds W °C”, etc. The x can also have “helper” propositions, such as y_i = “The amount of cloud cover on day i is Z%”, but we can ignore all these.

Dawid says, “One of the main purposes of statistical analysis is to make forecasts for the future” (emphasis original) using probability. (Its only other purpose, incidentally, is explanation: see this for the difference.)

The x come at us sequentially, and the probability forecast for time n+1 Dawid writes as Pr(x_{n+1} | x_1,…,x_n). “Prequential” comes from “probability forecasting with sequential prediction.” He cites meteorological forecasts as a typical example.

This notation suffers a small flaw: it doesn’t show the model, i.e. the list of probative premises of x which must be assumed or deduced in order to make a probability forecast. So write p_{n+1} = Pr(x_{n+1} | x_1,…,x_n, M) instead, where M are these premises. The notation shows that each new piece of data is used to inform future forecasts.
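
To fix the idea, here is a minimal sketch of a prequential forecasting system. The “model” M here is Laplace’s rule of succession, chosen purely for illustration (it is not Dawid’s example): each forecast p_{n+1} is computed from the past data alone, before the next outcome arrives.

    def prequential_forecasts(x):
        """Yield p_{n+1} = Pr(x_{n+1} = 1 | x_1,...,x_n, M) before seeing each outcome."""
        successes = 0
        for n, outcome in enumerate(x):
            yield (successes + 1) / (n + 2)  # Laplace's rule: forecast from past data only
            successes += outcome             # then the outcome arrives and updates M's inputs

    x = [1, 0, 1, 1, 0, 1]
    p = [round(q, 3) for q in prequential_forecasts(x)]
    print(p)  # [0.5, 0.667, 0.5, 0.6, 0.667, 0.571]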

How good is M at predicting x? The “weak prequential principle” is that M should be judged only on the p_i and x_i, i.e. only on how good the forecasts are. This is not in the least controversial. What counts as “good” sometimes is. There has to be some measure of closeness between the predictions and outcomes. People have invented all manner of scores, but (it can be shown) the only ones that should be used are so-called “proper scores”. These are scores which require p_{n+1} to be given conditional on just the M and old data and nothing else. This isn’t especially onerous, but it does leave out measures like R^2 and many others.
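
For binary x, one standard proper score is the Brier score, the mean squared difference between forecast and outcome (lower is better). A minimal sketch, reusing the toy forecasts above:

    def brier_score(p, x):
        """Mean squared difference between forecasts p_i and outcomes x_i; lower is better."""
        return sum((pi - xi) ** 2 for pi, xi in zip(p, x)) / len(x)

    print(round(brier_score([0.5, 0.667, 0.5, 0.6, 0.667, 0.571], [1, 0, 1, 1, 0, 1]), 3))  # 0.289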

Part of understanding scoring is calibration. Calibration has more than one dimension, but since we have picked a simple problem, consider only two. Mean calibration is when the average of the p_i equaled (past tense) the average of the x_i. Frequency calibration is when, whenever p_i = q, x_i = 1 q*100% of the time. Now since x can only equal 0 or 1, frequency calibration is impossible for any M which produces non-extreme probabilities. That is, the first p_i that does not equal 0 or 1 dooms the frequency calibration of M.
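
Both checks are easy to compute. A sketch with made-up forecasts: mean calibration compares the average forecast to the average outcome; frequency calibration groups occasions by forecast value q and asks whether x = 1 happened q*100% of the time in each group.

    from collections import defaultdict

    def mean_calibration(p, x):
        return sum(p) / len(p), sum(x) / len(x)  # the two averages; equal when mean-calibrated

    def frequency_calibration(p, x):
        groups = defaultdict(list)
        for pi, xi in zip(p, x):
            groups[pi].append(xi)
        return {q: sum(xs) / len(xs) for q, xs in groups.items()}  # frequency of x = 1 per q

    p = [0.7, 0.7, 0.7, 0.3, 0.3]
    x = [1, 1, 0, 0, 1]
    print(mean_calibration(p, x))       # (0.54, 0.6): not quite mean calibrated
    print(frequency_calibration(p, x))  # {0.7: 0.667, 0.3: 0.5}: not frequency calibrated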

Ceteris paribus, fully calibrated models are better than non-calibrated ones (this can be proven; they’ll have better proper scores; see Schervish). Dawid (1984) only considers mean calibration, and in a limiting way; I mean mathematical limits, as the number of forecasts and data head out to infinity. This is where things get sketchy. For our simple problem, calibration is possible finitely. But since the x are given by “Nature” (as Dawid labels the causal force creating the x), we’ll never get to infinity. So it doesn’t help to talk of forecasts that have not yet been made.

And then Dawid appears to believe that, out at infinity, competing mean-calibrated models (he calls them probability forecasting systems) are indistinguishable. “[I]n just those cases where we cannot choose empirically between several forecasting systems, it turns out we have no need to do so!” This isn’t so, finitely or infinitely, because two different models which have the same degree of mean calibration can have different levels of frequency calibration. So there is still room to choose.

Dawid also complicates his analysis by speaking as if Nature is “generating” the x from some probability distribution, and that a good model is one which discovers Nature’s “true” distribution. (Or, inversely, he says Nature “colludes” in the distribution picked by the forecaster.) This is the “strong prequential principle”, which I believe does not hold. Nature doesn’t “generate” anything. Something causes each x_i. And that is true even in the one situation where our best knowledge is only probabilistic, i.e. the very small. In that case, we can actually deduce the probability distributions of quantum x in accord with all our evidence. But, still, Nature is not “generating” x willy-nilly by “drawing” values from these distributions. Something we-know-not-what is causing the x. It is our knowledge of the causes that is necessarily incomplete.

For the forecaster, that means, in every instance and for any x, the true “probability distribution” is the one that takes only extreme probabilities, i.e. the best model is one which predicts without error (each p_i would be 0 or 1 and the model would automatically be frequency and mean calibrated). In other words, the best model is to discover the cause of each x_i.

Dawid also has a technical definition of the “prequential probability” of an “event”, which is a game-theoretic-like construction that need not detain us, because of our recognition that the true probability of any event is 0 or 1.

Overall

That models should be judged ultimately by the predictions they make, and not exterior criteria (which unfortunately includes political considerations, and even p-values), is surely desirable but rarely implemented (how many sociological models are used to make predictions in the sense above?). But which proper score does one use? Well, that depends on exterior information; or, rather, on evidence which is related to the model and to its use. Calibration, in all its dimensions, is scandalously underused.

Notice that in Pr(x_{n+1} | x_1,…,x_n, M) the model remains fixed and only our knowledge of more data increases. In real modeling, models are tweaked, adjusted, improved, or abandoned and replaced wholesale, meaning the premises (and deductions from the same) which comprise M change in time. So this notation is inadequate. Every time M changes, M is different, a banality which is not always remembered. It means model goodness judgments must begin anew for every change.

A true model is one that generates extreme probabilities (0 or 1), i.e. one that identifies the causes, or the “tightest” probabilities deduced from the given (restricted by nature) premises, as in quantum mechanics. Thus the ultimate comparison is always against perfect (possible) knowledge. Since we are humble, we know perfection is mostly unattainable, thus we reach for simpler comparisons, and gauge model success by its success over simple guesses. This is the idea of skill (see this).

Reminder: probability is a measure of information, an epistemology. It is not the language of causality, or ontology.

—————————————————————————–

Thanks to Stephen Senn for asking me to comment on this.

1The two papers to read are, Dawid, 1984. Present position and potential developments: some personal views: statistical theory: the prequential approach. JRSS A, 147, part 2, 278–292. And Dawid and Vovk, 1999. Prequential probability: principles and properties. Bernoulli, 5(1), 125–162.

Explanation Vs Prediction

The IPCC, hard at work on another forecast.

Introduction

There isn’t as much space between explanation and prediction as you’d think; both are had from the same elements of the problem at hand.

Here’s how it all works. I’ll illustrate with a statistical (or probability) model, though there really is no such distinct thing; which is to say, there is no difference in meaning or interpretation between a probability model and a physical or other kind of mathematical model. There is a practical difference: probability models express uncertainty natively, while (oftentimes) physical models do not mention it, though it is there, lurking below the equations.

Let’s use regression, because it is ubiquitous and easy. But remember, everything said goes for all other models, probability or physical. Plus, I’m discussing how things should work, not how they’re actually done (which is very often badly; not your models, Dear Reader: of course, not yours).

We start by wanting to quantify the uncertainty in some observable y, and believe we have collected some “variables” x which are probative of y. Suppose y is (some operationally defined) global average temperature. The x may be anything we like: CO2 levels, population size, solar insolation, grant dollars awarded, whatever. The choice is entirely up to us.

Now regression, like any model, has a certain form. It says the central parameter of the normal distribution representing uncertainty in y is a linear function of the x (y and x may be plural, i.e. vectors). This model structure is almost never deduced (in the strict sense of the word) but is assumed as a premise. This is not necessarily a bad thing. All models have a list of premises which describe the structure of the model. Indeed, that is what being a model means.
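
As a concrete sketch of that structure (all numbers invented for illustration): the premises say the uncertainty in y is a normal distribution whose central parameter is a + b*x, with a, b, and the spread to be estimated from the data.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # an illustrative predictor, e.g. CO2 in arbitrary units
    y = np.array([14.1, 14.3, 14.2, 14.6, 14.7])  # an illustrative observable, e.g. temperature in C

    X = np.column_stack([np.ones_like(x), x])     # intercept and slope columns
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # estimates for the central parameter a + b*x
    sigma = (y - X @ beta).std(ddof=2)            # spread of the normal around that center

    print(beta, sigma)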

Another set of premises are the data we observe. Premises? Yes, sir: premises. The x we pick and then observe take the form of propositions, e.g. “The CO2 observed at time 1 was c_1”, “The CO2 observed at time 2 was c_2”, etc.

Observed data are premises because it is we who pick them. Data are not Heaven sent. They are chosen and characterized by us. Yes, the amount of—let us call it—cherishing that takes place over data is astonishing. Skip it. Data are premises, no different in character than other assumptions.

Explanation

Here is what explanation is (read: should be). Given the model-building premises (which specify, here, regression) and the observed data (both y and x), we specify some proposition of interest about y and then specify propositions about the (already observed) x. Explanation is how much the probability of the proposition about y (call it Y) changes.

That’s too telegraphic, so here’s an example. Pick a level for each of the observed x: “The CO2 observed is c_1”, “The population is p”, “The grant dollars are g”, etc. Then compute the probability Y is true given this x and given the model and other observed data premises.

Step two: pick another level for each of the x. This may be exactly the same everywhere, except for just one component, say, “The CO2 observed is c_2”. Recompute the probability of Y, given the new x and other premises.

Step three: compare how much the probability of Y (given the stated premises) changed. If not at all, then given the other values of x and the model and data premises, CO2 has little, and maybe even nothing, to do with y.
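
Continuing the toy regression sketched earlier, with Y the hypothetical proposition “y exceeds 14.5”: change only the CO2-like x and watch Pr(Y) move. Parameter uncertainty is ignored for brevity, so take this as an illustration of the logic, not a full analysis.

    from scipy.stats import norm

    def prob_Y(x_new, threshold=14.5):
        center = beta[0] + beta[1] * x_new                  # the model's central parameter at x_new
        return norm.sf(threshold, loc=center, scale=sigma)  # Pr(y > threshold | x_new, data, M)

    p1 = prob_Y(3.0)        # scenario one: x = 3
    p2 = prob_Y(4.0)        # scenario two: x = 4, everything else the same
    print(p1, p2, p2 - p1)  # the change in Pr(Y) is the explanatory weight of this x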

Of course, there are other values of the other x that might be important, in conjunction with CO2 and y, so we can’t dismiss CO2 yet. We have a lot of hard work to do to step through how all the other x and how this x (CO2) change this proposition (Y) about y. And then there are other propositions of y that might be of more interest. CO2 might be important for them. Who knows?

Hey, how much change in the probability of any Y is “enough”? I have no idea. It depends. It depends on what you want to use the model for, what decisions you want to make with it, what costs await incorrect decisions, what rewards await correct ones, all of which might be unquantifiable. There is and should be NO preset level which says “Probability changes by at least p are ‘important’ explanations.” Lord forbid it.

A word about causality: none. There is no causality in a regression model. It is a model of how changing CO2 changes our UNCERTAINTY in various propositions of y, and NOT in changes in y itself.1

Explanation is brutal hard labor.

Prediction

Here is what prediction is (should be). Same as explanation. Except we wait to see whether Y is true or false. The (conditional) prediction gave us its probability, and we can compare this probability to the eventual truth or falsity of Y to see how good the model is (using proper scores).

Details. We have the previous observed y and x, and the model premises. We condition on these and then suppose new x (call them w) and ask what is the probability of new propositions of y (call them Z). Notationally, Pr(Z | w, y, x, M), where M are the model form premises. These probabilities are compared against the eventual observations, i.e. the truth or falsity of Z.

“Close” predictions means good models. “Distant” ones mean bad models. There are formal ways of defining these terms, of course. But what we’d hate is if any measure of distance became standard. The best scores to use are those tied intimately with the decisions made with the models.

And there is also the idea of skill. The simplest regression is a “null x”, i.e. no x. All that remains is the premises which say the uncertainty in y is represented by some normal distribution (where the central parameter is not a function of anything). Now if your expert model, loaded with x, cannot beat this naive or null model, your model has no skill. Skill is thus a relative measure.

For time series models, e.g. GCMs, one natural “null” model is the null regression, which is also called “climate” (akin to long-term averages, but taking into account the full uncertainty of these averages). Another is “persistence”, which is the causal-like model y_{t+1} = y_t + fuzz. Again, sophisticated models which cannot “beat” persistence have no skill and should not be used. Like GCMs.
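
A sketch of skill with made-up numbers: score the expert model and the two null models on the same observations; positive skill means the expert model beat the null.

    import numpy as np

    y = np.array([14.1, 14.3, 14.2, 14.6, 14.7, 14.5])      # an invented series
    model_preds = np.array([14.2, 14.2, 14.4, 14.6, 14.6])  # hypothetical expert forecasts of y[1:]

    climate = np.full(len(y) - 1, y.mean())  # "climate" null: the long-run average
    persistence = y[:-1]                     # "persistence" null: y_{t+1} = y_t

    def mse(pred, obs):
        return np.mean((pred - obs) ** 2)    # squared error, used here for simplicity

    for name, null in [("climate", climate), ("persistence", persistence)]:
        skill = 1 - mse(model_preds, y[1:]) / mse(null, y[1:])  # > 0: the model has skill
        print(name, round(skill, 3))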

More…

This is only a sketch. Books have been written on these subjects. I’ve compressed them all into 1,100 words.

———————————————————————————-

1Simple causal model: y = x. It says y will be the value of x, that x makes y what it is. But even these models, though written mathematically like causality, are not treated that way. Fuzz is added to them mentally. So that if x = 7 and y = 9, the model won’t be abandoned.

Philosophic Issues in Cosmology V: What Measurements Tell Us—Guest Post by Bob Kurland

Distance Ladder from Skynet University (UNC)

Bob Kurland is a retired, cranky, old physicist, and convert to Catholicism. He shows that there is no contradiction between what science tells us about the world and our Catholic faith.

Read Part IV.

When you can measure what you are speaking about, and express it in numbers, you know something about it; when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the stage of science.—Lord Kelvin.

The following types of data are primary: positions and luminosities of stars and galaxies (including x-ray, UV, visible, IR, microwave and radio-frequency radiation); wavelengths of spectral lines from these objects; Doppler shifts of such wavelengths (shifts in the wavelength that depend on the velocity of the object emitting the radiation); frequencies, intensities and polarizations of the microwave cosmic background radiation (CBR).

It’s important to realize that there is a “ladder” of inferences of secondary data from these primary data. For example, the distances of nearby stars (10-100 light years or so distant from us) can be estimated relatively accurately by parallax measurements. From the intensity of light observed, one can then estimate accurately the intrinsic brightness of these stars. One can then use other properties of stars at known distances to set up what are called “standard candles”: properties that relate to the intrinsic brightness, so that the intrinsic brightness can be inferred and, from the observed intensity, a distance.
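
Two rungs of the ladder, in a minimal sketch (values invented): parallax gives the distance in parsecs as the reciprocal of the parallax angle in arcseconds; a standard candle of known absolute magnitude M gives the distance from the apparent magnitude m via the distance modulus m - M = 5 log10(d / 10 pc).

    def distance_from_parallax(parallax_arcsec):
        return 1.0 / parallax_arcsec    # distance in parsecs

    def distance_from_magnitudes(m, M):
        return 10 ** ((m - M + 5) / 5)  # parsecs, from m - M = 5 log10(d / 10 pc)

    print(distance_from_parallax(0.1))         # 10 pc, roughly 32.6 light years
    print(distance_from_magnitudes(10.0, 0.0)) # 1000 pc for a candle with absolute magnitude 0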

Mass Density and Curvature of Space

Various standard candles are used at various distances, ranging from Cepheid variables to supernovae and galactic lensing of quasars. One of the first standard candles was the intrinsic brightness of the Cepheid variables. Hubble used these to estimate the distance of stellar objects and to construct his plot of red shift versus distance, which was the basis for the expanding universe theory. Since that time more accurate measures have given a very good linear relation between red shift (velocity moving away from us) and distance from us.

One can also count the number of objects within the field of view and from this count make an estimate of the total number of objects to be seen, and thus infer the total (baryonic, i.e. ordinary) mass. From these astronomical data one can infer the ratio of the actual matter density to a critical value; this ratio is designated Ω_0 (uppercase Greek omega). If Ω_0 > 1, space-time is positively curved (like a sphere) and the universe expansion will eventually turn into a collapse, a “big crunch”; if Ω_0 = 1, space-time is flat and the universe will expand in a uniform way; if Ω_0 < 1, space-time is curved as in a saddle surface, and the universe will expand indefinitely.
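
The critical value mentioned above is the critical density, rho_crit = 3 H0^2 / (8 pi G), and Ω_0 is the measured mean density divided by it. A sketch, using the usual rough figure of H0 ≈ 70 km/s/Mpc purely for illustration:

    import math

    G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
    H0 = 70 * 1000 / 3.086e22   # 70 km/s/Mpc converted to 1/s

    rho_crit = 3 * H0 ** 2 / (8 * math.pi * G)  # about 9e-27 kg/m^3
    print(rho_crit)

    def geometry(omega_0):
        if omega_0 > 1:
            return "positively curved: expansion eventually reverses (big crunch)"
        if omega_0 == 1:
            return "flat: uniform expansion"
        return "negatively curved (saddle): expands indefinitely"

    print(geometry(0.3))  # e.g. an illustrative Omega_0 of ~0.3 for matter alone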

Dark Energy

Observations of red shifts from distant supernovae and from temperature anisotropies in the cosmic background radiation suggest that there is a “dark energy”, a pressure (as in the “lambda” constant in Einstein’s original formulation) that makes the expansion of the universe accelerate. (What this is saying is the expansion rate is slower for older, more distant objects, faster for more recent, closer objects, so there is an acceleration of the rate.)

Evidence for an Expanding Universe

The following observations, in addition to the red shift, confirm the picture of a universe expanding from a hot big bang: the cosmic background radiation, the relative abundance of hydrogen to helium in the universe (about 3/1), and the lack of heavy elements in far distant galaxies. The cosmic background radiation is like the embers of a burnt-out fire, the embers of the hot “Big Bang” spread evenly throughout the universe. The small irregularities in the cosmic background radiation indicate the fluctuations that grew into stars and then galaxies. The relative abundance of hydrogen to helium is consistent with models of element formation that took place at an early, high-temperature stage of the universe. Far distant galaxies (10 billion light years away, say) are also seen at an early stage of development (remember, looking out in distance is also looking back in time), and therefore heavy elements had not yet formed by the collapse of red giant stars.

Ellis lists (among others) the following common misconceptions about the expanding universe:

  • Misconception 1: The universe is expanding into something. It is not, as it is all there is. It is just getting bigger, while always remaining all that is.
  • Misconception 2: The universe expands from a specific point, which is the centre of the expansion. All spatial points are equivalent in these universes, and the universe expands equally about all of them. Every observer sees exactly the same thing in an exact RW (Robertson-Walker) geometry. There is no centre to a FL (Friedmann-Lemaître) universe.
  • Misconception 3: Matter cannot recede from us faster than light. It can, at an instant; two distantly separated fundamental observers in a surface {t = const} can have a relative velocity greater than c if their spatial separation is large enough. No violation of special relativity is implied, as this is not a local velocity difference, and no information is transferred between distant galaxies moving apart at these speeds. For example, there is presently a sphere around us of matter receding from us at the speed of light; matter beyond this sphere is moving away from us at a speed greater than the speed of light. The matter that emitted the CBR was moving away from us at a speed of about 61c when it did so.

The next in this series will deal with the Anthropic Principle.
