
What That Spurious Correlation Website Tells Us About Statistics


Many (thanks!) readers sent links to Tyler Vigen’s Spurious Correlations, whose motto is “Discover a new correlation – an interesting spurious correlation each day!”

My favorite is the one tying the yearly number of people who drowned by falling into a swimming pool to the number of films Nicolas Cage appeared in. The correlation is, ominously, 0.666, meaning that the more we see of Cage the more widespread and Satanic the death toll. Surprised?

But that correlation is trivial compared to the one between per capita consumption of cheese in the States and the number of people who died by becoming tangled in their bed sheets, which reaches a whopping, and statistically significant, 0.947.
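The mechanics behind such a number are easy to reproduce. Here is a minimal Python sketch (the yearly figures are invented for illustration, not Vigen's actual data) showing that any two series which merely trend together in time yield a large Pearson correlation, causation or no:

```python
# Two invented yearly series that both happen to trend upward.
# (Illustrative numbers only, not Vigen's actual data.)
cheese_lbs = [29.8, 30.1, 30.5, 30.6, 31.3, 31.7, 32.6, 32.7, 32.8, 33.1]
bedsheet_deaths = [327, 456, 509, 497, 596, 573, 661, 741, 809, 717]

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Co-trending series correlate strongly no matter what causes what.
print(round(pearson(cheese_lbs, bedsheet_deaths), 3))
```

Any pair of quantities that drifted in the same direction over the same decade would score similarly well; the formal calculation cannot tell a causal pair from a coincidental one.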

Before you wander off giggling, answer this: what makes Mr Vigen’s entries different from those offered at “data journalism” sites like Vox and FiveThirtyEight? Or indeed different from learned journals, such as Stroke, which recently published the “significant” discovery that “Geomagnetic Storms Can Trigger Stroke”?1

Consider that we have dispassionate data culled from government sources, and therefore as pure as can be. No rules of statistics have been violated. No miscalculations have been made. True, “statistical significance” should join chiropractic, communism, and perpetual motion machines on the scrap heap of beguiling but baneful, baleful beliefs. But switching to another quantitative probability interpretation won’t fix anything.

Like, for instance, a Bayesian technique. Turning these correlations into Bayesian parametric posteriors would change little. Even going whole hog and speaking of posterior predictive distributions, where the uncertainty of the parameters is “integrated out” and the model speaks purely in terms of observables, wouldn’t do much good, though it would be an improvement.

That’s because the quantitative “signals” identified by Vigen, and by many “researchers” in their papers, are real—they really are there. That cheese-bed sheet correlation really is absurdly high. But could eating cheese cause somebody to strangle themselves with their bed sheets? Maybe. Cheese is very binding, as my Grandmother used to say.

Hilarious puns aside, we suspect these correlations because we can’t think of plausible (efficient) causal connections. We have no proof of lack of causality, not in the “formal data” anyway. But that is because probability is far more than its formal quantification.

Statistics is shockingly limited. It never, or at least not natively, asks about causation. Instead, it asks about correlations. “Given this what is the probability of that,” is what it is good at, not “What caused that?”

This would not be problematic except that everybody, unless they’re forewarned, mistakes statistical correlations for causality—and even when forewarned the error is made. That stroke paper says the sun’s rays are causing apoplexy. How? Who knows? But the sun’s rays are a form of radiation and radiation, as our culture affirms every chance it gets, is bad. Strokes are bad, too. Therefore, the sun might cause strokes.

And, hey, maybe it does. The statistics say it might; there are even wee p-values. But statistics also suggests Nicolas Cage causes drownings.

The key difference between the stroke paper and cheese-bed sheet connection is that the authors of the former work took care to build a plausible causal story, while Vigen’s site offers none, and even asks you to consider there could be none. The formal quantitative result is the same in both cases.

This is where statistical practice becomes schizophrenic. Everybody knows that there is more to the evidence than that which is formally quantified. But if the formally quantified evidence is pleasing (wee p-values, etc.) it is taken as proof of the speculated causation, as if, that is, it were the complete evidence. Read the discussion section of any paper which relies on statistics to see this, particularly in the so-called soft sciences or those claiming the horrors which await us once global warming finally strikes (soon, soon).

The opposite also holds. Consider that if we knew, really knew, the causal process by which solar rays caused stroke, it wouldn’t matter what the statistical evidence said. Non wee p-value? Well, that could be a faulty observation, the wrong population, something.

Part of the problem is the intense drive to quantify and to leave everything non-quantifiable behind. You can’t stick ideas into formulas. Another part is the mysticism which accompanies classical statistical measures. Wee p-values are magic.

So what to do? Ah, that’s the hard part. One quick example. If we want to map the uncertainty of the flight of a bullet from a Smith & Wesson six-shooter, we ask a physicist about the equations of motion related to ballistics. Because why? Because those equations quantify our understanding of the causality. So much force in such and such a way results in the bullet being caused to land over there. That’s a pure causal model.

Except it won’t work in practice, not perfectly. Because that causal model won’t nail the precision of the landing past some point. We may be able to say, via the causal model, that the bullet will land somewhere on some target, but that’s it. To say more, we can add a probability model to the causal model which gives us probabilities the bullet lands in specific locations on the target.

We do that because we don’t understand all the forces acting on the flight of the bullet. Those parts which we don’t understand are “random”, i.e. unknown, those parts which we do know are the (gross) causes and are modeled accordingly.
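The bullet example can be sketched directly. In this minimal Python illustration (the muzzle speed, elevation, and scatter are invented numbers, not real Smith & Wesson data), the causal part is the textbook vacuum-range equation, and everything we do not understand about the flight is lumped into Gaussian scatter about the predicted impact point:

```python
import math
import random

def ideal_range(v0, elev_deg, g=9.81):
    """Causal part: vacuum ballistic range for muzzle speed v0 (m/s)."""
    return v0 ** 2 * math.sin(2 * math.radians(elev_deg)) / g

def simulated_impacts(v0, elev_deg, n, spread_m=2.0):
    """Probability part: unmodeled forces (wind, spin, powder
    variation) become Gaussian scatter about the causal prediction."""
    r = ideal_range(v0, elev_deg)
    return [random.gauss(r, spread_m) for _ in range(n)]

random.seed(1)
center = ideal_range(250, 30)        # where the physics says it lands
hits = simulated_impacts(250, 30, 1000)
print(round(center, 1), round(min(hits), 1), round(max(hits), 1))
```

The causal model pins down the gross behavior; the probability model quantifies what remains, which is exactly the mixture described above.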

In other words, the best models are mixtures of physics and probability. About these, more later.

———————————————————————

1We might look at this paper, which was discovered by K.A. Rodgers, in depth later.

Categories: Philosophy, Statistics


  1. Your example of the Smith and Wesson is very good. If we control for as many variables as possible (condition of the gun, load used in the ammo, bench rest clamped down, robotic trigger pull, target at a specific distance, previous sighting-in of the gun, and wind speed, if any), we should get a fairly accurate picture of where the bullet will land. However, if we remove the “clamped to a bench rest” and put in a person, the trigger pull variable changes in addition to the accuracy now depending on the person’s skill. The skill depends on the person having a “good” day or a “bad” day, etc. If we know little about the condition of the gun, the ammo, etc., the probability of knowing where the bullet will land goes down even further. This is an excellent example of where trying to determine the outcome of an event (where a fired bullet will land) can be extremely complex. It is indeed a mixture of physics and probability. It’s also quite easy to demonstrate to people who don’t understand that while physics gives us some answers and probability others, it’s often a combination of the two. How much each contributes varies widely—wider than many people ever realize.

  2. So, the problem here, then, is “statisticsism”?

    The adjunct support discipline to “scientism”?

  3. Yes. It is always a hazard for someone enthralled by a technique or tool to see everything in terms of that technique/tool. (Heisenberg warned of this.) Someone enthralled with a hammer will see nails everywhere, a victim of hammerism.

  4. Just returned from seeing Godzilla. If you haven’t already seen it and are a fan of monster/disaster/effects movies, do yourself a favor and go. Now. If you like films like this but hate when the human drama is nothing more than a thin plot device to hang the SFX on, you probably won’t hate this movie … you may, like I did, totally love it. It’s one of the better films in the genre, and I was pleasantly surprised.

    If you’re a bit of a science buff that carps at silly things in movies, Godzilla has several of those. But the one that made me laugh the most (no spoiler here) is a scene where a Navy admiral on the bridge of his aircraft carrier is looking at a big colorful tactical display. California coast to the right, actual monster out of “sensor range”, but twisty colorful paths reaching out to Livermore, Redwood City, Bodega Bay, Martinez … all over the place. And then … wait for it … some yeoman yammering in the Admiral’s ear, “The tracking models show the most likely destination is San Francisco.”

    I was the only one in the theater laughing. Just thought I’d share.

  5. So, speaking again of models. And again and again and againagainagiainagain ….

    Your example of the Smith and Wesson is very good. If we control for as many variables as possible (condition of the gun, load used in the ammo, bench rest clamped down, robotic trigger pull, target at a specific distance, previous sighting-in of the gun, and wind speed, if any), we should get a fairly accurate picture of where the bullet will land.

    It’s a great example. I have another one. Ship gun fire-control systems. Starting from rudimentary tables of ballistics data in WWI, which required manual aiming inputs from gun crews, by WWII these systems became highly automated — as in the fire control officer would tell a computer what target he wanted to hit, and (I simplify) the guns would aim themselves.

    Computers in WWII? Yup. Radar directed too. They were electromechanical analog computers that weighed as much as an automobile. These things solved differential equations in real time and while under fire in combat.

    In addition to the range, bearing and motion of the target vessel, they would compensate for the firing ship’s rolling and pitching motions in heavy seas, meteorological conditions, the Coriolis effect, the Magnus effect and other ballistics of the particular shells being fired … on and on.

    Accurate? Well predictably … not so much. In good conditions, repeat shell accuracy (hitting the same spot over and over from the same position) was on the order of 0.5% of range. For an Iowa class battlewagon firing its 16 in. batteries out to 10 miles, 0.5% of range is 100 yards. 300 feet. 3,600 inches. A football field.

    Compound that stunning lack of accuracy with the fact that the enemy ship commander is no dummy. He’s maneuvering hard, changing course and velocity, doing anything he can. He knows that you are going to be twiddling the parameter knobs on your analog computer. He knows you need to adjust for actual conditions that your computer model hadn’t figured out for itself. He knows that even if you wanted to, you couldn’t put another salvo exactly where the last one splashed, and that you’re working on aiming somewhere else.

    So what does he do? A major tactic was to change course and chase the salvos that missed.

    In such a dynamic wholly unpredictable environment, with all the human factors of a devious and intelligent enemy, why use such a mechanically reliable, but expensive and inaccurate machine that’s really just a glorified abacus? One that can barely put two shots in a row into a football field on a good day in non combat conditions?

    I’ll let Wikipedia do the answering:

    Because of radar, Fire Control systems are able to track and fire at targets at a greater range and with increased accuracy during the day, night, or inclement weather. This was demonstrated in November 1942 when the battleship USS Washington engaged the Imperial Japanese Navy battlecruiser Kirishima at a range of 18,500 yards (16,900 m) at night. The engagement left Kirishima in flames, and she was ultimately scuttled by her crew. This capability gave the United States Navy a major advantage in World War II, as the Japanese did not develop radar or automated fire control to the level of the US Navy and were at a significant disadvantage.

    At night. In 1942. With a computer made from mechanical gears, motors and not an integrated circuit in sight. The Iowa herself used essentially the same targeting computer but with much better radar for shore bombardment of Iraqi positions. In 1991.

    Sometimes “bad”, but just good enough is … just good enough. And if it’s the best of its class of “bad” technologies, well then, it’s excellent. Right?

  6. I’m thinking that if it’s the best of its class of “bad” technologies, well, it’s still bad. Of course, superlatives are fascinating things and one might be able to squeeze excellent out of best of the bad—but I doubt it! 🙂

  7. Sheri, the point is that adjectives like good, bad, accurate, inaccurate are meaningless when used in isolation. What is a bad model? Well, it’s one that makes bad predictions. What’s a bad prediction? Well, it’s one that didn’t predict what actually happened.

    That tells me nothing. Zip. Nada.

    It’s useless to say, “This model is bad” and leave it at that. It’s worse than useless to say, “This model is bad, so they must all be bad”; that’s fallaciously useless. To then stand pat on those meaningless and logically unsound arguments is the worst of all — it’s total feculence of the taurine variety.

    If a model is bad, I want to know how bad it is. Is it less or more bad than some other model? If the model is so bad, what can we do to improve it?

    How bad can it still be but still be useful?

    I want at least some comparative adjectives. Then I have at least a qualitative way to make an evaluation.

    What I really want are superlatives though. At the very very least, show me the most bad model. Show me the least bad model. Then I get a sense of the range of badness. I can then say that the most bad model is the worst, and the least bad model is the best.

    I want to know what we can do to make the best model better, because I have a basis for comparison. I’m not running around bellyaching about how bad everything is, telling people how not to do things. I want to find out how to do things better, if possible.

    In other words, the best models are mixtures of physics and probability. About these, more later.

    Yes! Applause! Tell us about the best models. How to do them according to best practices.

    Solving problems starts with saying, “This model is a turd!” but doesn’t stop there. If it does stop there, all we get are more turds and potentially no solutions.

  8. I hear you saying that sometimes my use of terms is as incorrect and lacking in value as that of the people whose models I do not accept. This may be true, though not because I don’t want to avoid it, but because the complexity of the situation and the models do not really allow for adequate discussion in comment boxes, and sometimes not even in a blog post. I will attempt to expand on why I think some models are bad and what I think a good model would be. It may not be possible to always provide the details and alternatives you desire, but I will try to provide more detail as to why I object to a model. As time permits.

    (And the least bad model is still bad, but it is the best of the bad. If we don’t label it as “best of the bad”, people start to think the model is actually a good one. )

  9. Sheri, the comment was addressed to you, but I was not speaking only to you. I am remiss for not making that clear. What follows is in the same spirit.

    My ranting — and it is ranting struggling to be constructive — is because you, Briggs, I, we, everyone, are locked in a massive political struggle over a great number of things. Not just here in the US where I live, but worldwide. What passes for political speech today amounts to not much more than two year olds stomping around throwing a temper tantrum.

    In other words, it’s not rational thought. My spiel about adjectives isn’t just about words; it’s about how to think and about attitude. Concern is different from complaint. Critical thinking is different from criticism.

    While I’m on the nuance of words: I define skepticism as the process of trying to find out what’s wrong so that errors can be identified to build upon and improve — not to undermine, destroy and stagnate. The best constructive skepticism is not just saying, “No, you’re wrong.” That often helps, but it’s ever so much better when it’s followed by, “Here’s how I think you could do it better.”

    Yes, I’m inclined to believe that humans are contributing to a warming planet with our greenhouse gas emissions and land-use changes. But I’m every bit a skeptic as any of the hard-core contrarians out there.

    Show me the good, the bad and the ugly models; that was mainly directed at Briggs. But yes, by all means look for them on your own. Share them if you like; I’m always one for getting diverse — especially opposing — opinions, perspectives, information and methods from others.

  10. It’s useless to say, “This model is bad”

    Just as it is useless to say “X is the best” without stipulating what “best” means. It’s the first thing learned in system engineering. “Good, better, best, bad, worse, etc.” can only be determined by performance wrt requirements. It’s all relative. What’s “good” for you may not be for me.

    The Mark 14 torpedo scandal arose because of inadequate testing to the requirements, not to mention that some of the requirements were unrealistic per se.

    A model which has not been tested with respect to prediction can never be “good” with respect to prediction. One has to wonder what its purpose is.

  11. “Just as it is useless to say “X is the best” without stipulating what “best” means … Good, better, best, bad, worse, etc.” can only be determined by performance wrt requirements. It’s all relative.

    Right on the money, DAV.

    A model which has not been tested with respect to prediction can never be “good” with respect to prediction. One has to wonder what its purpose is.

    Who says models (plural) haven’t been tested against predictions (plural)? Which model? Which prediction? Within or without what allowable or useful error bounds?

    If you want to know what a given model’s purpose is, I’m 100% sure that the people who designed it tell us what that purpose is. It’s really easy to say, “This model sucks because it doesn’t predict what I want it to,” when it wasn’t built to predict what you want it to. Orrrrrr … what someone who understands their model and its utility better than you do simply can’t give you what you want because it’s not presently possible.

    Or, like the Mk 14, started off wonky because of poor testing and confounding failure modes, was improved until it became a reliable weapon and remained in use for almost 40 years.

  12. Brandon,

    “Accurate? Well predictably … not so much. In good conditions, repeat shell accuracy (hitting the same spot over and over from the same position) was on the order of 0.5% of range. For an Iowa class battlewagon firing its 16 in. batteries out to 10 miles, 0.5% of range is 100 yards. 300 feet. 3,600 inches. A football field.”

    You have no idea what you are talking about.

    1) What you are describing “hitting the same spot over and over from the same position” is precision not accuracy. Accuracy in ballistics is determined by how close the center of a group of shots is to the bulls-eye.

    2) Considering you are talking about artillery and not direct-fire weapons, that level of precision is actually quite amazing. Prior to the development of those kinds of ballistics computers, the precision of artillery fire was probably 5% to 10% of range for land-based artillery, and naval artillery would be lucky to get 20% of range.
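    The accuracy-versus-precision distinction drawn above is easy to put in code. A quick Python sketch (the shot coordinates are made up for illustration): the distance from the group's center to the bulls-eye measures accuracy, while the scatter of shots about their own center measures precision.

```python
import math

def bias_and_spread(shots, target=(0.0, 0.0)):
    """Accuracy: distance from the group's center to the bulls-eye.
    Precision: RMS distance of the shots from their own center."""
    n = len(shots)
    cx = sum(x for x, _ in shots) / n
    cy = sum(y for _, y in shots) / n
    bias = math.hypot(cx - target[0], cy - target[1])
    spread = math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2
                           for x, y in shots) / n)
    return bias, spread

# A tight group landing well away from the bulls-eye:
# high precision, poor accuracy.
group = [(9.8, 10.1), (10.2, 9.9), (10.0, 10.2), (9.9, 9.8)]
bias, spread = bias_and_spread(group)
print(round(bias, 2), round(spread, 2))
```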

  13. Who says models (plural) haven’t been tested against predictions (plural)?

    It really isn't hard to find untested models. They are rife in academic papers. Briggs is fond of highlighting them. If they have been tested then it's rather strange the results of those tests are often omitted from publication.

    It’s really easy to say, “This model sucks because it doesn’t predict what I want it to,”

    And even easier to say “you haven’t the faintest idea it will predict what you say it will” when the test results are absent.

    Or, like the Mk 14, started off wonky because of poor testing and confounding failure modes, was improved until it became a reliable weapon

    However it wasn’t improved because of poor testing and confounding failure modes. It was improved because 1) someone (singular or plural) wanted a torpedo that actually worked and 2) someone (singular or plural) took the time to do the job right.

  14. MattS: I think I was 7 or 8, watching my father tare a digital microbalance in his lab while he was in grad school. “Dad, why are you putting the empty tray on the scale first?” I don’t remember his exact words but I learned the lesson: you can have as precise a measuring device as you want, but it doesn’t do you any good if it’s not accurate. Before weighing small amounts of powdered reagents, you’ve got to account for the weight of the plastic tray you’re putting them into. And since each little tray varies in weight from one to the next, you’ve got to tare up the scale each time.

    My 12th grade physics teacher was a Pakistani, and I remember quite well his lecture on the difference because of how he said “MAYYshzzzurment”, “aahKURahhSEE” and “preseeeYAWN” with all the accENTSs on the wrong syllABles.

    So yes, you’re right, projectile target groupings are an issue of precision, not accuracy. I was sloppy.

    Of course, if you’re getting 10 yard groupings but are 300 yards short or long, it doesn’t do much good either unless all you’re after is getting your enemy to wet their pants.

    Both the precision and accuracy of US WWII naval gunnery is fantastically amazing. There were no 10 mile cold-bore one shot one kills that I’m aware of. The first several salvos were all about getting the range dialed in. Often, the idea was not to put a full salvo into that 100 yard circle, but stagger them out so that at least one shell of the salvo you were sending would record a hit.

    All nits now addressed (yes, no?), the main point was that when your target is 200+ yards along the waterline and has a beam of 40 yards or so, a 100 yard circle is close enough. Kind of like horseshoes or nuclear weapons. Or as Dad used to say when I was trying to measure off a 2×4 cut with micrometer-like precision and accuracy, “We’re not building Swiss watches here, son, that’s good enough for government work right where you are.”

  15. DAV:

    It really isn’t hard to find untested models. They are rife in academic papers. Briggs is fond of highlighting them.

    I have no doubt that you’re correct. Couple of things:

    1) Briggs is one blogger out of bazillions. That’s a limited sample. It’s not good practice to make generalizations about a population based on a selective subset of the available data.

    2) Climatology is a politically polarized field of study. Both sides of the debate therefore have a bias. Selecting a sample with a known bias and inferring something about a population on that observation alone is not only bad practice, it’s possibly unethical as well depending on your scope of policy influence.

    3) The properly skeptical approach to a problem such as this is to seek out examples that falsify the claims made by both parties with individual, unguided, independent research.

    That independent research needs to start with properly skeptical questions:

    1) Is this particular model untested because this paper is the first one to describe it?
    2) Can this particular model even be tested?
    3) What is this model supposed to do?
    4) What are reasonable error bounds considering the complexity of what’s being modeled?
    5) Is it strictly necessary for this model to be accurate and/or precise to +/- 0.005 % when being a percent or two off in either direction is a useful enough result?
    6) How much would it cost to reduce this model’s error to what I want it to be, and is that really worth it?

    And even easier to say “you haven’t the faintest idea it will predict what you say it will” when the test results are absent.

    Totally agree. If any researcher is out there doing that on my tax dime, I want their act to be cleaned up. And I say with all confidence that someone out there is doing bad research with my money because you can’t have lived on this planet for any number of years without finding all sorts of examples of bad science being done.

    However it wasn’t improved because of poor testing and confounding failure modes. It was improved because 1) someone (singular or plural) wanted a torpedo that actually worked and 2) someone (singular or plural) took the time to do the job right.

    Well duh. Accurate torpedoes don’t make themselves. It’s not the last weapon system that started out buggy either. AIM-9 Sidewinders, anyone? Those things couldn’t hit the broad side of a hootch from the inside when they first rolled off the line.

  16. It’s not good practice to make generalizations about a population based on a selective subset of the available data.

    Granted, but who says I have? The topic of this blog post is a practice that’s been on the rise for some time. I’m not going to list all the places I’ve been, but you could also try NumberWatch or JunkScience for starters.

    Briggs’s complaint: … if the formally quantified evidence is pleasing (wee p-values, etc.) it is taken as proof of the speculated causation, as if, that is, it were the complete evidence. is oft repeated at those two in one form or another.

    I’d bet you’d have a hard time finding epidemiology papers that don’t stop at finding a wee p-value but instead go on to establish (through testing) the causal relationships they imply (and sometimes state explicitly). IOW: they don’t bother to determine whether the relationships they have found are spurious. The low p-value says it all as far as they are concerned. Followups seem to be a rarity.

    Look at the cigarette studies. Some are quick to tell you that smoking leads to a 20x increase in the chances of getting lung cancer but many (as in all) fail to reveal that the chance of NOT getting lung cancer is about the same for smokers and non-smokers. Just like buying 20 Megamillions tickets will increase your chances of winning by 20x, but your chances of losing are roughly the same as those of people who have only bought one ticket. However, unlike the lottery, NOT smoking is no guarantee of NOT getting lung cancer. The reason: how one gets cancer is far from understood. The smoking/cancer relationship is likely spurious, or the smoking part is insufficient in itself.

    Climatology …

    I haven’t read all of the comments but the blog post is “Spurious Correlations”. Not sure what climatology has to do with it. I can’t find mention of it in the blog post. At least not by searching for “climate” or “climatology”. If anything, the climate modelers engage in circular reasoning. They build models that cause outputs to rise on increasing CO2 levels then point to the increasing CO2 level model inputs resulting in higher model outputs as examples they are using correct assumptions. But this is OT for this blog post.

    5) Is it strictly necessary for this model to be accurate and/or precise to +/- 0.005 % when being a percent or two off in either direction is a useful enough result?

    No, of course not. The problem all too often is the model’s accuracy (in epidemiology and the other -ologies) has never been determined while simultaneously the conclusion that it is a true representation of reality is claimed or at least implied.

    Worse is when lawyers get hold of these studies then argue the unwarranted conclusions contained within. Look at what happened to Vioxx. For similar try OverLawyered and Volokh.
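    The lottery arithmetic above is worth making explicit. A short Python sketch (jackpot odds of roughly 1 in 259 million are an assumed figure for illustration): a 20x relative increase on a tiny baseline leaves the probability of losing essentially untouched.

```python
# Jackpot odds assumed to be roughly 1 in 259 million (illustrative).
p_one = 1 / 259_000_000
p_twenty = 20 * p_one

print(p_twenty / p_one)    # relative chance of winning: 20x
print(1 - p_one)           # chance of losing with 1 ticket
print(1 - p_twenty)        # chance of losing with 20 tickets
```

The same asymmetry drives relative-risk headlines generally: multiplying a rare event's probability by 20 sounds dramatic, while the complementary probability barely moves.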

  17. Granted but who says I have?

    [grin] Not me. I don’t know what you’ve looked at and what you haven’t.

    Briggs’s complaint: … if the formally quantified evidence is pleasing (wee p-values, etc.) it is taken as proof of the speculated causation, as if, that is, it were the complete evidence. is oft repeated at those two in one form or another.

    Which I fully agree with. You then go on to cite the almost canonical examples of how you can get statistics to say whatever you want them to say if you’re a biased researcher with low ethical standards.

    I haven’t read all of the comments but the blog post is “Spurious Correlations”. Not sure what climatology has to do with it. I can’t find mention of it in the blog post. At least not by searching for “climate” or “climatology”.

    Oh I know. But it’s one of Briggs’ pet topics. It’s one of mine too, and that’s how I washed up on his shore. It’s one of Sheri’s big issues; she’s got her own blog as well.

    Way I see it, Briggs is often talking about climatology even when he’s not.

    If anything, the climate modelers engage in circular reasoning. They build models that cause outputs to rise on increasing CO2 levels then point to the increasing CO2 level model inputs resulting in higher model outputs as examples they are using correct assumptions. But this is OT for this blog post.

    Ok fine, I’ll keep AGW out of it. If I strip out “climate” stuff in the above block of text and add some (implications), what you have said is:

    1) Assert that the group of (all) ABC researchers engage in fallacious reasoning.
    2) They (all) build models that cause levels of Y to rise on projected rising levels of X.
    3) They (all) then use the rising level of Y as evidence that their selection of X as causal is correct.

    So now comes me, BRG, reading your opinion of ABC research but having no opinion of my own; only some general knowledge of the problem. Immediate flags go up:

    1) The scope of ABC research is very complex with thousands of researchers across tens of scientific disciplines.
    2) There are hundreds of known models used to research ABC.
    3) DAV has just characterized (all) ABC researchers as poor scientists.
    4) Because of (1) and (2) being so large, (3) being true is really really unlikely.

    I don’t care what the field of study is. That’s not science-speak, it’s political-speak. I evaluate science on the science as far as I understand it; starting with theory from first principles if I don’t already feel grounded in it. Otherwise, I haven’t any real hope of wading through the political garbage to figure out who’s telling the truth, if anyone.

    If there’s no, as in zero none nada zilch nyet empty null nothing, theoretical underpinning to ABC research then I say, “Hmm, interesting correlation, but looks like someone is datamining for profit,” and then forget about it.

    Conversely, if someone says, “There’s no theoretical underpinning to ABC research and ABSOLUTELY NO EVIDENCE OF IT” and I know better, then I know something is awry with whoever made that comment.

    These are extreme cases, mind. But that’s what happens when people start speaking in absolutes around me, even if only implied.

    The problem all too often is the model’s accuracy (in epidemiology and the other -ologies) has never been determined while simultaneously the conclusion that it is a true representation of reality is claimed or at least implied.

    It’s a real problem, DAV, no doubt about it. My stepfather’s father was what you call a food scientist. Worked for big agra figuring out how to turn corn, wheat, soy, cows, pigs, chickens, gophers — whatever they could grow in quantity for cheap — into stuff Americans would not only want to eat, but to eat a LOT of, and with a high profit margin.

    My stepdad had endless stories of how the evolution of the “balanced meal” to the “four food groups” to the “food pyramid” to the “whatever it is now because I stopped paying attention a long time ago” evolved. Mostly because ADM, Con-Agra and whoever else with deep pockets threw bushels of money at slavering politicians whoring for votes. Liberally sprinkled with some fairy dust they made up and got published in peer reviewed journals; pols gotta have their butts covered before taking those kind of handouts.

    My stepfather could bake like crazy; he knew a lot of food chemistry and where to get the very best ingredients. And he ate what made him feel good, not what science he knew to be bad told him to.

    Anyway, one science is not like the other. Good to be skeptical, but bad to overgeneralize.

  18. When teaching physics I always reminded people to very carefully distinguish statistics from probabilities.
    Statistics is about the past, while probabilities are about the future.
    The former can never be guaranteed to shed light on the latter.

    In physics everything is strictly causal. Not just 99.99%. Everything.
    But causal (and deterministic) doesn’t mean that probabilities are banned, nor does it mean that probabilities express “ignorance”.
    Probabilities appear spontaneously in physics and have nothing to do with uncertainty or ignorance.
    Two examples:
    – Quantum Mechanics. The Schrödinger equation is causal and strictly deterministic. However, the perfectly known wave function doesn’t allow one to compute the state of the system in a deterministic way. It yields only probabilities that the system will be in an X or Y state.
    – Deterministic chaos. This is the classical counterpart to QM. The system is described by perfectly deterministic equations (e.g. Navier–Stokes). Yet they cannot be solved analytically, and numerical solutions eventually wander unpredictably after a certain time. However, if the system has a certain property called ergodicity, then there exists an invariant (i.e. independent of initial conditions) probability distribution that the system is in a state X or Y. So here too, like in QM, it is impossible to predict where the system will EXACTLY be, but it is possible to predict EXACTLY what the probabilities are that the system will be here or there.

    So a physical model is not always “causality + statistics”, where the statistics would model some sort of ignorance, spreading the exact causal solution point into a fuzzy ball.
    On the contrary, the important theories say that there are no exact points, and the fuzzy balls ARE the only reality science can know.
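
    As a numerical sketch of the second example (mine, not Tom’s; the chaotic logistic map x → 4x(1−x) stands in for a genuinely chaotic system, since Navier–Stokes is beyond a blog comment), two orbits started from different initial conditions diverge completely, yet settle onto the same invariant distribution:

```python
import numpy as np

def logistic_histogram(x0, n_iter=200_000, burn_in=1_000, bins=10):
    """Iterate the chaotic logistic map x -> 4x(1-x) and histogram the orbit."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = 4.0 * x * (1.0 - x)
    counts = np.zeros(bins)
    for _ in range(n_iter):
        x = 4.0 * x * (1.0 - x)
        counts[min(int(x * bins), bins - 1)] += 1
    return counts / n_iter                # empirical probability per bin

# Two different initial conditions: the individual trajectories are
# unpredictable, but the long-run distributions agree (ergodicity).
h1 = logistic_histogram(0.2)
h2 = logistic_histogram(0.3)
print(np.max(np.abs(h1 - h2)))           # small: same invariant distribution
```

    The exact position after many iterations is unpredictable, but the probability of finding the orbit in any given interval is predicted exactly, independent of where it started.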

  19. Brandon: I don’t consider what you do a rant, and I fully understand your reasoning. Politics and society in general tend to exhibit the behaviour of toddlers. I also agree that “No, you’re wrong” is not a complete response; we need the “how to do better” part too. This is something I have explained to parents whose children I have taken care of: if you only tell little Mark here that he is not to throw things in a restaurant, he is not to scream, he is not to… little Mark is then left wondering what in the world he is supposed to be doing. It’s not self-evident to Mark.

    DAV: So true with Vioxx. I often wonder how many incorrect “causal” relationships are found in the pharmaceutical lawsuits and the industry itself.

    Tom: Good point. We use the past to predict the future in most things. It’s not necessarily valid, but it seems the only way we have to predict, and we are determined to predict everything, it seems. Causality is 100%, but prediction is only as good as our understanding of the necessary variables in the system and of how much they contribute to causing the event we hope to predict. Predictions of the future are very dicey, yet people seem very comfortable believing them if you dress them up in “science”. The distinction between knowing exactly where something is and where it will probably be is lost on most people. I like your statement that the fuzzy balls are the only reality science can know. Very true.

  20. 3) DAV has just characterized (all) ABC researchers as poor scientists.

    Tsk! No I didn’t. I certainly never gave a number, except to imply more than one. I suggest you find and watch Gavin’s TED talk for a typical example, though. His argument is oft repeated with little correction (as in none?) from other modelers, so I guess “all” comes close.

    4) Because of (1) and (2) being so large, (3) being true is really really unlikely.

    Freudian slip I suppose — (3) was not true.

  21. 2) There are hundreds of known models used to research ABC.

    Let’s see, in context ABC == GCM. (2) is not true either. There are fewer than 100, operated from 18 climate centers, and most are variants of the same model, differing only in initial conditions. According to the following, the IPCC used “no fewer than 34” of them. Why not all? Unless they were cherry-picking, there likely are no more than 34 of them.

    http://planet.botany.uwc.ac.za/nisl/Climate_change/page_62.htm

    Again, this is OT for this blog title.

  22. DAV: Everything in (parentheses) was an implication. Perhaps a better way of saying it is that these are things I read into what you had written. Could very well be that I’m the one making hasty assumptions, but, well, I’ve read a lot of ABC research debates over the years, where ABC represents any politically polarized issue, not just (C)AGW/CC. The majority of discussions on such topics consist of each side presenting only the information that supports their view. Natural human tendency, fully understandable that it happens. But that’s still not science, it’s politics. Science answers to reality, not to what we think is politically expedient or palatable.

    Again, this is OT for this blog title.

    I’ll file your AGW-specific comments for another time when it is the explicit subject of the post.

  23. Sheri: It often sounds like ranting in my head when I’m writing. I appreciate you saying that it does not come across that way, truly. Your analogy about dealing with children is beautiful, thank you for understanding what I was saying. Seems you and I can disagree agreeably. I consider that no small victory. Cheers.

  24. The difference between probability, statistics, and process control can be modeled with a bead box containing some white and some red beads.
    1. Probability: Given the proportion of red beads in the box, what is the likelihood of securing x red beads in a sample of n?
    2. Statistics: Given x red beads in a sample of n, what is the likely proportion of red beads in the box?
    3. Process Control: Is there a box?
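
    The first two questions can be made concrete in a few lines (my sketch; the bead counts are made up, and I assume draws without replacement, so the hypergeometric distribution applies):

```python
from math import comb

def prob_x_red(R, W, n, x):
    """Probability: given R red and W white beads in the box, the chance
    of exactly x red in a sample of n drawn without replacement."""
    return comb(R, x) * comb(W, n - x) / comb(R + W, n)

def likely_proportion(x, n, total=100):
    """Statistics: given x red in a sample of n, find the box composition
    (out of `total` beads) that makes what we saw most likely."""
    best_R = max(range(x, total - (n - x) + 1),
                 key=lambda R: prob_x_red(R, total - R, n, x))
    return best_R / total

# Probability direction: 30 red and 70 white beads, sample of 10.
p = prob_x_red(30, 70, 10, 3)        # chance of exactly 3 red

# Statistics direction: saw 3 red in 10; most likely proportion is 3/10.
print(likely_proportion(3, 10))
```

    The third question is the one neither calculation can answer: both take the existence of a fixed box as given.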

  25. TomVonk:

    In physics everything is strictly causal. Not just 99.99%. Everything.
    But causal (and deterministic) doesn’t mean that probabilities are banned, nor does it mean that probabilities express “ignorance”.

    With you so far. It does get discussed here, but one has to dig for it.

    The Schrödinger equation is causal and strictly deterministic. However, the perfectly known wave function doesn’t allow one to compute the state of the system in a deterministic way. It yields only probabilities that the system will be in an X or Y state.

    Whiplash. The first sentence is a problem for me because I cannot recall having ever read it before.

    The next one is tougher because I don’t understand “perfectly known wave function”. Perfectly known as in the mathematical description? I buy that. The rest of that sentence I completely get. Your final statement I have no issue with; it perfectly matches what I thought I already understood about QM.

    In sum, my confusion here is due to not being able to understand the significance of where determinism actually fits in. IOW, why is it necessary to make this distinction if we can only ever predict state X or Y probabilistically?

    And further, if we don’t attempt to observe X or Y, superposition persists until we do. I thought that was the whole point of Bell’s theorem?

  26. Whiplash. The first sentence is a problem for me because I cannot recall having ever read it before.

    Consider the following two equations:

    1)
    dT/dt = D·L(T), where T is temperature, t is time, L is the Laplacian operator, and D is a constant called the thermal diffusivity.
    This is called the heat equation, and it enables one to compute how the temperature will evolve at each point of some body with a given D.
    It is deterministic because, knowing the temperatures at some time t, you can easily and unambiguously compute the temperatures at some later time t+dt.
    And it is obviously causal: the evolution is defined by D, which says what kind of material we consider and depends on many physical parameters that can be computed separately.

    2)
    dF/dt = i·K·L(F), where F is some function, t is time, L is the Laplacian operator, i is the complex number with i² = −1, and K is some constant.
    This is exactly the same equation as in case 1) above, with the right side multiplied by i.
    But because it is irrelevant whether we consider complex or real functions, everything we said about the heat equation also applies to this one.
    It is deterministic and causal.
    Of course, there are minor differences in the form of the solution: the presence of the i leads to oscillating solutions of the second equation, while the solutions of the heat equation don’t oscillate. Mere mathematical details.

    Now the second equation is nothing other than the Schrödinger equation for a free particle, and F is the wave function. The basis of all QM.
    Therefore it is clear that the Schrödinger equation is fully deterministic and causal, and F can be as perfectly known as the temperature of a body can.
    There is no uncertainty about F.

    The probabilistic feature of QM enters only when we want to predict the value of an observable like position, velocity, energy, and such. This is because F is NOT an observable. F is the probability amplitude of the observable (its squared modulus |F|² is the probability distribution) and therefore allows one to compute only the probability that the observable takes this or that value.

    So, as a conclusion, it can be clearly seen that QM is causal and deterministic, but this causality and determinism apply to the wave function F, which can be uniquely, accurately, and unambiguously computed; they do not apply to the observables themselves, which can be known only via F (i.e. via their probabilities).

    As for the observation: one cannot say that the system is in any precise state prior to observation. QM tells you that the only knowable (predictable) feature of an observable is its probability distribution, because the Schrödinger equation is causal and deterministic. From that it follows that if you suppose the system was in some precise, i.e. non-probabilistic, state before the observation, you violate the principle that only probabilities are knowable. That’s what the Bell inequalities are about.
    Basically, only probabilities may be known before the observation and, like William rightly said, once you have observed a value, the probability of observing the value you did is 1 (this is called decoherence in QM).
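
    Tom’s “same equation, multiplied by i” point can be checked numerically (my sketch, with made-up units). Solving both equations spectrally, each Fourier mode of the heat equation decays as e^(−D·k²·t), while the corresponding Schrödinger mode only rotates its phase as e^(−i·K·k²·t), so the total probability Σ|F|² never changes even though F evolves just as deterministically as T:

```python
import numpy as np

N, Lbox = 256, 20.0
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)    # wavenumbers

f0 = np.exp(-x**2)       # same Gaussian initial condition for both equations
D = K = 1.0
t = 2.0

# Heat equation dT/dt = D.L(T): every Fourier mode is damped.
T_t = np.fft.ifft(np.fft.fft(f0) * np.exp(-D * k**2 * t)).real

# Schrödinger dF/dt = i.K.L(F): every mode keeps its magnitude; only the
# phase turns. F itself is computed exactly, with no uncertainty.
F_t = np.fft.ifft(np.fft.fft(f0) * np.exp(-1j * K * k**2 * t))

print(T_t.max())                                 # peak has diffused: < 1
print(np.sum(np.abs(F_t)**2) - np.sum(np.abs(f0)**2))  # ~0: |F|² conserved
```

    The temperature profile flattens out, while the wave function keeps its full “weight”: determinism applies to F, and the probabilities come in only when F is turned into predictions about observables.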

  27. Thanks for the reply, Tom.

    The probabilistic feature of QM enters only when we want to predict the value of an observable like position, velocity, energy, and such. This is because F is NOT an observable. F is the probability amplitude of the observable (its squared modulus |F|² is the probability distribution) and therefore allows one to compute only the probability that the observable takes this or that value.

    That clarifies things for me quite a bit. It fits what I thought I already understood, but states it in a precise way that helps me better understand more of the nuance of the maths.

    As for the observation: one cannot say that the system is in any precise state prior to observation. QM tells you that the only knowable (predictable) feature of an observable is its probability distribution, because the Schrödinger equation is causal and deterministic.

    Perfectly clear now. To rephrase, is it correct to say that prior to observation, we’re only certain about the range of possible states and the distribution of possible states because we know F? I’m remembering high school chemistry lectures about electron orbitals as an example.

    That’s what the Bell inequalities are about. Basically, only probabilities may be known before the observation and, like William rightly said, once you have observed a value, the probability of observing the value you did is 1 (this is called decoherence in QM).

    Very clear about the probability of the observed value being 1. What had been tripping me up is thinking about Alice/Bob-type Bell tests, or the easier (for me) to understand classic two-slit/interferometer experiments. The word deterministic jumps out in a way that makes more sense to me when I review pilot-wave explanations for the observed phenomena — which have been the most intuitive way for me to think about them.

  28. To rephrase, is it correct to say that prior to observation, we’re only certain about the range of possible states and the distribution of possible states because we know F?

    Right. It is an amusing paradox: “we are absolutely certain about what the uncertainty is”.

    Word of caution. I imagine that a “pilot wave” idea might be a good crutch for visualizing the concept that the (perfectly known) wave function “guides” the particle.
    This is just a convenient but misleading image.
    Obviously the wave function is complex-valued, can’t be real, and is not an observable: it guides nothing and has no physical reality.
    Best is to stick to what it really is: a probability amplitude which happens to obey a deterministic and causal equation.

    Einstein was wrong: God really plays dice.
    However, lucky us, he has no reluctance to inform us perfectly accurately what kind of die he is using and how biased, if at all, the die is.
    But he adamantly refuses to tell us what the result of the throw will be.
    Perhaps he imposed on himself not to know 🙂

  29. This does not mean God plays dice. It means that we can only explain what God did by using the dice analogy. It is very possible that there is an explanation out there that does not include probability. We may need new math, as Newton did, to figure it out. Fractals are another example. Maybe we can’t ever envision the complexity. But the fact that we use probability does not mean that is the reality of the situation. It means it’s our limit. It’s very possible to use a “model” or statistics to explain something, get the right answers for now, and later find an explanation that explains everything without resorting to probability or to the model we had used. It’s we who lack the understanding, and then we assume God does the same.

  30. TomVonk:

    It is an amusing paradox: “we are absolutely certain about what the uncertainty is”.

    Deliciously amusing. Very tasty, thanks.

    Best is to stick to what it really is: a probability amplitude which happens to obey a deterministic and causal equation.

    I take your point. It will require some reformatting. Pilot wave was one of those things that I stumbled across after having already started thinking about it in those terms. That kind of confirmation is tough to unglue.

    Einstein was wrong – God really plays dice … perhaps he imposed on himself not to know 🙂

    If ever there were an argument for God and free will, that would be the place to start.

  31. Sheri:

    This does not mean God plays dice. It means that we can only explain what God did by using the dice analogy.

    Temporarily stipulating that God exists: He has undeniably set things up so that it looks unpredictable to us.

    It’s we who lack the understanding and then assume God does the same.

    Yes. We cannot help but come up with stories to help us make sense of things. One of those stories could be God Himself.

    Assuming God exists, all indications are that He does not want any of us to be able to “prove” to someone else if He exists, what He’s like, what His methods are, or even why we’re here.

    I know that I don’t know any of those answers. I have no idea what anyone else has experienced which would lead them to believe otherwise. If we exist by Design or chance, I simply can’t tell you; how could I?

  32. Brandon: You cannot tell by using science. We cannot prove God’s existence, there are disagreements over what he is like, but there is a Bible that tells us what we need to know about God. Of course, there are additional books written by prophets, and other Gods or names for Gods so picking the one to follow may seem a daunting task. As you note, if you have not experienced anything that would lead you to believe God exists, you will not understand.
    Science says we evolved by chance, if you consider evolution and the Big Bang to be proper science. I have no doubt things evolve, but beyond that, it’s all a theory with no proof ever available, which does seem to infuriate some scientists. I believe you are correct in saying we cannot know through science from whence we came.

  33. Sheri: I put it like this: what science says is that it appears we evolved by chance. It remains silent on whether God’s Creative method was evolution or not. And I agree, science and religion ultimately cannot prove or falsify each other. That does not mean they cannot overlap. As far as my beliefs go on God, I say, “I don’t know, but I’m awfully intrigued.” As for science, it’s my go-to everyday way of making sense of the world that I can see, feel, and touch.

  34. Brandon: Just curious—what do you do when science fails to make sense of the world? Keep looking? I’m not being sarcastic, just wondering. There are so many things that don’t make sense in the world one can touch and feel (and that religion may actually not help explain for some), there have to be gaps.

  35. Science says we evolved by chance,

    Actually, “science” says no such thing. First of all, science does not answer questions like that; second, evolution is by natural selection [and likely by other means as well], not by chance. “Chance” is not and cannot be causal.

  36. Sheri:

    Just curious—what do you do when science fails to make sense of the world? Keep looking?

    No worries, I took it as an honest question. When anything fails to make sense of the world I keep looking. Curiosity about the unknown and/or uncertain is a major motivation to live.

    Yes, there will always be gaps in science, religion, philosophy, mathematics … any human thought or enterprise is subject to our own finite limitations.

  37. In my world we hold that statisticians are smarter than engineers.

    Which is like saying that turtles are smarter than goldfish: true, but the bar was set pretty low.

    It is very possible that there is an explanation out there that does not include probability. We may need new math, as Newton did, to figure it out. Fractals are another example.

    Of course I didn’t mean “God” literally, especially as I am not sure that such a (?) exists and, if it does, what its properties are. Or even whether it is rationally knowable.
    However, the comment on the two sentences above is “No” and “No”.

    There are many results (Bell being one of them) that point in the direction that the probabilistic interpretation is the right one, and there are none saying the opposite. Therefore it is very unlikely (but not totally impossible) that there exists an explanation (e.g. of QM) not involving probabilities.

    And no, we do not need “new” maths. There are no results or theories indicating mathematical problems which would need “new maths”. Maths have advanced a long way since Newton, and domains like string theory are using maths that Newton would not have dreamt of.
    As for fractals, these are well understood and mathematically well represented (generally mundane iterations giving birth to topologies with non-integer dimension).
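
    The “non-integer dimension” can even be estimated numerically (my sketch, using the middle-thirds Cantor set, whose box-counting dimension is ln 2 / ln 3 ≈ 0.6309): a point of the set is a base-3 expansion using only the digits 0 and 2, and a box of side 3^−m is just a length-m digit prefix.

```python
import math
import random

random.seed(42)

# Sample Cantor-set points as base-3 digit strings (digits 0 or 2 only).
DEPTH = 12
points = [tuple(random.choice((0, 2)) for _ in range(DEPTH))
          for _ in range(100_000)]

# Box counting: the occupied boxes of side 3^-m are the distinct length-m
# digit prefixes; for the Cantor set there are about 2^m of them.
log_inv_scale, log_count = [], []
for m in range(1, 8):
    boxes = {p[:m] for p in points}
    log_inv_scale.append(m * math.log(3))
    log_count.append(math.log(len(boxes)))

# The dimension is the slope of log(count) against log(1/scale).
n = len(log_inv_scale)
sx, sy = sum(log_inv_scale), sum(log_count)
sxx = sum(s * s for s in log_inv_scale)
sxy = sum(s * c for s, c in zip(log_inv_scale, log_count))
dim = (n * sxy - sx * sy) / (n * sxx - sx * sx)
print(dim)        # close to log(2)/log(3), about 0.6309
```

    A mundane iteration, yet the count of occupied boxes grows like a non-integer power of the resolution, which is exactly what a fractional dimension means.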

  39. Brandon: Thank you for the answer.

    Tom: Was there any evidence before Newton created calculus that there would ever be a need for new math? Did anyone envision needing anything outside of geometry, trigonometry, and algebra?
    I understand that, based on current mathematics and quantum mechanics, probabilities seem like the actual answer. However, the history of science has shown that what is “very probably true” can be overturned by something as simple as the realization that there is a force that makes things fall to earth. I can’t remove that possibility of new understandings, no matter how remote, without removing the probability of new discoveries.
    Fractals are well understood now, but there was a “discovery” phase to them. It was a big deal when Mandelbrot described fractal geometry. Why minimize the man’s discovery?

  40. Horace:

    Which is like saying that turtles are smarter than goldfish: true, but the bar was set pretty low.

    Ohhh, I like that.

    TomVonk:

    Maths have advanced a long way since Newton, and domains like string theory are using maths that Newton would not have dreamt of.

    Not to at all diminish the efforts of those working on grand unified theories, but some wags have said that string theorists are dreaming.

    Sheri: you’re welcome as always.
