Statistics

Don’t Use Statistical Models (When You Don’t Have To. Which Is Nearly Always)


Today, a convincing argument proving the current practice of probability and statistics is mostly wrong, or wrong-headed.

The picture above does—and does not—show herbicide use in several countries by year.

The dots are the measured values. I know nothing about how the values were measured, whether the measurements have uncertainty in them, whether they all use the same measurement method by country or time, what the mix of herbicides is, or anything else. Doubtless these should not be dots but blobs to indicate the uncertainty in the measurements. But let that pass. Assume, as the graph asks us to, that the dots are the data. Know what that means?

THE DOTS ARE THE DATA!

Know what else the dots being the data means?

It means all that other material on the graph did not happen. The smoothed lines and the gray tubing are a statistical model. The dots happened. The statistical model did not happen.
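
The smoothed lines and gray tubes look like the output of a scatterplot smoother with a confidence band. As a hedged illustration of what such an overlay is, here is a minimal sketch using a LOWESS smoother on invented numbers; I do not know what software or smoother the graph's authors actually used, so every detail below is an assumption.

```python
# A sketch of the kind of model typically drawn over dots like these: a
# LOWESS scatterplot smoother. The data, the smoother, and its span are all
# invented/assumed for illustration; none of it comes from the actual graph.
import numpy as np
import statsmodels.api as sm

years = np.arange(1990, 2015)
use = 2.0 + 0.05 * (years - 1990) + np.random.default_rng(1).normal(0, 0.3, years.size)

# The dots are the data; the smooth curve below is the model, not the data.
smooth = sm.nonparametric.lowess(use, years, frac=0.5)  # rows of (year, fitted value)
for year, fitted in smooth[:5]:
    print(int(year), round(fitted, 2))
```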

The data are reality. The model is fantasy. Why substitute fantasy for reality?

Well, fantasy is more scientific. Science happens when ordinary data is turned into a model. Science happened here, and lots of it. Now let me hastily say that the folks who did this are the nicest people; highly intelligent, with good motives. They were only doing what everybody else does.

Only problem is, everybody else is wrong.

There was no reason to impose a statistical model on this data, and there was even less than no reason to use the model as a replacement for the data. And the model did just that: it replaced the data. Don’t think so? What was the first thing your eye was drawn to? You bet: the model. The model made the first impression.

This is natural, because the model is so smooth and lovely. It tells you what you want to hear. That something is going on and you know what it is. That’s the seductive lie of statistical models, that they know the cause of things. They don’t. They never do. They cannot. The model cannot tell you why those dots are there and why those dots took the values they did. The model replaces reality, remember.

Statistical and probability models are silent on cause. All of them. No probability model (statistics is just a subset of probability) gives information on cause. This is proved in the book Uncertainty: The Soul of Modeling, Probability & Statistics. That probability models cannot discern cause is why, not incidentally, all hypothesis testing should be eliminated (in their frequentist or Bayesian implementations).

Instead of testing and replacing data with what didn’t happen, probability models should only be used to characterize the uncertainty of that which we do not know. Pay attention, now. It’s going to get complicated.

We do not know the future. Thus, probability models of the past can be used to make guesses of the future. Not in a causative way, mind you. In a purely probabilistic way only. We can—and should—make statements like this, “Given our past measurements and given these assumptions about characterizing their uncertainty, the probability future data will look like such-and-such is X.”
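
As a hedged, concrete illustration of the kind of statement meant here, the sketch below computes one such probability under an assumed normal model; the numbers and the model are mine, chosen only to make the form of the statement visible.

```python
# "Given past measurements and these assumptions about their uncertainty,
# the probability future data will look like such-and-such is X."
# The data and the normal model are assumptions for illustration only, and
# parameter uncertainty is ignored for brevity (a fuller predictive
# distribution would widen the answer). The same calculation applies to
# unknown past values as well as to future ones.
import math

past = [3.1, 2.9, 3.4, 3.0, 3.3, 2.8]   # past measurements (invented)
mean = sum(past) / len(past)
sd = (sum((x - mean) ** 2 for x in past) / (len(past) - 1)) ** 0.5

threshold = 3.5
z = (threshold - mean) / sd
prob_exceed = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(f"P(next value > {threshold} | past data, assumptions) = {prob_exceed:.2f}")
```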

That’s all probability models can do!

Probability models are only useful in making predictions. That’s it. Nothing more.

Sometimes we do not know the past. We know some of it, but not all of it. Probability models can make predictions of the parts we don’t know, just like they make predictions of the future we don’t know. Again, all we can and should say are things like, “Given our past measurements and given these assumptions about characterizing their uncertainty, the probability the data in the past we don’t know looked like such-and-such is X.”

If you cannot see how vast and consequential a disruption this is from the current uses of probability models, then I have failed to convey just how shocking a change this is. Current usage focuses on the past and invents fallacies about discerning cause. The use I advocate is pure probability: its only use is to characterize (I do not say measure) uncertainty in the unknown.

Turns out the creators of the graph wanted to say something about increasing and decreasing use of herbicides. Reality (assuming again the dots are the real data without uncertainty) would have sufficed. We start with a definition of ‘increase’ or ‘decrease’, then just look: the definition will be true or false. Models aren’t needed.
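
A minimal sketch of “just look”: state a definition of ‘increase’, then check it against the dots themselves, with no model in between. The dots and both definitions below are invented for illustration.

```python
# "Increase" by an explicit definition, checked directly against the dots.
# The data and both definitions are assumptions made up for this sketch.
dots = {2005: 2.1, 2006: 2.4, 2007: 2.2, 2008: 2.9, 2009: 2.7, 2010: 3.1}

years = sorted(dots)

# Definition 1: use in the last year is higher than in the first year.
increased = dots[years[-1]] > dots[years[0]]
print("Increase (last year vs first year):", increased)

# Definition 2 (stricter): every year-over-year change is upward.
monotone = all(dots[b] > dots[a] for a, b in zip(years, years[1:]))
print("Increase (strictly year over year):", monotone)
```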

Here, the model increases over-certainty in our judgement of what happened. Increasing and decreasing aren’t so easy to see in just the dots. They’re far too easy to see in the models. But we’re seeing what wasn’t there.


Replies

  1. The models are only showing what one would see when blurring the eyes while viewing the graphs (mostly, anyway). A kind of summary. However, even assuming the models accurately capture what the data are doing, they don’t add anything, as you’ve said. We see the data change. Well, that’s nice. The real question should be “So what?”

  2. That smoothing model is really annoying in the UK.

    Without it, you would probably instantly notice what looked like a steep drop, but the model makes it look like a nice curve down. Makes you wonder what regulation came into effect in 2005.

  3. This reminds me of the EPA brochures on the “dangers” of radon to homeowners, brochures which contain lots of estimates, but not one single actual fact. Not even a model that pretends it’s a fact!

  4. I’ve always thought that if I could measure something, possibly anything, with sufficient decimal places, I would get a different answer every time I took a measurement. Say of the weight of an object. My statistical model will be quite simple, I’ll just use the mean of all measurements that pass some QC checks. I’d settle on my model mean as a better guide to reality than any of the individual measurements.
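
    A minimal sketch of that simple “model”, assuming invented readings and an invented QC rule; both are placeholders, not anyone’s actual procedure.

    ```python
    # The commenter's model: average all measurements that pass a QC check.
    # Readings and the QC rule are made up for illustration.
    readings = [10.02, 10.05, 9.98, 10.01, 13.70, 10.03]  # grams; one obvious glitch

    def passes_qc(x, expected=10.0, tolerance=0.5):
        # QC rule (an assumption): discard readings far from the nominal value.
        return abs(x - expected) <= tolerance

    kept = [x for x in readings if passes_qc(x)]
    estimate = sum(kept) / len(kept)
    print(f"kept {len(kept)} of {len(readings)} readings, mean = {estimate:.3f} g")
    ```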

  5. A rare event: I somewhat disagree with DAV. He says: “The models are only showing what one would see when blurring the eyes while viewing the graphs (mostly, anyway).”
    I would counter that the drawings (models) literally cause (using Matt’s stringent, accurate definition) the creation of an entirely new category, not present in the data at all, called “outliers”.
    These graphs create ex nihilo a new category — I think it’s fair to say a new reality — that strongly suggests to us that some actual data points are less important than the model — because they don’t ‘fit’ the model.
    Down that road lies perdition.

  6. JohnK,

    Outliers are problems for scientists because they imply that we know less about the phenomenon being studied than we think we do. Hence the goal of removing or marginalizing them as much as possible.

    Imagine how much knowledge we have lost by removing outliers. Imagine if Fleming had just tossed out the moldy petri dish (oops it’s an outlier!)

  7. BRIGGS SAYS: “The data are reality. The model is fantasy. Why substitute fantasy for reality?”

    WHY substitute the model for the data points? Because the model depicts trends in activity — and those are very often highly relevant in relation to something else. Perhaps, for example, the EU banned some kind of chemical, so herbicide use dropped pending development of a replacement; that trend coupled with other measures such as price increases, or shortages, of certain crops, increases in certain diseases, etc. Collectively, such trends do inform.

    One fundamental flaw that recurs here is that a single model is the focus; models interrelate.

    Continuing with the Briggs’ mindset:

    BRIGGS: “Statistical and probability models are silent on cause. All of them. No probability model (statistics is just a subset of probability) gives information on cause.”

    SO WHAT???? Knowledge about cause is not a criterion for usefulness; humanity has no solid understanding of the cause of gravity, and even models of light (particle vs. wave) have flaws that smart grade-school students learn about — but that hasn’t stopped civilization from exploiting these in numerous ways.

    Similarly, statistical models — admittedly flawed models that everyone agrees are approximations — are put to use to great positive effect in more & more everyday devices, for example:

    Aircraft Flight Management Systems — enable commercial aircraft to fly a pre-programmed route with optimal efficiency (climb rates, cruise speeds, ground track, etc. — all are pre-programmed and performed by the FMS much more efficiently [less fuel consumption] than any human can do the job [and I bet many of you thought the pilot was actually flying the plane the whole way]);

    Aircraft engines incorporate control systems based on imperfect statistical models that adjust internal air flow paths based on numerous operating parameters to optimize engine performance;

    …not to mention aircraft collision avoidance systems….

    Computers applying model approximations control modern auto engines for optimal efficiency, emissions control, etc. Such imperfect models are now found in numerous appliances to include washing machines, dryers, etc….

    Power plants, including nuclear, incorporate some automated control systems based on demand trends; these balance providing power to various demand centers via different sources while also automating much of the power generation systems control functions to help ensure safety.

    Smart bombs & missiles — Recall Desert Storm? Allied forces bombed numerous military targets in highly populated Baghdad, taking out anti-aircraft sites & the like while leaving civilian structures yards away completely unharmed. Those weapons used/still use statistical models to get an approximate solution (with and without GPS).

    …and the list can go on & on….

    The ugly fact is, imperfect statistical models have proven utility. And industry is continuing to develop and apply them, and in so doing is making things even better. The people in industry doing that are the ones who can tap the benefits of the models. Those who nitpick miss out.

    Briggs-the-philosopher, and many of his acolytes here, would limit the use of statistics to an educational realm: “probability models should only be used to characterize the uncertainty of that which we do not know…”

    BRIGGS goes on to say: “Science happens when ordinary data is turned into a model. Science happened here, and lots of it. …They were only doing what everybody else does. Only problem is, everybody else is wrong.”

    If “everybody else is wrong” about making statistical models then I for one am thankful and I hope they continue being wrong at every opportunity.

    Say what one will of war, but most would agree that attacking only military targets with surgical precision–via flawed statistical models–in high population density areas beats carpet bombing entire cities to get the same targets.
    Automating power plants, both to provide electricity and to ensure the machines generating the power operate within safe limits–via flawed statistical models–is undoubtedly more efficient than human engineers making crude macro-tweaks every so often.
    Modern FMSs (and other aircraft control systems) save much more fuel and calculate & fly much more efficient (& faster, and safer) routes–via flawed statistical models–than any human is capable of.

    Flawed statistical models [that replace the data they are derived from] are making things possible people a generation ago couldn’t imagine possible.

    Sometimes some modelers overreach, make mistakes, etc. That reflects the flawed efforts of those individual modelers — not the general technique or general application of modeling.

  8. JohnK,

    “Outlier” means “not normal”. The only way to avoid them is to never have “normal” data. This in turn means doing nothing with the data other than plotting. An action without much purpose.

    The first step in analyzing a data set is to “eyeball it”. However, that is a form of local averaging, and that’s what these models seem to have done (a small sketch of such local averaging appears at the end of this reply). Presumably there is some reason the data were plotted against time. For example, one might be interested in areas and times where the averages generally increased or decreased and steer further investigation with the hope of discovering why.

    There may be points that seem to fall far from the averages (outliers). Standing apart from the crowd is not necessarily a bad thing. These could indicate bad data points but often indicate there are special considerations to be taken into account.
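
    A hedged sketch of “eyeballing as local averaging”: a short moving average over invented dots, which is roughly the kind of summary the plotted curves provide. The window width and the numbers are assumptions.

    ```python
    # "Eyeballing" made literal: a simple moving average over the dots.
    # The values and the window width are invented for illustration.
    values = [2.1, 2.4, 2.2, 2.9, 2.7, 3.1, 2.8, 3.3]
    window = 3

    smoothed = [
        round(sum(values[i:i + window]) / window, 2)
        for i in range(len(values) - window + 1)
    ]
    print(smoothed)  # a crude local average, i.e. a very plain "model"
    ```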

  9. Only problem is, everybody else is wrong.

    Current usage focuses on the past and invents fallacies about discerning cause.

    But we’re seeing what wasn’t there.

    Mr. Briggs, I understand you are writing this post for your readers. Perhaps it’s time to stop your fantasy of “everyone else is wrong” or “everyone is drunk except me.” Assuming that DAV is not a trained statistician, if he can explain that there are good reasons both the model and the data are presented in one graph for the discussed simple data set, you can too. (DAV, please take this as a compliment.) Advocate some practical solutions, which would be worth your time. To me, your post has become like (bad) propaganda about statistics.

    For example, the 18 data points in https://www.wmbriggs.com/post/20435/ are given in the paper; show your readers what you really mean by

    Technically (and you can ignore this point), the uncertainty in the estimated force should be used in the regression, but it’s not clear they did this. That means the gray line, and subsequent equation Force = 1.16843 x Power will be too certain.
    You have readers who are capable of understanding basic measurement error models or basic statistical analyses. (A small simulation of that measurement-error point appears at the end of this reply.)

    I know, flattery is never my strong point. And you can be sure that my flattery is sincere and never a lie.
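
    A hedged sketch of the quoted measurement-error point: refit the no-intercept regression many times with an assumed amount of error added to the measured forces, and the spread of slopes is wider than the single fitted value suggests. The numbers, the no-intercept model, and the 5% error size are all invented here; they are not taken from the paper.

    ```python
    # If the measured forces carry uncertainty that the regression ignores,
    # a single fitted slope (e.g. Force = 1.17 x Power) is too certain.
    # Data, no-intercept model, and the assumed 5% error are illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    power = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    force = np.array([1.2, 2.3, 3.6, 4.5, 5.9])   # "measured" forces (invented)

    def slope_no_intercept(x, y):
        return float(np.sum(x * y) / np.sum(x * x))

    print("slope ignoring measurement error:",
          round(slope_no_intercept(power, force), 3))

    # Re-fit with the assumed measurement error jittering the forces.
    slopes = [
        slope_no_intercept(power, force * (1 + rng.normal(0, 0.05, force.size)))
        for _ in range(2000)
    ]
    print("spread of slopes once error is admitted:",
          round(np.percentile(slopes, 2.5), 3), "to",
          round(np.percentile(slopes, 97.5), 3))
    ```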

  10. Ken,

    I think you make a good point, but it’s limited to well known physical phenomena or very well studied behavior where causes are already known. Most science doesn’t operate like engineering – there are far too many variables (econometrics or climate science for example). So we replace the data with a model and we come up with a conclusion that fits our presumptions.

    Even in engineering, we have to worry though. The automatic pilot is no help with hijackers.

  11. Surely there is a difference between estimating model parameters by fitting a known physical theory to a set of measurements, and trying to invent a new model.

    Herbicide use does not appear to be based on some physical model.

  12. “Given our past measurements and given these assumptions about characterizing their uncertainty, the probability future data will look like such-and-such is X.” This is my exact problem with most so-called “scientists” today. They don’t use the above statement; of course, the other problem is that over half the population doesn’t know what the above statement means anyway.

  13. To JH and Ken,

    Please re-read Briggs’ post and do so dispassionately. I normally don’t comment on Briggs’ posts, but because of the role of models/technology in society today, there are not many issues more important than this one.

    Academia is currently filled with corrupt, dishonest, narrowly trained specialists who are (or have become) nothing more than agenda-seeking individuals. It is a breath of fresh air that an academic is taking his own field to task.

    Ken, I’d like to address your comments – but you’ll see they apply to JH’s comments equally.

    You wrote:

    “WHY substitute the model for the data points? Because the model depicts trends in activity – and those are very often highly relevant in relation to something else.”

    Yes and that is precisely Briggs’ point! It’s not the DATA that suggests trends, it is the MODEL. One of the early lessons undergrads learn in real analysis is that when the proof of a theorem isn’t obvious and algebraic manipulation doesn’t help, try pushing definitions. Let’s do that here.

    Definition: Model = data + assumptions.

    So, plugging in the definition of the word “model”, your sentence becomes:

    “WHY substitute the data+assumptions for [just] the data points? Because the data+assumptions depicts trends in activity – and those are very often highly relevant in relation to something else.”

    And that is precisely the point Briggs is making!

    Next, you wrote (regarding causality):

    “SO WHAT???? Knowledge about cause is not a criterion for usefulness; humanity has no solid understanding of the cause of gravity…”

    This is absurd! Of course we do. While ABSOLUTE knowledge (in any field) may not be attainable, we approach certainty when OUR MODELS ARE VERIFIED BY EMPIRICAL EVIDENCE. You then go on to cite examples that imply that the most useful models are precisely those that have been verified!

    1. Aircraft Flight Management Systems – Empirically verified by successful flight – not merely a model!
    2. Aircraft engines incorporate control systems – Again, empirically verified via simulations and flight.
    3. Computers applying model approximations control modern auto engines – again, empirically verified via simulations, tests, and the use of cars by MILLIONS of people for DECADES.

    Not to mention that these examples have theoretical foundations in control theory, CFD, mechanics, etc… that have also been tested and verified empirically to separate the wheat from the chaff.

    And as you wrote, “the list can go on & on…”

    The cause of your confusion is that you see no difference between purely hypothetical models and those that actually have empirical support.

    Proof of this can be seen in your following comment (emphasis mine):

    “The ugly fact is, imperfect statistical models have PROVEN UTILITY. And, INDUSTRY is continuing to DEVELOP AND APPLY them and in so doing is making things even better. The people in INDUSTRY doing that are ones that can tap the BENEFITS of the models. Those that NITPICK miss out.”

    Those in industry are the ones that nitpick!! They must be, because their models must have empirical support! Those in industry have no use for hypothetical models with no empirical support. They are the ones tossing unsupported models aside and keeping the (imperfect) models that actually work and have been tested. Google Hamming’s comments regarding planes that require some function to be Lebesgue integrable, or the size of the set of statisticians that need to use measure theory in applied work. Or recall Einstein. After the mad scramble against Hilbert to get the mathematics of relativity worked out, his theory remained just that: a theory. It was only AFTER there were measurements made to confirm his theory that his “model” became verified and a sensation. Today, society allows the model to define reality without empirical support. This is a sad, new phenomenon.

    The rest of your comments are a reflection of this unfortunate phenomenon.

    My suggestion to both of you, Ken and JH: after re-reading Briggs’ post, please pick up a book on sampling theory. I’m sure the two of you know about the post-WWII golden years at Bell Labs. I’m currently working on updating a sampling book by Bill Williams, who wrote this book while at Bell Labs in 1977. It is excellent. You can get the 1977 copy for PENNIES on amazon. You need not work through the problems, but once you read this book you’ll have an idea of all the potential problems with substituting models for reality.

    After reading that book you might come away saying, “Yes, all well and good – but that was 40 years ago and we’ve since assimilated the advice therein and no longer make sampling errors.” Well, that’s incorrect, but fine; move on to a statistical inference book. If you’re a regular reader of Briggs, here’s your chance to look at some of the Bayesian methods he writes about. There’s a truly wonderful book by Berger (Statistical Decision Theory) that goes through the Bayesian approach. If you prefer a more classic approach, Rice or Casella are tough to beat. These books are obscenely priced, so if you do not have the benefit of an academic library, check out Hogg’s Mathematical Statistics. The book is in its 7th or 8th edition and you can get an earlier edition for pennies on amazon.

    Again, you need not do the exercises; merely read carefully, and what you’ll notice EVERYWHERE are the following: “assume that…”, “given the sample was drawn from a population that…”, “Suppose we have distribution…”, “If the samples/residuals are i.i.d…”. Then you’ll realize that talking about models without their assumptions is science fiction, not science.

    Once you read those books, you’ll realize your comment:

    “Briggs-the-philosopher, and many of his acolytes here, would limit the use of statistics to an educational realm:
    ‘probability models should only be used to characterize the uncertainty of that which we do not know…'”

    … is terribly mistaken. Briggs’ acolytes hold advanced degrees in math, stats, and/or physics – and love models. I assure you. What we would limit to an educational realm are models with no empirical support. You know, exactly the way those in industry, whom you cited, do.

    Please take your time through my book suggestions then follow-up with the following two short books:

    1. Weber: Science as a Vocation
    2. Ortega y Gasset: The Revolt of the Masses

    Read those books in the context of today. Do you see the furor over fake news and dishonest journalism? The internet gives anyone a platform to say anything. Well, R and Python give anyone the platform to “prove” anything. We’re on the precipice of the “fake science” cliff; let’s not fall off. We need well-informed citizens to fight back against agenda-driven (i.e., fake) science. And what’s worse: fake science can occur even when there’s no agenda!

    Some years back another statistician (Larry Wasserman) had a blog, and in it he lamented the way statisticians were falling behind computer scientists. He pointed out how it’s the CS students who are churning out papers and contributing new machine learning methods, while statisticians are being left behind. But, of course, there’s a reason for this. In my graduate courses on linear modeling, for example, we spent an awfully long time on regression diagnostics. When do the models fail? What happens when assumptions no longer hold? Etc. If you’re familiar with the late (and imho great) statistician David Freedman’s books on statistical models, you’ll know what I mean. CS majors, otoh, are just cranking out code, and underlying assumptions are less important than creating cool tools. I do not believe this to be the result of malicious intent; simply different emphases/interests. In the right hands these tools are amazing, but with great power comes great responsibility.

    Science is about the pursuit of truth, not agenda. Science has given us so much, it’s fine to have faith in science.
    But, as Augustine wrote, doubt is a prerequisite of faith.

    Please have a look at the books I’ve suggested, then update this thread in a few months. We’ll be here.

    Best of luck.

  14. Anthony,

    There is nothing that I can disagree with in this post, as I cannot survey practitioners as to what models are to them. And Briggs, who enjoys imagining that others don’t know anything, as usual just knows what others think and believe. It is most strange to me that a statistician can so easily draw conclusions of this sort.

    There is nothing new or useful to me in this post. It is not new that most students (even those with an MS or MA degree) don’t comprehend what statistics is. It is not new that some practitioners just click and click.

    Perhaps practitioners simply don’t understand how to apply statistics appropriately but do understand what modeling is about. But again, this point is not important. Teach them if one can. And to say that we nearly always don’t need to use statistical models is basically telling people not to hire him. Yep, just eyeball it and make inferences and predictions! You know, how often do we collect data that can be sorted out by eyeballing and simple scatter plots nowadays?

    If one looks for junk, one will have no problem finding it. For example, just look at the graph here: https://www.wmbriggs.com/post/15095/. Similarly, if one looks for excellent research, one will have no problem finding it also.

    Let me put it this way, repeatedly pointing out something obvious without ever showing solutions can be very annoying to academics.

    I don’t think CS scientists are just cranking out code. For example, SAS Data Mining Enterprises offers many options, and one would need to know the underlying assumptions to choose the correct one. Those programs are written for various assumptions. You are right that if one understands data structures, statistical methods, and computational algorithms well, statistics is a powerful tool. And it is nearly impossible to just use eyeballing or a simple scatter plot to make inferences and predictions nowadays.

  15. The cause of your confusion is that you see no difference between purely hypothetical models and those that actually have empirical support.

    Anthony,

    I’d like to know how you reached the above strange conclusion that either Ken or I is confused between hypothetical models and those that actually have empirical support. Please don’t tell me that you caught the disease of “everyone who seems to disagree with me is confused” from Briggs.

    What does “models with empirical support” mean? Whether you agree or not, the graph shown in the post is one way (not the only way) to show empirical support for a fitted model to be used for inference and prediction. Briggs appears to think that people plot the fitted model and the observations in one figure because they are confused about some strange thing. Well, Briggs is a strange person who has done and said strange things that are beyond my ken.
