
Over-Certainties Of Science, Scientists, Experts, Quacks & Their Models

Here is the video I made for that Truth Over Fear Summit that they tried to cancel. Gaze in wonder! Be astonished! Engage your awe engine!

All graphics done with forefinger and Gimp. Which makes them all collector’s editions.

I ask all to remember only two things from this video, which are universally applicable:

1) All models only say what they are told to say;

2) Solutions, reactions, decisions, and so on, related to model output, are also models.

With those truths wedged firmly between your ears, you can tackle all of science, and even The Science.
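
To make (1) and (2) concrete, here is a toy sketch in Python. Every number in it is invented; the point is only that the output is fixed entirely by what the model is fed, and that the “decision rule” bolted onto that output is just another model.

    # Toy sketch only: every number here is invented.
    def projected_deaths(population, attack_rate, fatality_rate):
        """The model: deaths = population x assumed attack rate x assumed IFR."""
        return population * attack_rate * fatality_rate

    def decision(deaths, panic_threshold=100_000):
        """The 'solution': itself a model, with its own told-to-say threshold."""
        return "lock everything down" if deaths > panic_threshold else "carry on"

    population = 330_000_000  # round, made-up figure

    # Same model, different told-to-say inputs, very different Science:
    for attack_rate, ifr in [(0.8, 0.01), (0.2, 0.001)]:
        d = projected_deaths(population, attack_rate, ifr)
        print(f"attack={attack_rate}, IFR={ifr}: {d:,.0f} deaths -> {decision(d)}")

Nothing in that output was discovered; every bit of it was put in by hand.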


Categories: Podcast, Statistics

22 replies

  1. John B.

    I just checked on a clean browser (no cookies or history) and it worked fine.

  2. We could bet on how long the video is going to survive on YouTube. I downloaded it even before watching, just in case.

  3. What is this in my YouTube subs? Great stuff that cannot be repeated too much. Smart to leave comments off.

  4. Here’s what my model says:

    A cabal of billionaire sociopaths, in their insatiable lust for money and power, have engineered a pandemic panic to terrorize everyone into submitting to their power. Inflating a seasonal flu into the Black Death through a global media propaganda campaign of lies, the cabal succeeds in destroying economies, lives, and the health of millions while subjecting their captives to ghastly medical experiments. The poor frightened people, who can’t figure out what is actually going on, have been made to think that the devils who are perpetrating this monstrous crime are their friends, and that their enemies are those who can see what’s going on and are trying to warn the people.

  5. I’ve been frustrated by the CDC data for a while now – needlessly hard to find basic numbers, and needlessly complex presentation of basic numbers.

    But a key point here: if your graph showing annualized US weekly death numbers 1917 – 2021 is correct, the CDC’s own numbers show the Coronadoom has caused NO, as in NO, uptick in death rates over what was expected, based on history – see, for example, https://www.macrotrends.net/countries/USA/united-states/death-rate. The slope of the line should increase dramatically starting in 2020 – even as few as 400,000 ‘excess’ deaths would push the annual rate over 1%, as opposed to the projected – and, if your graph is right, actual! – death rate of a little under 0.9%. But it doesn’t, instead following the slope the UN forecast before the Coronadoom, an upward trend which seems to be based on simple aging of the US population.

    So, taking that graph at face value, we would conclude that, according to the CDC’s own numbers, the Coronadoom at worst resulted in an unmeasurably small uptick in overall deaths in 2020. That seems like big news.

    I guess I’m asking: you sure about those numbers? Not that I doubt they could be accurate, just want to be sure, since CDC numbers are presented in such confusing ways.

  6. @Joseph Moore,

    From memory, the annual all-cause crude death rate in the USA (which does *not* count abortions) has been about 0.9% of the actual population for decades (since the 50s or 60s). Nothing I saw, especially given how awful the “case” criteria were and the various incentives to list covid on the death certificate, suggested to me that the crude death rate should have changed much. It has wobbled about that 0.9%, so yeah, the number should have increased from the roughly 0.85% it had been for the couple of years prior to 2020, and I expect 2021’s number to be closer to 0.85%, if not a bit lower (total population is about 332 million now).
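
    A quick back-of-the-envelope check of that arithmetic, in Python, with round numbers only (not official CDC figures):

        # Rough check: do 400,000 extra deaths push a ~0.88% crude rate past 1%?
        population = 332_000_000      # approximate US population
        baseline_rate = 0.0088        # "a little under 0.9%" annual crude death rate
        baseline_deaths = population * baseline_rate
        excess = 400_000              # hypothetical "excess" deaths

        new_rate = (baseline_deaths + excess) / population
        print(f"baseline deaths  ~ {baseline_deaths:,.0f}")   # about 2.9 million
        print(f"rate with excess ~ {new_rate:.2%}")           # about 1.0%

    So even 400,000 genuine excess deaths would show up as a jump in the crude rate from a bit under 0.9% to just past 1%.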

  7. Very interesting, but you don’t really explain in sufficient detail how “models only say what they are told to say.” Or, more specifically, you don’t give any real world examples of how model fit conflicts with reality testing on non-modeled measures. Ferguson’s models are obviously wrong, but that per se does not prove that his error is in what he “told the model to say.” Maybe his math was wrong. Maybe his assumption of R was wrong, etc. But to “tell” a model that a disease might kill X number of people isn’t wrong. Can you maybe take one of the studies on mask wearing and show how reality testing proves it wrong, while the model fit was correct? I agree with the various things you say about governments’ overreaction, their need to cause panic, their desire to obtain more power, etc. I agree with all of it. But your indictment of the science of modeling here is not convincingly argued, IMO. You keep saying the same thing over and over, “models only say what they are told to say,” but you do not really substantiate this.

  8. 86G: “Ferguson’s models are obviously wrong, but that per se does not prove that his error is in what he ‘told the model to say.’ Maybe his math was wrong. Maybe his assumption of R was wrong, etc. But to ‘tell’ a model that a disease might kill X number of people isn’t wrong.”

    This is obtuse. If the model is wrong it’s because the model’s inputs are wrong. His math was wrong, his assumption of R was wrong, etc., whatever, something was wrong. So a model that predicts X number of people die, but they don’t, is wrong, because the inputs are wrong. And so you are wrong. Wrong. Funny word. Looks Chinese.
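
    To make it concrete: the sketch below is not Ferguson’s code, just the textbook final-size relation for a plain SIR epidemic with invented numbers, but it shows how completely the headline death count is fixed by the assumed R0 and IFR, i.e., by what the model is told.

        # Final-size relation for a simple SIR model: z = 1 - exp(-R0 * z),
        # where z is the fraction of the population eventually infected.
        import math

        def final_attack_fraction(r0, iters=200):
            z = 0.5                       # starting guess; fixed-point iteration
            for _ in range(iters):
                z = 1 - math.exp(-r0 * z)
            return z

        population = 330_000_000          # hypothetical

        for r0, ifr in [(2.5, 0.01), (1.3, 0.002)]:
            deaths = population * final_attack_fraction(r0) * ifr
            print(f"assumed R0={r0}, IFR={ifr}: ~{deaths:,.0f} deaths")

    Roughly three million deaths under one set of assumptions, under three hundred thousand under the other, and the arithmetic is performed flawlessly both times.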

  9. 86G, et al.:

    The simplest [If X then Y] propositional models would initially seem to have much more practical utility than the more complex [If X+n(X1)+n(X2)+n(X3)+(etc.) then Y] propositional models. However, recall that the historical “consensus sceptics” of heavier-than-air flight relied upon an [If X then Y] model to their ultimate chagrin. They believed with conviction in a determinative absolute – because they could not conceive of any model (read – mechanism) which might overcome their own conceptual limitations. The flight skeptics’ MODEL failed. It could only generate the result which it was capable of generating.

    MODELS only “represent” essential mechanisms (that is, the means by which one “force” or “state” transforms into another). The more accurate our knowledge and understanding of such mechanisms is, the better we can potentially “model” them. However, “MODELS” always presume a degree of correctness in our knowledge about such mechanisms and how they function – which may or may not be warranted. Herein lies the rub (as they say), for mere quantification of mechanistic processes which are not fully understood isn’t necessarily explanatory – or even meaningful.

    Often complex MODELS {such as those created by that infamous mathematical epidemiologist Neil Ferguson at Imperial College, London (for SARS-CoV-2)} essentially represent computational processes masquerading as (immutable) “mechanisms”. Ferguson proposed (to the world) that if his MATHEMATICAL COMPUTATIONS were correctly performed then his PREDICTIONS must be considered reliable (because he is a government “EXPERT”). But, as Briggs endeavors to make clear, models can only produce those results which they are capable of producing. Models cannot supply NEW OR BETTER “mechanisms” which are not already inbuilt. Some models (e.g. BAYES) make a pretense of being able to “learn” – but – the veil is sheer.

    JUST FOR FUN: The Flight of the Phoenix (1965 film, https://www.imdb.com/title/tt0059183/) explores a Model vs. Reality problem in an intriguing way. When survival is at stake, can the modeler’s skills save the day?

  10. “Some models (e.g. BAYES) make a pretense of being able to ‘learn’ – but – the veil is sheer.”

    What they learn are the weights. This is particularly true of neural networks with their vast arrays of interconnected additions. This works well for certain types of problems (e.g., pattern recognition, specific game playing, speech to text) but fails miserably for others. Even with those in which they excel, there is the built-in assumption that the problem can be reduced to smaller subsets of inputs or simpler lists of rules.

    Humans apparently store their learning differently and are clearly more flexible.

    We don’t really know how humans learn or what “learning” actually entails outside of obvious generalities (e.g., learning cause and effect, learning relationships, etc.), which are really categories of learning that explain nothing. We don’t even know what it means to “know” something beyond the obvious tautology “well, we just know”. Still, neural nets and perhaps even Bayes networks may provide clues to understanding what humans may actually be doing when “learning”.
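
    A minimal sketch of the “what they learn are the weights” point: a single linear neuron fitted by gradient descent to made-up data. The only things that ever change are two numbers inside a functional form the modeller fixed in advance.

        # Toy data the "network" will learn from (it happens to follow y = 2x).
        data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
        w, b = 0.0, 0.0   # the weights: the only quantities that can change

        for _ in range(2000):  # plain gradient descent on squared error
            grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
            grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
            w -= 0.1 * grad_w
            b -= 0.1 * grad_b

        print(f"learned weights: w={w:.3f}, b={b:.3f}")  # about 2.0 and 0.0
        # Whatever the data, the output can only ever be w*x + b: the model
        # still says only what its assumed form allows it to say.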

  11. Rarely do I watch or recommend videos, but this was great. Thank you Dr. Briggs.

  12. I don’t hear it mentioned much in reference to models or simulations, but an important aspect of evaluating the utility of a model is determining whether the model is interpolating or extrapolating. A model that extrapolates is highly suspect right off the bat, and unlikely to be well-behaved or useful; a model that interpolates at least has a chance of matching reality closely enough to be useful, although in my experience, typically a lot of trial and error is still required.

    If you want to make headlines with your model, you should extrapolate; models that interpolate are generally much less interesting. If a particular modelling effort must rely on extrapolation, perhaps a model is the wrong tool.

    The above is my working model of the general nature of models, and is an extrapolation of my experience, and is therefore almost certainly useless.
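
    A toy illustration of the point (made-up data; numpy assumed to be available): the same fitted polynomial looks fine where it interpolates and goes badly wrong where it extrapolates.

        import numpy as np
        from numpy.polynomial import Polynomial

        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 30)
        y = np.sin(x) + rng.normal(0, 0.05, x.size)   # noisy "observations"

        model = Polynomial.fit(x, y, deg=9)            # flexible fit on [0, 10]

        print(f"interpolating at x=5 : {model(5.0):+.2f}   truth: {np.sin(5.0):+.2f}")
        print(f"extrapolating at x=15: {model(15.0):+.2f}   truth: {np.sin(15.0):+.2f}")
        # Inside the data range the fit tracks the truth closely; a little
        # way outside it, the same model is typically wildly wrong.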

  13. Thank you Matt! Think this is sinking into my brain. Could the word pattern be substituted for the word model??

    God bless, C-Marie

  14. “All models say only what they are told to say” sounds to me like a reformulation of “all syllogisms are circular”. After all, the conclusion is necessarily implicit in the premises. One author suggested that a syllogism might be useful if the conclusion is something that surprises you or if the circle is big enough.
