Selling Fear Is A Risky Business: Part Last

Read Part I, Part II. Don’t be lazy. This is difficult but extremely important stuff.

Let’s add in a layer of uncertainty and see what happens. But first hike up your shorts and plant yourself somewhere quiet because we’re in the thick of it.

The size of the relative risks (1.06) touted by authors like Jerrett gets the juices flowing of bureaucrats and activists who see any number north of 1 as reason for intervention. Yet in their zeal for purity they ignore evidence which admits things aren’t as bad as they appear. Here’s proof.

Relative risks are produced by statistical models, usually frequentist. That means p-values less than the magic number signal “significance”, an unfortunate word which doesn’t mean what civilians think. It doesn’t imply “useful” or “important” or even “significant” in its plain English sense. Instead, it is the probability of seeing a test statistic larger (in absolute value) than the one produced by the model and the observed data, if the “experiment” which gave the observations were indefinitely repeated and if certain parameters of the quite arbitrary model were set to 0.1 What a tongue twister!
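If that definition refuses to stick, a toy simulation may help. This sketch (mine, not anything from the post, on made-up data) repeats an “experiment” with the relevant parameter set to 0 and counts how often the test statistic beats the observed one; that fraction is the p-value.

```python
# A minimal sketch of what a p-value is: the chance of a test statistic
# larger (in absolute value) than the observed one, were the experiment
# indefinitely repeated with the parameter of interest set to 0.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical observed data: 50 measurements centered near 0.3
observed = rng.normal(0.3, 1.0, size=50)
t_obs = observed.mean() / (observed.std(ddof=1) / np.sqrt(len(observed)))

# "Indefinitely" repeat the experiment with the parameter (the mean) at 0
reps = 100_000
sims = rng.normal(0.0, 1.0, size=(reps, 50))
t_sim = sims.mean(axis=1) / (sims.std(axis=1, ddof=1) / np.sqrt(50))

p_value = np.mean(np.abs(t_sim) >= abs(t_obs))
print(f"simulated p-value: {p_value:.4f}")
```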

Every time you see a p-value, you must recall that definition. Or fall prey to the “significance” fallacy.

Now statistical models of relative risk (usually arbitrarily chosen and not deduced) have a parameter or parameters associated with that measure.2 Classical procedure “estimates” the values of these parameters; in essence, it makes a guess at them. The guesses are heavily, as in heavily, model and data dependent. Change the model, or make new observations, and the guesses change.

There are two main sources of uncertainty (there are many subsidiary ones). This is key. The first is the guess itself. Classical procedure forms confidence or credible “95%” intervals around the guess.3 If these do not touch a set number, “significance” is declared. But afterwards the guess alone is used to make decisions. This is the significance fallacy: neglecting uncertainty of the second and more important kind.
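To see uncertainty of the first kind in the flesh, here is a toy cohort (the numbers are my assumptions, borrowed from footnote 2, not from any real study): it produces a guess at the relative risk and the “95%” interval classical procedure puts around that guess.

```python
# A toy cohort showing the "guess" at a relative risk and its 95% interval.
import numpy as np

rng = np.random.default_rng(3)
N = 5_000_000                    # hypothetical residents per group
p0, rr = 2e-4, 1.06              # "true" values, known only to the simulator

c0 = rng.binomial(N, p0)         # observed unexposed cases
c1 = rng.binomial(N, p0 * rr)    # observed exposed cases

rr_hat = (c1 / N) / (c0 / N)     # the guess at the relative risk
se = np.sqrt(1/c1 - 1/N + 1/c0 - 1/N)     # Katz log-RR standard error
lo, hi = rr_hat * np.exp(-1.96 * se), rr_hat * np.exp(1.96 * se)
print(f"guessed RR: {rr_hat:.3f}, 95% interval: ({lo:.3f}, {hi:.3f})")
# Even with five million per group the interval often straddles 1
# (compare footnote 2). New data, or a new model, and the guess changes.
```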

Last time we assumed there was no uncertainty of the first kind. We knew the values of the parameters, of the probabilities and the risk. Thus the picture drawn showed the effect of uncertainty of the second kind, though at the time we didn’t know it.

We saw that even though there was zero uncertainty of the first kind, there was still tremendous uncertainty in the future. Even with “actionable” or “unacceptable” risk, the future was at best fuzzy. Absolute knowledge of risk did not give absolute knowledge of cancer.
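Here is a minimal sketch of that lesson. The base risk and relative risk come from footnote 2; the group size is my guess, picked only so the expected extra cases land near the post’s ~20. Even with every parameter known exactly, the future wobbles.

```python
# Uncertainty of the second kind alone: risk known EXACTLY, future still
# fuzzy. Base risk and RR as in footnote 2; N is a hypothetical group size.
import numpy as np

rng = np.random.default_rng(1)
p0, rr = 2e-4, 1.06
N = 1_650_000                 # hypothetical residents per group
reps = 100_000                # many possible futures

cases_unexposed = rng.binomial(N, p0, size=reps)
cases_exposed = rng.binomial(N, p0 * rr, size=reps)

extra = cases_exposed - cases_unexposed
print(f"expected extra cases: {extra.mean():.1f}")                  # ~20
print(f"P(more cases in exposed group): {np.mean(extra > 0):.2f}")  # ~0.78
```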

This next picture shows how introducing uncertainty of the first kind—present in every real statistical model—increases uncertainty of the second.

Again, these are true probabilities and not “densities.” See Part II.

The narrow reddish lines are repeated from before: the probabilities of new cancer cases between exposed and not-exposed LA residents assuming perfect knowledge of the risk. The wider lines are the same, except adding in parameter uncertainty (parameters which were statistically “significant”).

Several things to notice. The most likely number of cancer cases stopped by completely eliminating coriandrum sativum is still about 20, but the spread in cases stopped doubles. We now believe there could be more cancer cases stopped, but there also could be many fewer.

There is also more overlap between the two curves. Before, we were 78% sure there would be more cancer cases in the exposed group. Now there is only a 64% chance: a substantial reduction. Pause and reflect.

Parameter uncertainty increases the chance to 36% (from 22%) that any program to eliminate coriandrum sativum does nothing. Either way, the number of affected citizens remains low. Affected by cancer, that is. Everybody would be affected by whatever regulations are enacted. And don’t forget: no real program can completely eliminate exposure; the practical effect on disease must always be less than the ideal. But the calculations focus on the ideal.
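Continuing the sketch from above (setup repeated so the block runs on its own): the relative risk is now a draw, not a constant. The spread I give it is pure assumption, tuned only so the overall spread roughly doubles as in the picture; the real model’s parameter uncertainty is unknown to us.

```python
# Add uncertainty of the first kind: the risk itself is now uncertain.
import numpy as np

rng = np.random.default_rng(2)
p0, N, reps = 2e-4, 1_650_000, 100_000

rr_draws = rng.normal(1.06, 0.135, size=reps)   # hypothetical parameter draws

cases_unexposed = rng.binomial(N, p0, size=reps)
cases_exposed = rng.binomial(N, np.clip(p0 * rr_draws, 0, 1), size=reps)

extra = cases_exposed - cases_unexposed
print(f"expected extra cases: {extra.mean():.1f}")                  # still ~20
print(f"P(more cases in exposed group): {np.mean(extra > 0):.2f}")  # ~0.64
```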

We’re not done. We still have to add the uncertainty in measuring exposure, which is typically not minor. For example, Jerrett (2013) assumes air pollution measurements from 2002 affect the health of people in the years 1982-2000. Is time travel possible? Even then, his “exposure” is a guess from a land-use model. Meaning he used the epidemiologist fallacy to supply exposure measurements.

Adding exposure uncertainty pushes the lines above outward and increases their overlap. We started with a 78% chance that regulations might be useful (even though the usefulness affected only about 20 people); we went to 64% with parameter uncertainty; and adding in measurement error will move that number closer to 50%, the bottom of the barrel of uncertainties. At 50%, the probability lines for exposed and not-exposed would exactly overlap.
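One crude way to fold in exposure error (my device, emphatically not Jerrett’s model, which we don’t have): suppose the land-use “exposure” mislabels people, so only an unknown fraction of the nominally exposed group was truly exposed, diluting the risk.

```python
# A crude stand-in for exposure measurement error: only an unknown
# fraction f of the "exposed" group was actually exposed.
import numpy as np

rng = np.random.default_rng(4)
p0, N, reps = 2e-4, 1_650_000, 100_000

rr_draws = rng.normal(1.06, 0.135, size=reps)    # parameter uncertainty
f_draws = rng.uniform(0.3, 1.0, size=reps)       # hypothetical true-exposure fraction
p_exposed = p0 * (1 + f_draws * (rr_draws - 1))  # diluted exposed-group risk

cases_unexposed = rng.binomial(N, p0, size=reps)
cases_exposed = rng.binomial(N, np.clip(p_exposed, 0, 1), size=reps)

extra = cases_exposed - cases_unexposed
print(f"P(more cases in exposed group): {np.mean(extra > 0):.2f}")
# Lower than before, drifting toward 0.50: exactly overlapping lines.
```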

I stress I did not use Jerrett’s model—because I don’t have it. He didn’t publish it. The example here is only an educated guess of what the results would be under typical kinds of parameter uncertainty and given risks. The direction of uncertainty is certainly correct, however, no matter what his model was.

Plus—you knew this was coming: my favorite phrase—it’s worse than we thought! There are still sources of uncertainty we didn’t incorporate. How good is the model? Classical procedure assumes perfection (or blanket usefulness). But other models are possible. What about “controls”? Age, sex, etc. Could be important. But controls can fool just as easily as help: see footnote 2.

All along we have assumed we could eliminate exposure completely. We cannot. Thus the effect of regulation is always less than touted. How much less depends on the situation and our ability to predict future behavior and costs. Not so easy!

I could go on and on, adding in other, albeit smaller, layers of uncertainty. All of which push that effectiveness probability closer and closer to 50%. But enough is enough. You get the idea.


————————————————————————————————

1Other settings are possible, but 0 is the most common. Different models on the same data give different p-values. Which one is right? All. Different test statistics used on the same model and data give different p-values. Which one is right? All. How many p-values does that make altogether? Don’t bother counting. You haven’t enough fingers.
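A quick demonstration of the footnote’s point, on made-up data with two standard tests (scipy here; the post itself used no code): same data, different test statistics, different p-values.

```python
# Same data, two different test statistics, two different p-values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(0.25, 1.0, size=40)   # one hypothetical data set

t_p = stats.ttest_1samp(x, popmean=0.0).pvalue   # t statistic
w_p = stats.wilcoxon(x).pvalue                   # signed-rank statistic

print(f"t-test p-value:   {t_p:.3f}")
print(f"Wilcoxon p-value: {w_p:.3f}")   # a different number, same data
```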

2Highly technical alley: A common model is logistic regression. Read all about it in chapters 12 and 13 of this free book (PDF). It says the “log odds of getting it” are linearly related to predictors, each associated with a “parameter.” The simplest such model is (r.h.s.) b0 + b1 * I(exposed), where I(exposed) equals 1 when exposed, else 0. With a relative risk of 1.06 and exposed probability of 2e-4, you cannot, with any sample size short of billions, find a wee p-value for b1. But you can if you add other “controls”. Thus the act of controlling (for even unrelated data) can cause what isn’t “significant” to become so. This is another, and quite major, flaw of p-value thinking.
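For the curious, a minimal sketch of that right-hand side with the footnote’s numbers plugged in (reading the 2e-4 as the baseline disease probability, which is my interpretation): the absolute difference in probabilities is minuscule, which is why wee p-values demand enormous samples.

```python
# The simple logistic model from the footnote: logit(p) = b0 + b1 * I(exposed)
import numpy as np

def logit_p(b0: float, b1: float, exposed: int) -> float:
    """Probability of disease under the simple logistic model."""
    eta = b0 + b1 * exposed          # I(exposed) is 1 if exposed, else 0
    return 1.0 / (1.0 + np.exp(-eta))

p0 = 2e-4                            # footnote's 2e-4, read as baseline risk
b0 = np.log(p0 / (1 - p0))           # intercept reproducing p0
b1 = np.log(1.06)                    # ~log odds ratio; for rare outcomes
                                     # the odds ratio ~ the relative risk

print(logit_p(b0, b1, 0))   # ~2.00e-4
print(logit_p(b0, b1, 1))   # ~2.12e-4: a minuscule absolute difference
```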

3“Confidence” intervals mean, quite literally, nothing. This always surprises. But everybody interprets them as Bayesian credible intervals anyway. These are the plus or minus intervals around a parameter, giving its most likely values.

21 Comments

  1. Sheri

    You mean the 95% confidence the IPCC is reporting for global warming might not mean what we are being told it does?

  2. Scotian

    If I hike up my shorts won’t that increase the possibility of developing cancer of the albondigas? Good articles, Briggs, and I don’t give out compliments readily. All this reminds me, and at my venerable age everything reminds me of something, of the kind of statistics done by education researchers on newfangled approaches to pedagogy, where they try to convert minor effects into earth-shattering significance. A topic for a future article perhaps? I should also note that newfangled in this case inevitably means approaches that have been tried and have failed for the last hundred odd years – the field has no memory.

    Although I suspect that you are joking, Sheri, I will add for clarification that the 95% confidence the IPCC totes has nothing to do with statistics and merely represents a feeling pulled out of thin air. See the numerous recent articles posted on WUWT.

  3. Sheri

    Yes, Scotian, I was joking. It never hurts to clarify, however, as many people often do not realize that some statists are based on virtually nothing.

  4. Sheri

    Should be statistics. I’m blaming new keyboard for all the bad typing!

  5. Scotian

    Sheri, statists works as well!

  6. DAV

    Nit: “the number of effected citizens remains low. Effected by cancer, that is.”

    Suppose you meant “the number of affected citizens remains low. Affected by cancer, that is.”

    The verb effect effectively (hmmmm….) means cause. The citizens aren’t caused by the cancer are they?

  7. Briggs

    DAV,

    Another typo placed by my enemies!

  8. HARLEYRIDER1978

    Epidemiologists Vote to Keep Doing Junk Science
    http://www.manhealthissue.com/2007/06/epidemiologists-vote-to-keep-doing-junk-science.html

    Epidemiology Monitor (October 1997)

    An estimated 300 attendees at a recent meeting of the American College of
    Epidemiology voted approximately 2 to 1 to keep doing junk science!

    Specifically, the attending epidemiologists voted against a motion
    proposed in an Oxford-style debate that “risk factor” epidemiology is
    placing the field of epidemiology at risk of losing its credibility.

    Risk factor epidemiology focuses on specific cause-and-effect
    relationships–like heavy coffee drinking increases heart attack risk. A
    different approach to epidemiology might take a broader
    perspective–placing heart attack risk in the context of more than just
    one risk factor, including social factors.

    Risk factor epidemiology is nothing more than a perpetual junk science machine.

    But as NIEHS epidemiologist Marilyn Tseng said “It’s hard to be an
    epidemiologist and vote that what most of us are doing is actually harmful
    to epidemiology.”

    But who really cares about what they’re doing to epidemiology. I thought
    it was public health that mattered!

    We have seen the “SELECTIVE” blindness disease that
    Scientists have practiced over the past ten years. Seems the only color they
    see is GREENBACKS; it’s a very infectious disease that has spread through
    the Scientific community with the same speed that any infectious disease
    would spread. And has affected the T(thinking) Cells as well as sight.

    Seems their eyes see only what they’re paid to see. To be honest, I feel
    after the Agent Orange Ranch Hand Study, and the Slutz and Nutz Implant
    Study, they have cast a dark shadow over their profession, being anything
    other than traveling professional witnesses for corporate hire with a lack
    of moral concern for their obligation to science and truth.

    The true “Risk Factor” is a question of: will they ever be able to earn
    back the respect of their profession as an Oath to Science, instead of
    corporate paid witnesses with selective vision?
    Oh, if this seems way harsh, it’s nothing compared to the damage to people’s
    lives that selective blindness has caused!

    The rise of a pseudo-scientific links lobby

    Every day there seems to be a new study making a link between food, chemicals or lifestyle and ill-health. None of them has any link with reality.

    http://www.spiked-online.com/index.php/site/article/13287

  9. HARLEYRIDER1978

    “JOINT STATEMENT ON THE RE-ASSESSMENT OF THE TOXICOLOGICAL TESTING OF TOBACCO PRODUCTS”
    7 October, the COT meeting on 26 October and the COC meeting on 18
    November 2004.

    http://cot.food.gov.uk/pdfs/cotstatementtobacco0409

    “5. The Committees commented that tobacco smoke was a highly complex chemical mixture and that the causative agents for smoke induced diseases (such as cardiovascular disease, cancer, effects on reproduction and on offspring) was unknown. The mechanisms by which tobacco induced adverse effects were not established. The best information related to tobacco smoke – induced lung cancer, but even in this instance a detailed mechanism was not available. The Committees therefore agreed that on the basis of current knowledge it would be very difficult to identify a toxicological testing strategy or a biomonitoring approach for use in volunteer studies with smokers where the end-points determined or biomarkers measured were predictive of the overall burden of tobacco-induced adverse disease.”

    In other words … our first hand smoke theory is so lame we can’t even design a bogus lab experiment to prove it. In fact … we don’t even know how tobacco does all of the magical things we claim it does.

    The greatest threat to the second hand theory is the weakness of the first hand theory.

  10. HARLEYRIDER1978

    This pretty well destroys the Myth of second hand smoke:

    http://vitals.nbcnews.com/_news/2013/01/28/16741714-lungs-from-pack-a-day-smokers-safe-for-transplant-study-finds?lite

    Lungs from pack-a-day smokers safe for transplant, study finds.

    By JoNel Aleccia, Staff Writer, NBC News.

    Using lung transplants from heavy smokers may sound like a cruel joke, but a new study finds that organs taken from people who puffed a pack a day for more than 20 years are likely safe.

    What’s more, the analysis of lung transplant data from the U.S. between 2005 and 2011 confirms what transplant experts say they already know: For some patients on a crowded organ waiting list, lungs from smokers are better than none.

    “I think people are grateful just to have a shot at getting lungs,” said Dr. Sharven Taghavi, a cardiovascular surgical resident at Temple University Hospital in Philadelphia, who led the new study………………………

    I’ve done the math here and this is how it works out with second hand smoke and people inhaling it!

    The 16 cities study conducted by the U.S. DEPT OF ENERGY and later by Oakridge National laboratories discovered:

    Cigarette smoke: a bartender’s annual exposure to smoke rises, at most, to the equivalent of 6 cigarettes/year.

    146,000 CIGARETTES SMOKED IN 20 YEARS AT 1 PACK A DAY.

    A bartender would have to work in second hand smoke for 2433 years to get an equivalent dose.

    Then the average non-smoker in a ventilated restaurant for an hour would have to go back and forth each day for 119,000 years to get an equivalent 20 years of smoking a pack a day! Pretty well impossible ehh!

  11. Jonathan D

    Briggs, an example of how these things might be looked at which might be interesting.

    Someone modelled the effects of exposure to/consumption of a range of chemicals in the usual epidemiological manner (you would have a field day). The chemicals in question are by-products of processes which result in huge health benefits, so no one was particularly keen on dramatising the risks, although obviously the researchers wanted the ‘requires more study’ message to go through.

    The results of the model were a few relative risks that were calculated to be significant. The researchers didn’t leave it at that but (sensibly) gave an indication of how many more cases were most likely to be avoided, if (it was stressed) the figures accurately reflected a causal relationship. They didn’t pretend they could get rid of the troublesome molecules altogether, but gave a fairly small number based on reducing exposure from the modelled levels to a relatively attainable threshold. On the other hand, they did ignore any uncertainty in this number beyond the basic caveats about causality.

    The interesting part is that people representing the body that might be expected to do something about these by-products wanted to remove this interpretation of the relative risks from the report, and were happier just talking about the relative risks. The number of cases was as small as the ones in your posts, but it was felt that the public might object to even one case being portrayed as avoidable, and that speaking of 20 cases which might be avoided would make the possible problem feel more ‘real’ than simply talking about a 1.06 relative risk.

  12. Francois

    I hope Briggs can clear this up for me. He has pointed out, here and elsewhere, very valid shortcomings of epidemiological studies, such as observational studies and so forth. Here is my question: since the confounding factors that seem to cause something cannot be known entirely (residual confounders) and therefore cannot be adjusted for in the analysis of these epidemiological studies, are epidemiological studies worthless? Since they will always produce biased results, should we dump epidemiology? Keep in mind that controlled trials are not always possible, as they are expensive, and might not be ethical. If it was not for observational epidemiology, would we have known that cigarette smoking likely causes cancer? You would agree giving one group cigarettes and another fake cigarettes to test the cancer question might not fly. What is a good epidemiological study, and do you have an example of one you find impressive?

    Thanks Briggs, I love your blog and read it every day.

    Cheers

    F.

  13. Scotian

    Francois,

    The proper use of epidemiology is the original one – the determination of the source of food poisoning and infectious disease outbreaks. The most famous is the first.

    http://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak

    But since we now produce more epidemiologists than there are real problems to solve they have turned their attention to the minuscule, and the field has been corrupted. The current obsession with environmental causes of disease has fueled this trend.

  14. Briggs

    Francois,

    Your question wins for best one asked this month. Short answer: scrap the system, begin anew. I’ll write about this more later. Scotian’s link apt.

    Harley,

    Great links, thanks.

  15. john

    “If it was not for observational epidemiology, would we have known that cigarette smoking likely causes cancer? ..”

    Actually, Francois, we pretty much know that smoking does not cause cancer; in fact, research into high-risk occupational groups such as those exposed to high levels of PAHCs suggests that non-smokers are at a higher risk of developing lung cancer.

  16. Sheri

    It’s fascinating reading the comments on smoking and cancer here. I also read blogs (and write them) on climate change. It is virtually forbidden to even mention tobacco research there because the global warming crowd jumps on it and labels everyone anti-science. We KNOW cigarettes cause cancer. To say otherwise is anti-science. Trying to explain the flaws in the theory is impossible, in part because this just seems to feed the anti-science frenzy. It’s great to be able to discuss the limitations of studies here. Very refreshing.

  17. Bruce

    G’day. What infuriates me is when interest groups use VERY LARGE GRAPHS to show very small changes! A 6 foot high graph showing a 1/4% change looks menacing, but tells you nothing. After seeing “An Inconvenient Truth” I no longer had any fears about climate change. The so-called “proof” was so small as to be statistically insignificant. The same goes for cancer rates from smoking. You have more chance of winning the lottery than getting lung cancer under the average age for your population.
    Look at numbers properly. Compare to the population and work out the percentages yourself. It’s just basic math.
    Bruce

  18. obiwankenobi

    Environmental Cancer-A Political Disease? [Paperback]
    S. Robert Lichter (Author), Stanley Rothman (Author)
