William M. Briggs

Statistician to the Stars!


Study Finds 9-Month-Old Babies Are Racist

MSNBC reports this:

University of Massachusetts Amherst researchers placed sensors on the heads of 9-month-old babies…and measured brain activity when infants were shown pictures of white and black faces expressing emotion. Five-month-old babies could differentiate between happy or sad faces in both races equally. Nine-month-old babies related better to their own race. Also, the 5-month-olds’ brain activity happened in the front of the brain; the older, more racist babies experienced activity in the back.

The paper is in the May issue of the journal Developmental Science. It’s called “Building biases in infancy: the influence of race on face and voice emotion matching” by Margaret Vogel, Alexandra Monesson and Lisa Scott.

The main finding is that babies’ “face recognition skills become tuned to groups of people they interact with the most.” Who would have guessed? The authors also say this: “This developmental tuning is hypothesized to be the origin of adult face processing biases including the other-race bias. In adults the other-race bias has also been associated with impairments in facial emotion processing for other-race faces.” Again, this is partly uncontroversial. Human beings are better at finding subtleties in the familiar.

Anyway, our trio gathered babies together whose “parents reported their infants having had little to no previous experience with African American or other Black individuals.” They did not do the opposite and find babies who never saw white faces. They had 24—count ‘em—5-month-olds and 24 9-month-olds. This makes 24 + 24 = 48, a simple math equation, but important to assimilate because of the authors’ admission that for the behavioral analysis

43 infants were excluded due to experimenter or technical error (n = 8), because they became fussy during testing (n = 1), because they exhibited a side looking bias (n = 14), because they failed to fixate both images during one of the test trials (n = 18), or because the infant was not Caucasian (n = 2).

I leave it as homework to discover what is 48 minus 43. For the electrophysiological analyses, they had 15 5-month-olds and 17 9-month-olds, but these were added to the result from the homework question (how many were 5 months old or 9 months old we are never told); however, 23 of these 15 + 17 = 32 were excluded too. What we have here, in statistical terms, is small sample (get it? get it?).

For the behavioral analysis, babies sat in front of a computer monitor on which were flashed images of smiling black or white women in pairs, some familiar, some not. The amount of time babies looked at one or the other face was measured. For the electrophysiological analyses, babies were subjected to mixtures of “happy or sad” black and white faces and voices.

If the babies didn’t buy any of this manipulation, “the experimenter viewing the infant via live video feed paused the experiment and presented digital images ⁄ sounds of ‘Elmo’ until they fixated the screen.” No word on how often that happened.

Oh, did I mention electrodes were glued to the kids? Indeed, “Trials were discarded from analyses if they contained more than 12 bad channels” from these electrodes. No word on how many were excluded. But they made the babies go through hundreds of trials; an “average of 95.93” for one part of the study, etc.

Now, it appears that they did their t-tests based on the samples they would have had had they not tossed out the data. There are words about this being fairer. Or something. It is just not clear. But since larger samples produce smaller p-values no matter what, they are biasing things in their favor.
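That point about sample size is easy to see in a simulation (a sketch with invented numbers, not the authors’ data): hold a trivially small true difference fixed and the t-test’s p-value marches toward zero as the sample size grows, so analyzing as if the sample were larger than it is tilts the result toward “significance.”

```python
import math
import random
import statistics

random.seed(1)

def welch_t_p(a, b):
    """Two-sample (Welch) t statistic with a normal approximation to the
    two-sided p-value -- rough, but adequate to show the trend."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    t = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

pvals = {}
for n in (20, 200, 2000, 20000):
    a = [random.gauss(0.00, 1) for _ in range(n)]  # group 1: no effect
    b = [random.gauss(0.05, 1) for _ in range(n)]  # group 2: a tiny true shift
    pvals[n] = welch_t_p(a, b)
    print(n, round(pvals[n], 4))
```

With a few dozen babies the difference is invisible; with tens of thousands of observations the same trivial difference earns a publishable p-value.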

Five-month-olds were not racist: the p-value just wasn’t small enough. But it’s the 9-month-olds where the trouble starts. Older babies spent on average “59.2%” of the time looking at white novel faces but just “52.3%” of the time looking at the black novel faces (in the paired-faces experiments). Nine-month-olds also sparkled slightly more via the electrodes than did the 5-month-olds, but the difference is slight and shows up only on some, but not all, electrodes.

And it goes on. But it’s all—I hope the authors can forgive me for saying this—rather dull. The differences here are slight and, as said above, in some sense expected. The authors even manage not to include any speculation in their conclusions about “what it all means.” Overall, a fairly routine paper with a few (standard) mistakes. So why the fuss?

Well, it seems the authors just couldn’t help themselves and in the press release accompanying the publication said,

‘These results suggest that biases in face recognition and perception begin in preverbal infants, well before concepts about race are formed,’ said study leader Lisa Scott in a statement.

‘It is important for us to understand the nature of these biases in order to reduce or eliminate [the biases].’

Just couldn’t help themselves, I suppose. They went from something banal—babies can identify the familiar better than the novel—to something asinine. All that was missing was the suggestion to create a government program to eliminate “racism” in babies.


Thanks to Al Perrella for suggesting this topic.

The Dire And Depressing Implications Of Science As Scientism: Two Introductions

A long introduction…

Scientism is the fallacy that all that is known and all that can be known, can only be known through scientific methods: that which is testable is all that counts. It is the false belief that all areas of inquiry can and should be subject to scientific, i.e. empirical, inquiry.

A prime, and maybe even the sole, reason that some people give for the belief in scientism is that science has been so good at prediction, that prediction has been steadily improving and broadening in scope, and that therefore it is rational to suppose that it will continue to improve and broaden.

This is quite lovely because it contains two nuggets of truth, but it is these nuggets that lead to the fundamental error. And this is because the scientismist (who may or may not be a scientist) substitutes the truth of these nuggets for the truth of the entire statement. The first nugget is “science has been so good at prediction, that prediction has been steadily improving and broadening in scope, and that therefore it is rational to suppose that it will continue to improve and broaden.”

It is rational to believe that scientific progress will continue. It is, however, irrational to believe that because science has progressed it always will, or that it will progress into areas which are not scientific, i.e. that are not empirically testable. It is a small and understandable mistake to suppose that science will always progress, especially when we hear that the iPhone 6 (or is it 7?) is in the works, but it is a major error to suppose that science will be able to answer all non-scientific questions. And it is another offense to say that non-scientific questions do not exist because science can only answer scientific questions: this of course begs the question.

The second nugget of truth, perhaps slightly more subtle, is the appeal to “goodness” of prediction or explanation. Why is it good that predictions match reality? It is good; that is, it is true that it is a good that scientific predictions closely match reality. It is also true that this closeness is also a good reason for us to believe in the truth of the scientific theory which makes the good prediction. That is, it is true that prediction closeness is a reason to believe in the truth of a theory. Lots of truths swimming around here: enough to suggest that since all these parts are true, the parts joined together are true, i.e. that scientism is true.

But to say that closeness of prediction is good or that closeness is a good reason to believe in a theory are both non-scientific statements. We can’t know that prediction closeness is a good by appealing to any empirically testable thing. These are metaphysical beliefs. They may be axiomatic or they may be derivable from simpler axioms, but they are not prone to measurement.

This is only a small proof that scientism is false and that faith in scientific progress is often misplaced. And, as suggested from the beginning, it is only a long-winded introduction to a series of meaty, masculine, must-read posts by Edward Feser as he reviews Alex Rosenberg’s uber-scientistic The Atheist’s Guide to Reality, perhaps the best tract in support of scientism that exists.

David Stove often praised certain philosophers for being wrong and for making mistakes so clearly. Feser says as much about Rosenberg whom he praises for understanding the implications of the radical scientism and atheism he preaches, the major one of which is “nice nihilism”. Of course, Feser will explain better than I can that the desire to be “nice”, nihilistically or not, is a moral concept, and so Rosenberg defeats himself as he steps into the ring.

Feser’s 10-part series cannot be missed (and I’ll know if you have, for there will be homework in future posts). Incidentally, probably in June, I’ll be reviewing his other must-have work, The Last Superstition.

The second part of the introduction is for Michael Flynn, a science-fiction author and blogger at The TOF Spot. Flynn was kind enough to link to our post yesterday, where we began the Official New Mismeasure of Man list. Flynn has some interesting things to say about the progress and hope for progress in science, and just what this means in terms of scientism. He comes to the conclusion that “Modern science is under attack, not by creationist outsiders but by academic insiders.”

Get reading!

Why Republicans Deny Science—And Reality: Request For Help

Update The link to Sam Harris’s abysmal study is now provided.

That is the (modified) subtitle of the sanctimoniously self-satisfied Chris Mooney’s new book, “The Republican Brain: The Science of Why They Deny Science–and Reality.” I just this morning learned of this latest entry of dismal statistics from Jonah Goldberg’s NRO column (see also this).

Regular readers will be long familiar with the parade of faulty papers which claim that Republicans, conservatives, and Christians are stupid, unthinking, easily led, uncompassionate, and set off on their sad road by delusional beliefs in God or because they once attended a Fourth of July parade (yes, really).

Apparently Mooney, who also wrote The Republican War on Science, has compiled these studies and come to the conclusion that the sheer number of them proves that he and his fellow leftists are just better creatures. As in wired better, genetically superior, purer souls by birth—made of the Right Stuff, worthy of what comes to them, more worthy, perhaps, of life.

Doubt me? From the blurb: “A significant chunk of the electorate, it seems, will never accept the facts as they are, no matter how strong the evidence.” Just you ponder what this sentence implies.

I have not yet read the book: my information comes from Mooney’s other writings, the comments to his book on Amazon (which amusingly pairs Mooney’s book with Rachel Maddow’s Drift), and my long exposure to the same genre of papers which Mooney uses as the basis for his book.

I have been meaning for quite some time to compile a list of the posts I wrote exposing the flaws, biases, faulty statistics, unwarranted conclusions, rampant, wild-eyed speculation, bizarreries, and even gaucheries of these papers. But the key word is “time”: I haven’t any. Yet I have begun the list today here (I checked back to 30 June 2010). I’ll later reproduce it under my Start Here tab.

Can I ask your help in reminding me of the relevant posts? Write them in the Comments section or send me an email. Send me other criticisms as well, if you can. Or send me links to news reports or papers which make claims along the Moonian line.

Send this list (a link to this post) to anybody in thrall to Mooney or who thinks well of his ideas.

The New Mismeasure Of Man: Official List

(Say: good title for a book.)

Supplementary papers like Academic Psychologists Over-Rely On WEIRD People: Overconfidence Results, Women Spot Snakes Faster Before Their Periods, and How To Present Anything As Significant are okay, but not the main thrust. Eschew global warming unless the focus is on what kind of people reasonable, clear-thinking skeptics (i.e. “deniers”) are.

It’s worth reading those reviewers mentioned above. One reviewer claims that one of the main points of Mooney’s book is “The liberal/conservative divide has widened over the past few decades not only because of the conservative revolution of the 1970s-80s, but also because of the growth of cable news and the Internet. The new sources allow conservatives to have easy access to like-minded thinkers and a wide array of ‘experts’ to back up their erroneous claims and create a new reality that conforms to their worldview.”

Yet, somehow, leftists are immune to this kind of mindless herding. Sheesh.

Update Goldberg posted this video (hat tip to Nate Winchester) yesterday (I didn’t know about this study, but it’s one for the books).

Leon Panetta Unleashes U.S. Army To Battle Carbon Dioxide

Sgt. Rock battles climate change!

“Sir!” The lieutenant, clearly agitated, rushed into the staff room and snapped to attention.

“What is it, Mann? What’s the word from the field?” asked the general.

“We’re surrounded, sir!” The lieutenant lost his composure. “We’ve tried everything but the enemy is increasing. It’s all around us!” The lieutenant began to circle the room aimlessly, the last of his sanity sadly ebbing away.

The general nodded, his chin sinking to his chest. “I feared as much. Major Schmidt, put me on the hotline.”

Schmidt waddled to the corner where the red phone sat gleaming. It was a direct line to the Pentagon that connected automatically whenever the handset was lifted. Schmidt knew what it meant for the general to make this call. He hesitated for a moment, hoping to forestall the inevitable. But he knew his duty. He picked up the phone.

“Secretary Panetta? General Hansen, sir.” Despite the overwhelming sadness engulfing his heart, his voice did not quaver.

“I’ve been expecting your call for hours, Hansen. The press—well, most of it—is waiting anxiously for me to tell them what to write. They’re growing restless. I’ve warned them that the ‘area of climate change has a dramatic impact on national security. Rising sea levels, severe droughts, the melting of the polar caps, the more frequent and devastating natural disasters’ all threaten the peace. This is war at its bloodiest. But we have, until China cashes in on the bonds they purchased from us, the strongest military on the face of this benighted planet. If any nation can battle climate change, it’s us.”

“It’s worse than we feared, sir,” began the general. “Nothing we’ve tried has been effective. And, sir, this is not all. Every weapon we throw at the enemy makes him stronger. Whatever we do is countered instantly. He only becomes stronger. It’s almost as if he knows what we’re going to do in advance. He…” The general’s voice took on a hysterical tinge.

“Snap out of it, man! Just tell me what happened.”

“Sir, we started by trying to shoot the carbon dioxide molecules from the sky. Reports indicate we’ve expended millions of rounds. But it’s no use. Our bullets are just too big. They sail harmlessly through the air.

“The plan to send the Army of M1 Abrams and Hummers on the offensive backfired. We drove and drove and drove but the enemy would not fight. We launched more missiles, rockets, and grenades than I can count. All of them fizzled. The Air Force sent wave after wave of fighters. They shot all they had. Nothing.

“It was only then that our scientists were able to show that everything we did—everything—only increased the number of the enemy. Regular munitions didn’t work, would never work.”

“This is bad, general. Very bad.”

“Yes, sir. A ray of hope was provided by British Intelligence. They worked out a scheme in which soldiers would take to the field and capture the enemy in hand-held non-EM-blocking polymerized containment devices—”

“—What in the world…?”

“Plastic baggies, sir. The plan was to march to the enemy’s hot zone, open the baggies and expose them to the air and then, before they had a chance to escape, to close the baggies and seal the enemy within.”

“Brilliant!” Secretary Panetta marveled.

“No, sir; not brilliant. Futile. Oh, I’ll admit the plan worked at first. Analysis showed that after the first twenty-four hours, the enemy was actually reduced by 0.00000000000034%.” The general sighed.

“Then what’s the problem? You didn’t run out of ammunition?”

“Plenty of ammunition, sir. But we ran out of storage space. We’ve filled all our hangars, offices, barracks, cupboards, closets, every space we could think of, but there just isn’t anywhere left to put the prisoners. But, sir, to anticipate your thoughts, even if we could find a place to put them, we can’t capture enough of the enemy fast enough. And paradoxically, the harder the soldiers work at putting these vile molecules out of commission, the more of them appear. It’s almost demonic.”

The line was silent for a minute. Finally, Secretary Panetta spoke. “You’re not saying what I think you’re saying, are you, general?”

“Sir, I…I can hardly speak the words. But a good general knows when a battle is lost.”

“I can’t tell the press the battle is lost. Our dear leader has promised the seas would recede and the skies would clear. I’ll do the only thing I can do.”

“What’s that, sir?”

“I’ll ask for more money to study the problem.”

Statistics Proves Same Drug Both Causes And Does Not Cause Same Cancer

No, the title of today’s post is not a joke, even though it has often been used that way in the past. The title was inspired by yesterday’s Wall Street Journal article “Analytical Trend Troubles Scientists.”

Thanks to the astonishing fecundity of the p-value and our ridiculous practice of reporting on the parameters of models as if those parameters represented reality, we have stories like this:

In 2010, two research teams separately analyzed data from the same U.K. patient database to see if widely prescribed osteoporosis drugs [such as fosamax] increased the risk of esophageal cancer. They came to surprisingly different conclusions.

One study, published in the Journal of the American Medical Association, found no increase in patients’ cancer risk. The second study, which ran three weeks later in the British Medical Journal, found the risk for developing cancer to be low, but doubled.

How could this be!

Each analysis applied a different methodology and neither was based on original, proprietary data. Instead, both were so-called observational studies, in which scientists often use fast computers, statistical software and large medical data sets to analyze information collected previously by others. From there, they look for correlations, such as whether a drug may trigger a worrisome side effect.

And, surprise, both found “significance.” Meaning publishable p-values below the magic number, which is the unquestioned and unquestionable 0.05. But let’s not cast aspersions on frequentist practices alone, as problematic as these are. The real problem is that the Love Of Theory Is The Root Of All Evil.

Yes, researchers love their statistical models too well. They cannot help thinking reality is their models. There is scarcely a researcher or statistician alive who does not hold up the parameters from his model and say, to himself and us, “These show my hypothesis is true. The certainty I have in these equals the certainty I have in reality.” Before I explain, what do other people say?

The WSJ suggests that statistics can prove opposite results simultaneously when models are used on observational studies. This is so. But it is also true that statistics can prove a hypothesis true and false with a “randomized” controlled trial, the kind of experiment we repeatedly hear is the “gold standard” of science. Randomization is a red herring: what really counts is control (see this, this, and this).
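How do two teams get opposite answers from one database? A toy version of the mechanism (with counts I invented for illustration; nothing from the actual U.K. data) is Simpson’s paradox: a crude comparison and an age-stratified comparison of the very same table can point in opposite directions.

```python
# Hypothetical counts: (group, age) -> (cancer cases, total patients)
counts = {
    ("drug", "old"):      (30, 1000),
    ("drug", "young"):    (1, 9000),
    ("no_drug", "old"):   (5, 100),
    ("no_drug", "young"): (20, 9900),
}

def rate(cases, total):
    return cases / total

# Analysis A: crude rates, ignoring age entirely.
crude = {}
for grp in ("drug", "no_drug"):
    cases = sum(counts[(grp, age)][0] for age in ("old", "young"))
    total = sum(counts[(grp, age)][1] for age in ("old", "young"))
    crude[grp] = rate(cases, total)

# Analysis B: the same table, stratified by age.
strat = {age: (rate(*counts[("drug", age)]), rate(*counts[("no_drug", age)]))
         for age in ("old", "young")}

print("crude:", crude)        # drug looks worse overall...
print("stratified:", strat)   # ...but better within every age group
```

Two defensible “methodologies,” one table, two contradictory headlines. Which is right depends on information that is not in the table.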

Concept 1

There are three concepts here that, while known, are little appreciated. The first is that there is nothing in the world wrong with the statistical analysis of observational data (except that different groups can use different models and come to different conclusions, as above; but this is a fixable problem). It is just that the analysis is relevant only to new data that is exactly like that taken before. This follows from the truth that all probability, hence all probability models (i.e. statistics), is conditional. The results from an observational study are statements of uncertainty conditional on the nature of the sample data used.

Suppose the database is one of human characteristics. Each of the human beings in the study has traits that are measured and a near infinite number of traits which are not measured. The collection of people which makes up the study is thus characterized by both the measured traits and the unmeasured ones (which include time and place, etc.; see this). Whatever conclusions you make are thus relevant only to this distribution of characteristics, and only to new populations which share—exactly—this distribution of characteristics.

And what is the chance, given what we know of human behavior, that new populations will match—exactly—this distribution of characteristics? Low, baby. Which is why observational studies of humans are so miserable. But it is why, say, observational astronomical studies are so fruitful. The data taken incidentally about hard physical objects, like distant cosmological ones, is very likely to be like future data. This means that the same statistical procedures will seem to work well on some kinds of data but be utter failures on others.
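The dependence on the sample’s distribution can be sketched in a few lines (all numbers invented): fit a model where the data live and it looks excellent; score the identical model on a shifted population and it falls apart, even though nothing about the model changed.

```python
import random

random.seed(2)

# Invented "truth": outcome = x**2 plus a little noise. We fit a straight
# line on x in [0, 1], then score the same line on a shifted population.
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def mse(intercept, slope, xs, ys):
    return sum((y - (intercept + slope * x)) ** 2
               for x, y in zip(xs, ys)) / len(xs)

train_x = [random.uniform(0, 1) for _ in range(500)]
train_y = [x ** 2 + random.gauss(0, 0.02) for x in train_x]
intercept, slope = fit_line(train_x, train_y)

shift_x = [random.uniform(2, 3) for _ in range(500)]   # new, shifted population
shift_y = [x ** 2 + random.gauss(0, 0.02) for x in shift_x]

mse_train = mse(intercept, slope, train_x, train_y)
mse_shift = mse(intercept, slope, shift_x, shift_y)
print(mse_train, mse_shift)  # the second is orders of magnitude larger
```

Distant galaxies stay put; people don’t. That is the whole difference between astronomy and sociology here.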

Concept 2

Our second concept follows directly from the first. Even if an experiment with human beings can be controlled, it cannot be controlled exactly or precisely. Too many circumstances or characteristics will remain unknown to the researcher, or the known ones will not be subject to control. Even the best-designed experiment with human beings is not good enough to make its conclusions relevant to new people, because again those new people will be unlike the old ones in some ways. And I mean, above and here, in ways that are probative of or relevant to the outcome, whatever that happens to be. This explains what a sociologist once said of his field: that everything is correlated with everything.

Concept 3

If you follow textbook statistics, Bayesian or frequentist, your results will be statements about your certainty in the parameters of the model you use and not about reality itself. Click on the Start Here tab and look to the articles on statistics to read about this more fully (and see this especially). And because you have a free choice in models, you can always find one which lets you be as certain about those parameters as you’d like.
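The “free choice in models” is cheap to demonstrate (pure simulation; no real study behind it): make the outcome pure noise, scan a couple hundred candidate predictors, and keep the best. A “significant” parameter nearly always falls out, and the certainty it advertises is certainty about the model, not reality.

```python
import math
import random
import statistics

random.seed(3)

def corr_p(xs, ys):
    """Rough two-sided p-value for a correlation via the Fisher-z normal
    approximation -- fine under the null for this sketch."""
    n = len(xs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    r = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (sx * sy)
    z = r * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n = 100
y = [random.gauss(0, 1) for _ in range(n)]       # outcome: pure noise
ps = [corr_p([random.gauss(0, 1) for _ in range(n)], y)
      for _ in range(200)]                       # 200 candidate "models"
print(min(ps))  # almost certainly below 0.05, despite zero real signal
```

Report only the winning specification and the paper writes itself.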

But that does not mean, and it is not true, that the certainty you have in those parameters translates into the certainty you should have about reality. The certainty you have in reality must always necessarily be less, and in most cases a lot less.

The only way to tell whether the model you used is any good is to apply it to new data (i.e. data never seen by you before). If it predicts that new data well, then you are allowed to be confident about reality. If it does not predict well, or you do not bother to collect statistics about predictions (which is the case in 99.99% of all studies outside physics, chemistry, and the other hardest of hard sciences), then you are not allowed to be confident.

Why don’t people take this attitude? It’s too costly and time consuming to do statistics the right way. Just look how long it takes and how expensive it is to run any physics experiment (about genuinely unknown areas)! If all of science did their work as physicists must do theirs, then we would see about a 99 percent drop in papers published. Sociology would slow to a crawl. Tenure decisions would be held in semi-permanent abeyance. Grants would taper to a trickle. Assistant Deans, whose livelihoods depend on overhead, would have their jobs at risk. It would be pandemonium. Brrr. The whole thing is too painful to consider.


© 2014 William M. Briggs
