
People Growing Dumber Or Research Growing Slacker?

Hipp Chronoscope

If ever a paper was in danger of refuting itself by its title, this is it. “Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time”, in the peer-reviewed journal Intelligence, says that human beings have been growing steadily stupider since the late Nineteenth Century.

If so, one wonders if that is true of the press who uncritically reported the work as well.

We might guess college teachers Michael Woodley, Jan te Nijenhuis, and Raegan Murphy, the authors of this work, had grown suspicious that the decline in student intelligence was part of a larger effect. “Is everybody,” they must have wondered, “surer of their own abilities with less and less cause? If so, how can we prove it?”

How about unambiguously measuring IQ of a fair sample (not neglecting the dark corners of the earth) of similarly aged human beings each year for many years and then showing how the distribution of scores changes through time?

Too tough, that. Better to measure how long it takes people to swat a paddle after they hear a bell.1

The idea is that reaction time is correlated with intelligence, or so people claim. Quicker times are not always associated with bigger brains—think of poor Admiral Nelson!—but the association kinda, sorta holds.

One thing we don’t want to do is just look at a numerical average of scores. That masks far too much information. Think about it. It could be that, in a certain time and place, a lot of folks test slow-stupid and a similar number score fast-smart; a sort of U-shape in the distribution of scores. The numerical mean of that U-shaped distribution would be identical to the mean of a different group of folks who all scored about the same.

Better to look at the distribution—the full range, the plus and minuses—than just the numerical mean.
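Here is a toy demonstration of the point, with made-up reaction times (the numbers are mine, not the authors’):

```python
import numpy as np

rng = np.random.default_rng(42)

# Group A: a U-shaped mix -- half fast-smart, half slow-stupid.
group_a = np.concatenate([
    rng.normal(180, 10, 5000),   # fast responders (ms)
    rng.normal(280, 10, 5000),   # slow responders (ms)
])
# Group B: everybody clusters near the middle.
group_b = rng.normal(230, 10, 10000)

print(f"mean A: {group_a.mean():.1f} ms   mean B: {group_b.mean():.1f} ms")
print(f"std  A: {group_a.std():.1f} ms   std  B: {group_b.std():.1f} ms")
# The means agree to within a hair; the shapes could not be more different.
# Compare only the means and you would swear the two groups were identical.
```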

Our authors looked at the numerical mean of reaction times. But don’t hold it against them. The mistake they made is so ubiquitous that it is not even known to be one.

Anyway, the authors gathered their data from a study by another man, one Irwin Silverman2, who searched through the archives and found papers which experimented on reaction times. The average times from most of these papers (our authors tossed some out) were used in the current study.

The machine used back in Francis Galton’s day, the Hipp chronoscope, is pictured above. Modern-day chronoscopes do not look the same. A show of hands, please: how many think that measuring the same man first on the Hipp chronoscope and then again on a computer will result in identical scores? They’d have to be identical, else people (to whom electricity was new and freaky) who hit Morse-code paddles in 1888 would not be comparable to people (who grew up with video games and cell phones) clicking a mouse in 2010.

How did our authors adjust for these differences? Shhhhh.
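Here is a sketch, with invented numbers, of why the question matters. Suppose true reaction times never changed at all, but modern gear simply registers responses 40 ms differently than the Hipp chronoscope did:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented: the true mean RT is flat at 190 ms in every era.
years = np.array([1885, 1889, 1893, 1941, 1975, 1990, 2000, 2008])
true_rt = rng.normal(190, 5, len(years))

# Hypothetical instrument offset: post-1950 apparatus reads +40 ms.
measured = true_rt + np.where(years > 1950, 40, 0)

fit = stats.linregress(years, measured)
print(f"slope: {fit.slope:.2f} ms/year, p = {fit.pvalue:.4f}")
# The people never changed; only the machines did. The line slopes anyway.
```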

Figure 1

The plot shows where the magic happened. Each dot represents the average reaction time for a reaction-time study (y-axis) in the year the study was conducted (x-axis). Small dots are studies with fewer than 40 folks; larger open circles are those with more than 40.

See the way the line, made by a fancy statistical model, slopes upward? That line says people are growing stupider. Never mind you have to squint to see it. Never mind reaction time isn’t IQ. Never mind the enormous gaps in time between the old and new studies. And never mind that if you extrapolate this model it proves that Eighteenth Century denizens would have all bested Einstein at chess and that those fifty years from now will listen exclusively to NPR. Concentrate instead on the fact that a wee p-value has been produced; therefore the authors’ hypothesis is true.
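For the curious, here is the same sort of fit on invented dots (none of these numbers are the authors’; only the shape of the exercise is theirs):

```python
import numpy as np
from scipy import stats

# Invented study means echoing the figure: old cluster, long gap, new cluster.
years = np.array([1885, 1889, 1894, 1941, 1975, 1993, 2004, 2008])
mean_rt = np.array([184, 187, 190, 212, 238, 252, 258, 255])

fit = stats.linregress(years, mean_rt)
print(f"slope: {fit.slope:.2f} ms/year, p = {fit.pvalue:.6f}")  # wee, on cue

# Now run the line backwards and forwards, as mocked above:
for year in (1700, 2100):
    print(f"{year}: {fit.intercept + fit.slope * year:.0f} ms")
# The backcast lands well below any physiologically possible simple reaction
# time; the forecast has our grandchildren swatting the paddle at leisure.
```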

Our authors apply a generous coating of theory (“dysgenic model”) to explain this crisis. Silverman disagrees with “dysgenics” and says it’s because of “the buildup of neurotoxins in the environment and by the increasing numbers of people in less than robust health who have survived into adulthood”.

My theory is that instead of IQs shrinking, people are increasingly able to find patterns in collections of meaningless dots.


Update It occurred to me after chatting with Stijn de Vos (@StijnDvos) that if this research were true, we should hang out by the Whack-A-Mole to discover future Nobel Prize winners.

———————————————————————————

Thanks to John Kelleher for alerting us to this topic. See comments here, too.

1I have in mind the “sobriety test” taken by Dr Johnny Fever, WKRP; a clip of which I could have shown except for the massive greed of the recording industry; but never mind, never mind.

2Irwin Silverman, “Simple reaction time: it is not what it used to be”, American Journal of Psychology 123.1 (Spring 2010): 39.

Categories: Statistics

16 replies »

  1. As someone who never mastered any of the eye-hand coordination tasks except on the very micro level, I was surprised to find that this might indicate I am not very bright. I do wonder how researchers come to these ideas.

    In the comment section at the link given, there is a comment about lead causing this. Actually, that is very easy to measure and does not involve any presuppositions concerning coordination and IQ. It’s probably not as interesting a study–too black and white.

  2. According to the press article, the researchers used reaction time as a proxy:

    “Each study gauged participants’ so-called visual reaction times — how long it took them to press a button in response to seeing a stimulus. Reaction time reflects a person’s mental processing speed, and so is considered an indication of general intelligence.”

    Assuming that reaction time is a valid proxy for intelligence, the researchers would necessarily have to account for the likely fact that modern study subjects are less physically robust*–so even if they BEGAN reacting at the same time as those Victorian era folks did, they would still take a tiny bit longer to physically move the same distance to complete a measurement (e.g. to reach & then press a button) due to inferior physical conditioning. Such an adjustment seems impossible to make….but these particular researchers appear up to that challenge….

    * A host of data supports this–increased obesity, poorer diets, etc. etc. etc.

    What’s really interesting is Briggs’ capacity to consistently, seemingly day after day, find research studies so inane….

  3. Ken,

    These studies are all highlighted in the national and international press. Says a lot about our culture’s willingness to believe dumb things.

  4. Should the title of the paper be, “Were the Victorians cleverer than we are?….”

  5. It is obvious that people with faster reaction times are more intelligent. Everyone knows that sprinters, basketball and football players and other athletes are on average more intelligent than the doctors, lawyers, and professors of the world, including our estimable host. I mean, seriously, who would you want coming off the starting blocks in your 400M relay in the Olympics? Carl Lewis or Briggs?

    /sarc

  6. @prole

    What do you mean “would?” Hipster Colonscope has been pushing out great tunes for years.

  7. A better fit to the dots would be a curve where the initial part gradually slopes upward from 1888 to 1984, never reaching above 208 on the y-axis, and then turns steeply upward from 1984 to the present, where the y-axis value passes 304.

    This type of curve is consistent with the hockey-stick type curve, from which we may posit that global warming causes lower IQs.
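    In the spirit of the analogy, here is a sketch of such a hockey-stick fit on invented dots (all numbers made up for illustration):

    ```python
    import numpy as np

    # Invented study means (ms): flat-ish early, steep after the 1984 "hinge".
    years = np.array([1888, 1900, 1930, 1960, 1984, 1995, 2005, 2013])
    rt = np.array([185, 188, 195, 202, 208, 250, 280, 305])

    # Piecewise-linear fit: rt = b0 + b1*year + b2*max(0, year - 1984)
    hinge = np.maximum(0, years - 1984)
    X = np.column_stack([np.ones_like(years), years, hinge])
    b0, b1, b2 = np.linalg.lstsq(X, rt, rcond=None)[0]

    print(f"pre-1984 slope:  {b1:.2f} ms/year")
    print(f"post-1984 slope: {b1 + b2:.2f} ms/year")
    # With enough knobs the dots will confess to anything, warming included.
    ```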

  8. This study immediately caught my eye because it looked at reaction times, which are correlated to mental processing speed, which is correlated to intelligence. Now, this caught my eye because I once took a fancy-pants professional IQ test of sorts. The doctor, showing me my results, said that my scores were rather high, but he did not calculate an actual value for my IQ because my measured mental processing speed was at or near “retardation” level (which is a word that is rather apt for mental processing speed). Clearly, this man at least thought that the slow processing speed bore on my scores, not on my actual intelligence. The slower processing speed dampened my *score*, not my *intelligence*. So, this slowing of processing speed, even if it happened, might have nothing to do with actual intelligence, although it may have to do with measured intelligence, at least for him.

  9. Are People Growing Dumber Or Research Growing Slacker? (II)

    Was some historical population, like the Victorians, smarter than we are on average? It’s at least a fascinating and legitimate question. Matt thought his readers might be interested in my informal take on the broader context of this question and the study that Matt wrote about recently.

    1. Matt awesomely noted that the authors just threw away data in their possession and instead chose to analyze abstractions like means (and parameters that means serve as proxies for). Not good for your study when you do that.

    2. Awesome also: the picture of the Hipp Chronoscope and Matt’s noting that no calibration against modern reaction time instrumentation seems evident. In mild defense of the study authors, they were simply using another guy’s study on Victorian vs. modern reaction time (henceforth RT). But it’s still their fault if they used those results credulously: GIGO and all that.

    3. To operationalize “Were the Victorians ‘smarter’?,” the study authors implicitly use IQ. To say this once: it has never been true that IQ tests are bunk. IQ is indeed a pretty good proxy for ‘smarter’. Over and over and over, IQ has been shown to predict better job performance, for example. Different forms of a test that correlates highly with standard IQ tests have been given to recruits in the US armed forces for many years. Higher scores on that test are demonstrably good (not perfect) at predicting ability to master driving a tank and a whole host of other military and real-world skills.

    4. But IQ tests weren’t even invented when the Victorians lived. However, there is literature documenting a mild correlation (maybe about -0.3) between RT and IQ, which the authors duly cite. Lower (that is, quicker) RT, higher IQ. (Richard Feynman did play a mad bongo). And the key: there were some Victorian RT studies. So, in theory, you might be able to compare RTs across time, and nudge the truth probability of the statement “They were smarter” a little above or below coin-toss odds. Cool.

    5. Note: even with an impeccable study, you’d still have a proxy (RT) of a proxy (IQ) for ‘smarter’. And this was far from an impeccable study. So going in, we should expect raising or lowering the probability of “They were smarter” only a little above or below a coin toss with this study.
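    To put a rough number on “a proxy of a proxy”: a correlation of about -0.3 means RT explains only about 9% of the variance in IQ, and chaining proxies attenuates things further. A minimal simulation (the RT–IQ correlation is the one cited above; the IQ–‘smarts’ correlation of 0.7 is my invention):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # 'smarts' is the unobserved target; IQ proxies it; RT proxies IQ.
    smarts = rng.standard_normal(n)
    r_iq, r_rt = 0.7, -0.3   # 0.7 invented; -0.3 from the cited literature
    iq = r_iq * smarts + np.sqrt(1 - r_iq**2) * rng.standard_normal(n)
    rt = r_rt * iq + np.sqrt(1 - r_rt**2) * rng.standard_normal(n)

    print(f"corr(RT, IQ)     = {np.corrcoef(rt, iq)[0, 1]:+.2f}")      # ~ -0.30
    print(f"corr(RT, smarts) = {np.corrcoef(rt, smarts)[0, 1]:+.2f}")  # ~ -0.21
    print(f"share of 'smarts' variance RT carries: "
          f"{np.corrcoef(rt, smarts)[0, 1] ** 2:.0%}")
    # Roughly 4%: a dim signal even before any instrument or sampling problems.
    ```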

    6. I now want to mention the seriousness of their result, if it is accurate. It’s a HUGE effect that they’re pointing to. I haven’t seen the final published article, but a draft said there’d been a loss of FOURTEEN IQ points since Victorian times. That’s nearly one whole SD!

    That would not only mean that an average adult now would have the mental age of a Victorian eleven-year-old (OK, maybe that’s not shocking enough), but also, since IQ is Gaussian, or at least Gaussian-oid, in distribution, then the right tail back then was much, much, much ‘rightier’ than now.

    Even with increased population, there would be many fewer super-productive super-geniuses now, and perhaps even more importantly, very many fewer available ‘smart fraction’ people to do the moderate- to moderate-heavy cognitive lifting that makes an advanced technological economy go.
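    The tail arithmetic, under the usual mean-100, SD-15 Gaussian convention and taking the draft’s 14-point loss at face value, looks like this:

    ```python
    from scipy.stats import norm

    SD = 15
    # Fraction above IQ 130, Victorian mean 114 vs. modern mean 100.
    victorian = norm.sf(130, loc=114, scale=SD)
    modern = norm.sf(130, loc=100, scale=SD)
    print(f"Victorians above 130: {victorian:.1%}")   # ~14.3%
    print(f"Moderns above 130:    {modern:.1%}")      # ~2.3%
    print(f"ratio: {victorian / modern:.1f}x")        # ~6x

    # Push the cutoff out to 145 and the disparity balloons:
    print(f"at 145: {norm.sf(145, 114, SD) / norm.sf(145, 100, SD):.0f}x")
    ```

    That is what “much, much, much ‘rightier’” means in numbers: the further out the tail, the more lopsided the ratio.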

    7. I almost can’t stress enough how extraordinary a result it is, with pervasive and profound implications if true. But the normal heuristic for extraordinary results is to demand extraordinary evidence, and such evidence is not forthcoming from this study.

    8. So: a fascinating question, and a potentially cool way (RT) to look into it, but a study that’s certainly not the best, with results that are WAY too big to be believed without a great deal more substantiation.

    9. And yet — the study is still potentially cool, because it also signals a possibility of investigating the ‘dysgenic model’.

    10. It’s widely assumed that for a long time, higher IQ was advantageous reproductively. On average, being smarter meant you could better figure out ways that your kids could survive life’s rigors and reproduce. But (it is further contended) at some point after the Industrial Revolution, that started not to be true. And the study authors do point to existing literature (of whatever quality) that supports this view.

    11. So here, roughly, is the ‘dysgenic’ model. There is demographic evidence of a worldwide, long-term decrease in fertility, particularly among higher IQ women. For a hundred years at least, higher IQ women appear to have been having proportionally fewer babies (that survived and grew up to reproduce) than their peers. I wouldn’t say that the demographic evidence for this effect is completely incontrovertible, but you just don’t see serious demographers noting “Wow! look at how many babies high IQ women are having!”, and you see a lot — a lot — of demographic (and anecdotal) evidence to the contrary.

    12. And since IQ is sizably heritable (though heritability estimates range from 20 to 80%), to put this in the most dramatic terms, elementary evolutionary theory would posit that high IQ is no longer a feature; it’s now a bug.

    13. That is, by the (provisional demographic) facts, high IQ no longer is optimal for reproductive success. Instead, (somewhat) lower IQ is now optimal, which is not a theory or a moral argument but simply a restatement of the facts through an evolutionary lens. Lower IQ women not only outnumber higher IQ women. They are factually having proportionally more babies. And therefore we should see proportionally fewer higher IQ babies over time and thus see a population IQ decrease over time. This is the ‘dysgenic’ model.
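    A back-of-envelope version of that argument is the breeder’s equation from quantitative genetics, R = h² × S: the response per generation equals heritability times the selection differential. The selection differential below is pure invention; the heritability range is the one quoted in point 12:

    ```python
    # Breeder's equation sketch: R = h^2 * S.
    # S = -2 IQ points per generation is a hypothetical placeholder for the
    # fertility-weighted parental IQ shift; nobody has handed us this number.
    S = -2.0
    for h2 in (0.2, 0.5, 0.8):
        per_gen = h2 * S
        # ~4 generations per century is the conventional rough figure.
        print(f"h^2 = {h2}: {per_gen:+.1f} pts/generation, "
              f"{4 * per_gen:+.1f} pts/century")
    # Even the most generous of these invented numbers falls short of the
    # study's claimed ~14 points since Victorian times; the effect is enormous.
    ```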

    14. Now here’s a key point: this implicit prediction of the ‘dysgenic model’ is apparently NOT ACCURATE. If we’re ‘dysgenic model’ proponents, we’re quite annoyed that actual measured IQ scores have gone up, not down, over time.

    15. So an easy, but possibly incorrect, way to view this is that the ‘dysgenic’ model people found a measurement that they liked (a supposed increase in RT over time), and are flogging that instead of their lying eyes. To see why it’s not automatically a self-serving sophism of the dysgenic model crowd, we have to do a little digging into the Flynn effect.

    16. In the 1980s psychometrician James Flynn (and before him Richard Lynn regarding the Japanese) noted a steady, linear, world-wide increase in measured IQ scores for roughly the last 100 years. Which led psychometricians to wonder if people were actually getting smarter pretty much all the time for the last 100 years, which (put that way), seems non-obvious, at least.

    17. Many explanations for this ‘Flynn effect’ have been advanced. Explanations have run all the way from some kind of ongoing systematic measurement error, to the idea that measured IQ can only measure phenotypic IQ (nurture plus nature), and that a more IQ-nurturing environment (better nutrition, universal education, etc.) is the explanation. However, the evidence for any particular explanation has not been decisive, so there is no consensus. But implicit in almost all of them is the idea that its cause is some kind of IQ-nurturing environmental factor or factors.

    18. Why not genetic explanations? A subtlety: by definition, the confluence of factors that produces higher IQ is improbable. That’s what being nearer a tail of a probability distribution means. Knowing this, serious psychometricians shy away from saying, (1) We’re seeing higher (phenotypic) IQ. (2) A genetic improbability occurs throughout the population, not just once, but steadily, and in only one direction. (3) Therefore, higher (genetic) population IQ. Especially when the demographic evidence is not exactly congenial.

    19. In sum, there’s very, very little psychometric evidence for the ‘dysgenic model’ at present. Indeed, if anything, we see the opposite. But the demographic evidence for the dysgenic model seems equally compelling. The best demographic evidence appears to strongly conflict with the best psychometric evidence. And that’s interesting.

    20. Maybe IQ has no genetic basis after all, or maybe the dysgenic IQ effect is for the moment being masked by the Flynn effect (and some recent psychometric research documents a slowing down or even reversing of the Flynn effect in some of the most advanced modern populations). But who knows?

    21. This is the intellectual climate wherein this little, hardly magisterial, and (one could even argue) shoddy study fits. I mean — fourteen IQ points? But the real scientific picture at the moment is still much more puzzling than it would appear at first glance. It is at least possible that a much more rigorous version of the study authors’ research could in fact give a clearer picture.

    22. Finally, people do wonder if the Victorians, or the generations of Greeks that produced Plato, Aristotle, Archimedes, and Euclid, were in fact smarter than we are on average. Some kind of proxy for ‘smarter’ is all we’ll ever be able to use, and RT is a plausible and clever proxy. Maybe it’s nothing, but maybe, given time, it could be something.

  10. I think people are getting dumber, for many reasons. We don’t move much, we don’t develop practical motor skills, and many are not able to read a long text, so we are just idiots with gadgets))
