
Category: Statistics

The general theory, methods, and philosophy of the Science of Guessing What Is.

November 5, 2018 | 12 Comments

An Illustration Of Type I Scientism

Regular readers will recall there are two Types of Scientism. Type I is belief that Science is needed to verify commonplace truths. Type II is the belief that only Science can provide truth.

Both are false. Type II Scientism leads to empiricism, atheism, and similar mental maladies. These are all bad, but it’s still not clear if Type II is the worst form of Scientism. For you usually arrive at Type II through the gateway of Type I.

A Type I headline might be “Men Stronger Than Women On Average, Study Finds.” The study was not necessary, or at least it was not necessary throughout all of human history. Studies like this (and there are some) highlight another science error, which is Fantasist or Willed Science. Fantasists will say “Women are as strong as men”, and so will need scientific evidence (of Type I) to prove to them it is not so. That some will not believe this evidence is an error opposite to scientism. Even scientists themselves commit this fantastical error, usually because they are in love with a Theory (as the Fantasists are). I do not pursue this here.

You and I, dear readers, have dissected many Type I papers over the years. From this “research”—I mean ours, not the papers—we have discovered that a leading cause of Type I Scientism is the need to publish. Forcing scientists to speak when they have nothing interesting to say causes lousy work to be artificially elevated. It clogs journals.

Scientists have some pride, even when they know what they are made to push is weak and better left unsaid. To cover their shame, they write badly, hoping a blizzard of jargon and bloated sentences will make their piles of words look shiny (this may not be consciously planned). And so journals, like port-a-potties after a Grateful Dead concert, fester.

Even this would be okay, except for the Expansion Effect. The Expansion Effect is the flooding of the system with sub-par talent (“Every child should go to college”). This further bloats content, causing a drag on the system, as separating the metal from the ore takes more and more time.

The worst part of Type I Scientism is false advertising and the subsequent encouragement of non-scientists to view scientists with more esteem than they deserve. That leads to Type II Scientism, which itself leads to utilitarianism and a host of other sins.

Enough of that. Here’s the headline: Growing up in a house full of books is major boost to literacy and numeracy, study finds.

While the average number of books in a home library differed from country to country — from 27 in Turkey to 143 in the UK and 218 in Estonia — “the total effects of home library size on literacy are large everywhere”, write Sikora and her colleagues in the paper, titled Scholarly Culture: How Books in Adolescence Enhance Adult Literacy, Numeracy and Technology Skills in 31 Societies. The paper has just been published in the journal Social Science Research.

“Adolescent exposure to books is an integral part of social practices that foster long-term cognitive competencies spanning literacy, numeracy and ICT skills,” they write. “Growing up with home libraries boosts adult skills in these areas beyond the benefits accrued from parental education or own educational or occupational attainment.”

This is Type I all the way. That kids given books are more likely to read than kids not given books is not a subject worthy of research. Especially when it must be obvious that the parents who own the books are smarter, on average, than those who don’t. And that means the kids are smarter, on somewhat less of an average, given the partial heritability of intelligence.

What do the authors say? In their conclusion they ask:

Now that we have established that scholarly culture as indicated by the size of home libraries, confers enduring cognitive skills in literacy, numeracy, and technology, the next burning question becomes: “How does this come about?”

“Role modelling”, they say, “Children emulate parents who read.”

Then comes the jargon and Expansion Effect:

Acquisition of specific strategies proposed by significant others or discovered in books themselves: children build “toolkits” of strategies that they apply in multiple situations (Swidler, 1986). Stimulation of cognitive skills through family social practices: books are interwoven with positive affect, specific mental activities, know-how, and motivational states (Reckwitz, 2002). Storytelling, imaginative play, charades, and vocabulary development come to mind (Evans, et al., 2010). We suggest that scholarly culture is a way of life rather than concerted cultivation (Lareau, 2011).

Good grief.

November 2, 2018 | 10 Comments

Black Coffee Drinkers Are Sadistic Psychos: It’s Science!

Headline: New study says you might be a psychopath if you like black coffee

A new study from the University of Innsbruck in Austria says that people who drink their coffee black often have psychopathic or sadistic traits…

The people behind the report surveyed more than 1,000 adults about their taste preferences with foods and drinks that are bitter. To get answers, the adults in the study took four different personality tests that examined traits like narcissism, aggression, sadism and psychopathy.

Interestingly, the study found that people who tend to like bitter foods such as black coffee or tonic water also had personality traits that could be seen as bitter and unpleasant.

“The results of both studies confirmed the hypothesis that bitter taste preferences are positively associated with malevolent personality traits, with the most robust relation to everyday sadism and psychopathy,” the study says.

The peer-reviewed paper is “Individual differences in bitter taste preferences are associated with antisocial personality traits” by Christina Sagioglou and Tobias Greitemeyer in Appetite.

Regular readers can stop here, for this is yet another sorry tale of wee p-values and attempts to quantify the unquantifiable. If only somebody wrote a book exposing these pernicious methods, and offered a way of escape, then we would not be in this mess.

The paper opens with this stunning announcement: “Eating and drinking are universal social phenomena.”

From there we soar into the stratosphere: “The sense of taste is innately hedonic and biased.”

Who knew?

Skip it. Here is their main question: “Could it be that the extent to which people learn to relish bitter substances is related to their personality?” The obvious answer did not suggest itself to our authors, hence they conducted a survey. They asked a bunch of questions of people recruited online (which they elevated to “studies”), and paid them sixty cents.

The questions were quantified, as is usual, but wrongheadedly. It is believed by many that emotions and thoughts can be given unique numbers, which is bizarre—and false. How much do you agree with that sentiment on a scale of 42,000 to 1 googol?

One of the questions, to which we can be sure everybody answered absolutely honestly, knowing they were being tracked, was, on a scale of 1 to 5, “I have threatened people I know.”

They went from that to the so-called Dark Triad, which we have met before. “I tend to manipulate others to get my way”, “I tend to be callous or insensitive”, etc.

Many, many, many other pseudo-quantified questions followed. Then came the “bivariate correlations and multiple regression analyses”—and wee p-values. None of it done in a predictive sense, of course; everything was parametric.

Their “betas” (recalling the numerical scale ranges) were small, even trivial, meaning the differences in traits were not worth writing home about, but the “betas” did come with wee p-values, which excited the authors. (Large sample sizes almost always give wee ps, which is one of the major failings of p-values.)
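Here is a minimal sketch of that failing, with simulated scores and invented variable names (mine, not the authors’ data):

```python
# A minimal sketch (simulated scores, not the paper's data): a trivial
# slope earns a wee p-value once the sample is large, while explaining
# almost nothing. Variable names are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5_000  # plenty of online respondents

bitter = rng.uniform(1, 6, n)                 # a 1-6 bitter-preference score
sadism = rng.normal(3, 1, n) + 0.04 * bitter  # near-zero true slope

res = stats.linregress(bitter, sadism)
print(f"beta = {res.slope:.3f}, p = {res.pvalue:.1g}, R^2 = {res.rvalue ** 2:.4f}")
```

The slope is trivial and explains a fraction of a percent of the variation, yet the p-value dives below the magic line.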

You can feel their excitement (an 8.2 on a scale of -4 to 12) when they wrote “The present results provide the first empirical evidence for the hypothesis that bitter taste preferences are linked to malevolent personality traits.”

And “Particularly robust associations were found for everyday sadism, which was significantly predicted by general bitter taste preferences when controlling for third variables across both studies.”

Control? “Drinking coffee with sugar and milk, for example, successfully masks most of its bitterness. Similar adjustments in preparation can lead to a number of items losing its original bitter taste.”

It wasn’t all lattes and soy milk flavorings, no, sir. There were some problems.

Further inconsistencies between the general and food specific measure arose. First, only the general taste preference measure was associated with less agreeableness. This raises questions as to which specific connotation of the general measure produced this correlation. We can only speculate about an answer.

Allow me to speculate. Scientists are so harassed into publishing anything that nonsense often results, because if they don’t publish, they lose their jobs.

“Also, in preferring bitter tasting foods more than less sadistic people, everyday sadists may perceive them as positive due to their potential to cause distaste, that is, to cause a negative experience in other people.”

How can you take this stuff seriously?

October 24, 2018 | 10 Comments

Quack Quack: 25% of Students “Traumatized” By 2016 Election

Headline: 25% of students say they were traumatized by the 2016 election, study says

A quarter of students found the 2016 election so traumatic they now report symptoms of PTSD, according to a new study.

Researchers surveyed Arizona State University students around the time of President Donald Trump’s inauguration in 2017, and some had stress scores on par with those of school shooting witnesses at seven-month follow-ups.

Twenty-five percent of the 769 students, who were an even mix of genders and races and socioeconomic backgrounds, reported ‘clinically significant’ levels of stress.

The most severe cases were seen among women, black, and non-white Hispanic students, who were 45 percent more likely to feel distressed by the 2016 run between Trump and Hillary Clinton.

There is a reason many still believe psychology is not far removed from witch doctoring. And this paper is that reason.

The peer-reviewed paper is “Event-related clinical distress in college students: Responses to the 2016 U.S. Presidential election” by Melissa Hagan and a few others in Journal of American College Health.

“Did he say ‘Journal of American College Health‘?”

Yes, he did. A whole journal devoted to the well-being of our over-privileged tykes. Don’t miss the article “Understanding contributing factors to verbal coercion while studying abroad.”

Anyway, back to Melissa and her pals. The paper opens:

Although U.S. presidential elections occur every four years, the 2016 election was perhaps the most polarizing and emotionally evocative political event for young people in recent history.

Why does this verbiage sound like it came from the Youth Synod? Never mind.

The current study surveyed a diverse sample of college students 2-3 months after the election to examine: (1) perceived impact of the election on close relationships; (2) prevalence of subclinical and clinical election-related distress symptoms, including intrusion and avoidance; (3) demographic differences in these symptoms.

Clinical distress symptoms. As in real trauma. As in medication-eligible mental maladies. As in genuine sickness. Could this be real? Does it matter?

How did these wondrous findings come about? By asking questions with quantified unquantifiable answers. Among others:

Participants responded to the 15-item Impact of Event Scale (IES), a measure of stress responses to a significant life event. Prompted to keep the U.S. presidential election in mind, participants indicated how frequently each statement was true for them since the election, with response options from 1 (not true at all) to 4 (often true).
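The whole measurement apparatus is this kind of arithmetic. A minimal sketch, with invented answers and a placeholder cutoff (the paper’s actual threshold is not quoted here):

```python
# A sketch of the scoring arithmetic just described: 15 items, each
# answered 1 to 4, summed into one total per student. Invented answers;
# CUTOFF is a made-up placeholder, since the paper's threshold is not
# quoted above.
import numpy as np

rng = np.random.default_rng(0)
N_STUDENTS, N_ITEMS, CUTOFF = 769, 15, 40  # CUTOFF is hypothetical

answers = rng.integers(1, 5, size=(N_STUDENTS, N_ITEMS))  # 1..4 inclusive
totals = answers.sum(axis=1)                              # one score each

print(f"mean = {totals.mean():.2f}, sd = {totals.std():.2f}")
print(f"share above cutoff: {(totals > CUTOFF).mean():.1%}")
```

Sum some circled numbers, pick a line, and everybody above it is “clinically distressed.”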

Results?

Although total IES scores on average did not exceed clinically significant levels (M = 18.65, SD = 15.72, Range: 0–69), 25.0% of students (n = 192) were above the cutoff for clinically significant event-related distress.

Clinically significant event-related distress. I repeat: clinically significant.

Then came the wee p-values (regressions; I’ve cut out the wee-p details):

Significant predictors of event-related distress included being female (compared to male), Democrat (compared to Republican), Independent or other party (compared to Republican), dissatisfied with the outcome, non-Christian or no religious affiliation (compared to Christian), and reporting either a positive or negative impact of the election on close relationships.

Golly, what, uh, surprises. We don’t need the wee ps to realize women, Democrats, etc. would answer differently than non-women, non-Democrats, etc. They said “We identified a high rate of event-related distress symptoms, with certain groups reporting particularly high intrusion and/or avoidance symptoms related to the election…When examined independently, females, racial minorities, those from the working and lower-middle social classes, Democrats, non-Christians, and sexual minorities reported significantly more event-related distress.”

Sadly, their “data do not allow us to identify the cause of the relatively high rate of symptoms”. But they suspect “issues of identity and social inequality” as the culprits.

They warn “The high rate of clinical distress symptoms suggests that college health practitioners be aware of the potential for the state of U.S. politics to profoundly affect students’ emotional health and consider this possibility when interfacing with students about the causes and consequences of stress.”

Coddling and babying and pacifying are the solutions, we guess.

They close by emphasizing they are seriously serious and that these are genuine health problems they’re discussing.

Approximately one-fourth of the sample met suggested criteria for clinically significant distress, which is concerning because elevated event related stress is predictive of future distress and subsequent PTSD diagnoses.

Donald Trump caused PTSD diagnoses?

Glorious, if true. Alas, if it did, it leads us to suspect the veracity of PTSD diagnoses, or the truthfulness of the kiddies when crying over spilt electoral votes, or the integrity of academics crying “clinically significant.”

October 22, 2018 | 13 Comments

You Can’t Quantify The Unquantifiable

I stole the picture below from YouGov.

A new YouGov study reveals exactly how positively and negatively the population perceives various descriptions to be.

YouGov showed respondents a selection of adjectives from a list of 24 and asked [a group of about 1,000–2,000 Britons] to score each on a scale from 0-10, with 0 being “very negative” and 10 being “very positive”.

Study the picture for a moment.

Even more important for us is a second picture at their site which compares average responses of Britons and Americans. There are many differences: abysmal had a difference (by my eye) of about 1.5; others were smaller. But even the small ones represent mean differences, meaning there is some variability in the differences, which they didn’t picture.

Now the picture above smacks of “density estimates”, which I won’t explain, but think of them as a way to make fancy histograms. They (over-)smooth the actual results. (I was in grad school when smoothing methods were going to save the world.) Ignore that twist. It’s clear enough that substantial variation exists for each word.
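For the curious, here is a minimal sketch of that smoothing, using invented ratings and nothing of YouGov’s actual data:

```python
# A sketch of what a density estimate does to a histogram. Invented
# 0-10 ratings (nothing to do with YouGov's data), smoothed with a
# narrow and a wide kernel bandwidth.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Pretend a word's ratings pile up at 7 and at 8.
scores = np.concatenate([rng.normal(7, 0.3, 500), rng.normal(8, 0.3, 500)])

grid = np.linspace(0, 10, 1001)
narrow = stats.gaussian_kde(scores, bw_method=0.2)(grid)
wide = stats.gaussian_kde(scores, bw_method=1.0)(grid)  # over-smoothed

def peaks(density):
    """Count local maxima of the estimate along the grid."""
    return int((np.diff(np.sign(np.diff(density))) < 0).sum())

print("peaks, narrow bandwidth:", peaks(narrow))  # keeps both piles
print("peaks, wide bandwidth:  ", peaks(wide))    # melts them into one
```

Crank up the bandwidth and two honest piles of answers melt into one tidy bump, which is roughly what these fancy histograms do to raw counts.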

It’s true there is little overlap in scores in words at the extremes, such as perfect and appalling. Nobody I know who speaks English would ever confuse these words. Then again, I don’t know a lot of people. The vocabulary of benighted college students is particularly thin these days, so one never knows.

The point is this. The variance seen is in no way unusual or unexpected, neither in the different “scores” given the individual words nor in the differences in the scores by country. Of course, the so-called country differences could be partly genuine (they speak a weird English over there) and partly no different than if you were to take just the Britons and split them in two using whatever marker you like. In other words, it’s another measure of variation.

The differences within any word are well within the differences touted in wee-p research. Meaning, of course, that many of the results touted to be caused by theories favored by the researchers could just as well be caused by different understandings of the words. Yes, even with these variabilities, differences in means between words can exist, but there is no proof this comes about because of the theory rather than because of differences in understanding.
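A minimal sketch of that point, with invented ratings: a mean gap small enough that the two groups nearly coincide still wins a wee p-value.

```python
# A sketch (invented ratings, not YouGov's) of a wee p-value riding on
# nearly total overlap between two groups rating the same word.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
uk = rng.normal(6.3, 2.0, 1_500)  # one group's 0-10 ratings of a word
us = rng.normal(6.6, 2.0, 1_500)  # true mean gap: a mere 0.3 points

t, p = stats.ttest_ind(uk, us)
# How often a random rating from one group beats one from the other;
# 0.5 would mean total overlap, a coin flip.
p_beats = (us[:, None] > uk[None, :]).mean()
print(f"p-value = {p:.1g}, P(US > UK) = {p_beats:.2f}")
```

The test screams “difference” while a random member of one group out-scores a random member of the other barely more often than a coin flip.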

Attempting quantification of unquantifiable mental states and feelings is far too common.

How much do you agree with this assessment on a scale of -42 to 3, in increments of 1/e before -10 and 1/pi after it? Those in Britain may use mostly whole numbers.