William M. Briggs

Statistician to the Stars!

Who Wants Viewpoint Diversity In The Academy?

A student meets the academy.

Reader Gary Boden sent over a link to Heterodox Academy. Some friends of ours are part of this, like Judith Curry and Scott Lilienfeld. In their words:

We are social scientists and other scholars who want to improve our academic disciplines. We have all written about a particular problem: the loss or lack of “viewpoint diversity.” It’s what happens when everyone in a field shares the same political orientation and certain ideas become orthodoxy. We have come together to advocate for a more intellectually diverse and heterodox academy.

Their big move was a paper “Political diversity will improve social psychological science.” The Abstract:

Psychologists have demonstrated the value of diversity — particularly diversity of viewpoints — for enhancing creativity, discovery, and problem solving….

Stop right there. Diversity is one of those words pleasing to the ear, it sings a seductive song, it tells of a golden destination, it…well, enough floweriness (this is for Will). Nobody in their right mind wants diversity in the academy, or, come to that, anywhere.

Diversity in viewpoint is a good thing, is it? The full range of thought possible by humans would be the most diverse, therefore we should seek infinite diversity in infinite combinations. Yet only the insane would demand true diversity of thought.

One guy holds the idea, and teaches and advocates for the same, that faculty should be split open and their carcasses put on a spit. Another guy has the same idea for the faculty’s children. Another has the same idea for you.

In the Department of Family Studies is a new fellow who thinks your mother is a puta and leads the class to her house to chant, some in support of your mother’s experiment in living, others opposed. Over in Logic is a professor who will try his utmost to convince his students, using all his powers of persuasion, that there is no such thing as cause or truth.

Oh, wait. We actually already have those guys.

But enough. You get the idea. Supply your own examples. Diversity of thought is dumb.

How about diversity of ability? Well, why not hire the severely encephalopathic to staff the myriad departments of Diversity? Surely they couldn’t do a worse job. But putting them in the Physics Department is not such a wise idea. I won’t give any more examples; they’re easy enough for you to generate.

Diversity of ability is dumb.

That leaves diversity of characteristics. Women and minorities are encouraged to apply—the implication being non-women, non-minorities are discouraged. Would hiring more black females pretending to be men but who self-identify as “gay” (did you think I made that one up?) improve the Mathematics faculty? If so, then by all means let’s have quotas. Otherwise, let’s not.

Diversity of characteristics is dumb.

Now what the Heterodox demonstrate is that the Academy, particularly its Sociology departments, is staffed almost entirely by lefties. Which is what everybody already knew. They also demonstrate what readers of this blog already knew, that ideology often masquerades as research (I’ve documented hundreds of examples; they have others in their paper). And they make the obvious point that this ideology-as-research is harmful.

Their solution is to leak some righties in among the lefties to temper their enthusiasm. They call this “viewpoint diversity.” And they want to increase diversity for diversity’s sake because they, like nearly all of us these days, see diversity as a good in and of itself.

Diversity isn’t an unconditional good; it is an unconditional evil.

What we really want are people devoted to the truth. Okay, easy to say, hard to do. After all, the current staff over in Sociology think they have the truth, which is why it’s so easy for them to miss their confirmation bias. Too, the lefties are increasingly coming to believe that their enemies are not just wrong, but immoral. Hiring a token neo-conservative isn’t likely to have much of an influence in that kind of atmosphere. Given budgetary and other constraints, forcing any department to be 50-50 on some disputed and disputable measure of politics will never fly.

Solution? Treat ideological departments as we would any other toxic waste site. Rope them off, keep the kids away, do your best to ignore the fumes seeping out. And rebuild somewhere else.

Chesterton On Polls

I was led by Dale Ahlquist, president of the American Chesterton Society, to an article by Chesterton on the kinds of statistics used in polls. Here is an excerpt (the second paragraph break is added for readability):

It is an error to suppose that statistics are merely untrue. They are also wicked. As used to-day, they serve the purpose of making masses of men feel helpless and cowardly…

And I have another quarrel with statistics. I believe that even when they are correct they are entirely misleading. The thing they say may sometimes be positively and really true: but even then the thing they mean is false. And it must always be remembered that this meaning is not only the only thing to which we ought to pay attention, but is literally, as a rule, the only thing our mind receives. When a man says something to us in the street, we hear what he means: we do not hear what he says. When we read some sentence in a book, we read what it means: we cannot see what it says. And so when we read statistics. It is impossible for the human intellect (which is divine) to hear a fact as a fact. It always hears a fact as a truth, which is an entirely different thing. A truth is a fact with a meaning. Many facts have no meaning at all, as far as we can really discover: but the human intellect (which is divine) always adds a meaning to the fact which it hears…

If we hear nothing else at all but this, that a man in Worthing has a cat, our souls make a dark unconscious effort to find some connections between the spirit of Worthing and the love of domestic animals…So when some dull and respectable Blue Book or dictionary tells us some dull and respectable piece of statistics, as that the number of homicidal arch-deacons is twice that of homicidal deans, or that five thousand babies eat soap in Battersea and only four thousand in Chelsea, it is almost impossible to avoid making some unconscious deduction from the facts, or at least making the facts mean something…It is psychologically impossible, in short, when we hear real scientific statistics, not to think that they mean something. Generally they mean nothing. Sometimes they mean something that isn’t true…

Statistics never give the truth, because they never give the reasons.

Chesterton gave an example of a poll in which it was learned that a certain high percentage of folks breakfasted at some later hour, to which a reader might react, in Chesterton’s words, with “Lazy Beasts!”, though each of the people polled who ate late had an admirable reason for doing so. These reasons were lost in the summary.

Now we, with a century’s more experience, are supposed to be more sophisticated about polls. We wouldn’t hear a poll that reported “62% of Catholics support the President” and put that support down to Catholicism (or its lack). And we wouldn’t read in a “scientific” study that “58.432% of men had a high hate score but only 53.918% (P < 0.001) of women had a high score” and claim that maleness caused the higher percentage. Would we? And not having made those classic blunders, we surely wouldn’t go further and say something asinine like “Catholics support the President” or “Men hate more than women”. Right?

Call this the mis-ascription or causal fallacy: the claim that the label assigned in the survey causes the answers given.

Now the ascription is not always in error. If somebody says to an exit pollster, “I voted for the Democrat candidate because I’m a Democrat” and if the pollster releases his results that say, “89% of those identifying as Democrat voted for the Democrat candidate”, then we tell the truth if we say, “At least one Democrat voted for the Democrat candidate because that person was a Democrat.” About the others in the sample, we do not know.

It is not unreasonable to assume that more than one voted because of his party status, though that assumption should be couched in probabilistic language. But it is clearly fallacious to say all the remaining did so. And it is fallacious even if everybody else in the sample voted the party line because all were loyal party members: the cause wasn’t measured, thus there is no warrant to claim the cause is known.
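Here is a toy simulation (mine, wholly invented; no actual poll data) showing how the fallacy gets its grip. The vote is generated entirely by an unmeasured cause; the party label merely correlates with that cause. Yet the summary percentage looks exactly as it would if the label had done the causing:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# The true, unmeasured cause of the vote (call it household loyalty).
hidden = rng.random(n) < 0.5

# The party label correlates strongly with the hidden cause,
# but the label itself causes nothing.
label_dem = np.where(hidden, rng.random(n) < 0.9, rng.random(n) < 0.1)

# The vote is driven entirely by the hidden cause.
vote_dem = np.where(hidden, rng.random(n) < 0.95, rng.random(n) < 0.05)

pct = 100 * vote_dem[label_dem].mean()
print(f"{pct:.0f}% of those labeled Democrat voted for the Democrat")
# Prints roughly 86%, a figure indistinguishable from one produced
# by label-as-cause. The summary alone cannot tell the two apart.
```

The only way out is to measure the cause, which the poll did not and usually cannot do.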

Chesterton is right. The ascription of some cause is a reaction, an irresistible temptation. Even those characteristics not part of the official data measurement are in play as “the” cause. This is why so many experts are terrific at saying why something happened, but so terrible at making predictions.

——————————————

G. K. Chesterton, The Illustrated London News, 18 Nov 1905, vol. 37, no. 967, p. 702.

The Philosophy Of Uncertainty: An Introduction. Complete Preface. Update

Le livre, he is done! Yesterday I sent a proposal to a philosophy editor at Cambridge. Not enough equations or pictures in it to pique the interest of the statistics editor, I guess.

Thanks to those who volunteered to copy edit. I may still call on your services, especially if I can’t find a publisher who is interested in it or me (I have eccentricities) and want to self-publish.

Here is the Preface (the Author’s Foreword has all the acknowledgements and thanks). I’ll also in time put up pieces of the proposal so that you can see details of each chapter. (The Preface doesn’t reveal much.)

P.S. I have been neglecting my email in order to finish, so if I haven’t answered yours (which is likely, since I have about 200 to answer), this is why.

Update Shot down at Cambridge. On to the next! I just sent a query to Oxford. And Springer. And MIT Press. And Wiley.

Dear Professor Briggs

Thanks for sending me the details relating to your proposed project ‘The Philosophy of Uncertainty’. The topic is an interesting one, but I’m afraid I don’t think that the style and approach of this book would be suited to the Cambridge list. I’m sure you will find that other publishers think differently, though, so I hope you will try the project out on them. I hope you are successful in finding a home for it.

Best wishes —

Preface

Fellow users of probability, statistics, and computer “learning” algorithms, physics and social science modelers, big data wranglers, philosophers of science, epistemologists; other respected citizens. We’re doing it wrong.

Not completely wrong; not everywhere; not all the time; but far more often, far more pervasively, and in far more areas than you’d imagine.

What are we doing wrong? Probability, statistics, causality, modeling, deciding, communicating, uncertainty. Everything to do with evidence.

Your natural reaction will be—this is a prediction based on plentiful observations and simple premises—“Harumph.” I can’t and shouldn’t put a numerical measure to my guess, though. That would lead to over-certainty, which I will prove to you is already at pandemic levels. Nor should I attempt to quantify your harumphiness, an act which would surely contribute to scientism.

Now you may well say “Harumph”, but consider: there are people who think statistical models prove causality or the truth of “hypotheses”, that no probability can be known with certainty until the sound of the last trump, that probabilities can be read from mood rings, that induction is a “problem”, that randomness is magic, that parameters exist, that p-values validate theories, that computers learn, that models are realer than observations, that model fit is more important than model performance.

And that is only a sampling of the oddities which beset our field. How did we go awry? Perhaps because our training as “data scientists” (the current buzzword) lacks a proper foundation, a firm philosophical grounding. Our books, especially our introductions, are loaded with a legion of implicit metaphysical presumptions, many of which are false. The student from the start is plunged into formula and data and never looks back; he is encouraged not to ask too many questions but instead to calculate, calculate, calculate. As a result, he never quite knows where he is or where he’s going, but he knows he’s in a hurry.

The philosophical concepts which are necessarily present aren’t discussed well or openly. This is only somewhat rectified if, and when, the student progresses to the highest levels, but by that time his interest has been turned either to mathematics or to solving problems using the tools with which he is familiar, tools which appear “good enough” because everybody else is using them. And when the data scientist (a horrid term) finally and inevitably weighs in on, say, “What models really are”, he lacks the proper vocabulary. Points are missed. Falsity is embraced.

So here is a philosophical introduction to uncertainty and the practice of probability, statistics, and modeling of all kinds. The approach is Aristotelian. Truth exists, we can know it, but not always. Uncertainty is in our minds, not in objects, and only sometimes can we measure it, and there are good and bad ways of doing it.

There is not much sparkling new in this presentation except in the way the material is stitched together. The emphasis on necessary versus local or conditional truth and the wealth of insights that brings will be unfamiliar to most. A weakness is that because we have to touch on a large number of topics, many cannot be treated authoritatively or completely. But then the bulk of that work has been done in other places. And a little knowledge on these most important subjects is better than none, the usual condition. Our guiding light is St Thomas, ora pro nobis, who said, “The smallest knowledge that may be obtained of the highest things is more desirable than the most certain knowledge obtained of lesser things.” It is therefore enough that we form a fair impression of each topic and move onward. The exceptions are in understanding exactly what probability is and, as importantly, what it is not, and in comprehending just what models are and how to tell the good from the bad.

This isn’t a recipe book. Except for simple but common examples, this book does not contain the usual lists of algorithms. It’s not that I didn’t want them, it’s more that many proper ones don’t yet exist, or aren’t well understood; and anyway, they can be a distraction. This book is, however, a guide on how to create such recipes and lists, as well as a way to shoehorn (when possible) older methods into the present framework when new algorithms haven’t yet been created. This book is thus ideal for students and researchers looking for problems upon which to work. The mathematical requirements are modest: this is not a math book. But then probability is not a mathematical subject, though parts of it are amenable to calculation.

Some will want to know what to call this unfamiliar new theory. Well, it isn’t a theory. It is The Way Things Are. The approach taken is surely not frequentist, a method which compounds error upon error, but it is also not Bayesian, not in the usual sense of that term, though it is often close in spirit to objective Bayesianism. There is no subjectivism here. The material here is closely aligned to Keynes’s, Stove’s, and Jaynes’s logical probability. Many elements from the work of these and similar gentlemen are found here, but there are also subtle and important differences. If a name must be given, Probability As Argument is as good as any, though I prefer simply Probability.

If we’re doing it wrong, what’s right? Models should be used to make probabilistic predictions of observable entities. These predictions can, in turn, be used to make decisions. If the predictions fail, the models fail and should be abandoned. Eliminate all forms of hypothesis tests, which only serve to confirm biases. Do not speak of parameters.

Here is the book in brief. All truth is conditional on or with respect to something. There are thus necessary or universal and conditional or local truths. Truth resides in the mind, and not in objects except in the sense that they exist (or not). Truth is not relative in the modern sense of that word. Probability aims at truth. We come to know many truths via induction, which is widely misunderstood and is not a “problem”, indeed it provides the surest form of knowledge. Logic is the study of the relationship between propositions, and so is probability. All probability, like all truth, is therefore conditional.

Most probability is not quantifiable, but some is. Probability is not subjective, and limiting relative frequency is of no use to man or beast. Chance and randomness are not mystical causes; they are only other words for ignorance. Science is of the empirical. Models—whether quantum mechanical, medical, or sociological—are either causal or explanative. Causal models provide certainty, and explanative models uncertainty. Probabilistic models are thus not causal (though they may have causal elements).

Bayes is not what you think. Hypothesis testing should immediately and forever be tossed onto the scrap heap of intellectual history and certainly never taught to the vulnerable. Probability is not decision. The parameter-centric, even parameter-obsessed, way of thinking about models must also be abandoned; its use has led to widespread, enormous over-certainty and caused more than one soul to be lost to scientism. Its replacement? Models which are and must be checked against reality. The best way to check against reality is conditional on the decisions to which models are put. The most common, widespread errors that come from not treating probability logically are shown, including the common mistakes made in regression, risk measures, the over-reliance on questionnaires, and so on.

The language used in this book will not be familiar to regular users of probability and statistics. But that is rather the point. It ought to be.

How working statisticians and probabilists should read this book. Start with the Chapter on Probability Models, then read the two succeeding Chapters on Statistical & Physical Models and Modelling Strategy & Mistakes. After this, start at the beginning for the proofs of the assumptions made in those Chapters.

Everybody else, and in particular students, should start at the beginning.
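Since the Preface insists that models be judged by their probabilistic predictions of observables, here is a minimal sketch of what that checking can look like. It uses a proper score (the Brier score) and made-up numbers; it illustrates the idea only and is not an excerpt from the book:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities
    and what actually happened; lower is better."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    return np.mean((probs - outcomes) ** 2)

# Hypothetical example: two models give rain probabilities for five days.
observed = [1, 0, 0, 1, 1]            # what actually happened
model_a  = [0.9, 0.2, 0.1, 0.8, 0.7]  # sharp and well calibrated
model_b  = [0.5, 0.5, 0.5, 0.5, 0.5]  # commits to nothing

print(brier_score(model_a, observed))  # about 0.038
print(brier_score(model_b, observed))  # 0.25
# Model A is preferred because it predicts the observable better.
# No hypothesis test, no p-value, no parameter was consulted.
```

In practice the score would be chosen to match the decision the model serves, which is the point above about checking being conditional on the decisions to which models are put.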

Stream: So-Called Homophobia Now Being Labeled a ‘Mental Disease’

Today’s post is at The Stream: “So-Called Homophobia Now Being Labeled a ‘Mental Disease’”.

So now “homophobia” is being called a “mental disorder,” where “homophobia” is defined in part as holding to the traditional, natural law, and religious understanding of same-sex attraction and acts.

A new paper (http://onlinelibrary.wiley.com/doi/10.1111/jsm.12975/full#jsm12975-bib-0034) by Giacomo Ciocca and others was picked up by the press and announced with the headline “New Study Suggests Connections Between Homophobia And Mental Disorders“.

This press article opened, “Homosexuality was long derided as a mental disorder…but a new study suggests that it might be more likely that it’s actually homophobia that is a sign of mental disorder.” The article quoted one of the study authors (E.A. Jannini) as saying, “After discussing for centuries if homosexuality is to be considered a disease, for the first time we demonstrated that the real disease to be cured is homophobia, associated with potentially severe psychopathologies.”

Potentially severe psychopathologies? Sounds like the sort of thing that requires treatment, perhaps against the will of patients.

Go there to read the rest.

Some of the details about the stats, which I don’t think a general audience would have followed, were cut or rearranged, and a quip about a “lab coat” was added.

What we have is yet another unjustified regression used to claim causality, via wee p-values, between two sets of questions said to perfectly represent two emotional states. A wee p-value and low variance explained, and one test out of many. And the emotional states only partly aligned with the spooky names given to them by the authors. (Even in the literature, the “psychoticism” measure is criticized, but our authors don’t say this.)
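For those who want to see how a wee p-value coexists happily with trivially low variance explained, here is a simulation (invented data; nothing to do with the paper’s actual numbers). Make the sample large enough and an effect explaining a quarter of a percent of the variance still earns P < 0.001:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 20_000  # large samples make p-values wee almost for free

x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)  # true slope explains ~0.25% of variance

res = stats.linregress(x, y)
print(f"p-value: {res.pvalue:.1e}")  # far below 0.001: "significant!"
print(f"R^2: {res.rvalue**2:.4f}")   # about 0.0025: almost nothing explained
# The wee p-value says only that the slope is probably not exactly zero.
# It says nothing about whether the effect matters, and nothing at all
# about what caused it.
```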

In other words, business as usual for “science”. Bloated unjustified over-certainty, utter nonsense.

But it is interesting that Christian beliefs—and those of other traditional religions like Islam and Judaism—are being called “homophobic” in some higher circles.
