Disparity is inevitable: a counterargument to filing discrimination lawsuits

Introduction

Know a lawyer who is involved in a discrimination lawsuit? Particularly one in which the plaintiff alleges discrimination because actual disparities are found in company hiring practices? Were you aware that, just by chance, a company can be absolutely innocent of discrimination even though it is found to have under-hired a particular group? No? Then read on to find out how.

What are diversity and disparity?

We discussed earlier that there are (at least) two definitions of diversity. The first means a display of dissimilar and widely varying behaviors, a philosophical position that is untenable and even ridiculous (but strangely widely desired). The second meaning is our topic today.

Diversity of the second type means parity in the following sense. Suppose men and women apply in equal numbers and have identical abilities to perform a certain job. Then suppose that a company institutes a hiring policy that results in 70% women and 30% men. It can be claimed that that company does not properly express diversity, or we might say a disparity in hiring exists. Diversity thus sometimes means obtaining parity.

Disparity is an extraordinarily popular academic topic, incidentally: scores of professors scour data to find disparities and bring them to light. Others—lawyers—notice them and, with EEOC regulations in hand that call such disparities illegal, sue.

And it’s natural, is it not, to get your dudgeon up when you see a statistic like “70% women and 30% men hired”? That has to be the result of discrimination!

Of course, it was in the past routinely true that some companies unfairly discriminated against individuals in matters that had nothing to do with their ability. Race and sex were certainly, and stupidly, among these unnecessarily examined characteristics. Again, it’s true that some companies still exhibit these irrational biases. For example, Hollywood apparently won’t hire anybody over the age of 35 to write screenplays, nor will they employ actors with IQs greater than average.

Sue ’em!

It’s lawsuits that interest us. How unusual is a statistic like “70% women and 30% men hired”? Should a man denied employment at that company sue, claiming he was unfairly discriminated against? Would we expect that all companies that do not discriminate would have exactly 50% women and 50% men? This is a topic that starts out easy but gets complicated fast, so let’s take our time. We won’t be able to investigate it fully, since a complete treatment would run to monograph length. But we can sketch an outline of how the problem can be attacked.

Parity depends on several things:

  1. The number of categories (men vs. women; black vs. white; black men vs. black women vs. white men vs. white women; etc.). The more subdivisions that are represented, the more categories we have to track.
  2. The proportion in which those categories exist in the applicant population (roughly 51% men, 49% women at job ages in the USA). We only care about the characteristics of those who apply for a job, not their rates in the population at large.
  3. The exact definition of parity.
  4. The number of employees the company has.
  5. The number of companies hiring.

That last one is the one everybody forgets, and it is the one that makes disparities inevitable. Let’s see why.
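
To see how the number of companies drives the result, here is a small back-of-the-envelope calculation in Python. It is only a sketch under stated assumptions: 51% of applicants are men, hiring is blind (random), and we call a 60/40 or worse split a “disparity”. The 100-employee and 100-company figures are invented purely for illustration.

```python
import math

def binom_pmf(k, n, p):
    """Probability of exactly k men among n blind (random) hires."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 100, 0.51  # one company of 100 hires; 51% of applicants are men

# Chance a single innocent company shows a 60/40 (or worse) split.
p_disparity = sum(binom_pmf(k, n, p) for k in range(n + 1)
                  if k >= 60 or k <= 40)

# Chance at least one of 100 such innocent companies shows that split.
companies = 100
p_at_least_one = 1 - (1 - p_disparity)**companies

print(f"One company:          {p_disparity:.3f}")
print(f"At least one of {companies}: {p_at_least_one:.3f}")
```

Any single blind-hiring company is unlikely to show a 60/40 split, but with enough companies hiring, at least one such “disparity” becomes nearly certain.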

Men vs. Women

Throughout all examples we assume that companies hire blindly: they have no idea of the category of their applicants, and all applicants and eventual hires are equally skilled. That is, there is no discrimination in place whatsoever, but there is no quota system in place either. All hires are made randomly. Thus, any eventual ratio of observed categories in a company is the result of chance only, and not due to discrimination of any kind (except on ability). This is crucial to remember.

First suppose that there are in our population of applicants 51% men and 49% women.

Now suppose a company hires just one employee. What is the probability that that company will attain parity? Zero. There is (I hope this is obvious) no way the company can hire equal numbers of men and women, even with a quota system in place. Company size, then, strongly determines whether parity is possible.

To see this, suppose the company can hire two employees. What is the probability of parity? Well, what can happen? A man is hired first followed by another man, a man then a woman, a woman then a man, or a woman followed by another woman. The first and last cases represent disparity, so we need to calculate the probability of them occurring by chance. It’s just slightly over 50% (0.51 × 0.51 + 0.49 × 0.49 ≈ 0.5002).
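
In code, the two-hire arithmetic is a one-liner (using the 51/49 applicant split assumed above):

```python
# Chance both hires are the same sex (man-man or woman-woman)
# when a blind company hires exactly two people.
p_man = 0.51
p_disparity = p_man**2 + (1 - p_man)**2
print(round(p_disparity, 4))  # 0.5002: just slightly over 50%
```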

(Incidentally, we do need to consider cases where men are discriminated against: in the past, we could just focus on cases where women were, but in the modern age of rampant tort lawyers, we have to consider all kinds of disparity lawsuits. For example, the New York Post of 12 May 2009, p. 21, writes of a self-identified “white, African, American” student from Mozambique who is suing a New Jersey medical school for discrimination.)

Now, if a woman saw that there were two men hired, she might be inclined to sue the company for discrimination, but it’s unlikely. Why? Because most understand that with only two employees, the chance of seeming, or false, discrimination is high; that is, disparity resulting from chance is pretty likely (in fact, just over 50%).

So let’s increase the size of our company to 1000 employees. Exact parity would give us 510 men and 490 women, right? But the probability of exact parity, given random hiring, is only about 2.5%! And the larger the company, the less likely it is that exact parity can be reached.
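
The 2.5% figure comes straight from the binomial distribution; here is a quick check, assuming as before a 51/49 applicant pool and blind hiring:

```python
import math

# Probability of exact parity (510 men, 490 women) among
# 1000 blind (random) hires from a 51%/49% applicant pool.
n, k, p = 1000, 510, 0.51
p_exact = math.comb(n, k) * p**k * (1 - p)**(n - k)
print(round(p_exact, 3))  # about 0.025
```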


Decision Calculator

This is just a rough prototype meant to be easy to play with inside a post. READ the help and guidebook! Suggestions for new canned examples welcome—the hard part is deriving historical performance data.

Rules

  1. Read the Decision Calculator guidebook below!
  2. Fill in the Performance Table, or click on one of the predefined examples.
  3. Fill in the Cost Comparison Table, or click on one of the predefined examples. You do not need to calculate the total: that’s done automatically.
  4. Click Calculate (or Reset between examples).
  5. Accuracy comparison rates are given between the Expert System and the Naive Guess.
  6. Cost results are found in the Expected Cost Comparison Table.
  7. Finally, a solution saying which option you should choose is given. Skill should be > 0!
  8. Important: Use this software at your own risk. No warranties of any kind are given or implied. Always consult a competent medical professional.

(The interactive calculator form appeared here: preset example buttons; the Historical Performance Table and the Cost Comparison Table to fill in; the Calculate and Reset buttons; and the results, namely the Optimal Naive Guess, the Accuracy (%) Comparison Table, the Expected Costs Comparison Table, the skill score (which should be greater than 0 for a skillful test), and the solution.)

GUIDEBOOK

This article provides an introduction and a step-by-step guide to making good decisions in particular situations. These techniques are invaluable whether you are an individual or a business.

These results hold for all manner of examples: deciding whether to have a PSA test or a mammogram, whether to get a vaccine, finding a good stock broker or movie reviewer, situations that require intense statistical modeling, financial forecasts, and judging the usefulness of lie detectors. Any situation that has a dichotomous outcome can use these techniques.

Many people opt for precautionary medical tests—frequently because a television commercial or magazine article scares them into it. What people don’t realize is that these tests have hidden costs. These costs are there because tests are never 100% accurate. So how can you tell when you should take a test?

When is it worth it?

Under what circumstances is it best for you to receive a medical test? When you “Just want to be safe”? When you feel, “Why not? What’s the harm?”

In fact, these are not good reasons to undergo a medical test. You should only take a test if you know that it’s going to give you useful information. You want to know the test performs well and that it makes few mistakes, mistakes which could end up costing you emotionally, financially, and even physically.

Let’s illustrate this by taking the example of a healthy woman deciding whether or not to have a mammogram to screen for breast cancer. She read that all women over 40 should have this test “Just to be sure.” She has heard lots of horror stories about breast cancer. Testing almost seems like a duty. She doesn’t have any symptoms of breast cancer and is in good health. What should she do?

What can happen when she takes this (or any) medical test? One of four things:

  1. The test could correctly indicate that no cancer is present. This is good. The patient is assured.
  2. The test could correctly indicate that a true cancer is present. This is good in the sense that treatment options can be investigated immediately.
  3. The test could falsely indicate no cancer is present when it truly is. This error is called a false negative. This is bad because it could lead to false hope and could cause the patient to ignore symptoms because, “The test said I was fine.”
  4. The test could falsely indicate that cancer is present when it truly is not. This error is called a false positive. This is bad because it is distressing and could lead to unnecessary and even harmful treatment. The test itself, because it uses radiation, even increases the risk of true cancer because of the unnecessary exposure to x-rays.

This table shows all the possibilities in a test for the presence or absence of a thing (like breast cancer, prostate cancer, a lie, AIDS, and so on). For mammograms, “Present” means that cancer is actually there, and “Absent” means that no cancer is there. For a PSA test, “Present” means a prostate cancer is actually there, and “Absent” means that it is not. For a Movie Reviewer, “Present” means you liked a movie, and “Absent” means you did not.

Test Table
Present Absent
Test + Good: True Positive Bad: False Positive
Test – Bad: False Negative Good: True Negative

“Test +” means the test indicates that the thing (cancer) is present. “Test –” means the test indicates the absence of the thing. For the Movie Reviewer example, “Test +” means the reviewer recommended a film.

There are two cells in this table that are labeled “Good,” meaning the test has performed correctly. The other two cells are labeled “Bad,” meaning the test has erred. Study this table to be sure you understand how to read it, because it will be used throughout this article.
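
To make the calculator’s arithmetic concrete, here is a minimal sketch in Python. All the counts and costs are invented for illustration, and the skill score is assumed here to be the fractional reduction in expected cost relative to the cheapest naive guess; the calculator itself may use a different but equivalent formulation.

```python
# Sketch of the decision calculator's arithmetic on a filled-in
# 2x2 test table. All counts and costs are invented for illustration.

# Historical performance counts (see the Test Table above)
tp, fp = 80, 30    # Test +: true positives, false positives
fn, tn = 20, 170   # Test -: false negatives, true negatives
total = tp + fp + fn + tn
present, absent = tp + fn, fp + tn  # how often the thing truly was there or not

# Costs of the two kinds of error (arbitrary units)
cost_fp, cost_fn = 1.0, 5.0

# Accuracy comparison: the naive guess that maximizes accuracy
# is to always predict the more common state.
test_accuracy = (tp + tn) / total
naive_accuracy = max(present, absent) / total

# Expected cost per case of using the test.
test_cost = (fp * cost_fp + fn * cost_fn) / total

# Expected cost per case of the cheapest naive guess:
# always guessing "absent" turns every "present" case into a false
# negative; always guessing "present" turns every "absent" case
# into a false positive. Take whichever is cheaper.
naive_cost = min(present * cost_fn, absent * cost_fp) / total

# Skill score (assumed definition): fractional cost reduction vs.
# the naive guess. Positive means the test beats guessing.
skill = 1 - test_cost / naive_cost

print(f"Accuracy: test {test_accuracy:.3f} vs naive {naive_accuracy:.3f}")
print(f"Expected cost: test {test_cost:.3f} vs naive {naive_cost:.3f}")
print(f"Skill score: {skill:.3f}")
```

With these made-up numbers the test is both more accurate and cheaper than the naive guess, so the skill score is positive and the solution would be to use the test.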

 

Error everywhere

The main point is this: all tests and all measurements have some error. There is no such thing as a perfect test or a perfect measurement! Mistakes always happen. This is an immutable law of the universe. Some tests are better than others, and tables like this are needed to rate how well a particular test performs.