activism, human rights promotion

Human Rights Promotion (22): What Hope is There For Persuasion?

An American suffragette with an umbrella stands next to a baby carriage and wears a sign proclaiming “Women! Use your vote”, circa 1920

The ability to persuade other people is important for human rights in at least two different ways:

  • How do we achieve respect for human rights? Since a lot of human rights violations are caused by ideas and opinions – for example by harmful moral judgments or political ideologies – respect for human rights depends at least in part on our ability to change minds, other people’s as well as our own.
  • Why do we need human rights? Certain human rights in particular, such as the right to free speech, are justified by our need to persuade others. We want to express ourselves and we express ourselves for different reasons: to communicate our identity, to signal what we think about something, but most importantly to persuade others that our opinions are better than theirs. That’s a universal human need. Ideally, we also believe that expressing our opinions improves those opinions. We prepare our opinions in advance of expressing them, and – knowing that we will be criticized for those opinions by other agents freely expressing themselves – we try our best to prepare our opinions for this criticism. We consider possible counterarguments in advance and how to reply to them. This brings with it the possibility that we refine our opinions or even replace them with better ones, based on the inner reasoning we do in preparation for expressing them. Free speech – our own free speech and that of our critics – helps us improve our opinions. Persuasion – both of others and of ourselves – is therefore an important reason why we need human rights. (This is the theory behind the notion of the marketplace of ideas.)

The problem is that people don’t seem to be very good at persuading each other or themselves. The description of communication that I’ve given here is highly idealized. If we can’t dramatically improve our ability to persuade, then we’ll have a hard time fighting for rights because we’ll lose both the weapons and the reasons necessary for this fight. There are other non-communicative means to increase the levels of respect for human rights (reciprocity, self-interest, the law etc.), and the need to improve our opinions and to persuade isn’t the only possible justification for human rights (other justifications are offered here). But in such an important fight a restricted arsenal or rationale is a net negative. So it’s worth the effort to try and remove some of the things that make it hard to persuade.

So what are we up against? Apart from the obvious and uninteresting fact that some people are immune to persuasion – good luck talking to the Taliban – there are other and perhaps even more damaging obstacles to persuasion: confirmation bias, the tendency to base our beliefs on emotions rather than on reasoning or argumentation, polarization, and a whole set of other psychological biases (e.g. the belief that beautiful people make better-sounding arguments).

What to do about all this? We should avoid the obvious conclusion that humans are merely bias machines governed by unconscious reflexes, responses to stimuli, emotions and prejudices formed through ages of human evolution. Or that rational argument based on facts and sound reasoning never plays any role. Many, but probably not all, of our opinions and decisions are biased by prejudice and emotive reactions created by a mind shaped by evolution. There’s certainly no hope of radically removing those parts of our minds that work that way, but we can hope to reduce their effect. If we are conscious of our confirmation bias, for instance, then we can try to counteract it by actively seeking out disconfirming information or by making an effort to read people from the opposing side. Rational persuasion can and does occur, and we can make it occur more often than it does today. Here and here, for example, are two cognitive scientists pushing back against the current trend in their profession. They show how strong arguments can indeed persuade people and how group reasoning in particular is helpful.

More posts in this series are here.

what is freedom

What is Freedom? (14): Do We Have Free Will?

free will

The evidence seems to say “no, there is no free will”. The notion of free will has been the object of criticism and even ridicule for as long as it has existed, but it has recently become the target of a truly continuous and seemingly devastating scientific onslaught. Study after study argues that we really don’t want what we do or do what we want, that we have no choice in a lot of things we do, and that we don’t decide to act the way we act and can’t act otherwise even if we want to. Here’s a short summary of the evidence:

  • Priming. People in advertising have long known that exposure to certain images – perhaps even subliminally – can change behavior. Studies have shown that American voters exposed to the American flag are increasingly supportive of the Republican Party, even if they identify as Democrats, and even if the exposure is fleeting. And it’s not just images. If a person reads a list of words including the word table, and is later asked to complete a word starting with tab, the probability that he or she will answer table is greater than if they are not primed (source). If it’s this easy for other people to decide how we act, then we can assume that we often act in ways that they decide.
  • Stereotype threat. When the belief that people like you (African-Americans, women, etc) are worse at a particular task than the comparison group (whites, men, etc) is made prominent, you perform worse at that task. Again, this makes it easy for others to change how we act.
  • Anchoring. In one study, German judges first read a description of a woman who had been caught shoplifting, then rolled a pair of dice that were loaded so every roll resulted in either a 3 or a 9. As soon as the dice came to a stop, the judges were asked whether they would sentence the woman to a term in prison greater or lesser, in months, than the number showing on the dice. Finally, the judges were instructed to specify the exact prison sentence they would give to the shoplifter. On average, those who had rolled a 9 said they would sentence her to 8 months; those who rolled a 3 said they would sentence her to 5 months. Yet another example of how we often act not because we freely want (or “willed”) our actions but because of external pressure and manipulation.
  • Learned helplessness. Rather than try their best to escape oppression, subjugation and other predicaments, people often give up and accept their situation. A failure of the will, but a failure determined by outside forces.
  • Adaptive preferences. We settle for second best and call it the best, not because that is our free choice but because the thing that we really believe is best is out of reach. Free will? Meh.
  • Peer effects. Group membership and the presence of role models determine what is the “natural” way to act.
  • Justificational reasoning. When we defend our so-called free and freely willed actions, we tend to do so after the fact and with special attention to the good or bad reasons justifying our actions, at the expense of reasons justifying other kinds of actions. This suggests that we didn’t weigh all the reasons for all possible actions beforehand, and that our actions are therefore not actions we chose to want on the basis of good reasons. Perhaps then our actions are caused by something else, such as habit, conformism, reflexes, tradition etc. Free will is incompatible with those causes.
  • Poverty of willpower. Power of the will seems to be a finite resource that can be depleted. No willpower means no free will.
  • And then there are Benjamin Libet’s infamous studies showing a consistent build-up of electrical activity from the brain’s motor cortex before people are consciously aware of their desires.

I could go on, but this will do. Of course, none of this proves that there is no free will. At most, it makes us realize that free will is severely constrained: if it exists at all, it’s only a partial and intermittent faculty, present in unequal degrees in different people at different times in their lives.

And yet, despite all this evidence, we continue to act as if all people, with the exception of minors and the mentally handicapped, have free will all of the time. We constantly blame people, we punish and praise them, and we say that they deserve what they get. If I – being a mentally healthy adult (at least according to some) – were to hit the person sitting next to me now, I would be castigated because everyone agrees that I could have acted otherwise. I probably could have, but perhaps I couldn’t. Who’s to tell? Perhaps a little less blame and praise could be one good outcome of psychological research. But I’m not holding my breath. We can follow this advice, or we can all act otherwise, unfortunately.

More posts in this series are here.

causes of human rights violations, human rights violations

The Causes of Human Rights Violations (52): Not Enough Bias

Mr Spock illogical

“Your illogical approach to chess does have its advantages on occasion, Captain”

If I count correctly, I have blogged about at least 12 ways in which our psychological or mental biases can lead us to violate other people’s rights:

  1. spurious reasoning justifying our actions to ourselves post hoc
  2. the role distance plays in our regard for fellow human beings
  3. the notion that what comes first is also best
  4. a preference for the status quo
  5. the anchoring effect
  6. last place aversion
  7. learned helplessness
  8. the just world fallacy
  9. adaptive preferences
  10. the bystander effect
  11. inattentional blindness, and
  12. stereotype threat

So it may come as a surprise that rationality – in the sense of the absence of biases that distort our proper thinking – can also cause rights violations. But when you think about it, it’s just plain obvious: whatever the irrational basis of Nazi anti-Semitism, the Holocaust was an example of rational planning; many people argue that Hiroshima and Nagasaki made perfect military sense; and others say the same about torture in the ticking bomb scenario.

However, the point is not just that rationality can be harmful, but that biases can be helpful. For example:

Take crime. The rational person weighs the benefit of mugging someone – the financial reward and the buzz of the violence netted off against the feeling of guilt afterwards – against the cost; the probability of being caught multiplied by the punishment.

But we don’t really want people to think so rationally because it would lead them to actually mug someone occasionally. It would be better if they had the heuristic “don’t mug people.” Such a heuristic is, however, irrational in the narrow economistic sense, as it would cause people to reject occasionally profitable actions. (source)

Given the low probability of getting caught for any crime, we would encourage crime if we favored rationality over bias. If, on the other hand, we could adopt a bias that people like us are highly likely to get caught (or, for that matter, another bias, such as the one that rich people deserve their wealth), then crime would go down.
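
To make the economist’s calculus in the quoted passage concrete, here is a minimal sketch. All the numbers are invented for illustration; they don’t come from the quoted source or from any study.

    # A toy version of the "rational mugger" calculation described above.
    # Every number here is made up purely for illustration.
    benefit = 200.0           # expected loot plus the "buzz", in dollar-equivalents
    guilt_cost = 50.0         # disutility of feeling guilty afterwards
    p_caught = 0.05           # assumed probability of being caught
    punishment_cost = 2000.0  # disutility of the punishment if caught

    expected_net = (benefit - guilt_cost) - p_caught * punishment_cost
    print(expected_net)  # 50.0 > 0: the narrowly "rational" agent mugs;
                         # a blanket "don't mug people" heuristic blocks this calculation.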

All this is related to the question of whether false beliefs are useful for human rights.

More posts in this series are here.

democracy, what is democracy?

What is Democracy? (66): A Sports-Based Selection Process For Politicians

football game

I’ve already documented several ways in which democracy tends to malfunction: it seems to be a system that can be swayed by things that have nothing to do with politics.

Here’s another one:

It is statistically possible that the outcome of a handful of college football games in the right battleground states could determine the race for the White House.

Economists Andrew Healy, Neil Malhotra, and Cecilia Mo make this argument in a fascinating article in the Proceedings of the National Academy of Sciences. They examined whether the outcomes of college football games on the eve of elections for presidents, senators, and governors affected the choices voters made. They found that a win by the local team, in the week before an election, raises the vote going to the incumbent by around 1.5 percentage points. When it comes to the 20 highest attendance teams—big athletic programs like the University of Michigan, Oklahoma, and Southern Cal—a victory on the eve of an election pushes the vote for the incumbent up by 3 percentage points. That’s a lot of votes, certainly more than the margin of victory in a tight race. (source)

Compared to some of the previously cited distortions of the democratic process, this one is particularly disturbing. You could still argue that the way politicians look or sound has at least some relevance to the political process, even though it shouldn’t determine elections. You could also argue, even if it means stretching your neurons to breaking point, that a long spell of bad weather has an adverse effect on the economy, that politicians should take countermeasures, and that they should be punished if they don’t. If you’re feeling very generous, you could even say that the order effect is a general human bias and that we shouldn’t single out democracy for condemnation when we see this effect appearing in elections.

However, there seems to be no possible excuse for voting in favor of incumbents simply because your local football team scores a win. OK, I can understand that the exhilaration makes you feel good about everything, including perhaps the performance of the incumbents and the status quo in general, but that means we should see the same distortions when people vote after having had sex or after having eaten a chocolate bar. And those latter distortions may have an even greater impact on elections, given the fact that eating chocolate and having sex is more common than watching football. Given the large number of possible distortions like these, I simply can’t convince myself that they really do occur.

Bonus malfunction:

In the summer of 1916 … a dramatic weeklong series of shark attacks along New Jersey beaches left four people dead. Tourists fled, leaving some resorts with 75 percent vacancy rates in the midst of their high season. Letters poured into congressional offices demanding federal action; but what action would be effective in such circumstances? Voters probably didn’t know, but neither did they care. When President Woodrow Wilson—a former governor of New Jersey with strong local ties—ran for reelection a few months later, he was punished at the polls, losing as much as 10 percent of his expected vote in towns where shark attacks had occurred. (source)

More posts in this series are here.

democracy, what is democracy?

What is Democracy? (65): A Political Decision Procedure Distorted by the Order Effect

ballot

(source)

People’s choices are often sensitive to differences in the order in which the options appear. This is one among many psychological biases we all suffer from to some extent. For example,

In the Eurovision song contest, for example, the first or later performers have more chance of winning than those appearing in the middle of the show. (source)

Unsurprisingly, democracy is not immune from this bias. Here’s some evidence from Ireland showing that the order of candidates on ballots affects election outcomes:

The estimated effect of being listed first on an alphabetical ballot paper in an Irish general election is approximately 544 first preference votes or 1.27 percentage points for the average candidate. (source)

In California,

being listed first benefits everyone. Major party candidates generally gain one to three percentage points, while minor party candidates may double their vote shares. (source)

And it’s not just candidates’ surnames or positions on ballots that affect democratic selection procedures. The tone of their voice, their looks and a ton of other biases also play a role. And yet I still believe in the value of democracy.

Needless to say, the order effect – or “ordering effect”, or “serial position effect” – isn’t limited to politics. Next time you walk into a shop and ask for advice, you can bet that the salesperson will present the most expensive item first: having seen that one first, you’ll think all the others look like a bargain, which will influence your decision to buy.

More on the order effect here. More posts in this series are here.

human rights promotion, human rights violations, law, philosophy

Human Rights Promotion (11): Intentionality Bias Causing the Surge in Human Rights Talk

Laurel and Hardy accident

First, there has indeed been a surge in human rights talk over the past decades and even centuries (see here for some evidence). This is particularly obvious for the period since the end of WWII. Human rights have become the lingua franca among the oppressed, the persecuted and the bleeding hearts worldwide, effectively replacing language based on benevolence, honor etc. (No insult intended, I’m a bleeding heart myself). There’s something about the notion of a human right that captures the strength of demands for freedom and equality like nothing else. It makes a claim sound very strong and difficult to ignore.

Other reasons for the popularity of human rights – or better the fascination with human rights – are their clarity and simplicity, their obvious universality and the fact that they cover most if not all areas of human suffering, depravity and failing, including persecution, violence, lack of freedom, discrimination, poverty, work and the family.

A further, and as yet unexplored, reason is the so-called intentionality bias: a psychological bias whereby actions are viewed as intentional even when they’re not.

Three studies tested the idea that our analyses of human behavior are guided by an “intentionality bias,” an implicit bias where all actions are judged to be intentional by default. In Study 1 participants read a series of sentences describing actions that can be done either on purpose or by accident (e.g., “He set the house on fire”) and had to decide which interpretation best characterized the action. To tap people’s initial interpretation, half the participants made their judgments under speeded conditions; this group judged significantly more sentences to be intentional. Study 2 found that when asked for spontaneous descriptions of the ambiguous actions used in Study 1 (and thus not explicitly reminded of the accidental interpretation), participants provided significantly more intentional interpretations, even with prototypically accidental actions (e.g., “She broke the vase”). Study 3 examined whether more processing is involved in deciding that something is unintentional (and thus overriding an initial intentional interpretation) than in deciding that something is unpleasant (where there is presumably no initial “pleasant” interpretation). Participants were asked to judge a series of 12 sentences on one of two dimensions: intentional/unintentional (experimental group) or pleasant/unpleasant (control group). People in the experimental group remembered more unintentional sentences than people in the control group. Findings across the three studies suggest that adults have an implicit bias to infer intention in all behavior. This research has important implications both in terms of theory (e.g., dual-process model for intentional reasoning), and practice (e.g., treating aggression, legal judgments). (source)

If there is indeed a tendency to view actions as intentional, then there will also be a tendency to frame problems in terms of human rights. For example, if the intentional actions of an oppressive majority assisted by prejudiced legislators and law enforcers are believed to be the main cause of discrimination against a racial minority, then holding those intentional actors legally and judicially responsible for rights violations makes sense and may be effective. When, on the other hand, a lot of this discrimination is in fact the result of unconscious bias, or when it is statistical discrimination rather than taste-based discrimination, then judicial action based on human rights is much less effective.

And it’s my opinion that a lot of human rights violations are unintentional, unconscious and statistical. That doesn’t mean we should stop framing the underlying problems in human rights terms, but it does mean that our efforts to do something about them should be non-legal and non-judicial. Storytelling, making people aware of their unconscious biases against certain groups of people, incentivizing people and other strategies can then be more successful in stopping rights violations.

The intentionality bias can be understood as an example of the fundamental attribution error: the tendency to over-value dispositional or personality-based explanations for the observed behaviors of others while under-valuing situational explanations for those behaviors. A simple example: if Alice saw Bob trip over a rock and fall, Alice might consider Bob to be clumsy or careless (dispositional). If Alice later tripped over the same rock herself, she would be more likely to blame the placement of the rock (situational).

More on human rights and intentionality is here, here and here. More on biases is here.

equality, racism

Racism (29): A Natural or An Acquired Vice?

fMRI scan of the Amygdala

(source)

We now have strong evidence that human evolution has produced natural tendencies to favor members of the same group and to distrust and disadvantage outsiders. Insider-outsider distinctions seem to be innate. This is the consequence of the substantial benefits of group solidarity in early human evolution, and we still live with it today.

Psychologist Catherine Cottrell at the University of Florida and her colleague Steven Neuberg at Arizona State University, argue that human prejudice evolved as a function of group living. Joining together in groups allowed humans to gain access to resources necessary for survival including food, water, and shelter. Groups also offered numerous advantages, such as making it easier to find a mate, care for children, and receive protection from others. However, group living also made us more wary of outsiders who could potentially harm the group by spreading disease, killing or hurting individuals, or stealing precious resources. To protect ourselves, we developed ways of identifying who belongs to our group and who doesn’t. Over time, this process of quickly evaluating others might have become so streamlined that it became unconscious. (source)

So, to some extent, our brains are wired for bias. Even the most liberal among us show some level of implicit bias when tested for it. All we can do is try to be aware of our prejudices as much as possible, and then correct for them.

Some want to extrapolate from these relatively uncontroversial findings and argue that racism as well is innate, even though racism is a relatively recent phenomenon unknown to early humans who almost never met members of other races.

Those who argue that racism is a natural tendency can appeal to certain findings to back up their claims. Studies have found that when whites see black faces there is increased activity in the amygdala, a brain structure associated with emotion and, specifically, with the detection of threats (source).

The problem with this sort of argument is that a biological fact doesn’t have to be innate. In fact, in this case, it has been shown that the detected brain reaction – a biological fact – does not occur in young people:

In a paper that will be published in the Journal of Cognitive Neuroscience, Eva Telzer of UCLA and three other researchers report that they’ve performed these amygdala studies – which had previously been done on adults – on children. And they found something interesting: the racial sensitivity of the amygdala doesn’t kick in until around age 14. What’s more: once it kicks in, it doesn’t kick in equally for everybody. The more racially diverse your peer group, the less strong the amygdala effect. At really high levels of diversity, the effect disappeared entirely. The authors of the study write that “these findings suggest that neural biases to race are not innate and that race is a social construction, learned over time.” (source)

In a sense, this is good news, because it means that people can be taught not to be racist, even if we can’t be taught to be completely unprejudiced.

More on race as a social construction is here. More posts in this series are here.

equality, racism

Racism (28): Shooter Bias

armed black suspect

(source unknown)

When called to the scene of an on-going crime, police officers often have to make split-second decisions whether to shoot or not. There’s chaos, darkness, running, shouting, shooting perhaps, and no time to determine who’s who and who’s likely to do what. Training can help, but in most cases officers just rely on instincts. In other words, these are the ideal situations for the revelation of personal biases.

Because of the nature of those situations, officers sometimes make mistakes and shoot innocent persons or unarmed suspects. Now, somewhat unsurprisingly, there’s research telling us that it’s more likely for white people to shoot unarmed black suspects than unarmed white suspects. This bias is called the shooter bias, and it’s not limited to police officers (as lab tests with ordinary citizens have confirmed). (More here).

It seems that a lot of people have internalized the stereotype about dangerous black men, even those who would not think of themselves as having done so.

More posts in this series are here.

causes of poverty, economics, poverty

The Causes of Poverty (68): Rich People Not Giving Enough Money to Poor People

An illustration of Andrew Carnegie, originally published on July 25, 1903

(source)

You can criticize trade policy, immigration restrictions, bad governance or any other commonly cited cause of poverty, but you shouldn’t forget the obvious: there are a lot of wealthy people in the world who could, without losing much wellbeing (due to the diminishing marginal utility of money), help to lift every single poor person in the world to a much higher level of wellbeing.
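
Here is a minimal sketch of the diminishing-marginal-utility point. The logarithmic utility function and the amounts are my own illustrative assumptions, not something claimed in this post.

    import math

    # Transfer $1,000 from someone with $1,000,000 to someone with $1,000,
    # assuming utility = log(wealth): the donor barely notices, the recipient gains a lot.
    donor_wealth, recipient_wealth, transfer = 1_000_000.0, 1_000.0, 1_000.0

    donor_loss = math.log(donor_wealth) - math.log(donor_wealth - transfer)
    recipient_gain = math.log(recipient_wealth + transfer) - math.log(recipient_wealth)

    print(round(donor_loss, 4))      # ~0.001
    print(round(recipient_gain, 4))  # ~0.6931, roughly 700 times the donor's loss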

The fact is that they could but don’t. We do have progressive taxation systems and other means of redistribution, we have development aid, we have charity etc., but none of these things yields enough money to lift everyone out of poverty. And there’s not enough public support to strengthen these redistribution mechanisms. Development aid is already unpopular at current levels, and don’t even start to talk about tax increases. The tireless efforts of Peter Singer and company to promote giving also have only a small effect.

Peter Singer

The insufficiency of giving and other means of redistribution is hard to understand, in particular given the fact that rich people are generally not very dumb and are quite capable of understanding the law of diminishing marginal utility. Of course, I know about loss aversion, the endowment effect, habit formation, the importance of status etc. But again, wealthy people should in general be the ones best able to overcome biases, to distinguish the important things in life from the unimportant, and to see how helping others can be beneficial to the helper, both psychologically and socially (helping makes you feel good, and living a good life amid misery is socially untenable). But perhaps I’m wrong about rich people.

And then there’s something else stopping us from giving more (or allowing ourselves to be taxed more, which is roughly the same thing), namely the stories we tell ourselves. For example, you often hear that it’s better to allow people to look after themselves first, so that they can create the conditions in which they unintentionally help. Allowing entrepreneurs to get rich – i.e. not taxing them too heavily and not insisting that they should give their money away rather than invest it – will be much more beneficial to the poor. Many of the poor will get a job thanks to them, and their products and services will also make the lives of many a lot better.

However, this is not incompatible with giving. True, what you give you can’t invest, but we can allow people to delay their giving until the day they no longer need to invest as much. The example of Bill Gates comes to mind. So we can accept that there is some truth to the story that free enterprise takes care of a lot of poverty, and at the same time insist that there should be more giving.

Bill Gates

Another story we tell ourselves goes like this: giving people money isn’t a very good way of helping poor people. Many of them will just waste it, middle men will confiscate it, third world governments will misuse it, people will become dependent on it etc. Well, that doesn’t seem to be completely correct. Experiments with conditional cash transfers are very promising. And even if it’s correct to some extent, that’s just an argument to be smarter when giving money: invest it in businesses, healthcare etc.

And finally, there’s the story about agency: helping people is disrespecting them as self-authors and self-governing moral creatures. You may make them materially better off – at least in the short run because dependence on help may create motivational problems in the long run – but you take away their dignity and make them psychologically and morally worse off. People may not want to be helped, and even if they do it may not be in their best long term interest. The problem with this story is not that it’s false as such; it’s that people may not have a long term if we fail to help, and that starvation or homelessness is also an affront to dignity, and surely one that is a lot worse than receiving help.

More about giving is here. More posts in this series are here.

(image source, image source)
causes of human rights violations, human rights violations, law

The Causes of Human Rights Violations (42): First is Best

prison photo

Psychological tests have shown that the first experience in a series of two or more is cognitively privileged. The order in which people experience things affects how they evaluate them: they tend to think the first option is the best.

Here’s an experiment showing how people decide that a criminal presented first is more worthy of parole:

Two criminals’ photographs, from the Florida Department of Corrections website … were used. Photos depicted 29 year-old males known to have committed the same violent crimes. Criminals were wearing identical correctional facility outfits; photos were pre-tested to be equally attractive and both expressing neutral facial expressions. …

Thirty-one participants … were asked to evaluate [the] two criminals and to determine who should “stay in jail” versus “be released on parole.” … [P]articipants automatically associated the first criminal with being more worthy of parole (rather than prison) compared to the second criminal. Regardless of which photo was presented first, it was the one presented first who was judged to be more worthy of parole. (source)

This is a form of order effect: people’s choices are often sensitive to differences in the order in which the options appear. (“First is best” is only one form of order effect; in some other cases, order effects show that the last options are privileged). As is clear from the example above, order effects can have consequences for human rights: if people are given parole on the basis of the psychological biases of those who decide rather than on the merits of the case, then equality before the law is undermined.

It wouldn’t be very difficult to imagine and test other cases.

More posts in this series are here.

lies and statistics, statistics

Lies, Damned Lies, and Statistics (39): Availability Bias

example of availability bias on a newspaper’s frontpage

(source)

This post is actually only about one type of availability bias: if a certain percentage of your friends are computer programmers or have red hair, you may conclude that the same percentage of the total population are computer programmers or have red hair. You’re not working with a random and representative sample – perhaps you like computer programmers or you are attracted to people with red hair – so you make do with the sample that you have, the one that is immediately available, and you extrapolate on the basis of that.

Most of the time you’re wrong to do so – as in the examples above. In some cases, however, it may be a useful shortcut that allows you to avoid the hard work of establishing a random and representative sample and gathering information from it. If you use a sample that’s not strictly random but also not biased by your own strong preferences such as friendship or attraction, it may give reasonably adequate information on the total population. If you have a reasonably large number of friends and if you couldn’t care less about their hair color, then it may be OK to use your friends as a proxy for a random sample and extrapolate the rates of each hair color to the total population.
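
A small simulation of that point, with invented numbers: estimating the share of redheads from a friend group that over-represents them versus from a genuinely random sample.

    import random

    random.seed(0)
    # A population in which 2% of people have red hair.
    population = (["red"] * 2 + ["other"] * 98) * 1000  # 100,000 people

    # The "available" sample: a friend group that over-represents redheads,
    # because (say) you happen to be drawn to people with red hair.
    biased_pool = [p for p in population if p == "red" or random.random() < 0.05]
    friends = random.sample(biased_pool, 200)

    # A genuinely random sample of the same size.
    random_sample = random.sample(population, 200)

    print(sum(p == "red" for p in friends) / len(friends))              # far above 0.02
    print(sum(p == "red" for p in random_sample) / len(random_sample))  # close to 0.02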

The problem is the following: because the use of available samples is sometimes OK, we are perhaps fooled into thinking that they are OK even when they’re not. And then we come up with arguments like:

  • Smoking can’t be all that bad. I know a lot of smokers who have lived long and healthy lives.
  • It’s better to avoid groups of young black men at night, because I know a number of people who have been attacked by young black men (and I’ll forget that I’ll hardly ever hear of people not having been attacked).
  • Cats must have a special ability to fall from great heights and survive, because I’ve seen a lot of press reports about such events (and I forget that I’ll rarely read a report about a cat falling and dying).
  • Violent criminals should be locked up for life because I’m always reading newspaper articles about re-offenders (again, very unlikely that I’ll read anything about non-re-offenders).

As is clear from some of the examples above, availability bias can sometimes have consequences for human rights: it can foster racial bias, it can lead to “tough on crime” policies, etc.

More posts in this series are here.

discrimination

Discrimination (10): Large Minority of U.S. Citizens Unwilling to Vote for Gay/Muslim/Atheist Presidential Candidate

Some interesting poll results from Gallup: today, practically no one believes that a Presidential candidate’s gender or skin color automatically disqualifies him or her from office, and the same is true for a candidate’s religion, at least as long as we talk about Jewish or Catholic candidates.

This wasn’t always the case: in the 1950s, only 54% of Americans would vote for a female Presidential candidate, 38% for a Black one, 63% for a Jewish one, and 67% for a Catholic one. Large minorities, and in the case of race even a large majority, were heavily biased in the 1950s. The fact that this is no longer true today is evidence of moral progress – or of political correctness if you are a cynic.

However, this doesn’t mean that all forms of prejudice are gone. Even in our day and age, large minorities of Americans are still unwilling to vote for a homosexual, atheist or Muslim Presidential candidate, even if the candidate in question is well-qualified:

prejudice in voter intentions

(source)

Republican voters are slightly more prejudiced than other voters, but not towards Mormons:

prejudice in voter intentions

(source)

More posts in this series are here.

causes of human rights violations, human rights violations, law

The Causes of Human Rights Violations (38): Status Quo Bias

dangerous intersection sign

when you’re affected by status quo bias, all places where you can go right or left rather than merely straight forward look dangerous

(source)

Status quo bias is an irrational preference for the current state of affairs, even when there are no obvious reasons why this state of affairs should be preferred over possible and knowable alternatives.

A preference for the status quo is not always a bias and can be entirely rational in some cases:

  • when the balance of costs and gains is in favor of the status quo and when all possible and knowable alternatives yield a lower balance
  • when some alternative yields a higher balance but the transition cost is too high
  • when your role in society requires that you are consistent (you’re a school teacher and you’re supposed to teach the canon, or you’re a judge and precedent and predictability are important)
  • etc.

When a preference for the status quo is a form of reasonable risk avoidance, then it’s also wrong to call it a bias: it’s true that sticking with what worked in the past is a safe option when the consequences or costs of alternatives – compared to the cost of existing arrangements – are uncertain or unknowable.

However, people also tend to stick with proven options when the respective costs of different options are clear and an alternative is less costly than the status quo. We sometimes even prefer the status quo when costs aren’t an issue at all. In those cases, it’s correct to call our preferences a bias. Maybe the bias occurs because people don’t want to invest the effort of looking for alternatives and calculating all the costs. Status quo requires no mental effort. Choice is difficult, hence the tendency to do nothing. Or maybe cost calculations – when they are performed – are distorted because people wrongly attribute goodness to longevity. People often believe that something must be worth something if it has existed or if it has been practiced for a long time.

Cost calculations can also be biased because people tend to weigh the potential losses of switching from the status quo more heavily than the potential gains. This is called loss aversion – people prefer avoiding losses to acquiring gains even if the gains objectively outweigh the losses – and it could explain a preference for the status quo in the presence of alternatives that are objectively less costly. But status quo bias occurs even when there are no losses or gains from alternatives (experiments have shown that just designating an option as the status quo makes people rate it more highly). Hence, status quo bias is not always a form of loss aversion. Maybe regret avoidance plays a role (a past experience of regret teaches people to avoid decisions that imply change). Or an overvaluation of the virtue of consistency. Or the sunk cost fallacy: American involvement in Vietnam continued for years despite massive loss of lives, precisely because the losses already incurred made admitting defeat seem like a waste.
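
A minimal numerical illustration of loss aversion as just described. The 50/50 gamble is invented, and the loss weight of roughly 2 is a commonly cited estimate rather than anything claimed here.

    # Prospect-theory-style evaluation of a 50/50 gamble: win $100 or lose $80.
    # The expected monetary value is +$10, yet a loss-averse agent turns it down.
    loss_weight = 2.0  # losses assumed to weigh about twice as much as gains
    p = 0.5

    subjective_value = p * 100 + p * (-80) * loss_weight
    print(subjective_value)  # -30.0 < 0: keeping the status quo feels better,
                             # even though the gamble pays +$10 on average.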

The Vietnam example shows how status quo bias can cause human rights violations. Other examples:

  • The use of precedent in judicial decisions even if those decisions violate human rights (overvaluing consistency).
  • Female genital mutilation often has no other justification than the fact that it has been practiced a long time, that it’s traditional (overvaluing longevity) and that abandoning it would cause disaster.

Something on the related endowment effect is here. More posts in this series are here.

causes of human rights violations, human rights violations, law

The Causes of Human Rights Violations (36): Anchoring Effect

The anchoring effect is a psychological bias that leads us to rely too heavily on one piece of information – often even information that is totally irrelevant – when making decisions. Once the anchor is set, there is a bias toward adjusting or interpreting other information to reflect the “anchored” information. I can best explain this with an example. It’s well known that judges do not simply apply legal rules to the facts of a case in a purely rational or mechanical manner. In fact, the decisions of judges are influenced by political, social and psychological biases, one of those being the anchoring effect.

German judges with an average of more than fifteen years of experience on the bench first read a description of a woman who had been caught shoplifting, then rolled a pair of dice that were loaded so every roll resulted in either a 3 or a 9. As soon as the dice came to a stop, the judges were asked whether they would sentence the woman to a term in prison greater or lesser, in months, than the number showing on the dice. Finally, the judges were instructed to specify the exact prison sentence they would give to the shoplifter. On average, those who had rolled a 9 said they would sentence her to 8 months; those who rolled a 3 said they would sentence her to 5 months; the anchoring effect was 50%. (source)
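
The 50% figure in the quote is presumably the standard anchoring index – the shift in average judgments divided by the shift in anchors – which can be reproduced as follows:

    # Reproducing the "anchoring effect was 50%" figure from the dice study above.
    high_anchor, low_anchor = 9, 3      # the two possible dice totals, read as months
    high_sentence, low_sentence = 8, 5  # average sentences handed down, in months

    anchoring_index = (high_sentence - low_sentence) / (high_anchor - low_anchor)
    print(anchoring_index)  # 0.5, i.e. 50%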

What does this have to do with human rights or with the causes of human rights violations? Well, if you replace the loaded dice in the quote above with the sentencing demands of prosecutors or even the demands of the “public”, you will not be surprised to find unfairness in sentencing:

The results of a recent study of ours (Englich & Mussweiler, 2001) indicate that accomplished trial judges with an average of more than 15 years of experience were influenced by sentencing demands, even if the demands were made by non-experts. In fact, the magnitude of this influence proved to be dramatic. Judges who considered a high demand of 34 months gave final sentences that were almost 8 months longer than judges who considered a low demand of 12 months. A difference of 8 months in prison for the identical crime. Notably, this influence occurred although both demands were explicitly made by a non-expert. (source)

Sentencing demands can be an effective “anchor” leading to violations of those human rights that require fairness in criminal trials. Skilled but ruthless prosecutors can use this in order to influence even experienced judges and to have them impose unfair sentences.

Obviously, the anchoring effect isn’t limited to criminal trials, and it’s not just the anchoring effect that can introduce a bias in judges’ rulings. I’m not sure if I already mentioned this incredible finding:

proportion of rulings in favor of prisoners

The percentage of judges’ rulings that are favorable to the accused drops gradually from about 65% to nearly zero within each decision session and returns abruptly to 65% after a break. This indicates that judges are swayed by things that shouldn’t have any bearing on their decisions.

I’m still looking for other examples of rights violations caused by the anchoring effect, but in the meantime I should mention that it must also be possible to use the effect to improve respect for human rights.

Something about the related topic of unconscious priming is here. More posts in this series about the causes of rights violations are here.

(image source)
causes of human rights violations, human rights violations, justice

The Causes of Human Rights Violations (32): The Just World Fallacy

Dr. Pangloss, who often uses the phrase "best of all possible worlds"

(source)

Here’s another psychological bias that causes human rights violations to persist: the just world fallacy.

It seems that we want to believe that the world is fundamentally just. This strong desire causes us to rationalize injustices that we can’t otherwise explain: for example, we look for things that the victim might have done to deserve the injustice. The culture of poverty is a prime example, as is the “she asked for it” explanation of rape. This fallacy or bias is obviously detrimental to the struggle against human rights violations, since it obscures the real causes of those violations. The belief in a just world makes it difficult to make the world more just.

And even if its effect on human rights was neutral or positive, the fallacy would be detrimental in other ways: it doesn’t help our understanding of the world to deny that many of those who are lucky and who are treated justly haven’t done anything to deserve it, or that many of those who inflict injustices get away with it. The prevalence of the fallacy can be observed in popular culture, in which the villain always gets what he or she deserves; the implication is that those who “get” something, also deserve it.

Psychologists have come up with different possible explanations of the just world fallacy. It may be a way of protecting ourselves: if injustices are generally the responsibility of the victims themselves, then we may be safe as long as we avoid making the mistakes they made. The bias lessens our vulnerability, or rather our feeling of vulnerability, and therefore makes us feel better. Another explanation focuses on the anxiety and alienation that comes with the realization that we live in a world rife with unexplained, unexplainable and unsolvable injustices. The fallacy is then akin to religious teachings about the afterlife, which are sometimes viewed as mechanisms for coping with the anxiety and alienation caused by mortality. Melvin Lerner explains the just world fallacy as a form of cognitive dissonance:

the sight of an innocent person suffering without possibility of reward or compensation motivated people to devalue the attractiveness of the victim in order to bring about a more appropriate fit between her fate and her character. (source)

All this argues against making desert central to our theories of justice: if desert is difficult to determine because there are biases involved, then surely desert can’t be a good basis of a theory of justice.

An interesting aside: it seems that the opposite bias also exists. The so-called “mean world syndrome” is a term coined by George Gerbner to describe a phenomenon whereby violent content of mass media makes viewers believe that the world is more dangerous than it actually is. Indeed, perceptions of violence and criminality often do not correspond to real levels. People who consume a large amount of violent media or who often read the crime sections of sensationalist newspapers tend to overestimate the prevalence of violence and crime.

More on the possible causes of rights violations here.

causes of human rights violations, human rights violations

The Causes of Human Rights Violations (29): The Bystander Effect

artistic rendering of the Kitty Genovese case

(source, the murder of Kitty Genovese is the archetypical although contested example of the bystander effect)

The bystander effect can explain the persistence of certain types or instances of rights violations. If many people witness a person in distress, then it’s less likely that any one person will help. “I could help, but I’m sure someone will”. Numerous experiments have proven the effect. Virtually all of them find that the presence of others inhibits helping, often by a large margin. The probability of help is indeed inversely related to the number of bystanders, although not necessarily proportionally. More precisely, the effect occurs when bystanders are strangers; when bystanders are friends help is usually forthcoming.

What are the reasons for this effect? Hard to tell, but social influence may be one: bystanders monitor the reactions of other people in an emergency situation to see if others think that it is necessary to intervene. If everyone first looks at the others, then you have a vicious circle of influence. Since everyone is doing exactly the same thing – i.e. nothing – they all conclude from the inaction of others that help is not needed. Diffusion of responsibility may be another reason: when a lot of people are present, they all assume that others carry more responsibility to intervene, because others may be seen as closer or stronger or first on the spot (this is also the thinking behind the firing squad or the Japanese procedure for capital punishment). The fear of being harmed or of offering unwanted assistance may also explain the effect.
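
One way to see how the probability of help can fall with group size even though every extra bystander is a potential helper: if each individual’s willingness to help drops fast enough, the chance that anyone at all helps drops too. The individual probabilities below are invented, loosely shaped like the classic experiments.

    # Toy model: n strangers each help independently with probability p_individual.
    # The chance that the victim gets any help at all is then 1 - (1 - p)^n.
    # The p values are invented; they only illustrate the qualitative pattern.
    def p_any_help(n, p_individual):
        return 1 - (1 - p_individual) ** n

    for n, p in [(1, 0.75), (2, 0.45), (5, 0.20)]:
        print(n, round(p_any_help(n, p), 2))
    # 1 0.75
    # 2 0.7
    # 5 0.67  (more bystanders, yet a lower chance that anyone helps)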

Increasing urbanization and improved knowledge of everyday events (by way of better information systems such as the internet) can make the bystander effect more common, and can therefore make it more difficult to stop rights violations.

bystander effect

(source)

There’s a peculiar reaction to the bystander effect described here. And here are some notorious cases of the effect. More on the possible causes of rights violations here.

data, discrimination, discrimination and hate, equality, work

Discrimination (9): The Beauty Bias

Although someone’s looks and attractiveness aren’t explicitly mentioned in human rights law as prohibited grounds of discrimination, we can safely say that the general prohibition on discrimination does apply to discrimination based on appearance, just like it applies to discrimination against people belonging to a certain race, sex, religion etc. This statement may sound extreme – and some will call it the first step on a slippery slope – but I think it’s a justified statement given the fact that appearance-based discrimination can be just as harmful as more traditional types of discrimination.

People generally prefer beautiful individuals and express this preference by giving them certain advantages. One symptom of the beauty bias is the beauty premium: in the U.S., and probably in most other countries, an attractive person earns more: the premium is about $250,000 over the course of a lifetime, compared to the least attractive. Monthly averages point to a difference between 10 and 12%, even in professions where looks wouldn’t seem to matter. Daniel Hamermesh (in “Beauty Pays”) found evidence of differences in promotions, risks of unemployment, credit facilities etc.

A number of role-playing, laboratory studies have demonstrated that more attractive men are more often hired, but the laboratory data for women are less consistent. … more attractive men had higher starting salaries and they continued to earn more over time. For women, there was no effect of attractiveness for starting salaries, but more attractive women earned more later on in their jobs. (source)

There’s no reason to believe that beautiful people deserve this kind of advantage since they generally aren’t more intelligent, productive, etc. (although some disagree about productivity). It’s simply the case that people who decide about employment, pay and career prefer beautiful people. Beauty brings with it a degree of self-confidence, I guess, which may persuade (possible) employers and make them believe – correctly or not – that with higher self-confidence comes higher productivity. But that isn’t all that’s happening:

even when the experimenters controlled for self-confidence, they found that employers overestimated the productivity of beautiful people. (source)

Marilyn Monroe

So it looks like the beauty bias is just that, a bias, much like the bias against women and blacks.

The beauty bias can be measured because a

common standard of beauty does exist. Based on an attractiveness scale of one to five, most people surveyed will come to near agreement on a test subject’s looks, a finding that holds true across all cultures. (source)

By the way, the beauty bias operates in other areas as well. Beautiful candidates are more successful in democratic elections. And ugly criminals face rough justice:

Stephen Ceci and Justin Gunnell, two researchers at Cornell University, gave students case studies involving real criminal defendants and asked them to come to a verdict and a punishment for each. The students gave unattractive defendants prison sentences that were, on average, 22 months longer than those they gave to attractive defendants. (source)

Also:

11 percent of surveyed couples say they would abort a fetus predisposed toward obesity. College students tell surveyors they’d rather have a spouse who is an embezzler, drug user, or a shoplifter than one who is obese. (source)

Of course, earning a little less and having a smaller chance of being elected aren’t the world’s gravest human rights violations. It’s not as if we still banish the ugly from the public square:

In the 19th century, many American cities banned public appearances by “unsightly” individuals. A Chicago ordinance was typical: “Any person who is diseased, maimed, mutilated, or in any way deformed, so as to be an unsightly or disgusting subject … shall not … expose himself to public view, under the penalty of a fine of $1 for each offense.” (source)

However, the evidence above suggests that beauty does play an important role in many different areas, making the impact of appearance based discrimination potentially large. Hence the obvious question: should ugly people be protected against discrimination? Should there be a law making it illegal to pay people more simply because of their looks? After all, there seems to be no difference between this form of discrimination and more traditional forms. All forms of discrimination impose a disadvantage on a group of people for no other reason than their group membership.

However, legal protections would require a public determination of beauty and ugliness. And they would require the ugly to step forward and claim damages or benefits. That’s stigmatizing, and open to discussion: there is, as stated, a common standard of beauty, but there can still be disagreement on specific cases, especially along the margins. Beauty is to some extent in the eye of the beholder, and if you’re labeled as ugly by some or even by the majority, there may still be others who think the world of you. Including yourself. De gustibus non est disputandum. The same isn’t true for gender, sexual orientation and race (with some caveats for the latter). When governments sanction a universal scale of attractiveness we’re going down a dangerous route because this can ossify opinions about beauty and lead to even more discrimination. And then there’s the issue of self-esteem: would people be willing to apply for official recognition of their ugliness, even if the money is good?

Posts on similar subjects, such as colorism and heightism, are here and here. More of my drawings are here.

statistical jokes, statistics

Statistical Jokes (30): No Way to Bias a Coin Flip

coin toss

(source)

This excerpt from a scientific paper is not a joke, but it’s funny nonetheless, at least to me:

Dice can be loaded — that is, one can easily alter a die so that the probabilities of landing on the six sides are dramatically unequal. However, it is not possible to bias a coin flip — that is, one cannot, for example, weight a coin so that it is substantially more likely to land “heads” than “tails” when flipped and caught in the hand in the usual manner. Coin tosses can be biased only if the coin is allowed to bounce or be spun rather than simply flipped in the air. …

The law of conservation of angular momentum tells us that once the coin is in the air, it spins at a nearly constant rate (slowing down very slightly due to air resistance). At any rate of spin, it spends half the time with heads facing up and half the time with heads facing down, so when it lands, the two sides are equally likely (with minor corrections due to the nonzero thickness of the edge of the coin) … Jaynes (1996) explained why weighting the coin has no effect here (unless, of course, the coin is so light that it floats like a feather): a lopsided coin spins around an axis that passes through its center of gravity, and although the axis does not go through the geometrical center of the coin, there is no difference in the way the biased and symmetric coins spin about their axes. (source)
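
A toy simulation of the constant-spin model described in the excerpt: the coin’s orientation at the catch depends only on spin rate times flight time, so a weighted coin behaves exactly like a fair one under this model. The spin and flight-time ranges are rough assumptions of my own.

    import math
    import random

    # Constant-spin model of a coin flipped and caught in the hand: it leaves the
    # hand heads up, rotates at a constant rate, and is caught after a random time.
    # Note that the coin's weighting never enters the model, which is the point.
    random.seed(1)
    trials = 100_000
    heads = 0
    for _ in range(trials):
        spin = random.uniform(35, 45)        # revolutions per second (assumed range)
        flight = random.uniform(0.4, 0.6)    # seconds in the air (assumed range)
        angle = 2 * math.pi * spin * flight  # total rotation at the moment of the catch
        if math.cos(angle) > 0:              # heads side still facing up
            heads += 1
    print(heads / trials)  # ~0.5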

More on bias here; more statistical jokes here.

democracy, freedom, health, international relations, why do countries become/remain democracies

Why Do Countries Become/Remain Democracies? Or Don’t? (19): Psychological Reactions to the Threat of Disease

microscope

(source)

There sure are many reasons why countries become or fail to become democracies. In this blog series I’ve mentioned climate, geography, inequality, external triggers, prosperity, religion, resources, education etc. An original approach to this question looks at psychological reactions to the threat of disease:

Conventional explanations for a country’s political system would draw on its history, economy and culture. Randy Thornhill from the University of New Mexico, Albuquerque, however, thinks it might be determined by the threat of disease in a region. This triggers psychological biases, which originally evolved to prevent illness spreading, that also hinder the emergence of democratic ideals. (source)

The logic is that people develop psychological reactions – call them biases – that protect them against infectious diseases, and that these reactions in turn make it difficult to adopt democracy, individualism and an attitude of criticism of authority.

germs

The starting point for Thornhill and Fincher’s thinking is a basic human survival instinct: the desire to avoid illness. In a region where disease is rife, they argue, fear of contagion may cause people to avoid outsiders, who may be carrying a strain of infection to which they have no immunity. Such a mindset would tend to make a community as a whole xenophobic, and might also discourage interaction between the various groups within a society – the social classes, for instance – to prevent unnecessary contact that might spread disease.

What is more, Thornhill and Fincher argue, it could encourage people to conform to social norms and to respect authority, since adventurous behaviour may flout rules of conduct set in place to prevent contamination. Taken together, these attitudes would discourage the rich and influential from sharing their wealth and power with those around them, and inhibit the rest of the population from going against the status quo and questioning the authority of those above them. This is clearly not a situation conducive to democracy. (source, source)

What is initially useful for public health becomes detrimental to self-government:

[S]pecific behavioural manifestations of collectivism (e.g. ethnocentrism, conformity) can inhibit the transmission of pathogens; and so we hypothesize that collectivism (compared with individualism) will more often characterize cultures in regions that have historically had higher prevalence of pathogens. Drawing on epidemiological data and the findings of worldwide cross-national surveys of individualism/collectivism, our results support this hypothesis: the regional prevalence of pathogens has a strong positive correlation with cultural indicators of collectivism and a strong negative correlation with individualism. (source)

democracy and infection correlation

(source, dots represent countries)
Standard
causes of human rights violations, data, discrimination and hate, equality, health, trade, work

The Causes of Human Rights Violations (23): Unconscious Bias

thinking_man rodin

No matter how egalitarian, unbiased and unprejudiced we claim to be and believe to be, underneath it all many of us are quite different.

If you ask people whether men and women should be paid the same for doing the same work, everyone says yes. But if you ask volunteers how much a storekeeper who runs a hardware store ought to earn and how much a storekeeper who sells antique china ought to earn, you will see that the work of the storekeeper whom volunteers unconsciously believe to be a man is valued more highly than the work of the storekeeper whom volunteers unconsciously assume is a woman. If you ask physicians whether all patients should be treated equally regardless of race, everyone says yes. But if you ask doctors how they will treat patients with chest pains who are named Michael Smith and Tyrone Smith, the doctors tend to be less aggressive in treating the patient with the black-sounding name. Such disparities in treatment are not predicted by the conscious attitudes that doctors profess, but by their unconscious attitudes—their hidden brains. (source)

And even if most of our actions are guided by our conscious beliefs, some will be caused by unconscious prejudice, in which case we’ll have identified a cause of discrimination, a cause that will be very hard to correct.

More on the related topic of unconscious discrimination is here. More about prejudice here, and about bias here.

Standard
data, discrimination and hate, equality, justice, law

Racism (12): Implicit Racism in Criminal Justice

Overt manifestations of racial or other types of group-based hate, prejudice or discrimination are relatively rare these days because they have become increasingly unacceptable. However, the racist or prejudiced ideas that form the basis of such overt manifestations aren’t necessarily less common than they used to be. Or perhaps the word “idea” is too strong. “Unconscious biases” or even “instincts” may be more appropriate terms. “Instincts” in this context is a term used to link contemporary racism and prejudice to lingering aspects of early human evolution encouraging distrust of other groups as a survival strategy.

Indeed, certain psychological experiments have shown how easy it is to induce people to hateful behavior towards members of other groups, even people who self-describe as strongly anti-prejudice. There have also been some notorious cases of the effect of hate propaganda on people’s behavior.

On the other hand, there are some indicators that suggest a decrease in the levels of racism, and there are theories that say that it should decrease. However, other data suggest that “unconscious biases” are still very strong:

[T]his Article proposes and tests a new hypothesis called Biased Evidence Hypothesis. Biased Evidence Hypothesis posits that when racial stereotypes are activated, jurors automatically and unintentionally evaluate ambiguous trial evidence in racially biased ways. Because racial stereotypes in the legal context often involve stereotypes of African-Americans and other minority group members as aggressive criminals, Biased Evidence Hypothesis, if confirmed, could help explain the continued racial disparities that plague the American criminal justice system.

To test Biased Evidence Hypothesis, we designed an empirical study that tested how mock-jurors judge trial evidence. As part of an “evidence slideshow” in an armed robbery case, we showed half of the study participants a security camera photo of a dark-skinned perpetrator and the other half of the participants an otherwise identical photo of a lighter-skinned perpetrator. We then presented participants with evidence from the trial, and asked them to judge how much each piece of evidence tended to indicate whether the defendant was guilty or not guilty. The results of the study supported Biased Evidence Hypothesis and indicated that participants who saw a photo of a dark-skinned perpetrator judged subsequent evidence as more supportive of a guilty verdict compared to participants who saw a photo of a lighter-skinned perpetrator. (source)
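
The basic analysis behind this kind of two-condition experiment is simple to sketch: randomly assign participants to one of the two photos, then test whether the mean evidence ratings differ. The ratings below are invented placeholders, not the study’s data:

```python
# A sketch of the analysis behind a two-condition experiment like this one:
# randomly assigned groups rate the same evidence, and we test whether the mean
# ratings differ. The ratings are hypothetical, not taken from the study.
from scipy import stats

dark_skin_condition = [5.1, 4.8, 5.6, 5.9, 4.7, 5.3, 5.8, 5.0]    # hypothetical 1-7 guilt-support ratings
light_skin_condition = [4.2, 4.9, 4.1, 4.6, 5.0, 3.9, 4.4, 4.5]

result = stats.ttest_ind(dark_skin_condition, light_skin_condition, equal_var=False)
print(f"Welch's t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A small p-value would indicate that the photo manipulation shifted how
# supportive of guilt the very same evidence was judged to be.
```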

Perhaps this can indeed explain part of the racial discrepancies in incarceration rates or execution rates (see also here), as well as the phenomenon of racial profiling. It could also explain this.

Maybe racism hasn’t decreased but has just become more difficult to spot, including for the racists themselves. Swastikas and KKK hoods aren’t so common anymore, and instead we have to look for unconscious biases, implicit racism or even unintentional racism.

In order to test your own unconscious biases you can take a racism test here. More on racism is here. Something on the related topic of unconscious discrimination is here.

Standard
discrimination, discrimination and hate, equality, law, statistics, work

Discrimination (6): Should People Be Liable For Unconscious Discrimination?

First of all, it’s evident that people often have unconscious motives for their actions. For example, parents “wishing the best” for their children can act out of frustration about their own past failures. So it’s likely that some acts of discrimination are based on similar “deep” motives. Some of us who genuinely believe that we are colorblind may still avoid black neighborhoods at night, cross a lonely street when a tall black male comes our way, or favor a CV sent in by someone with a “Caucasian” name. Tests have shown that people are more biased than they admit to themselves. (You can test your own racism here).

So we may be violating anti-discrimination laws without “really” and consciously wanting to. You could say that in such cases we shouldn’t be prosecuted for breaking the law, because there is no intent on our part. Discrimination takes place but no one really wants it to take place. True, normally there’s an intent requirement when deciding liability: if you drive your car and you hit someone who crosses the road where he or she shouldn’t do so, you’re not criminally liable. You killed a person but didn’t intend to. In some cases, the lack of intent diminishes rather than removes liability: if you’re in a fight with someone and the other person dies because of your actions, you won’t be charged with murder but with the lesser crime of manslaughter, because you didn’t intend to kill.

As the example of manslaughter already makes clear, intent isn’t always necessary for liability (another example would be Eichmann). Hence, lack of intent can’t be the reason not to make unconscious discrimination a crime.

Anyway, intent or the absence of it is often very difficult to prove. In the case of murder or manslaughter, you can use witness accounts or physical evidence, you can reconstruct the crime and try to figure out if the killing was planned or intended, or you can interrogate the perpetrator, and even then it’s rarely easy. Things are much more difficult still in cases of unconscious discrimination. Looking for intent basically means trying to look inside people’s minds, which is hard enough, and when people fool their own minds it becomes even harder.

If we accept that unconscious discrimination should be a crime in certain cases, and perhaps equivalent to conscious discrimination, then the problem is how to prove that it took place. In the case of conscious discrimination, you can often rely on the utterances of the person(s) who discriminate. That’s evidently impossible in the case of unconscious discrimination. Perhaps you can’t prove it in individual cases – if one black person’s CV is rejected, it’s probably impossible to say it’s because of implicit or unconscious racism. However, if a company rejects a large number of such CVs, and correcting for other factors such as education or skill level doesn’t remove the bias in the distribution, then you may perhaps have evidence of discrimination (a technique that’s useful in cases of conscious discrimination as well, by the way). So you would need to rely on statistical analysis, something that usually isn’t done in the determination of criminal liability. The fact that x % of all killings are manslaughter doesn’t mean that everyone charged with a killing has an x % chance of “getting away” with manslaughter. The decision to convict someone of murder or manslaughter is always made on an individual basis and not a statistical one, although the past conduct of the suspect can sometimes come into play.
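
To make the idea concrete, here’s a minimal sketch of such an aggregate test – a simple chi-square test on hypothetical callback counts; a real analysis would also control for education and experience, for example with a logistic regression:

```python
# A sketch of the kind of aggregate test described above: do callback rates for
# two groups of otherwise comparable applicants differ by more than chance would
# allow? The counts are hypothetical; a real analysis would also control for
# education and experience (e.g. with a logistic regression).
from scipy.stats import chi2_contingency

#                     callback   no callback
contingency_table = [[      90,          910],   # CVs with "white-sounding" names
                     [      60,          940]]   # otherwise similar CVs, "black-sounding" names

chi2, p_value, dof, expected = chi2_contingency(contingency_table)
print(f"chi-square = {chi2:.1f}, p = {p_value:.4f}")
# A very small p-value says the gap is unlikely to be a fluke of this sample;
# it doesn't, by itself, tell us whether the discrimination was conscious.
```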

An additional difficulty: if we accept that laws aren’t only meant to punish but also to prevent and deter, it seems that the latter goal is futile in the case of unconscious discrimination. People who are not aware that they engage in discriminatory activities will hardly be persuaded by laws telling them to stop doing so.

I’m personally not yet ready to take a firm position on these issues. For more information on this topic, take a look at this interesting paper.

Standard
democracy, lies and statistics, statistics

Lies, Damned Lies, and Statistics (31): Common Problems in Opinion Polls

truman and dewey and opinion polls

The classic polling error is from a poll on the 1948 Presidential election in the U.S. On Election night, the Chicago Tribune printed the headline DEWEY DEFEATS TRUMAN, which turned out to be mistaken. The reason the Tribune was mistaken is that their editor trusted the results of a phone survey. Survey research was then in its infancy, and few academics realized that a sample of telephone users was not representative of the general population. Telephones were not yet widespread, and those who had them tended to be prosperous and have stable addresses.

Opinion polls or surveys are very useful tools in human rights measurement. We can use them to measure public opinion on certain human rights violations, such as torture or gender discrimination. High levels of public approval of such rights violations may make them more common and more difficult to stop. And surveys can measure what governments don’t want to measure. Since we can’t trust oppressive governments to give accurate data on their own human rights record, surveys may fill in the blanks. Although even that won’t work if the government is so utterly totalitarian that it doesn’t allow private or international polling of its citizens, or if it has scared its citizens to such an extent that they won’t participate honestly in anonymous surveys.

But apart from physical access and respondent honesty in the most dictatorial regimes, polling in general is vulnerable to mistakes and fraud (fraud being a conscious mistake). Here’s an overview of the issues that can mess up public opinion surveys, inadvertently or not.

Wording effect

There’s the well-known problem of question wording, which I’ve discussed in detail before. Pollsters should avoid leading questions, questions that are put in such a way that they pressure people to give a certain answer, questions that are confusing or easily misinterpreted, wordy questions, questions using jargon, abbreviations or difficult terms, double or triple questions etc. Also quite common are “silly questions”, questions that don’t have meaningful or clear answers: for example “is the catholic church a force for good in the world?” What on earth can you answer to that? Depends on what elements of the church you’re talking about, what circumstances, country or even historical period you’re asking about. The answer is most likely “yes and no”, and hence useless.

The importance of wording is illustrated by the often substantial effects of small modifications in survey questions. Even the replacement of a single word by another, related word, can radically change survey results: see this post for examples.

Of course, it’s often claimed that biased poll questions corrupt the average survey responses, but that the overall results of the survey can still be used to learn about time trends and differences between groups. As long as you make the same mistake consistently, you may still find something useful. That’s true, but it’s no reason not to take care with wording. The same trends and differences can be seen in survey results produced with correctly worded questions.

Order effect or contamination effect

Answers to questions depend on the order they’re asked in, and especially on the questions that preceded. Here’s an example:

Fox News yesterday came out with a poll that suggested that just 33 percent of registered voters favor the Democrats’ health care reform package, versus 55 percent opposed. … The Fox News numbers on health care, however, have consistently been worse for Democrats than those shown by other pollsters. (source)

The problem is not the framing of the question. This was the question: “Based on what you know about the health care reform legislation being considered right now, do you favor or oppose the plan?” Nothing wrong with that.

So how can Fox News ask a seemingly unbiased question of a seemingly unbiased sample and come up with what seems to be a biased result? The answer may have to do with the questions Fox asks before the question on health care. … the health care questions weren’t asked separately. Instead, they were questions #27-35 of their larger, national poll. … And what were some of those questions? Here are a few: … Do you think President Obama apologizes too much to the rest of the world for past U.S. policies? Do you think the Obama administration is proposing more government spending than American taxpayers can afford, or not? Do you think the size of the national debt is so large it is hurting the future of the country? … These questions run the gamut slightly leading to full-frontal Republican talking points. … A respondent who hears these questions, particularly the series of questions on the national debt, is going to be primed to react somewhat unfavorably to the mention of another big Democratic spending program like health care. And evidently, an unusually high number of them do. … when you ask biased questions first, they are infectious, potentially poisoning everything that comes below. (source)

If you want to avoid this mistake – if we can call it that (since in this case it’s quite likely to have been a “conscious mistake” aka fraud) – randomizing the question order for each respondent might help.
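
Randomization of this kind is trivial to implement. A minimal sketch, with placeholder questions:

```python
# A minimal sketch of one way to neutralize order effects: give each respondent
# the questions in an independently shuffled order. The questions are placeholders.
import random

questions = [
    "Do you favor or oppose the health care reform plan?",
    "Do you think the national debt is hurting the future of the country?",
    "Do you approve or disapprove of the president's foreign policy?",
]

def questionnaire_for(respondent_id: int) -> list[str]:
    rng = random.Random(respondent_id)  # seed per respondent, so the order can be reproduced later
    shuffled = questions.copy()
    rng.shuffle(shuffled)
    return shuffled

print(questionnaire_for(1))
print(questionnaire_for(2))
```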

Similar to the order effect is the effect created by follow-up questions. It’s well-known that follow-up questions of the type “but what if…” or “would you change your mind if …” change the answers to the initial questions.

Bradley effect

Tom Bradley

The Bradley effect is a theory proposed to explain observed discrepancies between voter opinion polls and election outcomes in some U.S. government elections where a white candidate and a non-white candidate run against each other.

Unlike the wording and order effects, this isn’t an effect created – intentionally or not – by the pollster, but by the respondents. The theory proposes that some voters tend to tell pollsters that they are undecided or likely to vote for a black candidate, and yet, on election day, vote for the white opponent. The effect was named after Los Angeles Mayor Tom Bradley, an African-American who lost the 1982 California governor’s race despite being ahead in voter polls going into the elections.

The probable cause of this effect is the phenomenon of social desirability bias. Some white respondents may give a certain answer for fear that, by stating their true preference, they will open themselves to criticism of racial motivation. They may feel under pressure to provide a politically correct answer. The existence of the effect is, however, disputed. (Some say the election of Obama disproves the effect, thereby making another statistical mistake).

Fatigue effect

Another effect created by the respondents rather than the pollsters is the fatigue effect. As respondents grow increasingly tired over the course of a long interview, the accuracy of their responses may decrease. They may look for shortcuts to shorten the interview; they may figure out a pattern (for example that only positive or only negative answers trigger follow-up questions). Or they may just give up halfway, causing incompletion bias.

However, this effect isn’t entirely due to respondents. Survey design can be at fault as well: there may be repetitive questioning (sometimes deliberately for control purposes), the survey may be too long or longer than initially promised, or the pollster may want to make his life easier and group different polls into one (which is what seems to have happened in the Fox poll mentioned above, creating an order effect – but that’s the charitable view of course). Fatigue effect may also be caused by a pollster interviewing people who don’t care much about the topic.

Sampling effect

Ideally, the sample of people who are to be interviewed for a survey should be a fully random subset of the entire population. That means that every person in the population should have an equal chance of being included in the sample, and that there shouldn’t be self-selection (a typical flaw in many if not all internet surveys of the “Polldaddy” variety) or self-deselection. Both reduce the randomness of the sample, which can be seen from the fact that self-selection leads to polarized results. The size of the sample also matters: samples that are too small produce imprecise results with wide margins of error.
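
To give an idea of the effect of sample size on precision, here’s a small sketch of the rough 95% margin of error for an estimated proportion (it says nothing about self-selection, which no sample size can fix):

```python
# A sketch of how precision depends on sample size: the rough 95% margin of
# error for an estimated proportion shrinks with the square root of the sample.
# Note that this addresses random error only, not self-selection bias.
from math import sqrt

def margin_of_error(p: float, n: int) -> float:
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return 1.96 * sqrt(p * (1 - p) / n)

for n in (100, 400, 1000, 10000):
    print(f"n = {n:>5}: ±{margin_of_error(0.5, n):.1%}")
# n =   100: ±9.8%
# n =   400: ±4.9%
# n =  1000: ±3.1%
# n = 10000: ±1.0%
```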

Even the determination of the total population from which the sample is taken can lead to biased results. And yes, that has to be determined… For example, do we include inmates, illegal immigrants etc. in the population? See here for some examples of the consequences of such choices.

House effect

A house effect occurs when a particular pollster’s surveys systematically lean toward one party’s candidates; Rasmussen is known for this.

I probably forgot an effect or two. Fill in the blanks if you care. Go here for other posts in this series.

Standard
causes of human rights violations, discrimination and hate, education, equality, philosophy, poverty

The Causes of Human Rights Violations (18): Stereotype Threat and Michel Foucault

There’s an interesting phenomenon called the stereotype threat, or, in other words, the threat of stereotypes about one’s capacity to succeed at something: when the belief that people like you (African-Americans, women, etc) are worse at a particular task than the comparison group (whites, men, etc) is made prominent, you perform worse at that task. (Some say that this is a type of confirmation bias, a tendency for people to prefer information that confirms their existing preconceptions – they selectively collect new evidence, interpret evidence in a biased way or selectively recall information from memory. But I’m not convinced).

A typical example of stereotype threat manifests itself when a categorical group is told or shown that their group’s performance is worse than other groups before giving them a test; the test results are often abnormally lower than for control groups. For example, on a mathematics test, if you remind a group of girls that boys tend to do better on this type of test, it is likely that the girls will do more poorly on the test than they would have had they not been told. (source)

Here’s another example:

Stereotype threat

(source)

[Irwin] Katz found that Blacks were able to score better on an IQ test, if the test was presented as a test of eye-hand coordination. Blacks also scored higher on an IQ test when they believed the test would be compared to that of other blacks. Katz concluded that his subjects were thoroughly aware of the judgment of intellectual inferiority held by many white Americans. With little expectation of overruling this judgment, their motivation was low, and so were their scores. (source)

Indeed, that could be one explanation of the stereotype threat. Or it could simply be that people are anxious about confirming the stereotype, and that this anxiety provokes stress because of the will to do well and prove the prejudice wrong. Ironically, the anxiety and stress make them less able to perform at normal levels, so they score worse. Or it could be something more sinister: something like internalization of oppression. People who have suffered prejudice for centuries can perhaps convince themselves of their group’s inferiority. When this inferiority is made explicit beforehand, they are reminded of it, and somehow their recollected feelings of inferiority drag down their performance.

So inferior test results – compared to control groups who haven’t been exposed to explicit stereotypes before the test – can be caused by

  • a lack of motivation to disprove entrenched and difficult to change prejudices
  • stress and anxiety, or
  • recollected feelings of inferiority.

Or perhaps something else I’m not thinking of at the moment.

Some say that this is all crap, and an extreme example of the file drawer effect or publication bias: those studies that find positive results are more likely to be published, the others stay in the file drawer. I don’t know. I do think it’s true that whatever the reality of the stereotype threat, talk about it can have perverse effects: differences in test scores are considered to be wholly explained by the threat, and real education discrimination or differences in economic opportunities are removed from the picture. In that way, the stereotype threat functions as a solidifier of prejudice and stereotype, quite the opposite of what was intended.

Michel Foucault

Assuming the threat is real, Michel Foucault comes to mind. Foucault wrote about power and the different ways it operates. Rather than just force or the threat of force, he found “an explosion of numerous and diverse techniques for achieving subjugation”. If you can convince people of their own inferiority you don’t have to do anything else. They will take themselves down. Or at least you may be able to convince people that it’s useless to struggle against prejudice because it’s so entrenched that you may as well adapt your behavior and confirm it. Also, Foucault’s claim that “power is everywhere” can be used here: power over people is even in their own minds. For Foucault,

power is not enforcement, but ways of making people by themselves behave in other ways than they else would have done. … Foucault claims belief systems gain momentum (and hence power) as more people come to accept the particular views associated with that belief system as common knowledge. Such belief systems define their figures of authority, such as medical doctors or priests in a church. Within such a belief system—or discourse—ideas crystallize as to what is right and what is wrong, what is normal and what is deviant. Within a particular belief system certain views, thoughts or actions become unthinkable. These ideas, being considered undeniable “truths”, come to define a particular way of seeing the world, and the particular way of life associated with such “truths” becomes normalized. (source)

The stereotype threat is a good example of a system that makes people behave in other ways, and of a belief system (based on prejudice) that becomes common knowledge, even among those targeted by the prejudice. Even they see it as unthinkable that their own inferiority is prejudice rather than knowledge.

More on prejudice, stereotypes, racism, gender discrimination, and Foucault.

Standard
discrimination and hate, equality, lies and statistics, statistics

Lies, Damned Lies, and Statistics (29): How (Not) to Frame Survey Questions, Ctd.

Following up from an older post on the importance of survey questions, here’s a nice example of the way in which small modifications in survey questions can radically change survey results:

homosexual or gay importance of survey questions

(source, source, source)

Another example:

Our survey asked the following familiar question concerning the “right to die”: “When a person has a disease that cannot be cured and is living in severe pain, do you think doctors should or should not be allowed by law to assist the patient to commit suicide if the patient requests it?”

57 percent said “doctors should be allowed,” and 42 percent said “doctors should not be allowed.” As Joshua Green and Matthew Jarvis explore in their chapter in our book, the response patterns to euthanasia questions will often differ based on framing. Framing that refers to “severe pain” and “physicians” will often lead to higher support for ending the patient’s life, while including the word “suicide” will dramatically lower support. (source)

Similarly, seniors are willing to pay considerably more for “medications” than for “drugs” or “medicine” (source). Yet another example involves the use of “Wall Street”: there’s greater public support for banking reform when the issue is more specifically framed as regulating “Wall Street banks”.

survey wording effect

(source)

What’s the cause of this sensitivity? Difficult to tell. Cognitive bias probably has some effect, and the psychology of associations (“suicide” brings up images of blood and pain, whereas “physicians” brings up images of control; similarly “homosexual” evokes sleazy bars, “gay” evokes art and design types). Maybe the willingness not to offend the person asking the question. Anyway, the conclusion is that pollsters should be very careful when framing questions. One tactic could be to use as many different words and synonyms as possible in order to avoid a bias created by one particular word.

More on DADT and homosexuals in the military. More on assisted suicide. More on lying with statistics.

Standard
lies and statistics, statistics

Lies, Damned Lies, and Statistics (23): The Omitted Variable Bias, Ctd.

I explained what I mean by “omitted variable bias” in a previous post in this series, so go there first if the following isn’t immediately clear. (In a few words: you see a correlation between two variables, for example clever people wear fancy clothes. Then you assume that one variable must cause the other, in our case: a higher intellect also gives people a better sense of aesthetic taste, or good taste in clothing somehow also makes people smarter. In fact, you may be overlooking a third variable which explains the other two, as well as their correlation. In our case: clever people earn more money, which makes it easier to buy your clothes in shops which help you with your aesthetics. Nonsense, I know, but it’s just to make a point).
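
For those who like to see it with numbers, here’s a small generic simulation – not tied to the clothes example – in which a hidden variable drives two others and creates a correlation between them that vanishes once you control for it:

```python
# A generic simulation of omitted variable bias (the variables are abstract):
# a hidden variable drives two others, which then correlate with each other
# even though neither causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

hidden = rng.normal(size=n)            # the omitted variable
x = hidden + rng.normal(size=n)        # partly driven by the hidden variable
y = hidden + rng.normal(size=n)        # also partly driven by it

print("raw correlation of x and y:", round(np.corrcoef(x, y)[0, 1], 2))   # ~0.5

# "Controlling" for the hidden variable: correlate what is left of x and y
# after removing the part that the hidden variable explains.
x_resid = x - np.polyval(np.polyfit(hidden, x, 1), hidden)
y_resid = y - np.polyval(np.polyfit(hidden, y, 1), hidden)
print("after controlling for it:", round(np.corrcoef(x_resid, y_resid)[0, 1], 2))  # ~0.0
```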

I gave a few examples in the previous post, but found some others in the meantime. This one’s from Nate Silver’s blog:

Gallup has some interesting data out on the percentage of Americans who pay a lot of attention to political news. Although the share of Americans following politics has increased substantially among partisans of all sides, it is considerably higher among Republicans than among Democrats:

attention to political news

The omitted variable here is age, and the data should be corrected for it in order to properly compare these two populations.

News tends to be consumed by people who are older and wealthier, which is more characteristic of Republicans than Democrats.

People don’t read more or less news because they are Republicans or Democrats. And here’s another one from Matthew Yglesias’ blog:

It’s true that surveys indicate that gay marriage is wildly popular among DC whites and moderately unpopular among DC blacks, but I think it’s a bit misleading to really see this as a “racial divide”. Nobody would be surprised to learn about a community where college educated people had substantially more left-wing views on gay rights than did working class people. And it just happens to be the case that there are hardly any working class white people living in DC. Meanwhile, with a 34-48 pro-con split it’s hardly as if black Washington stands uniformly in opposition—there’s a division of views reflecting the diverse nature of the city’s black population.

More on same-sex marriage here, here and here. More posts in this series here.

Standard
lies and statistics, statistics, war

Lies, Damned Lies, and Statistics (18): Comparing Apples and Oranges

helmet bullet hole world war 1

(source)

Throughout this blog series on abuses and mistakes in statistics, we’ve often seen how comparing things that can’t be validly compared leads to error or deceit. Here’s another example: the introduction of tin helmets during the First World War. Before their introduction, soldiers had only cloth hats to wear. The strange thing was that after the introduction of tin hats, the number of recorded head injuries increased dramatically. Needless to say, this was counter-intuitive: the new helmets were designed precisely to avoid or limit such injuries.

Of course, people were comparing apples with oranges, namely statistics on head injuries before and after the introduction of the new helmets. What they should have done, and actually did once they realized their mistake, was to include in the statistics not only the injuries but also the fatalities. After the introduction of the new helmets the number of fatalities dropped dramatically, while the number of injuries went up: the tin helmet was saving the lives of soldiers who would previously have been killed, but many of them survived with injuries.
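
A purely hypothetical set of numbers makes the arithmetic clear:

```python
# Purely hypothetical numbers to make the arithmetic concrete: the same number of
# head hits, but with the tin helmet far fewer of them are fatal, so the recorded
# *injuries* go up even though the helmet is clearly doing its job.
head_hits = 1000
outcomes = {
    "cloth cap":  {"killed": 400, "injured": 600},
    "tin helmet": {"killed": 150, "injured": 850},
}

for helmet, counts in outcomes.items():
    assert counts["killed"] + counts["injured"] == head_hits
    print(f"{helmet}: {counts['injured']} injured, {counts['killed']} killed")

# Comparing injuries alone (600 vs 850) makes the helmet look harmful; comparing
# the full outcome of the same 1000 hits shows 250 fewer deaths.
```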

Standard
discrimination and hate, equality, human rights nonsense, law

Human Rights Nonsense (8): Heightism or Height Discrimination

(Quick reminder about this blog series, so as to avoid misunderstandings: I don’t want to imply that human rights are nonsense; regular readers know that the purpose of this blog is quite the opposite. What I want to do with the posts in this series is to point to the ways in which the language of human rights is used to push nonsense. Burdening the system of human rights with frivolous demands, exaggerated problems, wrong priorities and silly talk only turns human rights into a less noble cause, easily disparaged by those who have an interest in rights violations).

It’s a fact that taller people make more money than short people, with an additional inch of height adding about 2 percent to income in the U.S. (source, source). Even among female identical twins (whose heights can differ more than you might expect), the taller sister earns, on average, substantially more than the shorter (source).

heightism and wages

(source)

Moreover, taller people live better lives, at least on average. They evaluate their lives more favorably, and they are more likely to report a range of positive emotions, like enjoyment and happiness (source). They are also less likely to report a range of negative experiences, like sadness and physical pain (source, source, source). From 1904 to 1984, the taller candidate won the U.S. presidential elections 80% of the time, and only two presidents in the entire history of the United States have been shorter than the nation’s average height at the time of their presidencies (currently about 5 ft 9 in) (source).

height of U.S. presidents

(source)

Hence:

There is no denying that we place a high premium on height, be it social, sexual, or economic, and our preference for height pervades almost every aspect of our lives. Isaac B. Rosenberg (source)

The bias towards tallness and against shortness is one of society’s most blatant and forgiven prejudices. John Kenneth Galbraith, 6 ft 8 in.

The term heightism was coined to describe discrimination based on people’s height, and some propose to include it in antidiscrimination legislation. Others go even further: a special “height tax”.

As in the case of ageism, I don’t claim that there cannot be height discrimination. Very short people are often treated badly simply because they are short. There are still some who believe – often without being fully aware of it – that short stature is an inferior trait and therefore undesirable, and as a result they view short people as inferior human beings, or perhaps even not fully human. This is despicable. If this view leads to discrimination against people on the basis of their height (or rather lack of it) then something must be done about it. Nor do I deny that some people suffer psychologically from their (perceived) lack of height, and sometimes engage in self-mutilation in order to do something about it.

What I do claim here – as in the case of ageism – is that things tend to get blown out of proportion. Is the income differential between people of normal height and slightly taller people really an instance of discrimination? Do we really believe that employers make a conscious choice to pay taller people more? Of course, discrimination doesn’t have to be conscious discrimination. But before you get all worked up about discrimination and launch proposals for legislation and government action, it’s good to consider the possibility that we are dealing with another case of the “omitted variable bias” here. Taller people don’t get paid more because they are taller, but because they (seem to) possess other valuable characteristics, such as self-esteem and positive self-image.

Tall men who were short in high school earn like short men, while short men who were tall in high school earn like tall men. That pretty much rules out discrimination. It’s hard to imagine how or why employers could discriminate in favor of past height. … Tall high-school kids learn to think of themselves as leaders, and that habit of thought persists even when the kids stop growing. (source, source)

Adolescence is a formative period for self-esteem, and when you’re tall in adolescence, you build up self-esteem and a positive self-image, something which will be rewarded in your adult professional life.

For the most part American employers probably aren’t discriminating based on height. They’re “discriminating” based on qualities that tallness seems to encourage. (source)

So it’s not heightism, yet it is discrimination none the less. But perhaps I could ask to focus our attention on other types of rights violations, many of which are much more common and harmful. Our planet is plagued by extreme poverty, famine, war, genocide, terrorism, torture and dictatorship. We can turn to heightism when we’re finished with that. But of course, I’m biased. I’m 6.3 ft, and I would certainly suffer from pro-short affirmative action if such a policy would ever be proposed. So I would dismiss it as “nonsense”, wouldn’t I?

More posts in this series.

Standard
education, lies and statistics, poverty, statistics, war

Lies, Damned Lies, and Statistics (17): The Correlation-Causation Problem and Omitted Variable Bias, aka “Jumping to Conclusions”

correlation vs causation

(source)

Here’s some more detailed information following my casual remark on the correlation-causation problem, and a fictitious example of what is meant by “Omitted Variable Bias“, a type of statistical bias that illustrates this problem. Suppose we see from Department of Defense data that male U.S. soldiers are more likely to be killed in action than female soldiers. Or, more precisely and in order to avoid another statistical error, the percentage of male soldiers killed in action is larger than the corresponding percentage of female soldiers. So there is a correlation between the gender of soldiers and the likelihood of being killed in action.

One could – and one often does – conclude from such a finding that there is causation of some kind: the gender of soldiers affects the chances of being killed in action. Again, more precisely: one could conclude that some aspect of gender – e.g. a male propensity for risk taking – leads to higher mortality.

However, it’s here that the Omitted Variable Bias pops up. The real cause of the discrepancy between male and female combat mortality may not be gender or a gender related thing, but a third element, an “omitted variable” which doesn’t show in the correlation. In our fictional example, it may be the type of deployment: it may be that male soldiers are more commonly deployed in dangerous combat operations, whereas female soldiers may be more active in support operations away from the front-line.

OK, time for a real example. It has to do with home-schooling. In the U.S., many parents decide to keep their children away from school and teach them at home. For different reasons: ideological ones, reasons that have to do with their children’s special needs etc. The reasons are not important here. What is important is that many people think that home-schooled children are somehow less well educated (parents, after all, aren’t trained teachers). Proponents of home-schooling, however, point to a study that found that these children score above average in tests. But this is a correlation, not necessarily a causal link. It doesn’t prove that home-schooling is superior to traditional schooling. Parents who teach their children at home are, by definition, heavily involved in their children’s education. The children of such parents do above average in normal schooling as well. The omitted variable here is parents’ involvement. It’s not the fact that the children are schooled at home that explains their above average scores. It’s the type of parents. Instead of comparing home-schooled children to all other children, one should compare them to children from similar families in the traditional system.
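
Here’s a sketch of the comparison I’m suggesting, with invented scores: compare home-schooled children with children of similarly involved parents rather than with all children in the traditional system:

```python
# A sketch of the comparison suggested above, with invented scores: compare
# home-schooled children with children of similarly involved parents, not with
# all children in the traditional system.
import statistics

scores = {
    ("home", "high involvement"):        [78, 82, 80, 85, 79],
    ("traditional", "high involvement"): [79, 81, 83, 80, 82],
    ("traditional", "low involvement"):  [68, 70, 65, 72, 69],
}

home = scores[("home", "high involvement")]
traditional_similar = scores[("traditional", "high involvement")]
traditional_all = traditional_similar + scores[("traditional", "low involvement")]

print("home vs all traditional:", statistics.mean(home), "vs", statistics.mean(traditional_all))
print("home vs similar families only:", statistics.mean(home), "vs", statistics.mean(traditional_similar))
# The apparent advantage of home-schooling shrinks or disappears once the
# comparison group is restricted to similar families.
```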

Greg Mankiw believes he has found another example of Omitted Variable Bias in this graph plotting test scores for U.S. students against their family income:

sat scores by income

(source, the R-square for each test average/income range chart is about 0.95)

[T]he above graph … show[s] that kids from higher income families get higher average SAT scores. Of course! But so what? This fact tells us nothing about the causal impact of income on test scores. … This graph is a good example of omitted variable bias … The key omitted variable here is parents’ IQ. Smart parents make more money and pass those good genes on to their offspring. Suppose we were to graph average SAT scores by the number of bathrooms a student has in his or her family home. That curve would also likely slope upward. (After all, people with more money buy larger homes with more bathrooms.) But it would be a mistake to conclude that installing an extra toilet raises your kids’ SAT scores. … It would be interesting to see the above graph reproduced for adopted children only. I bet that the curve would be a lot flatter. Greg Mankiw (source)

Meaning that adopted children, who usually don’t receive their genes from their new families, have equal test scores, no matter if they have been adopted by rich or poor families. Meaning in turn that the wealth of the family in which you are raised doesn’t influence your education level, test scores or intelligence.

However, in his typical hurry to discard all possible negative effects of poverty, Mankiw may have gone a bit too fast. While it’s not impossible that the correlation is fully explained by differences in parental IQ, other evidence points elsewhere. I’m always suspicious of theories that take one cause, exclude every other type of explanation and end up with a fully deterministic system, especially if the one cause that is selected is DNA. Life is more complex than that. Regarding this particular matter, take a look back at this post, which shows that education levels are to some extent determined by parental income (university enrollment is determined both by test scores and by parental income, even to the extent that people from high income families but with average test scores are slightly more likely to enroll in university than people from poor families but with high test scores).

What Mankiw did, in trying to avoid the Omitted Variable Bias, was in fact to commit another type of bias, one which we could call the Singular Variable Bias: assuming that a phenomenon has a single cause. In honor of Professor Mankiw (who does some good work, see here for example), I propose that henceforth we call it the Mankiw Bias.

More posts in this series.

Standard
discrimination and hate, equality, racism

Racism (7): Racial Profiling and “Driving While Black” in Illinois; A Case Study

racial profiling Driving While Black

(source)

Via The Atlantic, some information from the 2008 Annual Report of Illinois Traffic Stops:

Based on the data that emerges, it’s clear that African-American, Hispanic, and American Indian drivers are in fact being stopped more than one would expect based on their overall representation in the driving population. But the 2008 study also concludes that inferring from this that there is police bias is “problematic because [it] assume[s] that an officer knows the race of the driver before they make the stop. Very often, particularly at night, and when the vehicles are driving quickly, this is not the case”.

Regarding “consent searches” – instances where the police ask permission to search a car and therefore clearly know the race of the driver before they ask permission – and the number of such searches resulting in the discovery of contraband:

An African-American driver is about three times as likely to be the subject of a search as a Caucasian driver, with a Hispanic driver 2.4 times as likely to be the subject of a search. But when vehicles are searched, whites are more often found to be hiding contraband. Police found contraband 24.37 percent of the time when a white agreed to a search, but just 15.14 percent of the time with a minority driver. This finding is consistent with other studies nationwide. … One explanation for the disparity in consent searches may simply be that “whites are more tuned in to their constitutional rights, so they decline more often”.

So perhaps the fact that black drivers have their cars searched more often isn’t necessarily a sign of racism – whites may indeed be more likely to refuse a search. But the fact that searched white drivers are more often found with contraband should prompt the police to search – or ask to search – the cars of whites more often, and that doesn’t seem to happen. Why not? Well… If it’s not racism, then perhaps it’s a lack of interest in contraband.
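
Comparing hit rates like this is sometimes called an outcome test. Here’s a minimal sketch of the comparison – the hit rates echo the report, but the numbers of searches are hypothetical:

```python
# A sketch of an outcome test: compare the rates at which searches of white and
# minority drivers actually turn up contraband. The hit rates echo the report;
# the numbers of searches are hypothetical.
from math import sqrt, erf

hits_white, searches_white = 244, 1000        # ~24.4% contraband found
hits_minority, searches_minority = 303, 2000  # ~15.2% contraband found

p1, p2 = hits_white / searches_white, hits_minority / searches_minority
p_pooled = (hits_white + hits_minority) / (searches_white + searches_minority)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / searches_white + 1 / searches_minority))
z = (p1 - p2) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided, normal approximation

print(f"hit rates: {p1:.1%} vs {p2:.1%}, z = {z:.1f}, p = {p_value:.2g}")
# If searches were applied even-handedly, hit rates should be roughly equal;
# a persistent gap suggests the bar for searching minority drivers is set lower.
```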

More on racial profiling.

Standard
justice, law, measuring human rights, statistics

Measuring Human Rights (8): Measurement of the Fairness of Trials and of Expert Witnesses

fair trial expert witness

(source, illustration by the great Paul Blow)

An important part of the system of human rights is the set of rules intended to offer those accused of crimes a fair trial in court. We try to treat everyone, even suspected criminals, with fairness, and we have two principal reasons for this:

  • We only want to punish real criminals. A fair trial is one in which everything is done to avoid punishing the wrong persons. We want to avoid miscarriages of justice.
  • We also want to use court proceedings only to punish criminals and deter crime, not for political or personal reasons, as is often the case in dictatorships.

Most of these rules are included in, for example, articles 9, 10, 14 and 15 of the International Covenant on Civil and Political Rights, article 10 of the Universal Declaration, article 6 of the European Convention of Human Rights, and the Sixth Amendment to the United States Constitution.

Respect for many of these rules can be measured statistically. I’ll mention only one here: the rule regarding the intervention of expert witnesses for the defense or the prosecution. Here’s an example of the way in which this aspect of a fair trial can be measured:

In the late 1990s, Harris County, Texas, medical examiner [and forensic specialist] Patricia Moore was repeatedly reprimanded by her superiors for pro-prosecution bias. … In 2004, a statistical analysis showed Moore diagnosed shaken baby syndrome (already a controversial diagnosis) in infant deaths at a rate several times higher than the national average. … One woman convicted of killing her own child because of Moore’s testimony was freed in 2005 after serving six years in prison. Another woman was cleared in 2004 after being accused because of Moore’s autopsy results. In 2001, babysitter Trenda Kemmerer was sentenced to 55 years in prison after being convicted of shaking a baby to death based largely on Moore’s testimony. The prosecutor in that case told the Houston Chronicle in 2004 that she had “no concerns” about Moore’s work. Even though Moore’s diagnosis in that case has since been revised to “undetermined,” and Moore was again reprimanded for her lack of objectivity in the case, Kemmerer remains in prison. (source)
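
The general form of such an analysis is easy to sketch: compare one examiner’s diagnosis rate with the baseline rate and ask how likely the gap is to arise by chance. All numbers below are hypothetical, not taken from the Moore case:

```python
# A sketch of the kind of statistical check described above: is one examiner's
# rate of a particular diagnosis plausibly consistent with the baseline rate?
# All numbers are hypothetical, not from the Moore case.
from math import comb

def binomial_tail(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

baseline_rate = 0.05   # share of such autopsies ending in this diagnosis, nationally (assumed)
autopsies = 120        # autopsies performed by the examiner under review (assumed)
diagnoses = 18         # times the examiner reached that diagnosis (assumed)

print(f"observed rate: {diagnoses / autopsies:.0%} vs baseline {baseline_rate:.0%}")
print("P(at least this many diagnoses by chance):", f"{binomial_tail(diagnoses, autopsies, baseline_rate):.2g}")
# A vanishingly small probability is a flag for review, not proof of bias on its
# own: the examiner's caseload may simply differ from the national mix.
```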

Read more posts in this series.

Standard
lies and statistics, statistics

Lies, Damned Lies, and Statistics (6): Statistical Bias in the Design and Execution of Surveys

dilbert statistician

(source)

Statisticians can – wittingly or unwittingly – introduce bias in their work. Take the case of surveys for instance. Two important steps in the design of a survey are the definition of the population and the selection of the sample. As it’s often impossible (and undesirable) to question a whole population, statisticians usually select a sample from the population and ask their questions only to the people in this sample. They assume that the answers given by the people in the sample are representative of the opinions of the entire population.

Bias can be introduced

  • at the moment of the definition of the population
  • at the moment of the selection of the sample
  • at the moment of the execution of the survey (as well as at other moments of the statistician’s work, which I won’t mention here).

Population

Let’s take a fictional example of a survey. Suppose statisticians want to measure public opinion regarding the level of respect for human rights in the country called Dystopia.

First, they set about defining their “population”, i.e. the group of people whose “public opinion” they want to measure. “That’s easy”, you think. So do they, unfortunately. It’s the people living in this country, of course, or is it?

Not quite. Suppose the level of rights protection in Dystopia is very low, as you might expect. That means that probably many people have fled the country. Including in the survey population only the residents of the country will then overestimate the level of rights protection. And there is another point: dead people can’t talk. We can assume that many victims of rights violations are dead because of them. Not including these dead people in the survey will also artificially push up the level of rights protection. (I’ll mention in a moment how it is at all possible to include dead people in a survey; bear with me).

Hence, doing a survey and then assuming that the people who answered it are representative of the whole population means discarding the opinions of refugees and dead people. If those opinions were included, the results would be different and more correct. Of course, in the case of dead people it’s obviously impossible to include their opinions, but perhaps it would be advisable to make a statistical correction for them. After all, we know their answers: people who died because of rights violations in their country presumably wouldn’t have had a good opinion of their political regime.

Sample

And then there are the problems linked to the definition of the sample. An unbiased sample should be a fully random subset of the entire and correctly defined population (needless to say, if the population is defined incorrectly, as in the example above, then the sample is by definition also biased, even if no sampling mistakes have been made). That means that every person in the population should have an equal chance of being chosen, and that there shouldn’t be self-selection (a typical flaw in many if not all internet surveys of the “Polldaddy” variety) or self-deselection. The latter is very likely in my Dystopia example. People who are too afraid to talk won’t talk. The harsher the rights violations, the more people will fail to cooperate. So you get the perverse effect that very cruel regimes may score better on human rights surveys than modestly cruel regimes. The latter are cruel, but not cruel enough to scare the hell out of people.
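
A toy simulation shows how strong this perverse effect can be. The parameters are invented; the only assumption is that critics of the regime are less likely to answer the more afraid they are:

```python
# A toy simulation of self-deselection in Dystopia: the more repressive the
# regime, the less likely critics are to answer, and the rosier the survey
# looks. All parameters are invented for illustration.
import random

def surveyed_approval(true_approval: float, fear: float, n: int = 100_000) -> float:
    """Share of *respondents* who approve, when critics respond with probability (1 - fear)."""
    rng = random.Random(42)
    approvals = responses = 0
    for _ in range(n):
        approves = rng.random() < true_approval
        responds = True if approves else (rng.random() > fear)
        if responds:
            responses += 1
            approvals += approves
    return approvals / responses

for fear in (0.0, 0.5, 0.9):
    print(f"true approval 30%, fear {fear:.0%}: survey says {surveyed_approval(0.3, fear):.0%}")
# With fear = 0.9, a regime that only 30% of people actually support can look
# like it enjoys overwhelming majority approval.
```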

The classic sampling error is from a poll on the 1948 Presidential election in the U.S.

On Election night, the Chicago Tribune printed the headline DEWEY DEFEATS TRUMAN, which turned out to be mistaken. In the morning the grinning President-Elect, Harry S. Truman, was photographed holding a newspaper bearing this headline. The reason the Tribune was mistaken is that their editor trusted the results of a phone survey. Survey research was then in its infancy, and few academics realized that a sample of telephone users was not representative of the general population. Telephones were not yet widespread, and those who had them tended to be prosperous and have stable addresses. (source)

truman holding the newspaper with the headline dewey defeats truman

(source)

Execution

Another reason why bias in the sampling may occur is the way in which the surveys are executed. If the government of Dystopia allows statisticians to operate on its territory, it will probably not allow them to operate freely, or circumstances may not permit them to operate freely. So the people doing the interviews are not allowed to, or don’t dare to, travel around the country. Hence they themselves deselect entire groups from the survey, distorting the randomness of the sample. Again, the more repressive the regime, the more this happens. With possible adverse effects. The people who can be interviewed are perhaps only those living in urban areas, close to the residence of the statisticians. And those living there may have a relatively large stake in the government, which makes them paint a rosy image of the regime.

More posts in this series.

Standard