An Introduction to Evaluation

9 Bias and Motivated Reasoning

Section 1: Introduction

We know how to unpack and analyze arguments, and we know the relevant features for evaluating them.  Now all we need to do is put our knowledge and skill to work, right?  Well…no.  Unfortunately, things are not so simple, since we are subject to a variety of biases and cognitive illusions that can make accurate and objective evaluation of arguments difficult.  Although these illusions and biases are pervasive and often unconscious, we can adopt a number of strategies to limit their effects on us.  In this chapter we will start with a consideration of one of the most pervasive forms of bias, myside bias, and see that simply having a preference for an idea can bias our thinking about it.  We will then look at some of the factors that influence our preferences, and conclude with some techniques for mitigating the effects of bias.

Section 2: Myside Bias

We have all seen cases of biased thinking.  Take, for instance, the serious sports fan who unfairly dismisses the possibility that her team might lose, or the political partisan who can’t see past his own views, or the mother who simply won’t believe that her child could have done such a thing.  Although we see it in others, we tend to think that our judgments are, on the whole, fair and impartial.  Other people are biased, but we are not.  Unfortunately, we are often wrong about this.  At least sometimes, we are not the neutral, detached evaluators we think we are, and are guilty of the same biased thinking we see in other people.

In general, a bias is a preference that inhibits impartial evaluation.  In this chapter we will focus on a broad category of bias, namely myside bias.  Myside bias is the propensity to let our impressions, beliefs, and interests preferentially influence our evaluation of evidence.  In general, we have a preference for views that conform to our existing beliefs and interests, and it can be difficult to separate our preferences from the evidence.  What is especially troubling about myside bias is that biased reasoning can feel the same “from the inside” as unbiased reasoning!  That is, the fact that we feel like we are being fair and neutral in our evaluation of the evidence is no guarantee that we are actually being fair and neutral.

The effects of myside bias can vary.  In general, however, the more personally invested we are in the truth or falsity of some claim, the more difficult it is to objectively assess the evidence for or against it.  When we are personally invested in the truth or falsity of some claim, we tend to focus selectively on information that supports our preferred view.  Moreover, we tend to over-estimate the evidential value of this supportive information, and to under-estimate the evidential value of information that challenges our preferred view.  For example, a person who has politically liberal beliefs will tend to pay closer attention to information that supports his political views, and less to information that raises doubt about them.  Furthermore, such a person will tend to see the evidence for his political beliefs as being stronger than it really is, and the evidence against his view as being weaker than it really is.[1]

In this vein, consider the old saying that “a man who is his own lawyer has a fool for a client.”  Unlike many proverbs, this one offers good advice, and myside bias explains why: when your guilt or innocence is on the line, it can be very difficult to objectively consider the merits and faults of the case against you, as well as the strength of your own defense.  Thus, it is best to step aside and let someone who has less personal involvement take over.

However, we need not have a strong personal investment in some claim in order for the effects of myside bias to kick in.  The impartiality of our evaluations can be undermined by our own perspective in a variety of ways.  A particularly infamous case of this is the justification for the U.S.-led invasion of Iraq in 2003.  In February of 2003, the U.S. Secretary of State, Colin Powell, stood before the United Nations and made a case for the invasion of Iraq.  Central to this case was his assertion that Iraq was in possession of weapons of mass destruction.  Soon thereafter Iraq was invaded, but no evidence of such weapons was ever found.  A subsequent review of how the intelligence community could have made such a mistake came to the conclusion that what we have called myside bias played a big role.  The review noted that:

The Intelligence Community (IC) suffered from a collective presumption that Iraq had an active and growing weapons of mass destruction (WMD) program.  This…dynamic led Intelligence Community analysts, collectors and managers to both interpret ambiguous evidence as conclusively indicative of a WMD program as well as ignore or minimize evidence that Iraq did not have active and expanding weapons of mass destruction programs.[2]

According to this report, the intelligence community collectively expected Iraq to have such a program, and this shared expectation led to an unintentionally biased evaluation of the evidence, and ultimately contributed to a decision to take military action.

Consider another case.  Scientists who study the efficacy of new drugs or treatments know that people’s wishes, desires, and expectations can bias their studies.  Obviously, a patient will want the new drug or treatment to work, and this can affect how they evaluate their own state.  A patient might, for example, feel as if the drug or treatment is working even when it is not.  In part to limit these kinds of effects, researchers normally split subjects into two groups: a group that gets the experimental treatment and a group that does not.  The group that does not get the experimental treatment gets a placebo instead (a treatment with no medicinal value, e.g. a sugar pill).  Neither group knows whether they’ve received the experimental treatment or a placebo—they are “blind” to this factor, as researchers put it.  Since the test subjects do not know what treatment they’ve received, researchers can separate biased effects that are the result of people’s preferences from effects that are the result of the treatment in question.  (Note: we will talk more about the reasoning behind this kind of experimental design in the unit on Scientific Reasoning.)

Subjects are not the only possible source of bias in this kind of context.  The researchers’ own preferences for one result or another can lead to an unconsciously partial evaluation of the evidence, as well.  After all, researchers want these experimental drugs or treatments to work too.  In order to prevent these biases from coloring the results, researchers can effectively blind themselves so that—like the subjects—they do not know who has received the placebo and who has not (until the end of the study).  Studies conducted in this way are called double-blind studies, since neither patients nor researchers know who is getting the treatment and who is merely getting a placebo.
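To make the logic of blinding concrete, here is a minimal sketch in Python of how a double-blind design keeps the assignment key away from both patients and assessors until all outcomes are recorded.  The function names, group sizes, and outcome measure are invented for illustration; real trials involve far more machinery than this.

```python
import random

def assign_groups(patient_ids, seed=42):
    """Randomly assign each patient to 'treatment' or 'placebo'.

    The returned key is held by a third party: neither the patients nor the
    researchers who assess outcomes see it until the study is over.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(["treatment", "placebo"]) for pid in patient_ids}

def assess_outcomes(patient_ids, measure):
    """Blinded assessment: outcomes are recorded by patient ID only,
    with no access to the assignment key."""
    return {pid: measure(pid) for pid in patient_ids}

def unblind_and_compare(assessments, key):
    """Only after every assessment is recorded is the key applied, so the
    assessors' hopes about the result cannot color the measurements."""
    treated = [score for pid, score in assessments.items() if key[pid] == "treatment"]
    placebo = [score for pid, score in assessments.items() if key[pid] == "placebo"]
    return sum(treated) / len(treated), sum(placebo) / len(placebo)

# Toy usage with an invented outcome measure (random scores stand in for
# whatever clinicians would actually record).
patients = list(range(20))
key = assign_groups(patients)                        # held by a third party
scores = assess_outcomes(patients, lambda pid: random.random())
print(unblind_and_compare(scores, key))
```

The point of the structure is simply that the people doing the measuring never touch the assignment key; whatever their preferences, those preferences have nothing to latch onto while the data are being gathered.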

Moreover, we know that double-blinding makes a difference.  Partly to emphasize the importance of double-blinding, a group of researchers studying treatments for multiple sclerosis (a debilitating and currently incurable disease that attacks the nervous system) decided to conduct two versions of the same study: a blinded version and an unblinded one.[3]  The medical aim of the study was to discover whether a promising new treatment was really more effective than a placebo.  Here is how they set up the two versions of the study.  First, they divided the patients into two groups: some received the actual treatment while others received a placebo.  Second, they split the researchers into two groups.  One group knew which patients had received the placebo and which had not—that is, they conducted the study unblinded.  The other group of researchers was blinded—that is, they were prevented from knowing who had received the treatment and who hadn’t.  Both blinded and unblinded researchers were asked to examine the test subjects over a period of two years at six-month intervals.

[Image: graduated cylinders in multiple colors.  “testing tube” by wader, CC BY-NC-SA 2.0]

Unfortunately, the treatments turned out to be no more effective than a placebo.  However, on the whole, the unblinded researchers determined quite the opposite—they took the treatments to be effective!  The unblinded researchers surely did not set out to make biased judgments.  Presumably, they wanted to know whether the treatment was effective as much as anybody else and sought to be as objective as possible.  Nonetheless, their assessment was biased, and this illustrates the deceptiveness of myside bias.   Again, the problem is not just that our judgments can be biased by our own preferences, but that they can be biased despite our sense that they are not!

Section 3: Social Influences

Given that even slight preferences can, unbeknownst to us, influence our evaluations, it is important to have a sense for the forces that influence our preferences.  Of course, we live in a complex world, and there are all kinds of factors that influence us.  Nevertheless, perhaps the most important influence on our preferences is social.  The crucial observation here is that beliefs can have social value.  We first raised this in Chapter 2, where we noted that our desire to be seen by others in a particular way can undermine the goals of cooperative dialogue.  In addition, the social value of an idea can create preferences that undermine fair evaluation.  Let’s see how this works.

The social value of a belief has to do with what other people will think about you upon finding out that you have it.  More specifically, when somebody else thinks better of you because you have a particular belief, that belief may have positive social value for you, and when they think worse of you in virtue of it, the belief may have negative social value for you.  Whether a specific belief or idea has positive or negative social value for you depends on a number of factors that it is worth making explicit.

First, it is important to understand that not every idea has social value—in fact, many do not.  For example:

Ex. 1

Noura: I think the grocery store should get new grocery carts.

Here Noura is expressing her opinion, and while people might agree or disagree with her about this, it is unlikely that anybody is going to think better or worse of Noura herself in virtue of this opinion.

Second, the social value of an idea depends on who is doing the judging.  Suppose, for example, that Noura says:

Ex. 2

I think that people who go to church on Sunday are pretty much wasting their time.

We can easily imagine that people’s responses to Noura’s claim might vary.  Suppose that upon hearing this, Noura’s friend Maddy responds with disappointment, saying, “Geez, I didn’t realize you had become so arrogant,” whereas another friend, Lori, responds in a different way: “At last, somebody brave enough to tell it like it is.”  In this case, Noura’s belief will have positive social value for Noura with respect to Lori, but negative social value with respect to Maddy.

Third, the social value of an idea depends on whether, and to what extent, we care what the person who is doing the judging thinks.  In general, we want to be well-regarded by other people, but this doesn’t mean that we care what every single one of our acquaintances thinks of us.  Moreover, we care to varying degrees.  What your mom or best friend thinks probably matters a lot more to you than what your neighbor or dentist thinks about you.  To illustrate: suppose that Noura has recently fallen out with Maddy, no longer regards her as a friend, and as a result doesn’t particularly care what Maddy thinks.  In this case, Noura’s belief may have little to no social value with respect to Maddy—even though Maddy thinks less of Noura because of her belief.

How does the fact that ideas or beliefs can have social value relate back to the question of bias?  The answer is that our desire to be well-regarded by people who matter to us can give us a preference for ideas and beliefs that we think might have positive social value for us, and a preference against ideas that we think might have negative social value for us.  As we have seen, simply having a preference for an idea can unknowingly lead us to biased and unfair thinking. More specifically, this means that we will tend not to fully investigate or fairly evaluate ideas that have negative social value for us, and not be adequately critical of beliefs that have positive social value for us.  This is something we need to keep in mind, so that, at least when it really matters, we stop to think about the extent to which our thinking is being influenced by our desire to be liked (or not disliked) by other people.

Section 4: People and Ideas

Ideas can have social value for us because we care about what our friends and family think.  However, we can turn this around and think about it from the opposite perspective as well.  After all, just as you care what your friends and family think about you, so too your friends and family care about what you think about them.  As such, you contribute to the social value of ideas for your friends and family.  That is, if you’ll think less of one of your friends for believing X, and they care about what you think, then believing X may have negative social value for your friend.  The fact that we can have this effect raises a question about what kinds of attitudes we should take towards people on the basis of their ideas and beliefs.  How should we think about people when they disagree with us, propose dubious ideas, or ask questions we don’t like?  The short answer to this question is that we should only think worse of a person on this basis if we are justified in doing so.  Of course, it is a much tougher question to settle when we are justified in doing so, and like many questions we will take up, there is no one-size-fits-all answer.  However, there are a couple of observations worth keeping in mind.

First, recall from Chapter 2 that we have a tendency to unfairly vilify people who disagree with us, or who disagree with our in-group (a group of people we identify with as a member).  That is, we have a tendency to unjustifiably think less of people in these circumstances.  There is no doubt that there are people who are ignorant and unethical in ways that justify thinking less of them as people.  However, it is important to emphasize that simply because somebody disagrees with us doesn’t, all by itself, give much reason to think less of them as a person.  After all, there are many explanations for why someone might disagree with you outside of some moral or intellectual flaw: it may be that you have relevant information or experiences they do not have.  It may be that they have relevant information or experiences that you don’t; alternatively, it may be that you have the same information, but draw on different but reasonable principles or values to evaluate it.  Given this, in many cases, leaping to the conclusion that there is something wrong with a person or thinking less of them solely because they disagree with you is not only poor reasoning, but is also unfair to the other person.[4]

This brings us to the second point.  We should be careful about thinking worse of people in cases of disagreement for purely self-interested reasons—namely because doing so can undermine our own ability to think clearly and fairly.  After all, judgments about a person’s character or intelligence often themselves embody and generate preferences for and against individual people.  As we have seen, once we have even a slight preference for or against a particular person, the engine of myside bias can kick in with respect to other things that person says.  Once a person disagrees with us, for example, we might begin to think less of them as a person.  If we think less of them, then we may be more inclined to unfairly evaluate things they subsequently tell us.  This can hurt us; after all, when we think less of others and thereby prematurely discount what they say, we can unnecessarily lose out on relevant information or points of view that might otherwise inform and improve our own thinking.

In sum, the fact that ideas can have social value raises a question about what we should infer about a person when they disagree with us.  This is complicated, but given that (i) we have a natural tendency to unfairly think worse of those who disagree with us, and (ii) doing so can lead us to think in biased ways, we should be very careful about thinking less of a person on the basis of disagreement.

Section 5: Preferences for People—Other Influences

We have just seen that a preference for or against a person can influence how we think about what they say.  As it turns out, this is very common, and happens in all kinds of ways and circumstances.  People who have worked in sales, for example, know that customers who are favorably inclined toward them personally are more likely to accept what they say (all things considered), and that customers who are less favorably inclined are less likely to do so.

In his book Influence: Science and Practice, Robert Cialdini summarizes a number of the elements that can influence our feelings about other people.[5]  He begins by pointing out that we tend to be favorably inclined toward people we find physically attractive in some way.  Cialdini notes that a great deal of psychological research has shown that we have a propensity to unconsciously attribute a wide array of favorable traits to people we find physically attractive.  We tend to see people we find attractive in some way as more intelligent, more honest, and more talented than we otherwise would.[6]  There is evidence that attractive job applicants are more likely to get hired than less attractive, but equally qualified, job applicants, and that attractive defendants (in criminal cases) tend to get lighter sentences than unattractive people in the same position.  Moreover, there are a number of studies that suggest that physical appearance is an element in our decisions about who to vote for.  In one study, Alexander Todorov and his colleagues presented people with pictures of candidates for the U.S. Senate and House of Representatives and asked them, on the basis of the pictures alone, to rate each candidate’s competence.  They then compared the perceived competence of the candidates to election outcomes and found that the more competent-looking candidate for the Senate won 71.6% of the time, while the more competent-looking candidate for the House of Representatives won 66.8% of the time.[7]  These results suggest something surprising: namely, that the mere physical appearance of a candidate contributes to our voting choices.

A second factor is similarity.  We tend to be favorably inclined toward people who are similar to us in some notable way, e.g. dress, background, beliefs, hobbies, etc.  Thus, we are likely to be favorably inclined toward people who are from the same area of the country as we are, or like the same music we do, or have the same political beliefs.  On one level, this is probably not all that surprising.  What may be surprising is the extent to which this influences our behavior.  For example, Cialdini draws attention to a study which suggests that people are twice as likely to complete and return a survey if the survey is sent by a person with a similar name!

Third, a great deal of research in social psychology has shown that we tend to be favorably inclined toward people with whom we are working on a shared task.  Thus, for example, players on a team tend to be favorably inclined toward their teammates.  Again, this is probably not too surprising.  What is important to note about this is that people can take advantage of this phenomenon by creating a shared task or goal for us.  Cialdini notes that a common technique among car salespeople during price negotiations is to act as if they have taken your side against the sales manager.  In doing so they have created a shared task: together you and the salesperson are working against the sales manager.  As such, you will tend to be more favorably inclined toward the salesperson than you might have been otherwise, and consequently less skeptical of his or her claims.  Another example is the police tactic of playing Good Cop/Bad Cop during interrogations of suspects.  In these cases, one officer plays the role of the “Bad Cop” by adopting an actively hostile and suspicious stance towards the suspect.  The other officer plays the role of the “Good Cop” by adopting a friendly and helpful stance toward the suspect.  In this situation it is easy for the suspect to see the officer playing the Good Cop as an ally against the Bad Cop, and to consequently be favorably inclined and less guarded toward them.

As these examples show, we have a propensity to assume that when a person has one positive feature, they probably have other positive features as well.  Psychologists refer to these kinds of associations as halo effects.  The idea is that positive qualities radiate a halo that makes other features of a person look positive too (note that this works in reverse too—negative features can radiate a halo that makes other features look negative as well).

[Image: a moon halo framed by palm trees.  “22° moon halo tonight. With Jupiter, palm trees, and mountains.” by slworking2, CC BY-NC-SA 2.0]

It is important to emphasize that in these cases nobody is thinking to themselves: “Since he is handsome, I bet what he says is true too!”  Put so explicitly we all recognize that this is a poor argument.  Nor are most people consciously deciding who to hire, convict, or vote for on the basis of their appearance.  Most of us recognize that making decisions on such a basis would be manifestly unfair.  These halo effects, like the biases we have looked at, occur largely below the level of conscious awareness.  Further, these factors do not dictate or determine our choices.  The effects are much more limited.  Nevertheless, they are still worrying.  After all, factors like appearance and similarity are not relevant in most cases, and so should not influence our judgments and decisions.

In light of this, we need to keep in mind that when we walk away from a personal interaction with vague fondness for or aversion to a person, we may well be experiencing a halo effect based on their similarity to us, their attractiveness, their agreement with our ideas, etc.  This is not to say that our interactions with other people cannot give us good evidence that they are honest or trustworthy or competent—they surely can.  The problem is that our impressions are not always based on relevant features of people, and moreover that often we don’t have any idea what features or cues our impressions have been based on in the first place.  Ultimately, then, when it comes to important decisions involving other people (e.g. hiring) we should take the extra time to explicitly identify relevant factors as a way of limiting the impact of inappropriate halo effects.

Section 6: Countering the Effects of Bias

As we have seen in our discussion of biases, we are often unaware of our preferences and unaware that they are biasing our reasoning.  There is something disturbing about this; we think that we are in control of what we believe and decide, but these biases suggest otherwise.  They suggest that our thinking and decisions are often influenced by external forces outside of our conscious awareness.  When it comes to biases, then, the question is: how can we take control of our thinking and avoid biased thinking and decision-making?

The most important thing we can do is be aware that we are subject to these biases, and pay attention to cases where they are especially likely to be at work.  Simply knowing that you have a real personal stake in a conclusion, and that as a result, you are likely to be biased towards it, can have dramatic effects.  After all, this realization puts you in a position to monitor your own thinking, and to ask yourself whether you’ve over-valued the evidence in favor of your preferred view, ignored relevant information, or under-valued the evidence against it.

This might sound like obvious advice, and it is—but it is advice that we tend not to follow!  As we saw in Chapter 1, in day-to-day life we naturally respond to our environment in a variety of ways.  Automatic and semi-automatic reasoning processes generate intuitive reactions to, and impressions of, people, places, circumstances, and ideas, and these impressions inform our conscious reasoning processes.  Halo effects are good examples: it is a largely automatic reasoning process that gives us a favorable impression of a person we find appealing or similar to us in some way.  Many of the intuitions and impressions formed through these processes rise to the level of consciousness (though not all), at which point they become available for use in conscious reasoning processes.  Think back to the ball and bat example from Chapter 1.  When you first looked at the question, the answer ‘$1.00’ presumably came immediately to mind.  At the conscious level we can either accept the results of automatic processes, taking our intuitions, impressions, and ways of thinking for granted, or we can treat them skeptically, as we should in the case of the bat and the ball.  As it turns out, however, people rarely question their impressions, intuitions, and ways of thinking.  We normally take our impressions for granted and use them as the starting points for our thinking.  We do this partly because overriding our impressions and normal ways of thinking is hard work, and in general we tend to avoid expending mental energy if we can.  The Nobel laureate Daniel Kahneman explains:

A general “law of least effort” applies to cognitive as well as physical exertion.  The law asserts that if there are several ways of achieving the same goal, people will eventually gravitate to the least demanding course of action.  In the economy of action, effort is a cost, and the acquisition of a skill is driven by the balance of benefits and cost.  Laziness is built deep into our nature.[8]

Just as we tend to complete physical tasks using as little energy as possible, so too do we tend to avoid mental exertion if we can.  Kahneman puts this in terms of laziness, but other psychologists have used different terms.  Keith Stanovich, for example, makes this point by saying that we are cognitive misers who are stingy with our energy.  Stanovich makes a particularly effective case for the importance of learning to distinguish and evaluate our automatic impressions and ways of thinking.  He writes:

Humans are cognitive misers because their basic tendency is to default to [automatic] processing mechanisms of low computational expense…Nevertheless, this strong bias to default to the simplest cognitive mechanism—to be a cognitive miser—means that humans are often less than rational.  Increasingly in the modern world, we are presented with decisions and problems that require more accurate responses than those generated by [automatic] processing.  [These] processes often provide a quick solution that is a first approximation to an optimal response.  But modern life often requires more precise thought than this.  Modern technological societies are in fact hostile environments for people reliant on only the most easily computed automatic response.  Think of the multimillion-dollar advertising industry that has been designed to exploit just this tendency…When we are over-reliant on [automatic] processing we lose personal autonomy.  We give up our thinking to those who manipulate our environments, and we let our actions be determined by those who can create the stimuli that best trigger our shallow automatic processing tendencies.[9]

Stanovich’s point is that it is important that we learn to question our intuitive responses and ways of thinking—i.e. to learn that we cannot always trust the way things seem to us.  Not only are our impressions sometimes inaccurate, but people can take advantage of these largely automatic processes to manipulate us.

The obvious solution to this problem is to treat our impressions more skeptically.  Of course, we cannot do this all of the time.  We have too much to do, our environment changes too fast, and we do not have enough energy to stop and think about all our impressions.  The simple fact is that we will have to trust most of our intuitive responses and ways of thinking.  However, when something important is at stake we should slow down and carefully identify and evaluate the impressions that drive our thought.  We do so by tracking down and identifying the sources of our impressions: “What about this person or product or situation is striking me as good or bad?”  “Are those features really indicators of goodness or badness?”  We will not always be able to isolate these features, but in taking the time to briefly ask ourselves these questions we thereby exercise more control over our thinking and decision-making than we would have otherwise.

In addition to simply being aware of the possibility of bias, what other steps can we take to mitigate bias?  The most important step you can take is to separate yourself from the inquiry or idea.  In some cases, you can literally do this, as in the case of the lawyer who trusts his defense to someone else, or the researcher who blinds herself to the experimental identity of the subjects.  This strategy, however, is not realistic for most everyday situations.  A second strategy for mitigating bias is to discuss the issue with other people—that is, to seek cooperative dialogue.  A person without the same set of preferences may be able to see weaknesses (or strengths) that you would have had a hard time seeing on your own.

Unfortunately, cooperative dialogue is not always an option either.  In this event, the best we can do is try to separate ourselves from the idea or claim—in our imagination.  The idea is to try to envision the claim in question from a critic’s perspective.[10]  How would the critic object?  Put otherwise, the idea is to play devil’s advocate with your own views.  There are a number of ways to do this.  One way to do this is to imagine that you have to defend your view in front of a critical audience.  What objections would such an audience raise?  How would they argue against your position?  What evidence would they draw upon?  An alternative is to pretend that you have the opposite view to the one you actually have.  What would you say on behalf of “your” view?  How would you criticize your opponents?  Alternatively, you might imagine a future in which it turns out your view was mistaken or your decision the wrong one.  You can then think about the information you would have to gather in order to understand why you made the mistake.  Once you have that information in hand, you can take it into account in coming to your view or making your decision.

In sum, taking a different perspective in these ways can reveal unseen strengths and weaknesses in our preferred view.  There is no doubt that playing devil’s advocate with your own views is hard cognitive work, but when we want to know the truth about important matters it can be well worth the effort.

[Image: a man holding a business card that reads “Devil’s Advocate”.  “Contrarian” by drodesign, CC BY-NC-ND 2.0]

Section 7: Bubbles and Echo-Chambers

In closing this chapter, it is important to point out some related phenomena that are wider in scope.  On this score, philosopher C. Thi Nguyen makes a useful distinction between “epistemic bubbles” and “echo-chambers”.[11]  Let us start with epistemic bubbles.  The term ‘epistemic’ means having to do with knowledge, and an epistemic bubble refers to an information source or network that omits, overlooks, or filters out relevant facts, arguments, or perspectives.

In order to illustrate this point, consider the fact that our social media accounts tend to be connected with friends or people we like, respect, or are positively inclined toward in some other way.  In itself, this is not problematic.  The problem is that many people get their news, analysis, and commentary from social media as well.  Because people we like tend to be similar to us, the information, analysis, and commentary we get from social media tends to be skewed toward our existing beliefs.  That is, our social media feeds can act as a filter that screens out information and perspectives that do not fit with our existing beliefs or the beliefs of people who are similar to us.  As Nguyen points out, friends make for good parties, but not necessarily good information networks.  Further, we may not be aware that our information has been filtered, since we don’t see the information or arguments we are missing.  Beyond social media, the proliferation of news channels and the 24-hour news cycle make it easy to pursue social, political, and economic news and commentary that coheres with our existing beliefs and values (almost whatever they are!).  This need not be intentional.  As we saw in our discussion of myside bias, an inclination toward an idea or perspective can start up the engine of myside bias and lead us to look for and pursue information that fits our existing system of beliefs.
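As a toy illustration of this filtering mechanism (not a description of any real platform’s algorithm; the scoring rule and example items are invented), consider a feed that only surfaces items whose slant agrees with the user’s own:

```python
def filtered_feed(items, user_leaning):
    """Toy 'bubble' filter: each item carries a leaning score in [-1, 1].

    Items whose leaning agrees with the user's (positive product) are shown;
    dissenting and neutral items are silently dropped, and nothing tells the
    user that anything was removed.
    """
    return [headline for headline, leaning in items if leaning * user_leaning > 0]

# Invented example items: (headline, leaning)
items = [
    ("Commentary agreeing with your view", +0.8),
    ("Straight news report", 0.0),
    ("Well-argued piece from the other side", -0.7),
]

print(filtered_feed(items, user_leaning=+1.0))
# -> ['Commentary agreeing with your view']  (the dissenting piece vanishes without a trace)
```

The crucial feature is the silence of the filter: from inside the bubble, the feed looks complete, because the missing items leave no visible gap.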

To be in an epistemic bubble is to lack certain information, arguments, or perspectives.  An echo chamber, on the other hand, is a community that actively undermines dissenting voices.  Within such a community, dissenting voices are unfairly and pre-emptively dismissed as insincere, corrupt, or otherwise untrustworthy.  A person in an echo chamber may well be exposed to dissenting perspectives, arguments, and data; but they will set these opposing views aside since, according to trusted voices within the echo chamber, these views come from unreliable sources.  Given this description, we can see that echo chambers are simply an extension of people’s predisposition to vilify those who disagree with them.  However, in an echo-chamber this tendency is amplified, and has an isolating effect, since it insulates members of the community from questions and information that challenge the echo-chamber’s views.
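The difference can be put in rough computational terms.  The following toy sketch (the numbers, step size, and source names are invented, and real belief change is of course far messier) shows how a believer who assigns zero trust to outside sources can be exposed to dissenting reports without their view budging at all:

```python
def update_credence(prior, reports, trust):
    """Toy model of belief change inside an echo chamber.

    Each report nudges the credence up (+1 supports the community's view) or
    down (-1 challenges it), weighted by how much the community trusts the
    source.  The trust values and step size are invented for illustration.
    """
    credence = prior
    for source, direction in reports:
        weight = trust.get(source, 0.5)          # unknown sources get middling trust
        credence += 0.1 * weight * direction     # small, trust-weighted nudge
        credence = max(0.0, min(1.0, credence))  # keep the credence in [0, 1]
    return credence

# The community has labeled outside sources untrustworthy, so even repeated
# exposure to dissenting reports leaves the believer's credence untouched.
trust = {"insider_blog": 1.0, "outside_newspaper": 0.0, "independent_expert": 0.0}
reports = [("insider_blog", +1), ("outside_newspaper", -1), ("independent_expert", -1)]

print(update_credence(prior=0.9, reports=reports, trust=trust))
# -> 1.0: the two dissenting reports change nothing, because their sources carry zero trust
```

Unlike the bubble filter above, nothing here is hidden; the dissenting reports arrive, but the trust assignments guarantee they carry no weight.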

Importantly, there is nothing wrong with setting aside a person’s claims because they are being insincere or because they are an unreliable source on the issue.  The problem with an echo-chamber is that it does so unfairly.  Dissenting voices are undermined and dismissed simply because they question or dissent from the echo-chamber’s views.  That is, in an echo-chamber voices are set aside or disparaged when there is no good prior reason to doubt them.  In addition, we can think of both epistemic bubbles and echo-chambers as coming in degrees.  When it comes to echo chambers, a community can go to greater or lesser lengths to discredit opposition, and similarly a person within an echo-chamber can be more or less insulated from inconsistent information or dissenting voices.  As a final point, echo-chambers can grow up around all kinds of phenomena.  Nguyen gives as examples communities centered around political positions, specific diets and exercise programs, activism, child-rearing techniques, and marketing programs.

Avoiding epistemic bubbles and echo-chambers is not complicated if we are paying attention.  In the case of epistemic bubbles, we need to make sure to get our information from varied sources, and to pay special attention to other people’s sources of information.  This makes sense particularly when people disagree with us.  This is an opportunity to ask: where is this opposing information coming from?  Is it a good source?  When it comes to echo chambers, we need to pay attention to how a community handles dissent.  Does it consider new information or perspectives, or does it seek to vilify those who raise questions?  Does the community fairly represent criticisms, or does it misrepresent (straw man) or mock them?

Exercises

Exercise Set 9A:

Directions: In order to practice combating bias, briefly lay out the strongest case a critic might make against each of the following.  Note: this might be uncomfortable.

#1:

The legal drinking age should be lowered to 18.

#2:

Each person has only one true soulmate.

#3:

Corporations only care about profit.

#4:

Firefighters are heroes.

#5:

Banning books always amounts to unacceptable censorship.

#6:

Standardized testing is a waste of everyone’s time.

Exercise Set 9B:

#1:

Some of the claims in exercise set 9A are provocative.  Why do you think you were asked to consider those claims in particular?

#2:

Evaluate the following argument:

  1. It seems to me that I have impartially evaluated the evidence in this case.
  2. So, I probably have.

What do you make of it?

#3:

Outside of sports and political contexts, where have you seen biased reasoning?  Give at least one example.

#4:

As we’ve seen, ideas, questions, and arguments can have social value.  How can this contribute to the generation of echo-chambers in particular?

#5:

What sources of news or information do you trust?  Are there sources you don’t trust, but that others do, or vice versa?  In general, what features or characteristics of a source give you reason to trust it, or not trust it?

#6:

Have you ever seen, experienced, or heard of anything like an echo chamber as defined above?  Explain.


  1. Lord, C. G., Ross, L., & Lepper, M. R. (1979). "Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence." Journal of Personality and Social Psychology, 37(11), 2098–2109.
2. U.S. Congress, Senate Select Committee on Intelligence, Report of the Select Committee on Intelligence on the U.S. Intelligence Community’s Prewar Intelligence Assessments on Iraq, 18.
  3. Noseworthy, J. H. et al. (1994). “The impact of blinding on the results of a randomized, placebo-controlled multiple sclerosis clinical trial." Neurology: 44, 16-20.
  4. Garcia and King call this kind of inference the “attitude-to-agent fallacy”.  See Garcia, Robert K. and King, Nathan L. (2016), “Towards Intellectually Virtuous Discourse: Two Vicious Fallacies and the Virtues that Inhibit Them” in Intellectual Virtues and Education. Ed. Jason Baehr. New York: Routledge, 202-220.
  5. Cialdini, R. B. (2008). Influence: Science and practice, 5th ed. New York: HarperCollins College Publishers.
  6. This is a general tendency. Indeed, people who are typically regarded as very attractive in some way may be more likely to be regarded unfavorably in some contexts.
7. Todorov, Alexander T., Mandisodza, Anesu N., Goren, Amir, and Hall, Crystal C. (2005). “Inferences of Competence from Faces Predict Election Outcomes.” Science, 308 (5728), 1623-1626.
8. Kahneman, Daniel. (2011). Thinking, Fast and Slow. New York: Farrar, Straus and Giroux, 35.
  9. Stanovich, Keith E. (2009). What Intelligence Tests Miss: The Psychology of Rational Thought. New Haven: Yale University Press, 29-30.  Stanovich distinguishes between Type 1 and Type 2 systems of reasoning.  Since I have not adopted this same terminology, I have substituted the term ‘automatic’ in parentheses.
  10. Lord, C. G., Lepper, M. R., and Preston, E. (1984). "Considering the Opposite: A Corrective Strategy for Social Judgment." Journal of Personality and Social Psychology, 47 (6), 1231-1243.
  11. Nguyen, C. Thi. (2020). “Echo Chambers and Epistemic Bubbles” in Episteme, 17, 2: 141-161.

License


Arguments in Context Copyright © 2021 by Thaddeus Robinson is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.
