What's the technical name for this cognitive bias?


I'm pretty sure I've seen this cognitive bias described in the literature, but I can't find its official name, so I'll provisionally call it "the used-book finder's bias", based on the following vignette.

Say you are browsing through some used-book store, looking for nothing in particular, when you spot, tucked in some corner, almost hidden, a copy of some "classic" book. The book is in good shape, and the price is very attractive. What a find! You've got to have it, never mind the little voice reminding you that your reading list is already overcrowded, as are your bookshelves, and that you won't have time to read this book in the foreseeable future…

The cognitive bias I'm referring to is the intense compulsion to acquire the serendipitous find, even though one does not need it, and almost certainly will not do anything with it.

As someone who used to frequent used-book shops, I'm all too familiar with this cognitive bias. It led me to spend a lot of money, and acquire more books than I could even store, let alone read. (Eventually, I wised up.)

Mind you, the vast majority of the books I found this way were not really hard to find; I could easily have gotten them through other means (e.g. the library) if I ever really needed to read them. Moreover, if I had just paid full price for brand new copies of the very few among those books that I ultimately read, I would still have saved myself a lot of money.

The principal element of the bias, it seems to me, is the strong feeling of good fortune one experiences in such a situation. Conversely, one feels strongly that one would be a complete fool not to take advantage of such a lucky event.


I am not entirely sure about the proper scientific names, but I think your issue revolves around buying things because they are

  • cheap;
  • hard to find.

The first is a notorious reason to buy stuff: the impulsive purchase of goods simply because they are advertised as 'buy two, pay for one!' This bargain hunting can indeed become pathological and addictive (source: Psychology Today).

Buying simply because a product is hard to find is a bit harder to frame behaviorally, but it is indeed a common phenomenon that people tend to buy things because they are exclusive (#55 here). Also, perhaps you simply wish to buy it just because you can (#26 here).

Further, buying things in general makes you feel good, as it activates the reward circuitry in our brain. As a consequence, buying can become an addiction (shopaholism).

There is actually a disorder named compulsive buying disorder. It was included in DSM-III, but not in later editions of the DSM. It is not strictly defined, but it has been associated with addictive disorders, obsessive-compulsive disorder, and mood disorders (Black, 2007).

Reference
- Black (2007). World Psychiatry, 6(1), 14–18.


Not sure that what you describe is a cognitive bias in itself, but I suspect the scarcity heuristic may be part of the purchaser's rationalization. (See the Wikipedia article for academic references.)


2. Recency Bias

Definition

Our brains naturally put more weight on recent experience.

A Trader’s Example

In the market, this cognitive bias can manifest as over-learning from recent losses. Because recent losses affect us more strongly, we try to improve our trading results by avoiding trades that remind us of those losses.

For instance, suppose you lost money in three recent pullback trades in a healthy trend. You concluded that pullback trading in a strong trend is a losing strategy, and switched to trading range break-outs.

Due to your recent experience, you overlooked that most of your past profits came from pullback trades. By turning to trading range break-outs, you could be giving up a valuable edge in your trading style.

Lesson & Practice

Don’t draw lessons from recent experience alone. Learn from your trading results over a more extended period.

Reviewing your trades and learning from them is crucial. However, in practice, we should not examine them and draw conclusions too often. We should make sure that we have a larger sample of trades across a longer period, before arriving at useful conclusions.


4. Choice Supportive Bias

People evaluate their own previous choices as being better than average.

I'm not arguing that you should "trick" your recipients into making choices that benefit you because they will be "tricked" into thinking it was a good decision.

Instead, I'm arguing that if you can get small commitments from people, they will be more likely to continue working with you on bigger or more demanding projects (which should still be mutually beneficial).


Biases: An Introduction

It’s not a secret. For some reason, though, it rarely comes up in conversation, and few people are asking what we should do about it. It’s a pattern, hidden unseen behind all our triumphs and failures, unseen behind our eyes. What is it?

Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls. Perhaps three of the ten balls will be red, and you’ll correctly guess how many red balls total were in the urn. Or perhaps you’ll happen to grab four red balls, or some other number. Then you’ll probably get the total number wrong.

This random error is the cost of incomplete knowledge, and as errors go, it’s not so bad. Your estimates won’t be incorrect on average, and the more you learn, the smaller your error will tend to be.

On the other hand, suppose that the white balls are heavier, and sink to the bottom of the urn. Then your sample may be unrepresentative in a consistent direction.

That sort of error is called “statistical bias.” When your method of learning about the world is biased, learning more may not help. Acquiring more data can even consistently worsen a biased prediction.
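
To make the distinction concrete, here is a minimal simulation sketch in Python (my own illustration, not part of the original essay). An unbiased sample of ten balls gives estimates that are wrong at random but correct on average, while a sampling method that favors red balls stays wrong no matter how many trials you average; the two-to-one weighting for red is an arbitrary assumption chosen just to stand in for "the white balls sink."

    import random

    # Urn: 70 white balls, 30 red balls (as in the example above).
    urn = ["white"] * 70 + ["red"] * 30

    def estimate_red(sample):
        # Scale the red count in a 10-ball sample up to a guess for the 100-ball urn.
        return sample.count("red") * 10

    def unbiased_sample():
        # Every ball is equally likely to be drawn.
        return random.sample(urn, 10)

    def biased_sample():
        # White balls "sink": red balls are (arbitrarily) twice as likely to be drawn.
        # Sampling is done with replacement here, purely for simplicity.
        weights = [2 if ball == "red" else 1 for ball in urn]
        return random.choices(urn, weights=weights, k=10)

    trials = 10_000
    unbiased_avg = sum(estimate_red(unbiased_sample()) for _ in range(trials)) / trials
    biased_avg = sum(estimate_red(biased_sample()) for _ in range(trials)) / trials

    print("True number of red balls:   30")
    print(f"Average unbiased estimate: {unbiased_avg:.1f}")  # hovers near 30
    print(f"Average biased estimate:   {biased_avg:.1f}")    # stays near 46; more data won't fix it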

If you’re used to holding knowledge and inquiry in high esteem, this is a scary prospect. If we want to be sure that learning more will help us, rather than making us worse off than we were before, we need to discover and correct for biases in our data.

The idea of cognitive bias in psychology works in an analogous way. A cognitive bias is a systematic error in how we think, as opposed to a random error or one that’s merely caused by our ignorance. Whereas statistical bias skews a sample so that it less closely resembles a larger population, cognitive biases skew our beliefs so that they less accurately represent the facts, and they skew our decision-making so that it less reliably achieves our goals.

Maybe you have an optimism bias, and you find out that the red balls can be used to treat a rare tropical disease besetting your brother. You may then overestimate how many red balls the urn contains because you wish the balls were mostly red. Here, your sample isn’t what’s biased. You’re what’s biased.

Now that we’re talking about biased people, however, we have to be careful. Usually, when we call individuals or groups “biased,” we do it to chastise them for being unfair or partial. Cognitive bias is a different beast altogether. Cognitive biases are a basic part of how humans in general think, not the sort of defect we could blame on a terrible upbringing or a rotten personality. 1

A cognitive bias is a systematic way that your innate patterns of thought fall short of truth (or some other attainable goal, such as happiness). Like statistical biases, cognitive biases can distort our view of reality, they can’t always be fixed by just gathering more data, and their effects can add up over time. But when the miscalibrated measuring instrument you’re trying to fix is you, debiasing is a unique challenge.

Still, this is an obvious place to start. For if you can’t trust your brain, how can you trust anything else?

It would be useful to have a name for this project of overcoming cognitive bias, and of overcoming all species of error where our minds can come to undermine themselves.

We could call this project whatever we’d like. For the moment, though, I suppose “rationality” is as good a name as any.

Rational Feelings

In a Hollywood movie, being “rational” usually means that you’re a stern, hyperintellectual stoic. Think Spock from Star Trek, who “rationally” suppresses his emotions, “rationally” refuses to rely on intuitions or impulses, and is easily dumbfounded and outmaneuvered upon encountering an erratic or “irrational” opponent. 2

There’s a completely different notion of “rationality” studied by mathematicians, psychologists, and social scientists. Roughly, it’s the idea of doing the best you can with what you’ve got. A rational person, no matter how out of their depth they are, forms the best beliefs they can with the evidence they’ve got. A rational person, no matter how terrible a situation they’re stuck in, makes the best choices they can to improve their odds of success.

Real-world rationality isn’t about ignoring your emotions and intuitions. For a human, rationality often means becoming more self-aware about your feelings, so you can factor them into your decisions.

Rationality can even be about knowing when not to overthink things. When selecting a poster to put on their wall, or predicting the outcome of a basketball game, experimental subjects have been found to perform worse if they carefully analyzed their reasons. 3 , 4 There are some problems where conscious deliberation serves us better, and others where snap judgments serve us better.

Psychologists who work on dual process theories distinguish the brain’s “System 1” processes (fast, implicit, associative, automatic cognition) from its “System 2” processes (slow, explicit, intellectual, controlled cognition). 5 The stereotype is for rationalists to rely entirely on System 2, disregarding their feelings and impulses. Looking past the stereotype, someone who is actually being rational—actually achieving their goals, actually mitigating the harm from their cognitive biases—would rely heavily on System-1 habits and intuitions where they’re reliable.

Unfortunately, System 1 on its own seems to be a terrible guide to “when should I trust System 1?” Our untrained intuitions don’t tell us when we ought to stop relying on them. Being biased and being unbiased feel the same. 6

On the other hand, as behavioral economist Dan Ariely notes: we’re predictably irrational. We screw up in the same ways, again and again, systematically.

If we can’t use our gut to figure out when we’re succumbing to a cognitive bias, we may still be able to use the sciences of mind.

The Many Faces of Bias

To solve problems, our brains have evolved to employ cognitive heuristics— rough shortcuts that get the right answer often, but not all the time. Cognitive biases arise when the corners cut by these heuristics result in a relatively consistent and discrete mistake.

The representativeness heuristic, for example, is our tendency to assess phenomena by how representative they seem of various categories. This can lead to biases like the conjunction fallacy. Tversky and Kahneman found that experimental subjects considered it less likely that a strong tennis player would “lose the first set” than that he would “lose the first set but win the match.” 7 Making a comeback seems more typical of a strong player, so we overestimate the probability of this complicated-but-sensible-sounding narrative compared to the probability of a strictly simpler scenario.
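
A quick way to see why the compound scenario can never be the more probable one (my own sketch, with invented numbers): the conjunction of two events is a subset of either event alone, so its probability can only be equal or smaller.

    # Toy numbers for the tennis example; both probabilities are invented for illustration.
    p_lose_first_set = 0.30              # P(A): the strong player loses the first set
    p_win_match_given_lost_set = 0.40    # P(B | A): he then comes back and wins the match

    # P(A and B) = P(A) * P(B | A), which can never exceed P(A).
    p_lose_set_but_win_match = p_lose_first_set * p_win_match_given_lost_set

    print(p_lose_first_set)          # 0.3
    print(p_lose_set_but_win_match)  # 0.12 -- necessarily no larger than 0.3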

The representativeness heuristic can also contribute to base rate neglect, where we ground our judgments in how intuitively “normal” a combination of attributes is, neglecting how common each attribute is in the population at large. 8 Is it more likely that Steve is a shy librarian, or that he’s a shy salesperson? Most people answer this kind of question by thinking about whether “shy” matches their stereotypes of those professions. They fail to take into consideration how much more common salespeople are than librarians—seventy-five times as common, in the United States. 9
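
Here is the base-rate arithmetic sketched out in Python. The 75:1 ratio comes from the text; the shyness rates are hypothetical numbers, deliberately skewed toward the librarian stereotype, chosen only to make the point.

    # Hypothetical shyness rates (assumptions for illustration, skewed toward the stereotype).
    p_shy_given_librarian = 0.60
    p_shy_given_salesperson = 0.15

    # Base rates from the text: salespeople are about 75 times as common as librarians.
    librarians = 1.0
    salespeople = 75.0

    shy_librarians = librarians * p_shy_given_librarian      # 0.60
    shy_salespeople = salespeople * p_shy_given_salesperson  # 11.25

    # Despite the stereotype, a randomly chosen shy person is far more likely to be in sales.
    p_librarian_given_shy = shy_librarians / (shy_librarians + shy_salespeople)
    print(f"P(librarian | shy) = {p_librarian_given_shy:.2f}")  # about 0.05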

Other examples of biases include duration neglect (evaluating experiences without regard to how long they lasted), the sunk cost fallacy (feeling committed to things you’ve spent resources on in the past, when you should be cutting your losses and moving on), and confirmation bias (giving more weight to evidence that confirms what we already believe). 10 , 11

Knowing about a bias, however, is rarely enough to protect you from it. In a study of bias blindness, experimental subjects predicted that if they learned a painting was the work of a famous artist, they’d have a harder time neutrally assessing the quality of the painting. And, indeed, subjects who were told a painting’s author and were asked to evaluate its quality exhibited the very bias they had predicted, relative to a control group. When asked afterward, however, the very same subjects claimed that their assessments of the paintings had been objective and unaffected by the bias—in all groups! 12 , 13

We’re especially loath to think of our views as inaccurate compared to the views of others. Even when we correctly identify others’ biases, we have a special bias blind spot when it comes to our own flaws. 14 We fail to detect any “biased-feeling thoughts” when we introspect, and so draw the conclusion that we must just be more objective than everyone else. 15

Studying biases can in fact make you more vulnerable to overconfidence and confirmation bias, as you come to see the influence of cognitive biases all around you—in everyone but yourself. And the bias blind spot, unlike many biases, is especially severe among people who are especially intelligent, thoughtful, and open-minded. 16 , 17

This is cause for concern.

Still, it does seem like we should be able to do better. It’s known that we can reduce base rate neglect by thinking of probabilities as frequencies of objects or events. We can minimize duration neglect by directing more attention to duration and depicting it graphically. 18 People vary in how strongly they exhibit different biases, so there should be a host of yet-unknown ways to influence how biased we are.

If we want to improve, however, it’s not enough for us to pore over lists of cognitive biases. The approach to debiasing in Rationality: From AI to Zombies is to communicate a systematic understanding of why good reasoning works, and of how the brain falls short of it. To the extent this volume does its job, its approach can be compared to the one described in Serfas, who notes that “years of financially related work experience” didn’t affect people’s susceptibility to the sunk cost bias, whereas “the number of accounting courses attended” did help.

As a consequence, it might be necessary to distinguish between experience and expertise, with expertise meaning “the development of a schematic principle that involves conceptual understanding of the problem,” which in turn enables the decision maker to recognize particular biases. However, using expertise as countermeasure requires more than just being familiar with the situational content or being an expert in a particular domain. It requires that one fully understand the underlying rationale of the respective bias, is able to spot it in the particular setting, and also has the appropriate tools at hand to counteract the bias. 19

The goal of this book is to lay the groundwork for creating rationality “expertise.” That means acquiring a deep understanding of the structure of a very general problem: human bias, self-deception, and the thousand paths by which sophisticated thought can defeat itself.

A Word About This Text

Rationality: From AI to Zombies began its life as a series of essays by Eliezer Yudkowsky, published between 2006 and 2009 on the economics blog Overcoming Bias and its spin-off community blog Less Wrong. I’ve worked with Yudkowsky for the last year at the Machine Intelligence Research Institute (MIRI), a nonprofit he founded in 2000 to study the theoretical requirements for smarter-than-human artificial intelligence (AI).

Reading his blog posts got me interested in his work. He impressed me with his ability to concisely communicate insights it had taken me years of studying analytic philosophy to internalize. In seeking to reconcile science’s anarchic and skeptical spirit with a rigorous and systematic approach to inquiry, Yudkowsky tries not just to refute but to understand the many false steps and blind alleys bad philosophy (and bad lack-of-philosophy) can produce. My hope in helping organize these essays into a book is to make it easier to dive in to them, and easier to appreciate them as a coherent whole.

The resultant rationality primer is frequently personal and irreverent— drawing, for example, from Yudkowsky’s experiences with his Orthodox Jewish mother (a psychiatrist) and father (a physicist), and from conversations on chat rooms and mailing lists. Readers who are familiar with Yudkowsky from Harry Potter and the Methods of Rationality, his science-oriented take-off of J.K. Rowling’s Harry Potter books, will recognize the same irreverent iconoclasm, and many of the same core concepts.

Stylistically, the essays in this book run the gamut from “lively textbook” to “compendium of thoughtful vignettes” to “riotous manifesto,” and the content is correspondingly varied. Rationality: From AI to Zombies collects hundreds of Yudkowsky’s blog posts into twenty-six “sequences,” chapter-like series of thematically linked posts. The sequences in turn are grouped into six books, covering the following topics:

Book I: Map and Territory. What is a belief, and what makes some beliefs work better than others? These four sequences explain the Bayesian notions of rationality, belief, and evidence. A running theme: the things we call “explanations” or “theories” may not always function like maps for navigating the world. As a result, we risk mixing up our mental maps with the other objects in our toolbox.

Book II: How to Actually Change Your Mind. This truth thing seems pretty handy. Why, then, do we keep jumping to conclusions, digging our heels in, and recapitulating the same mistakes? Why are we so bad at acquiring accurate beliefs, and how can we do better? These seven sequences discuss motivated reasoning and confirmation bias, with a special focus on hard-to-spot species of self-deception and the trap of “using arguments as soldiers.”

Book III: The Machine in the Ghost. Why haven’t we evolved to be more rational? Even taking into account resource constraints, it seems like we could be getting a lot more epistemic bang for our evidential buck. To get a realistic picture of how and why our minds execute their biological functions, we need to crack open the hood and see how evolution works, and how our brains work, with more precision. These three sequences illustrate how even philosophers and scientists can be led astray when they rely on intuitive, non-technical evolutionary or psychological accounts. By locating our minds within a larger space of goal-directed systems, we can identify some of the peculiarities of human reasoning and appreciate how such systems can “lose their purpose.”

Book IV: Mere Reality. What kind of world do we live in? What is our place in that world? Building on the previous sequences’ examples of how evolutionary and cognitive models work, these six sequences explore the nature of mind and the character of physical law. In addition to applying and generalizing past lessons on scientific mysteries and parsimony, these essays raise new questions about the role science should play in individual rationality.

Book V: Mere Goodness. What makes something valuable—morally, or aesthetically, or prudentially? These three sequences ask how we can justify, revise, and naturalize our values and desires. The aim will be to find a way to understand our goals without compromising our efforts to actually achieve them. Here the biggest challenge is knowing when to trust your messy, complicated case-by-case impulses about what’s right and wrong, and when to replace them with simple exceptionless principles.

Book VI: Becoming Stronger. How can individuals and communities put all this into practice? These three sequences begin with an autobiographical account of Yudkowsky’s own biggest philosophical blunders, with advice on how he thinks others might do better. The book closes with recommendations for developing evidence-based applied rationality curricula, and for forming groups and institutions to support interested students, educators, researchers, and friends.

The sequences are also supplemented with “interludes,” essays taken from Yudkowsky’s personal website, http://www.yudkowsky.net. These tie in to the sequences in various ways; e.g., The Twelve Virtues of Rationality poetically summarizes many of the lessons of Rationality: From AI to Zombies, and is often quoted in other essays.

Clicking the green asterisk at the bottom of an essay will take you to the original version of it on Less Wrong (where you can leave comments) or on Yudkowsky’s website.

Map and Territory

This, the first book, begins with a sequence on cognitive bias: “Predictably Wrong.” The rest of the book won’t stick to just this topic; bad habits and bad ideas matter, even when they arise from our minds’ contents as opposed to our minds’ structure. Thus evolved and invented errors will both be on display in subsequent sequences, beginning with a discussion in “Fake Beliefs” of ways that one’s expectations can come apart from one’s professed beliefs.

An account of irrationality would also be incomplete if it provided no theory about how rationality works—or if its “theory” only consisted of vague truisms, with no precise explanatory mechanism. The “Noticing Confusion” sequence asks why it’s useful to base one’s behavior on “rational” expectations, and what it feels like to do so.

“Mysterious Answers” next asks whether science resolves these problems for us. Scientists base their models on repeatable experiments, not speculation or hearsay. And science has an excellent track record compared to anecdote, religion, and… pretty much everything else. Do we still need to worry about “fake” beliefs, confirmation bias, hindsight bias, and the like when we’re working with a community of people who want to explain phenomena, not just tell appealing stories?

This is then followed by The Simple Truth, a stand-alone allegory on the nature of knowledge and belief.

It is cognitive bias, however, that provides the clearest and most direct glimpse into the stuff of our psychology, into the shape of our heuristics and the logic of our limitations. It is with bias that we will begin.

There is a passage in the Zhuangzi, a proto-Daoist philosophical text, that says: “The fish trap exists because of the fish; once you’ve gotten the fish, you can forget the trap.” 20

I invite you to explore this book in that spirit. Use it like you’d use a fish trap, ever mindful of the purpose you have for it. Carry with you what you can use, so long as it continues to have use; discard the rest. And may your purpose serve you well.

Acknowledgments

I am stupendously grateful to Nate Soares, Elizabeth Tarleton, Paul Crowley, Brienne Strohl, Adam Freese, Helen Toner, and dozens of volunteers for proofreading portions of this book.

Special and sincere thanks to Alex Vermeer, who steered this book to completion, and Tsvi Benson-Tilsen, who combed through the entire book to ensure its readability and consistency.

The idea of personal bias, media bias, etc. resembles statistical bias in that it’s an error. Other ways of generalizing the idea of “bias” focus instead on its association with nonrandomness. In machine learning, for example, an inductive bias is just the set of assumptions a learner uses to derive predictions from a data set. Here, the learner is “biased” in the sense that it’s pointed in a specific direction; but since that direction might be truth, it isn’t a bad thing for an agent to have an inductive bias. It’s valuable and necessary. This distinguishes inductive “bias” quite clearly from the other kinds of bias. ↩

A sad coincidence: Leonard Nimoy, the actor who played Spock, passed away just a few days before the release of this book. Though we cite his character as a classic example of fake “Hollywood rationality,” we mean no disrespect to Nimoy’s memory. ↩

Timothy D. Wilson et al., “Introspecting About Reasons Can Reduce Post-choice Satisfaction,” Personality and Social Psychology Bulletin 19 (1993): 331–331. ↩

Jamin Brett Halberstadt and Gary M. Levine, “Effects of Reasons Analysis on the Accuracy of Predicting Basketball Games,” Journal of Applied Social Psychology 29, no. 3 (1999): 517–530. ↩

Keith E. Stanovich and Richard F. West, “Individual Differences in Reasoning: Implications for the Rationality Debate?,” Behavioral and Brain Sciences 23, no. 5 (2000): 645–665, http://journals.cambridge.org/abstract_S0140525X00003435. ↩

Timothy D. Wilson, David B. Centerbar, and Nancy Brekke, “Mental Contamination and the Debiasing Problem,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (Cambridge University Press, 2002). ↩

Amos Tversky and Daniel Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (1983): 293–315, doi:10.1037/0033-295X.90.4.293. ↩

Richards J. Heuer, Psychology of Intelligence Analysis (Center for the Study of Intelligence, Central Intelligence Agency, 1999). ↩

Wayne Weiten, Psychology: Themes and Variations, Briefer Version, Eighth Edition (Cengage Learning, 2010). ↩

Raymond S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology 2, no. 2 (1998): 175. ↩

Probability neglect is another cognitive bias. In the months and years following the September 11 attacks, many people chose to drive long distances rather than fly. Hijacking wasn’t likely, but it now felt like it was on the table; the mere possibility of hijacking hugely impacted decisions. By relying on black-and-white reasoning (cars and planes are either “safe” or “unsafe,” full stop), people actually put themselves in much more danger. Where they should have weighed the probability of dying on a cross-country car trip against the probability of dying on a cross-country flight—the former is hundreds of times more likely—they instead relied on their general feeling of worry and anxiety (the affect heuristic). We can see the same pattern of behavior in children who, hearing arguments for and against the safety of seat belts, hop back and forth between thinking seat belts are a completely good idea or a completely bad one, instead of trying to compare the strengths of the pro and con considerations. 21 Some more examples of biases are: the peak/end rule (evaluating remembered events based on their most intense moment, and how they ended); anchoring (basing decisions on recently encountered information, even when it’s irrelevant); 22 self-anchoring (using yourself as a model for others’ likely characteristics, without giving enough thought to ways you’re atypical); 23 and status quo bias (excessively favoring what’s normal and expected over what’s new and different). 24 ↩

Katherine Hansen et al., “People Claim Objectivity After Knowingly Using Biased Strategies,” Personality and Social Psychology Bulletin 40, no. 6 (2014): 691–699. ↩

Similarly, Pronin writes of gender bias blindness:

In one study, participants considered a male and a female candidate for a police-chief job and then assessed whether being “streetwise” or “formally educated” was more important for the job. The result was that participants favored whichever background they were told the male candidate possessed (e.g., if told he was “streetwise,” they viewed that as more important). Participants were completely blind to this gender bias; indeed, the more objective they believed they had been, the more bias they actually showed. 25

In a survey of 76 people waiting in airports, individuals rated themselves much less susceptible to cognitive biases on average than a typical person in the airport. In particular, people think of themselves as unusually unbiased when the bias is socially undesirable or has difficult-to-notice consequences. 27 Other studies find that people with personal ties to an issue see those ties as enhancing their insight and objectivity; but when they see other people exhibiting the same ties, they infer that those people are overly attached and biased. ↩

Joyce Ehrlinger, Thomas Gilovich, and Lee Ross, “Peering Into the Bias Blind Spot: People’s Assessments of Bias in Themselves and Others,” Personality and Social Psychology Bulletin 31, no. 5 (2005): 680–692. ↩

Richard F. West, Russell J. Meserve, and Keith E. Stanovich, “Cognitive Sophistication Does Not Attenuate the Bias Blind Spot,” Journal of Personality and Social Psychology 103, no. 3 (2012): 506. ↩

… Not to be confused with people who think they’re unusually intelligent, thoughtful, etc. because of the illusory superiority bias. ↩

Michael J. Liersch and Craig R. M. McKenzie, “Duration Neglect by Numbers and Its Elimination by Graphs,” Organizational Behavior and Human Decision Processes 108, no. 2 (2009): 303–314. ↩

Sebastian Serfas, Cognitive Biases in the Capital Investment Context: Theoretical Considerations and Empirical Experiments on Violations of Normative Rationality (Springer, 2010). ↩

Zhuangzi and Burton Watson, The Complete Works of Zhuangzi (Columbia University Press, 1968). ↩

Cass R. Sunstein, “Probability Neglect: Emotions, Worst Cases, and Law,” Yale Law Journal (2002): 61–107. ↩

Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions (HarperCollins, 2008). ↩

Boaz Keysar and Dale J. Barr, “Self-Anchoring in Conversation: Why Language Users Do Not Do What They ‘Should,’” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (New York: Cambridge University Press, 2002), 150–166, doi:10.2277/0521796792. ↩

Scott Eidelman and Christian S. Crandall, “Bias in Favor of the Status Quo,” Social and Personality Psychology Compass 6, no. 3 (2012): 270–281. ↩

Eric Luis Uhlmann and Geoffrey L. Cohen, “‘I think it, therefore it’s true’: Effects of Self-perceived Objectivity on Hiring Discrimination,” Organizational Behavior and Human Decision Processes 104, no. 2 (2007): 207–223. ↩

Emily Pronin, “How We See Ourselves and How We See Others,” Science 320 (2008): 1177–1180, http://web.cs.ucdavis.edu/

Emily Pronin, Daniel Y. Lin, and Lee Ross, “The Bias Blind Spot: Perceptions of Bias in Self versus Others,” Personality and Social Psychology Bulletin 28, no. 3 (2002): 369–381. ↩


What Are Cognitive Biases?

Cognitive biases are mental mistakes that people make when making judgments about life and other people. They typically help us find shortcuts that make it easier to navigate through everyday situations.

However, as people use their own perceptions to create what they believe to be a reality, they often bypass any rational thinking. People tend to use cognitive biases when time is a factor in making a decision or they have a limited capacity for processing information. And, without considering objective input, one’s behavior, judgment, or choices may end up being illogical or unsound.

Like most things, cognitive biases have their benefits and their drawbacks. For example, imagine that you’re walking home alone late one night and you hear a noise behind you. Instead of stopping to investigate, you quicken your pace with your safety in mind. In this case, timeliness is more important than accuracy–whether the noise was a stray cat rustling in some leaves or it was a perpetrator trying to harm you–you’re not likely to wait around to find out.

Alternatively, think of a jury judging someone who is accused of a crime. A fair trial requires the jury to ignore irrelevant information in the case, resist logical fallacies such as appeals to pity, and be open-minded in considering all possibilities to uncover the truth. However, cognitive biases often prevent people from doing all of these things. Indeed, because cognitive biases have been studied extensively, researchers know that jurors often fail to be completely objective in their opinions, in a way that is both systematic and predictable.

Let’s take a look at 15 specific cognitive biases that many people experience. After reading this article, you will be able to spot these biases in your everyday life, which can prompt you to take a step back and further analyze a problem before coming to a conclusion.


The psychology of misinformation — the mental shortcuts, confusions, and illusions that encourage us to believe things that aren’t true — can tell us a lot about how to prevent its harmful effects. Our psychology is what affects whether corrections work, what we should teach in media literacy courses, and why we’re vulnerable to misinformation in the first place. It’s also a fascinating insight into the human brain.

Though psychological concepts originate in academia, many have found their way into everyday language. Cognitive dissonance, first described in 1957, is one; confirmation bias is another. And this is part of the problem. Just as we have armchair epidemiologists, we can easily become armchair cognitive scientists, and mischaracterization of these concepts can create new forms of misinformation.

If reporters, fact checkers, researchers, technologists, and influencers working with misinformation (which, let’s face it, is almost all of them) don’t understand these distinctions, it isn’t simply a case of mistaking an obscure academic term. It risks becoming part of the problem.

We list the major psychological concepts that relate to misinformation, its correction, and prevention. They’re intended as a starting point rather than the last word — use the suggested further reading to dive deeper.

Cognitive miserliness

The psychological feature that makes us most vulnerable to misinformation is that we are ‘cognitive misers.’ We prefer to use simpler, easier ways of solving problems than ones requiring more thought and effort. We’ve evolved to use as little mental effort as possible.

This is part of what makes our brains so efficient: You don’t want to be thinking really hard about every single thing. But it also means we don’t put enough thought into things when we need to — for example, when thinking about whether something we see online is true.

What to read next: “How the Web Is Changing the Way We Trust” by Dario Taraborelli of the University of London, published in Current Issues in Computing and Philosophy in 2008.

Dual process theory

Dual process theory is the idea that we have two basic ways of thinking: System 1, an automatic process that requires little effort; and System 2, an analytical process that requires more effort. Because we are cognitive misers, we generally will use System 1 thinking (the easy one) when we think we can get away with it.

Automatic processing creates the risk of misinformation for two reasons. First, the easier something is to process, the more likely we are to think it’s true, so quick, easy judgments often feel right even when they aren’t. Second, its efficiency can miss details — sometimes crucial ones. For example, you might recall something you read on the internet, but forget that it was debunked.

What to read next: “ A Perspective on the Theoretical Foundation of Dual Process Models ” by Gordon Pennycook, published in Dual Process Theory 2.0 in 2017.

Heuristics

Heuristics are indicators we use to make quick judgments. We use heuristics because it’s easier than conducting complex analysis, especially on the internet where there’s a lot of information.

The problem with heuristics is that they often lead to incorrect conclusions. For example, you might rely on a ‘social endorsement heuristic’ — that someone you trust has endorsed (e.g., retweeted) a post on social media — to judge how trustworthy it is. But however much you trust that person, it’s not a completely reliable indicator and could lead you to believe something that isn’t true.

As our co-founder and US director Claire Wardle explains in our Essential Guide to Understanding Information Disorder , “On social media, the heuristics (the mental shortcuts we use to make sense of the world) are missing. Unlike in a newspaper where you understand what section of the paper you are looking at and see visual cues which show you’re in the opinion section or the cartoon section, this isn’t the case online.”

What to read next: “Credibility and trust of information in online environments: The use of cognitive heuristics” by Miriam J. Metzger and Andrew J. Flanagin, published in Journal of Pragmatics , Volume 59 (B) in 2013.

Cognitive dissonance

Cognitive dissonance is the negative experience that follows an encounter with information that contradicts your beliefs. This can lead people to reject credible information to alleviate the dissonance.

What to read next: “‘Fake News’ in Science Communication: Emotions and Strategies of Coping with Dissonance Online” by Monika Taddicken and Laura Wolff, published in Media and Communication, Volume 8 (1), 206–217 in 2020.

Confirmation bias

Confirmation bias is the tendency to believe information that confirms your existing beliefs, and to reject information that contradicts them. Disinformation actors can exploit this tendency to amplify existing beliefs.

Confirmation bias is just one of a long list of cognitive biases.

What to read next: “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises” by Raymond Nickerson, published in Review of General Psychology, 2(2), 175–220 in 1998.

Motivated reasoning

Motivated reasoning is when people use their reasoning skills to believe what they want to believe, rather than determine the truth. The crucial point here is the idea that people’s rational faculties, rather than lazy or irrational thinking, can cause misinformed belief.

Motivated reasoning is a key point of current debate in misinformation psychology. In a 2019 piece for The New York Times , David Rand and Gordon Pennycook, two cognitive scientists based at the University of Virginia and MIT, respectively, argued strongly against it. Their claim is that people simply aren’t being analytical enough when they encounter information. As they put it:

“One group claims that our ability to reason is hijacked by our partisan convictions: that is, we’re prone to rationalization. The other group — to which the two of us belong — claims that the problem is that we often fail to exercise our critical faculties: that is, we’re mentally lazy.”

Rand and Pennycook are continuing to build a strong body of evidence that lazy thinking, not motivated reasoning, is the key factor in our psychological vulnerability to misinformation.

What to read next: “ Why do people fall for fake news?” by Gordon Pennycook and David Rand, published in The New York Times in 2019.

Pluralistic ignorance

Pluralistic ignorance is a lack of understanding about what others in society think and believe. This can make people incorrectly think others are in a majority when it comes to a political view, when it is in fact a view held by very few people. This can be made worse by rebuttals of misinformation (e.g., conspiracy theories), as they can make those views seem more popular than they really are.

A variant of this is the false consensus effect: when people overestimate how many other people share their views.

What to read next: “ The Loud Fringe: Pluralistic Ignorance and Democracy ” by Stephan Lewandowsky, published in Shaping Tomorrow’s World in 2011.

Third-person effect

The third-person effect describes the way people tend to assume misinformation affects other people more than themselves.

Nicoleta Corbu, professor of communications at the National University of Political Studies and Public Administration in Romania, recently found that there is a significant third-person effect in people’s perceived ability to spot misinformation: People rate themselves as better at identifying misinformation than others. This means people can underestimate their vulnerability, and don’t take appropriate actions.

What to read next: “Fake News and the Third-Person Effect: They are More Influenced than Me and You” by Oana Ștefanita, Nicoleta Corbu, and Raluca Buturoiu, published in the Journal of Media Research, Volume 11, no. 3(32), 5–23, in 2018.

Fluency

Fluency refers to how easily people process information. People are more likely to believe something to be true if they can process it fluently — it feels right, and so seems true.

This is why repetition is so powerful: if you’ve heard it before, you process it more easily, and therefore are more likely to believe it. Repeat it multiple times, and you increase the effect. So even if you’ve heard something as a debunk, the sheer repetition of the original claim can make it more familiar, fluent, and believable.

It also means that easy-to-understand information is more believable, because it’s processed more fluently. As Stephan Lewandowsky and his colleagues explain:

“For example, the same statement is more likely to be judged as true when it is printed in high- rather than low-color contrast … presented in a rhyming rather than non-rhyming form … or delivered in a familiar rather than unfamiliar accent … Moreover, misleading questions are less likely to be recognized as such when printed in an easy-to-read font.”

What to read next: “The Epistemic Status of Processing Fluency as Source for Judgments of Truth” by Rolf Reber and Christian Unkelbach, published in Rev Philos Psychol. Volume 1 (4): 563–581 in 2010.

Bullshit receptivity

Bullshit receptivity is about how receptive you are to information that has little interest in the truth: a meaningless cliché, for example. Bullshit is different from a lie, which intentionally contradicts the truth.

Pennycook and Rand use the concept of bullshit receptivity to examine susceptibility to false news headlines. They found that the more likely we are to accept a pseudo-profound sentence (i.e., bullshit) such as, “Hidden meaning transforms unparalleled abstract beauty,” the more susceptible we are to false news headlines.

This provides evidence for Pennycook and Rand’s broader theory that susceptibility to false news comes from insufficient analytical thinking, rather than motivated reasoning. In other words, we’re too stuck in automatic System 1 thinking, and not enough in analytic System 2 thinking.

What to read next: “ Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking” by Gordon Pennycook and David Rand, published in Journal of Personality in 2019.



2. Know & conquer: 16 key innovation-specific cognitive biases

We next need to become consciously aware of the specific biases at work so we can identify them ourselves as they occur. Here are 16 cognitive biases to look out for that impact creativity and the innovation process. They can originate anywhere from personal biases to group dynamics and politics. Here are some that affect divergent and creative thinking in working groups:


  1. Confirmation bias: we believe what we want to believe by favoring information that confirms preexisting beliefs or preconceptions. This results in looking for creative solutions that confirm our beliefs rather than challenge them, making us closed to new possibilities.

“The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow.”

- Jim Hightower

Is there a name for this cognitive bias?

This is something I've seen over the years, but it seems like it has been magnified lately.

The concept is that for some people truth is relative, and its relativity is based solely on what someone/something else believes. If that person/thing decides that 2+2=5, it's 5. Now if that person/thing changes course and 2+2=4, it's now 4 and every argument for it being 5 beforehand is irrelevant.

Obviously, this is highly irrational, but there is something interesting about it. This seems to be especially prevalent in religion, politics, etc.

Is there a cognitive bias that this would fall under?

Edit: also look up the 'Asch effect' in the same wiki, if you just read the "psychological basis" tab.

I think it's in the same realm. I think what I'm trying to get to is: what makes that person an authority in that person's mind? Take DJT, for example: he is not an authority in most of the fields he has opinions on, but people believe what he says because it came from him and they believe in him. Why do they believe him no matter what he says, even if it conflicts with what he has previously said? If he's proven wrong, it's the corrupt media and not him being inconsistent.

Should we call that a cognitive bias or just "emotion"? And by emotion and/or feeling I mean: one's instantaneous belief and felt truth held independently of other thoughts, feelings, reasons or cognitive dissonance.

Aside: I know people who believe what they believe when they believe it. And I know I can't depend on what they believe at this moment to predict their behavior at another moment.

But aren't irrational emotional biases grounded in cognitive biases?

I'm not sure, but quite a few could apply. Check out belief bias.

In psychology it's known as the "multiple Napoleons" bias: basically the idea that everything is relative and that someone's nonsensical belief is just as good as a sensible one. In psychology that bias is used to show that, with this belief, no idea could be deemed pathological. So we have to understand that each person does experience a different reality, and that reality is subjective, but that to reach any sort of constructive understanding of reality, the more nonsensical and pathological beliefs must be understood as such.

If you think this sounds irrational, you haven't got the whole picture, yet. It isn't, you just apply it in a slightly incorrect way.

The thing is: what our consciousness applies to the perceptions it receives will define the emotions that arise.
This means that, over the long term, whatever your consciousness applies (and this can be shaped by thinking) will be reflected in your emotions. In that way, you make your relative truth. You cannot change the outside world, but you can change your thinking, and by that you can change the echo of perception within your consciousness.
This is by no means irrational; it is simply how things are.



Why do we think we understand the world more than we actually do?

What is the illusion of explanatory depth? The illusion of explanatory depth (IOED) describes our belief that we understand more about the world than we actually do.

Why is negotiation so difficult?

Reactive devaluation refers to our tendency to disparage proposals made by another party, especially if this party is viewed as an adversary.

Why is our confidence disproportionate to the difficulty of a task?

The hard-easy effect occurs when we incorrectly predict our ability to complete tasks depending on their level of difficulty.



Mirror imaging

The most common personality trap, known as mirror-imaging [2] is the analysts' assumption that the people being studied think like the analysts themselves. An important variation is to confuse actual subjects with one's information or images about them, as the sort of apple one eats and the ideas and issues it may raise. It poses a dilemma for the scientific method in general, since science uses information and theory to represent complex natural systems as if theoretical constructs might be in control of indefinable natural processes. An inability to distinguish subjects from what one is thinking about them is also studied under the subject of functional fixedness, first studied in Gestalt psychology and in relation to the subject–object problem.

Experienced analysts may recognize that they have fallen prey to mirror-imaging if they discover that they are unwilling to examine variants of what they consider most reasonable in light of their personal frame of reference. Less-perceptive analysts affected by this trap may regard legitimate objections as a personal attack, rather than looking beyond ego to the merits of the question. Peer review (especially by people from a different background) can be a wise safeguard. Organizational culture can also create traps which render individual analysts unwilling to challenge acknowledged experts in the group.

Target fixation

Another trap, target fixation, has an analogy in aviation: it occurs when pilots become so intent on delivering their ordnance that they lose sight of the big picture and crash into the target. This is a more basic human tendency than many realize. Analysts may fixate on one hypothesis, looking only at evidence that is consistent with their preconceptions and ignoring other relevant views. The desire for rapid closure is another form of idea fixation.

"Familiarity with terrorist methods, repeated attacks against U.S. facilities overseas, combined with indications that the continental United States was at the top of the terrorist target list might have alerted us that we were in peril of a significant attack. And yet, for reasons those who study intelligence failure will find familiar, 9/11 fits very much into the norm of surprise caused by a breakdown of intelligence warning." [3] The breakdown happened, in part, because there was poor information-sharing among analysts (in different FBI offices, for example). At a conceptual level, US intelligence knew that al-Qaida actions almost always involve multiple, near-simultaneous attacks however, the FBI did not assimilate piecemeal information on oddly behaving foreign flight-training students into this context.

On the day of the hijackings (under tremendous time pressure), no analyst associated the multiple hijackings with the multiple-attack signature of al-Qaeda. The failure to conceive that a major attack could occur within the US left the country unprepared. For example, irregularities detected by the Federal Aviation Administration and North American Air Defense Command did not flow into a center where analysts could consolidate this information and (ideally) collate it with earlier reports of odd behavior among certain pilot trainees, or the possibility of hijacked airliners being used as weapons.

Inappropriate analogies

Inappropriate analogies are yet another cognitive trap. Though analogies may be extremely useful, they can become dangerous when forced, or when they are based on assumptions of cultural or contextual equivalence. Avoiding such analogies is difficult when analysts are merely unconscious of differences between their own context and that of others; it becomes extremely difficult when they are unaware that important knowledge is missing. Difficulties associated with admitting one's ignorance are an additional barrier to avoiding such traps. Such ignorance can take the form of insufficient study: a lack of factual information or understanding; an inability to mesh new facts with old; or a simple denial of conflicting facts.

Even extremely creative thinkers may find it difficult to gain support within their organization. Often more concerned with appearances, managers may suppress conflict born of creativity in favor of the status quo. A special case of stereotyping is stovepiping, whereby a group heavily invested in a particular collection technology ignores valid information from other sources (functional specialization). It was a Soviet tendency to value HUMINT (HUMan INTelligence), gathered from espionage, above all other sources; the Soviet OSINT was forced to go outside the state intelligence organization in developing the USA (later USA-Canada) Institute of the Soviet Academy of Sciences. [4]

Another specialization problem may come as a result of security compartmentalization. An analytic team with unique access to a source may overemphasize that source's significance. This can be a major problem with long-term HUMINT relationships, in which partners develop personal bonds.

Groups (like individual analysts) can also reject evidence which contradicts prior conclusions. When this happens it is often difficult to assess whether the inclusion of certain analysts in the group was the thoughtful application of deliberately contrarian "red teams", or the politicized insertion of ideologues to militate for a certain policy. Monopolization of the information flow (as caused by the latter) has also been termed "stovepiping", by analogy with intelligence-collection disciplines.

There are many levels at which one can misunderstand another culture, be it that of an organization or a country. One frequently encountered trap is the rational-actor hypothesis, which ascribes rational behavior to the other side, according to a definition of rationality from one's own culture.

The social anthropologist Edward T. Hall illustrated one such conflict [5] with an example from the American Southwest. "Anglo" drivers became infuriated when "Hispanic" traffic police would cite them for going 1 mi/h over the speed limit although a Hispanic judge would later dismiss the charge. "Hispanic" drivers, on the other hand, were convinced that "Anglo" judges were unfair because they would not dismiss charges because of extenuating circumstances.

Both cultures were rational with regard to law enforcement and the adjudication of charges; indeed, both believed that one of the two had to be flexible and the other had to be formal. However, in the Anglo culture, the police had discretion with regard to issuing speeding tickets, and the court was expected to stay within the letter of the law. In the Hispanic culture, the police were expected to be strict, but the courts would balance the situation. There was a fundamental misunderstanding: both sides were ethnocentric, and both incorrectly assumed the other culture was a mirror image of itself. In that example, denial of rationality was the result in both cultures, yet each was acting rationally within its own value set.

In a subsequent interview, Hall spoke widely about intercultural communication. [6] He summed up years of study with this statement: "I spent years trying to figure out how to select people to go overseas. This is the secret. You have to know how to make a friend. And that is it!"

To make a friend, one has to understand the culture of the potential friend, one's own culture, and how things which are rational in one may not translate to the other. Key questions are:

If we can get away from theoretical paradigms and focus more on what is really going on with people, we will be doing well. I have two models that I used originally. One is the linguistics model, that is, descriptive linguistics. And the other one is animal behavior. Both involve paying close attention to what is happening right under our nose. There is no way to get answers unless you immerse yourself in a situation and pay close attention. From this, the validity and integrity of patterns is experienced. In other words, the pattern can live and become a part of you.

The main thing that marks my methodology is that I really do use myself as a control. I pay very close attention to myself, my feelings because then I have a base. And it is not intellectual.

Proportionality bias assumes that small things in one culture are small in every culture. In reality, cultures prioritize differently. In Western (especially Northern European) culture, time schedules are important, and being late can be a major discourtesy. Waiting one's turn is the cultural norm, and failing to stand in line is a cultural failing. "Honor killing" seems bizarre in some cultures but is an accepted part of others.

Even within a culture, however, individuals remain individual. Presumption of unitary action by organizations is another trap. In Japanese culture, the lines of authority are very clear, but the senior individual will also seek consensus. American negotiators may push for quick decisions, but the Japanese need to build consensus first once it exists, they may execute it faster than Americans.

The analyst's country (or organization) is not identical to that of their opponent. One error is to mirror-image the opposition, assuming it will act the same as one's country and culture would under the same circumstances. "It seemed inconceivable to the U.S. planners in 1941 that the Japanese would be so foolish to attack a power whose resources so exceeded those of Japan, thus virtually guaranteeing defeat". [3]

In like manner, no analyst in US Navy force protection conceived of an Arleigh Burke-class destroyer such as the USS Cole being attacked with a small suicide boat, much like those the Japanese planned to use extensively against invasion forces during World War II.

The "other side" makes different technological assumptions Edit

An opponent's cultural framework affects its approach to technology. That complicates the task of one's own analysts in assessing the opponent's resources, how they may be used and defining intelligence targets accordingly. Mirror-imaging, committing to a set of common assumptions rather than challenging those assumptions, has figured in numerous intelligence failures.

In the Pacific Theater of World War II, the Japanese seemed to believe that their language was so complex that even if their cryptosystems such as Type B Cipher Machine (Code Purple) were broken, outsiders would not really understand the content. That was not strictly true, but it was sufficiently that there were cases that even the intended recipients did not clearly understand the writer's intent. [ citation needed ]

On the other side, the US Navy assumed that ships anchored in the shallow waters of Pearl Harbor were safe from torpedo attack even though in 1940, at the Battle of Taranto, the British had made successful shallow-water torpedo attacks against Italian warships in harbor.

Even if intelligence services had credited the September 11, 2001 attacks conspirators with the organizational capacity necessary to hijack four airliners simultaneously, no one would have suspected that the hijackers' weapon of choice would be the box cutter. [3]

Likewise, the US Navy underestimated the danger of suicide boats in harbor and set rules of engagement that allowed an unidentified boat to sail into the USS Cole without being warned off or fired on. A Burke-class destroyer is one of the most powerful [ citation needed ] warships ever built, but US security policies did not protect the docked USS Cole. [3]

The "other side" does not make decisions as you do Edit

Mirror-imaging can be a major problem for policymakers, as well as analysts. During the Vietnam War, Lyndon B. Johnson and Robert S. McNamara assumed that Ho Chi Minh would react to situations in the same manner as they would. Similarly, in the run-up to the Gulf War, there was a serious misapprehension independent of politically-motivated intelligence manipulation that Saddam Hussein would view the situation as both Kuwait as the State Department and White House did.

Opposing countries are not monolithic, even within their governments. There can be bureaucratic competition, which becomes associated with different ideas. Some dictators, such as Hitler and Stalin, were known for creating internal dissension, so that only the leader was in complete control. A current issue, which analysts understand but politicians may not or may want to exploit by playing on domestic fears, is the actual political and power structure of Iran one must not equate the power of Iran's president with that of the president of the United States.

Opponents are not always rational. They may have a greater risk tolerance than one's own country. Maintaining the illusion of a WMD threat appears to have been one of Saddam Hussein's survival strategies. Returning to the Iranian example, an apparently-irrational statement from Iranian President Mahmoud Ahmadinejad would not carry the weight of a similar statement by Supreme Leader Ali Khamenei. Analysts sometimes assume that the opponent is totally wise and knows all of the other side's weaknesses. Despite that danger, opponents are unlikely to act according to one's best-case scenario they may take the worst-case approach to which one is most vulnerable.

The "other side" may be trying to confuse you Edit

The analysts are to form hypotheses but should also be prepared to reexamine them repeatedly in light of new information instead of searching for evidence buttressing their favored theory. They must remember that the enemy may be deliberately deceiving them with information that seems plausible to the enemy. Donald Bacon observed that "the most successful deception stories were apparently as reasonable as the truth. Allied strategic deception, as well as Soviet deception in support of the operations at Stalingrad, Kursk, and the 1944 summer offensive, all exploited German leadership's preexisting beliefs and were, therefore, incredibly effective." [7] Theories that Hitler thought to be implausible were not accepted. Western deception staffs alternated "ambiguous" and "misleading" deceptions the former intended simply to confuse analysts and the latter to make one false alternative especially likely.

Of all modern militaries, the Russians treat strategic deception (or, in their word, maskirovka, which goes beyond the English phrase to include deception, operational security and concealment) as an integral part of all planning. The highest levels of command are involved. [8]

The battle of Kursk was also an example of effective Soviet maskirovka. While the Germans were preparing for their Kursk offensive, the Soviets created a story that they intended to conduct only defensive operations at Kursk. The reality was the Soviets planned a large counteroffensive at Kursk once they blunted the German attack. German intelligence for the Russian Front assumed the Soviets would conduct only "local" attacks around Kursk to "gain" a better jumping off place for the winter offensive.

The counterattack by the Steppe Front stunned the Germans.

The opponent may try to overload one's analytical capabilities [9] as a gambit for those preparing the intelligence budget and for agencies whoss fast track to promotion is in data collection one's own side may produce so much raw data that the analyst is overwhelmed, even without enemy assistance.


What Are Cognitive Biases?

Cognitive biases are systematic mental errors that people make when judging situations and other people. They arise from mental shortcuts that usually make it easier to navigate everyday situations.

However, as people use their own perceptions to create what they believe to be a reality, they often bypass any rational thinking. People tend to use cognitive biases when time is a factor in making a decision or they have a limited capacity for processing information. And, without considering objective input, one’s behavior, judgment, or choices may end up being illogical or unsound.

Like most things, cognitive biases have their benefits and their drawbacks. For example, imagine that you’re walking home alone late one night and you hear a noise behind you. Instead of stopping to investigate, you quicken your pace with your safety in mind. In this case, timeliness matters more than accuracy: whether the noise was a stray cat rustling in some leaves or someone trying to harm you, you’re not likely to wait around to find out.

Alternatively, think of a jury judging someone who is on trial for a crime. A fair trial requires the jury to ignore irrelevant information in the case, resist logical fallacies such as appeals to pity, and remain open-minded in considering all possibilities to uncover the truth. However, cognitive biases often prevent people from doing all of these things. Because cognitive biases have been studied extensively, researchers know that jurors often fail to be completely objective, and that they fail in ways that are both systematic and predictable.

Let’s take a look at 15 specific cognitive biases that many people experience. After reading this article, you will be able to spot these biases in your everyday life, which can prompt you to take a step back and further analyze a problem before coming to a conclusion.


4. Choice Supportive Bias

People evaluate their own previous choices as being better than average.

I’m not arguing that you should “trick” your recipients into making choices that benefit you because they will be “tricked” into thinking it was a good decision.

Instead, I’m arguing that if you can get small commitments from people, they will be more likely to continue working with you on bigger or more demanding projects (which should still be mutually beneficial).


2. Know & conquer: 16 key innovation-specific cognitive biases

We next need to become consciously aware of the specific biases at work so that we can identify them as they occur. Here are 16 cognitive biases that affect the creativity and innovation process; they can originate in personal habits, group dynamics, politics, and more. The following affect divergent and creative thinking in working groups:


  1. Confirmation bias: we believe what we want to believe by favoring information that confirms preexisting beliefs or preconceptions. This results in looking for creative solutions that confirm our beliefs rather than challenge them, making us closed to new possibilities.

“The opposite of courage is not cowardice, it is conformity. Even a dead fish can go with the flow.”

- Jim Hightower

Is there a name for this cognitive bias?

This is something I've seen over the years, but it seems like it has been magnified lately.

The concept is that for some people truth is relative, and its relativity is based solely on what someone/something else believes. If that person/thing decides that 2+2=5, it's 5. Now if that person/thing changes course and 2+2=4, it's now 4 and every argument for it being 5 beforehand is irrelevant.

Obviously, this is highly irrational, but there is something interesting about it. This seems to be especially prevalent in religion, politics, etc.

Is there a cognitive bias that this would fall under?

Edit: also look up the 'Asch effect' in the same wiki if you just read the "psychological basis" tab.

I think it's in the same realm. I think what I'm trying to get to is: what makes that person an authority in that person's mind? Take DJT for example: he is not an authority in most of the fields he has opinions on, but people believe what he says because it came from him and they believe in him. Why do they believe him no matter what he says, even if it conflicts with what he has previously said? If he's proven wrong, it's the corrupt media at fault, not him being inconsistent.

Should we call that a cognitive bias or just "emotion"? And by emotion and/or feeling I mean: one's instantaneous belief and felt truth held independently of other thoughts, feelings, reasons or cognitive dissonance.

Aside: I know people who believe what they believe when they believe it. And I know I can't depend on what they believe at this moment to predict their behavior at another moment.

But aren't irrational emotional biases grounded in cognitive biases?

I'm not sure, but quite a few could apply. Check out Belief Bias.

In psychology it's known as the "multiple Napoleons" bias: basically the idea that everything is relative and that someone's nonsensical belief is just as good as a sensible one. In psychology that argument is used to show that, under this belief, no idea could be deemed pathological. So we have to understand that each person does experience a different reality, and that reality is subjective, but to reach any constructive understanding of reality, the more nonsensical and pathological beliefs must be recognized as such.

If you think this sounds irrational, you haven't got the whole picture yet. It isn't; you are just applying it in a slightly incorrect way.

The thing is: what our consciousness applies to the perceptions it receives will define the emotions that arise.
This means that, over the long term, whatever your consciousness applies (and this can be shaped by thinking) will be reflected in your emotions. In that way, you make your own relative truth. You cannot change the outside world, but you can change your thinking, and by that you can change the echo of perception within your consciousness.
This is by no means irrational; it is simply how things are.


Biases: An Introduction

It’s not a secret. For some reason, though, it rarely comes up in conversation, and few people are asking what we should do about it. It’s a pattern, hidden unseen behind all our triumphs and failures, unseen behind our eyes. What is it?

Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls. Perhaps three of the ten balls will be red, and you’ll correctly guess how many red balls total were in the urn. Or perhaps you’ll happen to grab four red balls, or some other number. Then you’ll probably get the total number wrong.

This random error is the cost of incomplete knowledge, and as errors go, it’s not so bad. Your estimates won’t be incorrect on average, and the more you learn, the smaller your error will tend to be.

On the other hand, suppose that the white balls are heavier, and sink to the bottom of the urn. Then your sample may be unrepresentative in a consistent direction.

That sort of error is called “statistical bias.” When your method of learning about the world is biased, learning more may not help. Acquiring more data can even consistently worsen a biased prediction.
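To make the distinction concrete, here is a minimal simulation sketch in Python (not from the original essay). The "heavier white balls" are modeled by an assumed weight that makes red balls three times as likely to be drawn, and the draws are made with replacement for simplicity:

```python
import random

TRUE_RED_FRACTION = 0.3   # 30 red balls out of 100 in the urn
SAMPLE_SIZE = 10          # balls drawn per trial
TRIALS = 100_000

def unbiased_sample():
    # Every ball is equally likely to be drawn: the error is random
    # and averages out to zero over many trials.
    return sum(random.random() < TRUE_RED_FRACTION for _ in range(SAMPLE_SIZE))

def biased_sample(red_weight=3.0):
    # "White balls sink": red balls are red_weight times as likely to be
    # drawn. The weight is an illustrative assumption, not a figure from
    # the text.
    p_red = (TRUE_RED_FRACTION * red_weight) / (
        TRUE_RED_FRACTION * red_weight + (1 - TRUE_RED_FRACTION))
    return sum(random.random() < p_red for _ in range(SAMPLE_SIZE))

def mean_estimate(sampler):
    # Average estimated red fraction over many trials.
    return sum(sampler() for _ in range(TRIALS)) / (TRIALS * SAMPLE_SIZE)

if __name__ == "__main__":
    print(f"true red fraction:        {TRUE_RED_FRACTION:.2f}")
    print(f"unbiased estimate (mean): {mean_estimate(unbiased_sample):.3f}")  # ~0.300
    print(f"biased estimate (mean):   {mean_estimate(biased_sample):.3f}")    # ~0.563
```

Running more trials tightens the unbiased estimate around 0.30 but leaves the biased estimate stuck near 0.56; that is the sense in which gathering more data cannot fix a biased method.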

If you’re used to holding knowledge and inquiry in high esteem, this is a scary prospect. If we want to be sure that learning more will help us, rather than making us worse off than we were before, we need to discover and correct for biases in our data.

The idea of cognitive bias in psychology works in an analogous way. A cognitive bias is a systematic error in how we think, as opposed to a random error or one that’s merely caused by our ignorance. Whereas statistical bias skews a sample so that it less closely resembles a larger population, cognitive biases skew our beliefs so that they less accurately represent the facts, and they skew our decision-making so that it less reliably achieves our goals.

Maybe you have an optimism bias, and you find out that the red balls can be used to treat a rare tropical disease besetting your brother. You may then overestimate how many red balls the urn contains because you wish the balls were mostly red. Here, your sample isn’t what’s biased. You’re what’s biased.

Now that we’re talking about biased people, however, we have to be careful. Usually, when we call individuals or groups “biased,” we do it to chastise them for being unfair or partial. Cognitive bias is a different beast altogether. Cognitive biases are a basic part of how humans in general think, not the sort of defect we could blame on a terrible upbringing or a rotten personality. 1

A cognitive bias is a systematic way that your innate patterns of thought fall short of truth (or some other attainable goal, such as happiness). Like statistical biases, cognitive biases can distort our view of reality, they can’t always be fixed by just gathering more data, and their effects can add up over time. But when the miscalibrated measuring instrument you’re trying to fix is you, debiasing is a unique challenge.

Still, this is an obvious place to start. For if you can’t trust your brain, how can you trust anything else?

It would be useful to have a name for this project of overcoming cognitive bias, and of overcoming all species of error where our minds can come to undermine themselves.

We could call this project whatever we’d like. For the moment, though, I suppose “rationality” is as good a name as any.

Rational Feelings

In a Hollywood movie, being “rational” usually means that you’re a stern, hyperintellectual stoic. Think Spock from Star Trek, who “rationally” suppresses his emotions, “rationally” refuses to rely on intuitions or impulses, and is easily dumbfounded and outmaneuvered upon encountering an erratic or “irrational” opponent. 2

There’s a completely different notion of “rationality” studied by mathematicians, psychologists, and social scientists. Roughly, it’s the idea of doing the best you can with what you’ve got. A rational person, no matter how out of their depth they are, forms the best beliefs they can with the evidence they’ve got. A rational person, no matter how terrible a situation they’re stuck in, makes the best choices they can to improve their odds of success.

Real-world rationality isn’t about ignoring your emotions and intuitions. For a human, rationality often means becoming more self-aware about your feelings, so you can factor them into your decisions.

Rationality can even be about knowing when not to overthink things. When selecting a poster to put on their wall, or predicting the outcome of a basketball game, experimental subjects have been found to perform worse if they carefully analyzed their reasons. 3 , 4 There are some problems where conscious deliberation serves us better, and others where snap judgments serve us better.

Psychologists who work on dual process theories distinguish the brain’s “System 1” processes (fast, implicit, associative, automatic cognition) from its “System 2” processes (slow, explicit, intellectual, controlled cognition). 5 The stereotype is for rationalists to rely entirely on System 2, disregarding their feelings and impulses. Looking past the stereotype, someone who is actually being rational—actually achieving their goals, actually mitigating the harm from their cognitive biases—would rely heavily on System-1 habits and intuitions where they’re reliable.

Unfortunately, System 1 on its own seems to be a terrible guide to “when should I trust System 1?” Our untrained intuitions don’t tell us when we ought to stop relying on them. Being biased and being unbiased feel the same. 6

On the other hand, as behavioral economist Dan Ariely notes: we’re predictably irrational. We screw up in the same ways, again and again, systematically.

If we can’t use our gut to figure out when we’re succumbing to a cognitive bias, we may still be able to use the sciences of mind.

The Many Faces of Bias

To solve problems, our brains have evolved to employ cognitive heuristics— rough shortcuts that get the right answer often, but not all the time. Cognitive biases arise when the corners cut by these heuristics result in a relatively consistent and discrete mistake.

The representativeness heuristic, for example, is our tendency to assess phenomena by how representative they seem of various categories. This can lead to biases like the conjunction fallacy. Tversky and Kahneman found that experimental subjects considered it less likely that a strong tennis player would “lose the first set” than that he would “lose the first set but win the match.” 7 Making a comeback seems more typical of a strong player, so we overestimate the probability of this complicated-but-sensible-sounding narrative compared to the probability of a strictly simpler scenario.
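The arithmetic behind the fallacy is worth spelling out (this is standard probability theory, not a claim specific to the essay): for any events $A$ and $B$,

\[
P(A \wedge B) = P(A)\,P(B \mid A) \le P(A),
\]

so "loses the first set but wins the match" can never be more probable than "loses the first set," however representative the comeback story feels.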

The representativeness heuristic can also contribute to base rate neglect, where we ground our judgments in how intuitively “normal” a combination of attributes is, neglecting how common each attribute is in the population at large. 8 Is it more likely that Steve is a shy librarian, or that he’s a shy salesperson? Most people answer this kind of question by thinking about whether “shy” matches their stereotypes of those professions. They fail to take into consideration how much more common salespeople are than librarians—seventy-five times as common, in the United States. 9
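To see the base rate at work, here is a minimal Python sketch. The 75:1 ratio of salespeople to librarians comes from the text; the "shyness" rates are illustrative assumptions, not data:

```python
# Base rate neglect, illustrated with Bayes' rule in odds form.
salespeople_per_librarian = 75.0   # ratio taken from the text (United States)
p_shy_given_librarian = 0.60       # assumed for illustration
p_shy_given_salesperson = 0.15     # assumed for illustration

# Posterior odds = prior odds * likelihood ratio
odds_salesperson = (salespeople_per_librarian
                    * p_shy_given_salesperson / p_shy_given_librarian)
p_salesperson = odds_salesperson / (1.0 + odds_salesperson)

print(f"odds (salesperson : librarian) = {odds_salesperson:.1f} : 1")  # ~18.8 : 1
print(f"P(salesperson | shy)           = {p_salesperson:.0%}")         # ~95%
```

Even granting librarians a much higher rate of shyness, the sheer number of salespeople makes "shy salesperson" the far better bet.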

Other examples of biases include duration neglect (evaluating experiences without regard to how long they lasted), the sunk cost fallacy (feeling committed to things you’ve spent resources on in the past, when you should be cutting your losses and moving on), and confirmation bias (giving more weight to evidence that confirms what we already believe). 10 , 11

Knowing about a bias, however, is rarely enough to protect you from it. In a study of bias blindness, experimental subjects predicted that if they learned a painting was the work of a famous artist, they’d have a harder time neutrally assessing the quality of the painting. And, indeed, subjects who were told a painting’s author and were asked to evaluate its quality exhibited the very bias they had predicted, relative to a control group. When asked afterward, however, the very same subjects claimed that their assessments of the paintings had been objective and unaffected by the bias—in all groups! 12 , 13

We’re especially loath to think of our views as inaccurate compared to the views of others. Even when we correctly identify others’ biases, we have a special bias blind spot when it comes to our own flaws. 14 We fail to detect any “biased-feeling thoughts” when we introspect, and so draw the conclusion that we must just be more objective than everyone else. 15

Studying biases can in fact make you more vulnerable to overconfidence and confirmation bias, as you come to see the influence of cognitive biases all around you—in everyone but yourself. And the bias blind spot, unlike many biases, is especially severe among people who are especially intelligent, thoughtful, and open-minded. 16 , 17

This is cause for concern.

Still, it does seem like we should be able to do better. It’s known that we can reduce base rate neglect by thinking of probabilities as frequencies of objects or events. We can minimize duration neglect by directing more attention to duration and depicting it graphically. 18 People vary in how strongly they exhibit different biases, so there should be a host of yet-unknown ways to influence how biased we are.
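As a sketch of the frequency reformulation mentioned above, applied to the shy-librarian example (the 75:1 ratio is from the text; the shyness rates are the same illustrative assumptions used earlier): picture 7,600 people, of whom 100 are librarians and 7,500 are salespeople. Then

\[
100 \times 0.60 = 60 \ \text{shy librarians}, \qquad 7{,}500 \times 0.15 = 1{,}125 \ \text{shy salespeople},
\]

so roughly $1{,}125 / (1{,}125 + 60) \approx 95\%$ of the shy people are salespeople. Counting people instead of juggling probabilities makes the base rate much harder to ignore.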

If we want to improve, however, it’s not enough for us to pore over lists of cognitive biases. The approach to debiasing in Rationality: From AI to Zombies is to communicate a systematic understanding of why good reasoning works, and of how the brain falls short of it. To the extent this volume does its job, its approach can be compared to the one described in Serfas, who notes that “years of financially related work experience” didn’t affect people’s susceptibility to the sunk cost bias, whereas “the number of accounting courses attended” did help.

As a consequence, it might be necessary to distinguish between experience and expertise, with expertise meaning “the development of a schematic principle that involves conceptual understanding of the problem,” which in turn enables the decision maker to recognize particular biases. However, using expertise as countermeasure requires more than just being familiar with the situational content or being an expert in a particular domain. It requires that one fully understand the underlying rationale of the respective bias, is able to spot it in the particular setting, and also has the appropriate tools at hand to counteract the bias. 19

The goal of this book is to lay the groundwork for creating rationality “expertise.” That means acquiring a deep understanding of the structure of a very general problem: human bias, self-deception, and the thousand paths by which sophisticated thought can defeat itself.

A Word About This Text

Rationality: From AI to Zombies began its life as a series of essays by Eliezer Yudkowsky, published between 2006 and 2009 on the economics blog Overcoming Bias and its spin-off community blog Less Wrong. I’ve worked with Yudkowsky for the last year at the Machine Intelligence Research Institute (MIRI), a nonprofit he founded in 2000 to study the theoretical requirements for smarter-than-human artificial intelligence (AI).

Reading his blog posts got me interested in his work. He impressed me with his ability to concisely communicate insights it had taken me years of studying analytic philosophy to internalize. In seeking to reconcile science’s anarchic and skeptical spirit with a rigorous and systematic approach to inquiry, Yudkowsky tries not just to refute but to understand the many false steps and blind alleys bad philosophy (and bad lack-of-philosophy) can produce. My hope in helping organize these essays into a book is to make it easier to dive in to them, and easier to appreciate them as a coherent whole.

The resultant rationality primer is frequently personal and irreverent— drawing, for example, from Yudkowsky’s experiences with his Orthodox Jewish mother (a psychiatrist) and father (a physicist), and from conversations on chat rooms and mailing lists. Readers who are familiar with Yudkowsky from Harry Potter and the Methods of Rationality, his science-oriented take-off of J.K. Rowling’s Harry Potter books, will recognize the same irreverent iconoclasm, and many of the same core concepts.

Stylistically, the essays in this book run the gamut from “lively textbook” to “compendium of thoughtful vignettes” to “riotous manifesto,” and the content is correspondingly varied. Rationality: From AI to Zombies collects hundreds of Yudkowsky’s blog posts into twenty-six “sequences,” chapter-like series of thematically linked posts. The sequences in turn are grouped into six books, covering the following topics:

Book I: Map and Territory. What is a belief, and what makes some beliefs work better than others? These four sequences explain the Bayesian notions of rationality, belief, and evidence. A running theme: the things we call “explanations” or “theories” may not always function like maps for navigating the world. As a result, we risk mixing up our mental maps with the other objects in our toolbox.

Book II: How to Actually Change Your Mind. This truth thing seems pretty handy. Why, then, do we keep jumping to conclusions, digging our heels in, and recapitulating the same mistakes? Why are we so bad at acquiring accurate beliefs, and how can we do better? These seven sequences discuss motivated reasoning and confirmation bias, with a special focus on hard-to-spot species of self-deception and the trap of “using arguments as soldiers.”

Book III: The Machine in the Ghost. Why haven’t we evolved to be more rational? Even taking into account resource constraints, it seems like we could be getting a lot more epistemic bang for our evidential buck. To get a realistic picture of how and why our minds execute their biological functions, we need to crack open the hood and see how evolution works, and how our brains work, with more precision. These three sequences illustrate how even philosophers and scientists can be led astray when they rely on intuitive, non-technical evolutionary or psychological accounts. By locating our minds within a larger space of goal-directed systems, we can identify some of the peculiarities of human reasoning and appreciate how such systems can “lose their purpose.”

Book IV: Mere Reality. What kind of world do we live in? What is our place in that world? Building on the previous sequences’ examples of how evolutionary and cognitive models work, these six sequences explore the nature of mind and the character of physical law. In addition to applying and generalizing past lessons on scientific mysteries and parsimony, these essays raise new questions about the role science should play in individual rationality.

Book V: Mere Goodness. What makes something valuable—morally, or aesthetically, or prudentially? These three sequences ask how we can justify, revise, and naturalize our values and desires. The aim will be to find a way to understand our goals without compromising our efforts to actually achieve them. Here the biggest challenge is knowing when to trust your messy, complicated case-by-case impulses about what’s right and wrong, and when to replace them with simple exceptionless principles.

Book VI: Becoming Stronger. How can individuals and communities put all this into practice? These three sequences begin with an autobiographical account of Yudkowsky’s own biggest philosophical blunders, with advice on how he thinks others might do better. The book closes with recommendations for developing evidence-based applied rationality curricula, and for forming groups and institutions to support interested students, educators, researchers, and friends.

The sequences are also supplemented with “interludes,” essays taken from Yudkowsky’s personal website, http://www.yudkowsky.net. These tie in to the sequences in various ways; e.g., The Twelve Virtues of Rationality poetically summarizes many of the lessons of Rationality: From AI to Zombies, and is often quoted in other essays.

Clicking the green asterisk at the bottom of an essay will take you to the original version of it on Less Wrong (where you can leave comments) or on Yudkowsky’s website.

Map and Territory

This, the first book, begins with a sequence on cognitive bias: “Predictably Wrong.” The rest of the book won’t stick to just this topic; bad habits and bad ideas matter, even when they arise from our minds’ contents as opposed to our minds’ structure. Thus evolved and invented errors will both be on display in subsequent sequences, beginning with a discussion in “Fake Beliefs” of ways that one’s expectations can come apart from one’s professed beliefs.

An account of irrationality would also be incomplete if it provided no theory about how rationality works—or if its “theory” only consisted of vague truisms, with no precise explanatory mechanism. The “Noticing Confusion” sequence asks why it’s useful to base one’s behavior on “rational” expectations, and what it feels like to do so.

“Mysterious Answers” next asks whether science resolves these problems for us. Scientists base their models on repeatable experiments, not speculation or hearsay. And science has an excellent track record compared to anecdote, religion, and… pretty much everything else. Do we still need to worry about “fake” beliefs, confirmation bias, hindsight bias, and the like when we’re working with a community of people who want to explain phenomena, not just tell appealing stories?

This is then followed by The Simple Truth, a stand-alone allegory on the nature of knowledge and belief.

It is cognitive bias, however, that provides the clearest and most direct glimpse into the stuff of our psychology, into the shape of our heuristics and the logic of our limitations. It is with bias that we will begin.

There is a passage in the Zhuangzi, a proto-Daoist philosophical text, that says: “The fish trap exists because of the fish; once you’ve gotten the fish, you can forget the trap.” 20

I invite you to explore this book in that spirit. Use it like you’d use a fish trap, ever mindful of the purpose you have for it. Carry with you what you can use, so long as it continues to have use; discard the rest. And may your purpose serve you well.

Acknowledgments

I am stupendously grateful to Nate Soares, Elizabeth Tarleton, Paul Crowley, Brienne Strohl, Adam Freese, Helen Toner, and dozens of volunteers for proofreading portions of this book.

Special and sincere thanks to Alex Vermeer, who steered this book to completion, and Tsvi Benson-Tilsen, who combed through the entire book to ensure its readability and consistency.

The idea of personal bias, media bias, etc. resembles statistical bias in that it’s an error. Other ways of generalizing the idea of “bias” focus instead on its association with nonrandomness. In machine learning, for example, an inductive bias is just the set of assumptions a learner uses to derive predictions from a data set. Here, the learner is “biased” in the sense that it’s pointed in a specific direction; but since that direction might be truth, it isn’t a bad thing for an agent to have an inductive bias. It’s valuable and necessary. This distinguishes inductive “bias” quite clearly from the other kinds of bias. ↩

A sad coincidence: Leonard Nimoy, the actor who played Spock, passed away just a few days before the release of this book. Though we cite his character as a classic example of fake “Hollywood rationality,” we mean no disrespect to Nimoy’s memory. ↩

Timothy D. Wilson et al., “Introspecting About Reasons Can Reduce Post-choice Satisfaction,” Personality and Social Psychology Bulletin 19 (1993): 331–331. ↩

Jamin Brett Halberstadt and Gary M. Levine, “Effects of Reasons Analysis on the Accuracy of Predicting Basketball Games,” Journal of Applied Social Psychology 29, no. 3 (1999): 517–530. ↩

Keith E. Stanovich and Richard F. West, “Individual Differences in Reasoning: Implications for the Rationality Debate?,” Behavioral and Brain Sciences 23, no. 5 (2000): 645–665, http://journals.cambridge.org/abstract_S0140525X00003435. ↩

Timothy D. Wilson, David B. Centerbar, and Nancy Brekke, “Mental Contamination and the Debiasing Problem,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (Cambridge University Press, 2002). ↩

Amos Tversky and Daniel Kahneman, “Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment,” Psychological Review 90, no. 4 (1983): 293–315, doi:10.1037/0033295X.90.4.293. ↩

Richards J. Heuer, Psychology of Intelligence Analysis (Center for the Study of Intelligence, Central Intelligence Agency, 1999). ↩

Wayne Weiten, Psychology: Themes and Variations, Briefer Version, Eighth Edition (Cengage Learning, 2010). ↩

Raymond S. Nickerson, “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises,” Review of General Psychology 2, no. 2 (1998): 175. ↩

Probability neglect is another cognitive bias. In the months and years following the September 11 attacks, many people chose to drive long distances rather than fly. Hijacking wasn’t likely, but it now felt like it was on the table; the mere possibility of hijacking hugely impacted decisions. By relying on black-and-white reasoning (cars and planes are either “safe” or “unsafe,” full stop), people actually put themselves in much more danger. Where they should have weighed the probability of dying on a cross-country car trip against the probability of dying on a cross-country flight—the former is hundreds of times more likely—they instead relied on their general feeling of worry and anxiety (the affect heuristic). We can see the same pattern of behavior in children who, hearing arguments for and against the safety of seat belts, hop back and forth between thinking seat belts are a completely good idea or a completely bad one, instead of trying to compare the strengths of the pro and con considerations. 21 Some more examples of biases are: the peak/end rule (evaluating remembered events based on their most intense moment, and how they ended); anchoring (basing decisions on recently encountered information, even when it’s irrelevant); 22 self-anchoring (using yourself as a model for others’ likely characteristics, without giving enough thought to ways you’re atypical); 23 and status quo bias (excessively favoring what’s normal and expected over what’s new and different). 24 ↩

Katherine Hansen et al., “People Claim Objectivity After Knowingly Using Biased Strategies,” Personality and Social Psychology Bulletin 40, no. 6 (2014): 691–699. ↩

Similarly, Pronin writes of gender bias blindness:

In one study, participants considered a male and a female candidate for a police-chief job and then assessed whether being “streetwise” or “formally educated” was more important for the job. The result was that participants favored whichever background they were told the male candidate possessed (e.g., if told he was “streetwise,” they viewed that as more important). Participants were completely blind to this gender bias; indeed, the more objective they believed they had been, the more bias they actually showed. 25

In a survey of 76 people waiting in airports, individuals rated themselves much less susceptible to cognitive biases on average than a typical person in the airport. In particular, people think of themselves as unusually unbiased when the bias is socially undesirable or has difficult-to-notice consequences. 27 Other studies find that people with personal ties to an issue see those ties as enhancing their insight and objectivity; but when they see other people exhibiting the same ties, they infer that those people are overly attached and biased. ↩

Joyce Ehrlinger, Thomas Gilovich, and Lee Ross, “Peering Into the Bias Blind Spot: People’s Assessments of Bias in Themselves and Others,” Personality and Social Psychology Bulletin 31, no. 5 (2005): 680–692. ↩

Richard F. West, Russell J. Meserve, and Keith E. Stanovich, “Cognitive Sophistication Does Not Attenuate the Bias Blind Spot,” Journal of Personality and Social Psychology 103, no. 3 (2012): 506. ↩

… Not to be confused with people who think they’re unusually intelligent, thoughtful, etc. because of the illusory superiority bias. ↩

Michael J. Liersch and Craig R. M. McKenzie, “Duration Neglect by Numbers and Its Elimination by Graphs,” Organizational Behavior and Human Decision Processes 108, no. 2 (2009): 303–314. ↩

Sebastian Serfas, Cognitive Biases in the Capital Investment Context: Theoretical Considerations and Empirical Experiments on Violations of Normative Rationality (Springer, 2010). ↩

Zhuangzi and Burton Watson, The Complete Works of Zhuangzi (Columbia University Press, 1968). ↩

Cass R. Sunstein, “Probability Neglect: Emotions, Worst Cases, and Law,” Yale Law Journal (2002): 61–107. ↩

Dan Ariely, Predictably Irrational: The Hidden Forces That Shape Our Decisions (HarperCollins, 2008). ↩

Boaz Keysar and Dale J. Barr, “Self-Anchoring in Conversation: Why Language Users Do Not Do What They ‘Should,”’ in Heuristics and Biases: The Psychology of Intuitive Judgment: The Psychology of Intuitive Judgment, ed. Griffin Gilovich and Daniel Kahneman (New York: Cambridge University Press, 2002), 150–166, doi:10.2277/0521796792. ↩

Scott Eidelman and Christian S. Crandall, “Bias in Favor of the Status Quo,” Social and Personality Psychology Compass 6, no. 3 (2012): 270–281. ↩

Eric Luis Uhlmann and Geoffrey L. Cohen, “‘I think it, therefore it’s true’: Effects of Self-perceived Objectivity on Hiring Discrimination,” Organizational Behavior and Human Decision Processes 104, no. 2 (2007): 207–223. ↩

Emily Pronin, “How We See Ourselves and How We See Others,” Science 320 (2008): 1177–1180, http://web.cs.ucdavis.edu/

Emily Pronin, Daniel Y. Lin, and Lee Ross, “The Bias Blind Spot: Perceptions of Bias in Self versus Others,” Personality and Social Psychology Bulletin 28, no. 3 (2002): 369–381. ↩



Mirror imaging

The most common personality trap, known as mirror-imaging, [2] is the analysts' assumption that the people being studied think like the analysts themselves. An important variation is to confuse actual subjects with one's information or images about them, much as the sort of apple one eats differs from the ideas and issues it may raise. This poses a dilemma for the scientific method in general, since science uses information and theory to represent complex natural systems as if theoretical constructs might be in control of indefinable natural processes. An inability to distinguish subjects from what one is thinking about them is also studied under functional fixedness, first examined in Gestalt psychology and in relation to the subject–object problem.

Experienced analysts may recognize that they have fallen prey to mirror-imaging if they discover that they are unwilling to examine variants of what they consider most reasonable in light of their personal frame of reference. Less-perceptive analysts affected by this trap may regard legitimate objections as a personal attack, rather than looking beyond ego to the merits of the question. Peer review (especially by people from a different background) can be a wise safeguard. Organizational culture can also create traps which render individual analysts unwilling to challenge acknowledged experts in the group.

Target fixation

Another trap, target fixation, has an analogy in aviation: it occurs when pilots become so intent on delivering their ordnance that they lose sight of the big picture and crash into the target. This is a more basic human tendency than many realize. Analysts may fixate on one hypothesis, looking only at evidence that is consistent with their preconceptions and ignoring other relevant views. The desire for rapid closure is another form of idea fixation.

"Familiarity with terrorist methods, repeated attacks against U.S. facilities overseas, combined with indications that the continental United States was at the top of the terrorist target list might have alerted us that we were in peril of a significant attack. And yet, for reasons those who study intelligence failure will find familiar, 9/11 fits very much into the norm of surprise caused by a breakdown of intelligence warning." [3] The breakdown happened, in part, because there was poor information-sharing among analysts (in different FBI offices, for example). At a conceptual level, US intelligence knew that al-Qaida actions almost always involve multiple, near-simultaneous attacks however, the FBI did not assimilate piecemeal information on oddly behaving foreign flight-training students into this context.

On the day of the hijackings (under tremendous time pressure), no analyst associated the multiple hijackings with the multiple-attack signature of al-Qaeda. The failure to conceive that a major attack could occur within the US left the country unprepared. For example, irregularities detected by the Federal Aviation Administration and North American Air Defense Command did not flow into a center where analysts could consolidate this information and (ideally) collate it with earlier reports of odd behavior among certain pilot trainees, or the possibility of hijacked airliners being used as weapons.

Inappropriate analogies

Inappropriate analogies are yet another cognitive trap. Though analogies may be extremely useful, they can become dangerous when forced, or when they are based on assumptions of cultural or contextual equivalence. Avoiding such analogies is difficult when analysts are merely unconscious of differences between their own context and that of others; it becomes extremely difficult when they are unaware that important knowledge is missing. Difficulties associated with admitting one's ignorance are an additional barrier to avoiding such traps. Such ignorance can take the form of insufficient study: a lack of factual information or understanding; an inability to mesh new facts with old; or a simple denial of conflicting facts.

Even extremely creative thinkers may find it difficult to gain support within their organization. Often more concerned with appearances, managers may suppress conflict born of creativity in favor of the status quo. A special case of stereotyping is stovepiping, whereby a group heavily invested in a particular collection technology ignores valid information from other sources (functional specialization). It was a Soviet tendency to value HUMINT (HUMan INTelligence), gathered from espionage, above all other sources; Soviet OSINT (open-source intelligence) was forced to go outside the state intelligence organization in developing the USA (later USA-Canada) Institute of the Soviet Academy of Sciences. [4]

Another specialization problem may come as a result of security compartmentalization. An analytic team with unique access to a source may overemphasize that source's significance. This can be a major problem with long-term HUMINT relationships, in which partners develop personal bonds.

Groups (like individual analysts) can also reject evidence which contradicts prior conclusions. When this happens it is often difficult to assess whether the inclusion of certain analysts in the group was the thoughtful application of deliberately contrarian "red teams", or the politicized insertion of ideologues to militate for a certain policy. Monopolization of the information flow (as caused by the latter) has also been termed "stovepiping", by analogy with intelligence-collection disciplines.

There are many levels at which one can misunderstand another culture, be it that of an organization or a country. One frequently encountered trap is the rational-actor hypothesis, which ascribes rational behavior to the other side, according to a definition of rationality from one's own culture.

The social anthropologist Edward T. Hall illustrated one such conflict [5] with an example from the American Southwest. "Anglo" drivers became infuriated when "Hispanic" traffic police would cite them for going 1 mi/h over the speed limit although a Hispanic judge would later dismiss the charge. "Hispanic" drivers, on the other hand, were convinced that "Anglo" judges were unfair because they would not dismiss charges because of extenuating circumstances.

Both cultures were rational with regard to law enforcement and the adjudication of charges; indeed, both believed that one of the two had to be flexible and the other had to be formal. However, in the Anglo culture, the police had discretion with regard to issuing speeding tickets, and the court was expected to stay within the letter of the law. In the Hispanic culture, the police were expected to be strict, but the courts would balance the situation. There was a fundamental misunderstanding: both sides were ethnocentric, and both incorrectly assumed the other culture was a mirror image of itself. In that example, denial of rationality was the result in both cultures, yet each was acting rationally within its own value set.

In a subsequent interview, Hall spoke widely about intercultural communication. [6] He summed up years of study with this statement: "I spent years trying to figure out how to select people to go overseas. This is the secret. You have to know how to make a friend. And that is it!"

To make a friend, one has to understand the culture of the potential friend, one's own culture, and how things which are rational in one may not translate to the other. As Hall put it:

If we can get away from theoretical paradigms and focus more on what is really going on with people, we will be doing well. I have two models that I used originally. One is the linguistics model, that is, descriptive linguistics. And the other one is animal behavior. Both involve paying close attention to what is happening right under our nose. There is no way to get answers unless you immerse yourself in a situation and pay close attention. From this, the validity and integrity of patterns is experienced. In other words, the pattern can live and become a part of you.

The main thing that marks my methodology is that I really do use myself as a control. I pay very close attention to myself, my feelings because then I have a base. And it is not intellectual.

Proportionality bias assumes that small things in one culture are small in every culture. In reality, cultures prioritize differently. In Western (especially Northern European) culture, time schedules are important, and being late can be a major discourtesy. Waiting one's turn is the cultural norm, and failing to stand in line is a cultural failing. "Honor killing" seems bizarre in some cultures but is an accepted part of others.

Even within a culture, however, individuals remain individual. Presumption of unitary action by organizations is another trap. In Japanese culture, the lines of authority are very clear, but the senior individual will also seek consensus. American negotiators may push for quick decisions, but the Japanese need to build consensus first; once it exists, they may execute it faster than Americans.

The analyst's country (or organization) is not identical to that of their opponent. One error is to mirror-image the opposition, assuming it will act the same as one's country and culture would under the same circumstances. "It seemed inconceivable to the U.S. planners in 1941 that the Japanese would be so foolish to attack a power whose resources so exceeded those of Japan, thus virtually guaranteeing defeat". [3]

In like manner, no analyst in US Navy force protection conceived of an Arleigh Burke-class destroyer such as the USS Cole being attacked with a small suicide boat, much like those the Japanese planned to use extensively against invasion forces during World War II.

The "other side" makes different technological assumptions Edit

An opponent's cultural framework affects its approach to technology. That complicates the task of one's own analysts in assessing the opponent's resources, how they may be used and defining intelligence targets accordingly. Mirror-imaging, committing to a set of common assumptions rather than challenging those assumptions, has figured in numerous intelligence failures.

In the Pacific Theater of World War II, the Japanese seemed to believe that their language was so complex that even if their cryptosystems such as Type B Cipher Machine (Code Purple) were broken, outsiders would not really understand the content. That was not strictly true, but it was sufficiently true that there were cases in which even the intended recipients did not clearly understand the writer's intent.

On the other side, the US Navy assumed that ships anchored in the shallow waters of Pearl Harbor were safe from torpedo attack even though in 1940, at the Battle of Taranto, the British had made successful shallow-water torpedo attacks against Italian warships in harbor.

Even if intelligence services had credited the September 11, 2001, conspirators with the organizational capacity necessary to hijack four airliners simultaneously, no one would have suspected that the hijackers' weapon of choice would be the box cutter. [3]

Likewise, the US Navy underestimated the danger of suicide boats in harbor and set rules of engagement that allowed an unidentified boat to sail into the USS Cole without being warned off or fired on. A Burke-class destroyer is one of the most powerful warships ever built, but US security policies did not protect the docked USS Cole. [3]

The "other side" does not make decisions as you do Edit

Mirror-imaging can be a major problem for policymakers, as well as analysts. During the Vietnam War, Lyndon B. Johnson and Robert S. McNamara assumed that Ho Chi Minh would react to situations in the same manner as they would. Similarly, in the run-up to the Gulf War, there was a serious misapprehension, independent of politically motivated intelligence manipulation, that Saddam Hussein would view the situation in Kuwait as both the State Department and the White House did.

Opposing countries are not monolithic, even within their governments. There can be bureaucratic competition, which becomes associated with different ideas. Some dictators, such as Hitler and Stalin, were known for creating internal dissension, so that only the leader was in complete control. A current issue, which analysts understand but politicians may not or may want to exploit by playing on domestic fears, is the actual political and power structure of Iran; one must not equate the power of Iran's president with that of the president of the United States.

Opponents are not always rational. They may have a greater risk tolerance than one's own country. Maintaining the illusion of a WMD threat appears to have been one of Saddam Hussein's survival strategies. Returning to the Iranian example, an apparently-irrational statement from Iranian President Mahmoud Ahmadinejad would not carry the weight of a similar statement by Supreme Leader Ali Khamenei. Analysts sometimes assume that the opponent is totally wise and knows all of the other side's weaknesses. Despite that danger, opponents are unlikely to act according to one's best-case scenario; they may take the worst-case approach to which one is most vulnerable.

The "other side" may be trying to confuse you Edit

The analysts are to form hypotheses but should also be prepared to reexamine them repeatedly in light of new information instead of searching for evidence buttressing their favored theory. They must remember that the enemy may be deliberately deceiving them with information that seems plausible to the enemy. Donald Bacon observed that "the most successful deception stories were apparently as reasonable as the truth. Allied strategic deception, as well as Soviet deception in support of the operations at Stalingrad, Kursk, and the 1944 summer offensive, all exploited German leadership's preexisting beliefs and were, therefore, incredibly effective." [7] Theories that Hitler thought to be implausible were not accepted. Western deception staffs alternated "ambiguous" and "misleading" deceptions; the former intended simply to confuse analysts and the latter to make one false alternative especially likely.

Of all modern militaries, the Russians treat strategic deception (or, in their word, maskirovka, which goes beyond the English phrase to include deception, operational security and concealment) as an integral part of all planning. The highest levels of command are involved. [8]

The battle of Kursk was also an example of effective Soviet maskirovka. While the Germans were preparing for their Kursk offensive, the Soviets created a story that they intended to conduct only defensive operations at Kursk. The reality was the Soviets planned a large counteroffensive at Kursk once they blunted the German attack. German intelligence for the Russian Front assumed the Soviets would conduct only "local" attacks around Kursk to "gain" a better jumping off place for the winter offensive.

The counterattack by the Steppe Front stunned the Germans.

The opponent may try to overload one's analytical capabilities [9] as a gambit for those preparing the intelligence budget and for agencies whose fast track to promotion is in data collection; one's own side may produce so much raw data that the analyst is overwhelmed, even without enemy assistance.





The psychology of misinformation — the mental shortcuts, confusions, and illusions that encourage us to believe things that aren’t true — can tell us a lot about how to prevent its harmful effects. Our psychology is what affects whether corrections work, what we should teach in media literacy courses, and why we’re vulnerable to misinformation in the first place. It’s also a fascinating insight into the human brain.

Though psychological concepts originate in academia, many have found their way into everyday language. Cognitive dissonance, first described in 1957, is one; confirmation bias is another. And this is part of the problem. Just as we have armchair epidemiologists, we can easily become armchair cognitive scientists, and mischaracterization of these concepts can create new forms of misinformation.

If reporters, fact checkers, researchers, technologists, and influencers working with misinformation (which, let’s face it, is almost all of them) don’t understand these distinctions, it isn’t simply a case of mistaking an obscure academic term. It risks becoming part of the problem.

We list the major psychological concepts that relate to misinformation, its correction, and prevention. They’re intended as a starting point rather than the last word — use the suggested further reading to dive deeper.

Cognitive miserliness

The psychological feature that makes us most vulnerable to misinformation is that we are 'cognitive misers'. We prefer to use simpler, easier ways of solving problems over ones that require more thought and effort. We've evolved to use as little mental effort as possible.

This is part of what makes our brains so efficient: You don’t want to be thinking really hard about every single thing. But it also means we don’t put enough thought into things when we need to — for example, when thinking about whether something we see online is true.

What to read next: “How the Web Is Changing the Way We Trust” by Dario Taraborelli of the University of London, published in Current Issues in Computing and Philosophy in 2008.

Dual process theory

Dual process theory is the idea that we have two basic ways of thinking: System 1, an automatic process that requires little effort, and System 2, an analytical process that requires more effort. Because we are cognitive misers, we will generally use System 1 thinking (the easy one) when we think we can get away with it.

Automatic processing creates the risk of misinformation for two reasons. First, the easier something is to process, the more likely we are to think it’s true, so quick, easy judgments often feel right even when they aren’t. Second, its efficiency can miss details — sometimes crucial ones. For example, you might recall something you read on the internet, but forget that it was debunked.

What to read next: “A Perspective on the Theoretical Foundation of Dual Process Models” by Gordon Pennycook, published in Dual Process Theory 2.0 in 2017.

Heuristics

Heuristics are indicators we use to make quick judgments. We use heuristics because it’s easier than conducting complex analysis, especially on the internet where there’s a lot of information.

The problem with heuristics is that they often lead to incorrect conclusions. For example, you might rely on a ‘social endorsement heuristic’ — that someone you trust has endorsed (e.g., retweeted) a post on social media — to judge how trustworthy it is. But however much you trust that person, it’s not a completely reliable indicator and could lead you to believe something that isn’t true.

As our co-founder and US director Claire Wardle explains in our Essential Guide to Understanding Information Disorder, “On social media, the heuristics (the mental shortcuts we use to make sense of the world) are missing. Unlike in a newspaper where you understand what section of the paper you are looking at and see visual cues which show you’re in the opinion section or the cartoon section, this isn’t the case online.”

What to read next: “Credibility and trust of information in online environments: The use of cognitive heuristics” by Miriam J. Metzger and Andrew J. Flanagin, published in Journal of Pragmatics, Volume 59 (B) in 2013.

Cognitive dissonance

Cognitive dissonance is the negative experience that follows an encounter with information that contradicts your beliefs. This can lead people to reject credible information to alleviate the dissonance.

What to read next: “‘Fake News’ in Science Communication: Emotions and Strategies of Coping with Dissonance Online” by Monika Taddicken and Laura Wolff, published in Media and Communication, Volume 8 (1), 206–217 in 2020.

Confirmation bias

Confirmation bias is the tendency to believe information that confirms your existing beliefs, and to reject information that contradicts them. Disinformation actors can exploit this tendency to amplify existing beliefs.

Confirmation bias is just one of a long list of cognitive biases.

What to read next: “Confirmation Bias: A Ubiquitous Phenomenon in Many Guises” by Raymond Nickerson, published in Review of General Psychology, 2(2), 175–220 in 1998.

Motivated reasoning

Motivated reasoning is when people use their reasoning skills to believe what they want to believe, rather than determine the truth. The crucial point here is the idea that people’s rational faculties, rather than lazy or irrational thinking, can cause misinformed belief.

Motivated reasoning is a key point of current debate in misinformation psychology. In a 2019 piece for The New York Times, David Rand and Gordon Pennycook, two cognitive scientists based at MIT and the University of Regina, respectively, argued strongly against it. Their claim is that people simply aren’t being analytical enough when they encounter information. As they put it:

“One group claims that our ability to reason is hijacked by our partisan convictions: that is, we’re prone to rationalization. The other group — to which the two of us belong — claims that the problem is that we often fail to exercise our critical faculties: that is, we’re mentally lazy.”

Rand and Pennycook are continuing to build a strong body of evidence that lazy thinking, not motivated reasoning, is the key factor in our psychological vulnerability to misinformation.

What to read next: “Why do people fall for fake news?” by Gordon Pennycook and David Rand, published in The New York Times in 2019.

Pluralistic ignorance

Pluralistic ignorance is a lack of understanding about what others in society think and believe. It can lead people to incorrectly assume that a political view held by very few people is actually the majority view. This can be made worse by rebuttals of misinformation (e.g., conspiracy theories), as they can make those views seem more popular than they really are.

A variant of this is the false consensus effect: when people overestimate how many other people share their views.

What to read next: “The Loud Fringe: Pluralistic Ignorance and Democracy” by Stephan Lewandowsky, published in Shaping Tomorrow’s World in 2011.

Third-person effect

The third-person effect describes the way people tend to assume misinformation affects other people more than themselves.

Nicoleta Corbu, professor of communications at the National University of Political Studies and Public Administration in Romania, recently found that there is a significant third-person effect in people’s perceived ability to spot misinformation: People rate themselves as better at identifying misinformation than others. This means people can underestimate their own vulnerability and fail to take appropriate action.

What to read next: “Fake News and the Third-Person Effect: They are More Influenced than Me and You” by Oana Ștefanita, Nicoleta Corbu, and Raluca Buturoiu, published in the Journal of Media Research, Volume 11, Issue 3 (32), 5–23 in 2018.

Fluency

Fluency refers to how easily people process information. People are more likely to believe something to be true if they can process it fluently — it feels right, and so seems true.

This is why repetition is so powerful: if you’ve heard it before, you process it more easily, and therefore are more likely to believe it. Repeat it multiple times, and you increase the effect. So even if you’ve heard something as a debunk, the sheer repetition of the original claim can make it more familiar, fluent, and believable.

It also means that easy-to-understand information is more believable, because it’s processed more fluently. As Stephan Lewandowsky and his colleagues explain:

“For example, the same statement is more likely to be judged as true when it is printed in high- rather than low-color contrast … presented in a rhyming rather than non-rhyming form … or delivered in a familiar rather than unfamiliar accent … Moreover, misleading questions are less likely to be recognized as such when printed in an easy-to-read font.”

What to read next: “The Epistemic Status of Processing Fluency as Source for Judgments of Truth” by Rolf Reber and Christian Unkelbach, published in Review of Philosophy and Psychology, Volume 1 (4): 563–581 in 2010.

Bullshit receptivity

Bullshit receptivity is about how receptive you are to information that has little interest in the truth: a meaningless cliché, for example. Bullshit is different from a lie, which intentionally contradicts the truth.

Pennycook and Rand use the concept of bullshit receptivity to examine susceptibility to false news headlines. They found that the more likely we are to accept a pseudo-profound sentence (i.e., bullshit) such as, “Hidden meaning transforms unparalleled abstract beauty,” the more susceptible we are to false news headlines.

This provides evidence for Pennycook and Rand’s broader theory that susceptibility to false news comes from insufficient analytical thinking, rather than motivated reasoning. In other words, we’re too stuck in automatic System 1 thinking, and not enough in analytic System 2 thinking.

What to read next: “Who falls for fake news? The roles of bullshit receptivity, overclaiming, familiarity, and analytic thinking” by Gordon Pennycook and David Rand, published in Journal of Personality in 2019.



2. Recency Bias

Definition

Our brains naturally put more weight on recent experience.

A Trader’s Example

In the market, this cognitive bias can manifest in over-learning from recent losses. We are more affected by recent losses. Thus, in trying to improve our trading results, we avoid trades that remind us of our recent losses.

For instance, you lost money in three recent pullback trades in a healthy trend. Hence, you concluded that pullback trading in a strong trend is a losing strategy. Then, you switched to trading range break-outs.

Due to your recent experience, you overlooked that most of your past profits came from pullback trades. By turning to trading range break-outs, you could be giving up a valuable edge in your trading style.

Lesson & Practice

Don’t draw lessons from recent experience alone. Learn from your trading results over a more extended period.

Reviewing your trades and learning from them is crucial. In practice, however, we should not review them and draw conclusions too frequently. We should make sure that we have a larger sample of trades, collected over a longer period, before arriving at useful conclusions, as the sketch below illustrates.
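To make the sample-size point concrete, here is a minimal sketch in Python. The trade log, the strategy names, and the profit figures (in R multiples) are all invented for illustration; the only point is that judging a strategy by its last few trades can suggest the opposite conclusion from the full record.

```python
# Minimal sketch with hypothetical data: how a recency-focused view of a trade
# log can contradict the conclusion drawn from the full sample.

from collections import defaultdict

# Each entry is (strategy, result in R multiples). Numbers are invented.
trades = [
    ("pullback", +2.0), ("pullback", +1.5), ("breakout", -1.0),
    ("pullback", +2.5), ("breakout", -1.0), ("pullback", +1.8),
    ("breakout", +1.2), ("pullback", -1.0), ("pullback", -1.0),
    ("pullback", -1.0),  # three recent pullback losses in a row
]

def average_result(trade_log):
    """Average result per strategy (in R) for the given trades."""
    totals, counts = defaultdict(float), defaultdict(int)
    for strategy, result in trade_log:
        totals[strategy] += result
        counts[strategy] += 1
    return {s: round(totals[s] / counts[s], 2) for s in totals}

print("Last 4 trades:", average_result(trades[-4:]))
# -> {'breakout': 1.2, 'pullback': -1.0}  (pullback looks like a losing strategy)

print("Full history: ", average_result(trades))
# -> {'pullback': 0.69, 'breakout': -0.27}  (pullback is still the edge)
```

In this hypothetical log, the pullback setup remains the profitable one over the full sample, even though the last few trades say otherwise.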


Watch the video: 2. Confirmation Bias (August 2022).