📘 The cognitive illusions that shape our decision making

How optical illusions help us think about biases, the dangers of confirmation bias, and how Amazon makes decisions

In case you missed it, last week we talked about the differences between humans and machines through the lens of decision making. This week we'll return to some classics!

This week:

  • 🪄 Topic: What optical illusions can tell us about our biases and heuristics

  • 🧠 Bias 6/50: Confirmation Bias

  • 🛠️ Tool: Decision Types

What optical illusions can tell us about our biases and heuristics

In the first week of this series, we talked about blind spot bias, a surprisingly difficult concept to grasp: believing we have any control over the biases and heuristics that influence our judgment does more harm than good.

This is why many argue that 'bias education' is a hollow pursuit: in many cases, awareness of biases can have an adverse effect. If someone believes their education makes them less biased, the blind spot grows even stronger.

This leads us back to one of the Blueprint's principles: We cannot debias individuals, but we can adjust for biases in teams. As Olivier Sibony and Daniel Kahneman argue in their book 'Noise: A Flaw in Human Judgment', there are techniques that help organizations systematically adjust for deviations in human judgment.

"Bias and noise—systematic deviation and random scatter—are different components of error."

Daniel Kahneman & Olivier Sibony, Noise

But, as they go on to argue, the frameworks for judgment we've spent our lives building through experience shape our uniqueness. They're the reason we disagree, and the reason that out of our conviction and dissent comes progress.

In order to arrive at the conclusion that this is a systemic problem, not an individual problem, we have to illustrate how, in this case, seeing is not believing.

Cognitive biases are much like optical illusions

Biases act like optical illusions. Even if we see or understand the illusion, we can't necessarily control it.

When talking to teams about the importance of systematic approaches to combat bias, it's helpful to walk through a few popular illusions and riddles to illustrate how these work.

Here we have a classic visual illusion, Adelson's checker shadow illusion, that shows how our brain manipulates our perception to create depth:

Looking at square A and square B… are they the same color? They look completely different. One looks almost objectively darker than the other.

But when we connect the squares, the illusion becomes clear. The squares are the exact same color.

The illusion is obvious when we put them together, but when we look back at the original image, we still can't see the squares as the same color.

This is an important, often overlooked, reality of biases and heuristics. Much like visual illusions, once we 'see' our 'cognitive illusions', things aren't suddenly clear. Looking back at the original image doesn't automatically enlighten us; the squares still look different.

We remain blind to these illusions.

With the shadow example, we all had the same experience - but our cognitive illusions often vary based on individual perception. They're relative to our experiences, existing beliefs, preferences, etc.

Remember this viral picture of a dress?

People tend to see two very different colors - they describe the dress as either 'gold and white' or 'black and blue'. Don't believe it? Ask your friends!

This went viral years back because people couldn't believe that others saw such drastically different colors. People are typically shocked that others don't see the dress the same way they do. It feels objective.

Years later, a few different papers studied the viral phenomenon of the dress.

The explanation is not about what we see, but what we assume. Each person who looks at this image reasons their way into assumptions about the environment the dress is in. Is it in a well-lit store? Is it in a dark closet?

Once our brain pre-determines where the dress is, it signals a perception of color (either blue/black or white/gold).

"The perceived colors of the dress are due to (implicit) assumptions about the illumination…

Moreover, our results provide some evidence that prior experience with disambiguated images may push observers to interpret the ambiguous original photo in one or the other way. These observations suggest that the perception of the dress colors may be modulated by biasing observers toward one or the other interpretation of the photo."

What's interesting about this particular phenomenon is that it's incredibly difficult for someone to 'unsee' the color they see. So I see blue and black - it's almost impossible for me to see white and gold.

These researchers found that they could influence what people see by priming participants: showing them altered images that clearly pre-determined the color of the dress, before exposing them to the original image, shifted their perception.

This is much like how biases and 'information cascades' work. They are relative to our experience, and unwinding the reasoning that leads to our perceptions is incredibly difficult.

Trying to change how we see the original image (knowing that it's possible to see something completely different) shows how strong this illusory reasoning is - and how difficult it is to change, even when we try.

Bias 6/50: Confirmation Bias

"The lesson the researchers learned from all this, as they wrote in the introduction to When Prophecy Fails: 'A man with a conviction is a hard man to change.'"

Julie Beck, The Atlantic

How to combat confirmation bias

It's convenient to think that smart people with high-level reasoning skills wouldn't fall prey to these effects, but unfortunately, intelligence isn't the cure. Research suggests intelligence does not correlate with resistance to confirmation bias. In fact, some argue it may exaggerate confirmation bias.

As David Robson writes in his book, The Intelligence Trap…

"Intelligent and educated people are less likely to learn from their mistakes, for instance, or take advice from others. And when they do err, they are better able to build elaborate arguments to justify their reasoning, meaning that they become more and more dogmatic in their views. Worse still, they appear to have a bigger 'bias blind spot,' meaning they are less able to recognize the holes in their logic."

Below are a few tools and frameworks that can help combat the effects of confirmation bias.

Question and criticize our own beliefs over others'

We've all had the thought "How does that person even think that?!". It's easy for us to criticize others' thinking - especially in hindsight.

We can build a muscle for questioning ourselves more than others. Daniel Kahneman recommends actively seeking 'surprises' - reframing information that disproves our beliefs as information that updates them.

"You are more likely to learn something by finding surprises in your own behavior than by hearing surprising facts about people in general."

Daniel Kahneman

Foster dissent and build an environment with diverse thinking, backgrounds, and experiences

It's easy to throw rocks at someone else's glass echo chamber, but we are all wired to do the exact same thing.

We favor opinions that agree with us and give more merit to people we like. It's human - we seek approval and acceptance over almost everything else.

Increasing disagreeableness doesn't mean we need to build a negative culture of fighting and friction, but complacency is dangerous. Teams need a healthy level of disagreeableness.

Treat a teamā€™s information intake like a healthy diet

Receiving information and feedback that supports our existing beliefs is like eating junk food. It feels good to be told we're right. Our brain will naturally indulge in the information that supports our existing beliefs.

Welcome counterarguments and actively include people who have different opinions. It doesn't come naturally. Red teaming is an interesting way to do this systematically.

Track, revisit, and adapt our beliefs

In environments of extreme uncertainty, our decisions depend on our ability to shape our beliefs. The faster the cadence at which we revisit and adapt our beliefs, the better we are at updating our decision making model.

We might feel like this happens naturally, but it doesn't. If we aren't intentionally revisiting and adapting our beliefs, we end up on our heels, with an outdated, inaccurate model driving our decisions.
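
As a concrete illustration, here is a minimal sketch in Python (the Belief structure and its field names are hypothetical, not from this newsletter) of what scheduling that cadence might look like: each belief carries a confidence level and an explicit next-review date, so revisiting is a commitment rather than an accident.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Belief:
    statement: str                 # the belief, stated plainly
    confidence: float              # 0.0-1.0: how strongly we currently hold it
    next_review: date              # when we commit to revisiting it
    evidence_log: list = field(default_factory=list)

    def update(self, new_evidence: str, new_confidence: float, cadence_days: int = 30) -> None:
        """Record new evidence, adjust confidence, and schedule the next review."""
        self.evidence_log.append(new_evidence)
        self.confidence = new_confidence
        self.next_review = date.today() + timedelta(days=cadence_days)

# Example usage: a belief we commit to revisiting monthly
pricing = Belief(
    statement="Customers churn mainly over price",
    confidence=0.7,
    next_review=date.today() + timedelta(days=30),
)
pricing.update("Exit interviews cite onboarding, not price", new_confidence=0.4)
```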

Tool: Decision Types

This method may be very familiar, but it's worth covering.

Made popular by Jeff Bezos in Amazon's 2015 letter to shareholders, this technique involves identifying two types of decisions: Type 1 and Type 2.

Type 1: Also known as one-way door decisions, these are irreversible and have long-term consequences. These are the decisions that set the direction and strategy of the company, such as opening a new fulfillment center or acquiring another company. These decisions are made slowly and deliberately, with a great deal of consideration and analysis.

Type 2: Also known as two-way door decisions, these are reversible and have short-term consequences. These are the day-to-day operational decisions, such as deciding on a new product feature or how to handle a customer service issue. These decisions are made quickly, with less analysis and more experimentation.

Type 1 decisions should be made slowly and deliberately, while Type 2 decisions should be made quickly and with a bias towards action. This system allows for a balance of cautious long-term planning and flexible short-term execution.

This decision making system is useful because it helps teams avoid getting bogged down in endless analysis and indecision.
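
As a rough sketch of how a team might encode the rule of thumb (the names and recommendation wording below are illustrative, not from the letter), the heart of the framework is a single branch: classify the decision first, then pick the process.

```python
from enum import Enum

class DecisionType(Enum):
    ONE_WAY_DOOR = 1   # Type 1: irreversible, long-term consequences
    TWO_WAY_DOOR = 2   # Type 2: reversible, short-term consequences

def recommended_process(decision_type: DecisionType) -> str:
    """Map a decision type to the pace and rigor it deserves."""
    if decision_type is DecisionType.ONE_WAY_DOOR:
        return "Decide slowly: deep analysis, wide consultation, deliberate sign-off"
    return "Decide quickly: bias toward action, experiment, reverse if wrong"

print(recommended_process(DecisionType.TWO_WAY_DOOR))
```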

Additional dimensions

We can build on these decision types to bring more nuance to the 'speed and urgency' model to understand when to address a decision and how long that decision should take.

Use these dimensions with decision types to build your own models for decision triage, prioritization, and delegation.

Consequence of decisions

One dimension decision types don't cover is the consequence of a decision. A one-way door decision that is relatively inconsequential would be handled differently than the same decision type with a higher consequence.

As Shane Parrish at Farnam Street highlights, the definition of 'consequential' and 'inconsequential' is relative to you, your team, or your business. Something that might be inconsequential to you may be consequential to others.

For that reason, be explicit when defining what these terms mean and propose the action that needs to be taken when decisions fall in their respective quadrants.
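
One way to make this explicit is a small triage table. In this sketch (the quadrant labels and proposed actions are illustrative assumptions, not a standard), reversibility is crossed with consequence and each quadrant gets a pre-agreed action:

```python
# Hypothetical triage table: (reversible, consequential) -> proposed action.
# Each team should define its own labels and actions explicitly.
TRIAGE = {
    (True,  False): "Delegate: decide fast at the lowest sensible level",
    (True,  True):  "Decide fast, but monitor closely and be ready to reverse",
    (False, False): "Decide deliberately, without over-investing in analysis",
    (False, True):  "Slow down: full Type 1 treatment with senior sign-off",
}

def triage(reversible: bool, consequential: bool) -> str:
    """Look up the agreed action for a decision's quadrant."""
    return TRIAGE[(reversible, consequential)]

print(triage(reversible=False, consequential=True))
```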

Cost of reversibility

The decision types themselves are binary - either a decision is reversible (two-way door) or it isn't (one-way door). The model fails to incorporate the dimension of cost. Between two reversible decisions, there are often varying levels of cost to reverse them. One decision may just cost lost time or opportunity, while another may require costly work to unwind back to the starting point.

Adding the dimension of cost may be represented as a general label (e.g. high, medium, low options), a relative scoring method (e.g. 10 for high cost, 1 for low cost), or as true monetary cost.
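
All three representations can feed the same comparison if they're normalized onto one scale. Here is a minimal sketch (the function names and the 1-10 scale are assumptions for illustration):

```python
# Three interchangeable ways to express the cost of reversing a decision,
# normalized to a shared 1-10 scale. The constants here are illustrative.
LABEL_SCORES = {"low": 1.0, "medium": 5.0, "high": 10.0}

def cost_from_label(label: str) -> float:
    """General label: high / medium / low."""
    return LABEL_SCORES[label]

def cost_from_relative_score(score: float) -> float:
    """Relative scoring method: already on the 1-10 scale."""
    return score

def cost_from_money(dollars: float, reference_budget: float) -> float:
    """True monetary cost, scaled against a reference budget."""
    return max(1.0, min(10.0, 10.0 * dollars / reference_budget))

print(cost_from_money(25_000, reference_budget=100_000))  # -> 2.5
```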

For one-way door decisions, this is much trickier. Based on their definition, we may assume that they are truly irreversible and therefore there's no need to define the cost of reversibility, but Annie Duke suggests the method of 'decision stacking'.

Decision stacking is the method of breaking down a potentially complex one-way door decision into smaller, less costly decisions. This exercise may surface reversible decisions, but at the very least, provides checkpoints or tripwires to revisit a one-way door decision before it becomes more costly.
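
A sketch of what decision stacking could look like in the same style (the steps and tripwires below are invented for illustration): a big one-way door decision becomes an ordered list of smaller steps, each with a tripwire that forces a revisit before the next, costlier step.

```python
from dataclasses import dataclass

@dataclass
class Step:
    description: str
    reversible: bool
    tripwire: str   # the condition that forces a pause-and-revisit

# Hypothetical stacking of an acquisition into smaller, checkpointed decisions
acquisition = [
    Step("Run a joint pilot project", reversible=True,
         tripwire="Pilot misses the agreed success metrics"),
    Step("Sign a limited partnership", reversible=True,
         tripwire="Integration costs exceed the estimate by 25%"),
    Step("Acquire the company", reversible=False,
         tripwire="Any earlier tripwire fired and was not resolved"),
]

for step in acquisition:
    print(f"{step.description} (revisit if: {step.tripwire})")
```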

Delegation, escalation, automation

When categorizing decision types, thereā€™s an opportunity to continuously build a strategy for future decisions. With each new decision, the team can calibrate when decisions should be delegated or escalated and identify repeated decisions that can be automated.

For decisions that are delegated: Could we have delegated this decision sooner? Is there a systematic way to determine who (or what team) this type of decision delegates to?

For decisions that are escalated: Could we have escalated this decision sooner? Which decisions should particular individuals make on their own vs decisions that are escalated?

For repeating decisions: Could this decision be automated? What information would we need to automate this decision? How much would it cost to automate this decision?

Continuously building this rubric calibrates the triage, so the team focuses its energy only on decisions that are important and relevant in the long term - and avoids the tendency to create unnecessary processes.
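
As a closing sketch (the record fields and threshold are assumptions, not a prescribed format), a team could log each decision with how it was routed, so the answers to the questions above accumulate into a reusable rubric:

```python
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    summary: str
    routed_to: str             # "delegated", "escalated", or "automated"
    could_route_sooner: bool   # hindsight check from the questions above
    repeat_count: int = 1      # how often this decision recurs

def automation_candidates(log: list, threshold: int = 3) -> list:
    """Surface repeated decisions worth evaluating for automation."""
    return [r for r in log if r.repeat_count >= threshold]

log = [
    DecisionRecord("Approve refunds under $50", "delegated", True, repeat_count=40),
    DecisionRecord("Choose a new data center region", "escalated", False),
]
print([r.summary for r in automation_candidates(log)])
```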

"As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention. We'll have to figure out how to fight that tendency."

Jeff Bezos, 2015 Letter to Shareholders
