📘 What makes a 'good' decision, good?

Exploring what research tells us a 'good' decision is and how we can focus on what matters

In case you missed it, last week we talked about decision velocity and the impact of groupthink on teams.

Today we're covering:

Upskilling team decision making: Quality

What makes a good decision, good?

Decision quality is measured by its process, not its outcome. Measuring decisions by observing the outcome is subject to outcome bias, or the resulting fallacy. Even if the best decision was made given the information available at the time, luck is a meaningful variable.

“Outcomes don’t tell us what’s our fault and what isn’t, what we should take credit for and what we shouldn’t. Unlike in chess, we can’t simply work backward from the quality of the outcome to determine the quality of our beliefs or decisions. This makes learning from outcomes a pretty haphazard process.” ― Annie Duke

So if we say decision quality can only be observed from its process, what exactly are we looking for? The answer seems to be matching process rigor (which comes at a cost - people and time) to risk (how consequential the decision is).

Of course, the last thing we'd want is relatively inconsequential decisions going through an overly-thorough decision making process - so there's a bit of an art to deploying the right tactics for the right decisions. This is something we'll explore in the Blueprint.

Is a decision 'good' just because it's 'data-driven'?

There's plenty of chatter about data-driven decisions (for good reason!), but what does it mean when we say process is an indicator of a quality decision? Are there other factors besides information and analysis?

McKinsey ran a study on 1,048 business decisions made over five years, including investments in new products, M&A decisions, and large capital expenditures. They were trying to answer this question of what the meaningful factors were in a quality decision.

Specifically, what factors indicate high quality decision making?

For each decision, they asked questions that surfaced techniques used for data and analysis as well as process and dialog - which factors were better indicators?

"process mattered more than analysis—by a factor of six (Exhibit 2). This finding does not mean that analysis is unimportant, as a closer look at the data reveals: almost no decisions in our sample made through a very strong process were backed by very poor analysis. Why? Because one of the things an unbiased decision-making process will do is ferret out poor analysis. The reverse is not true; superb analysis is useless unless the decision process gives it a fair hearing."

In this context, 'dialog and process' means deploying techniques to curb the impact of cognitive biases and of what Daniel Kahneman and Olivier Sibony call 'Noise': unwanted variability in judgment.

How can we systematically improve decision quality?

This question is a core tenet of the Blueprint. Some examples of questions that tease out these techniques (whether used implicitly or explicitly) are:

  • Were dissenting opinions and alternative options entertained?

  • Were there principles and criteria that drove the evaluation of those options?

  • Was the information supporting the decision effectively interrogated (avoiding the issues of base rate neglect and the feature-positive effect)?

  • Was there counterfactual thinking that explored the probabilities of outcomes?

  • Were 'tripwires' or kill-criteria defined in case new information changes how we might think about this decision?

  • Were steps taken to reduce the impact of groupthink and other biases that impact collaboration?

Many individuals and teams implement these tactics implicitly. Some people or organizational cultures are better at skillfully disagreeing or playing devil's advocate. But this is the exception, not the rule.

But there's value in making these processes explicit. Much like a scientific paper breaks down the methods of the experiment, the decision process can show and justify the rigor. I believe the technical term for this is 'CYA' 👀

It insulates from the worst scenario for outcome bias - when a good decision results in a bad outcome.

Decision making processes in companies, teams, and individuals already exist - they're just implicit. Making these processes explicit opens them up to learning and iterating.

“A wise leader, therefore, does not see herself as someone who simply makes sound decisions; because she realizes she can never, on her own, be an optimal decision maker, she views herself as a decision architect in charge of designing her organization’s decision-making processes.” ― Olivier Sibony

Bias 3/50: Bikeshedding

Again, we don't want inconsequential decisions going through an overwhelming, oversubscribed process that bogs down teams, but individuals and teams already have trouble sorting between things that should take their time and focus, and things that shouldn't.

Bikeshedding is our tendency to address menial, simple problems over complex ones — regardless of their urgency or importance.

This tendency is also known as Parkinson’s law of triviality and has a tremendous impact on how we spend time discussing problems as well as prioritizing decisions.

The concept of bikeshedding was first presented with the example of a committee tasked with making budget decisions around a nuclear reactor, a bike shed, and an annual coffee budget. Of course, nuclear reactors and the foundational concepts involved in even understanding how they work are quite complicated, so the committee spends little time discussing the power plant and moves on to the bike shed and the coffee budget.

They spend the majority of the time arguing about the details of these two, relatively menial topics.

“The time spent on any item of the agenda will be in inverse proportion to the sum [of money] involved.” — C. Northcote Parkinson

For anyone who has worked on large, complex projects, this probably resonates. We tend to spend an incredible amount of time and energy on trivial details. We do this because we gravitate towards clarity and understanding over uncertainty.

How does this impact decision making?

Bikeshedding impacts how we prioritize and collaborate on decisions. We naturally avoid ambiguity and complexity regardless of how urgent or important a decision or problem is.

Like in the nuclear reactor example, we spend a disproportionate amount of time on things that objectively don’t matter. This is exacerbated by other human tendencies that derail collaboration away from the most impactful, and often uncomfortable, problems.

This kind of ambiguity aversion is best described by the Ellsberg Paradox, which states…

“A decision-maker will overwhelmingly favor a choice with a transparent likelihood of risk, even in instances where the unknown alternative will likely produce greater utility. When offered choices with varying risk, people prefer choices with calculable risk, even when they have less utility.”

In short, research suggests we need to actively attack the highest level of ambiguity and uncertainty first — even though we’re wired to avoid it. Often the more complex problems or lowest-confidence assumptions would render the majority of the solution useless if not resolved.

Astro Teller, who leads Google’s X labs, uses the idiom “Tackle the monkey first”. If you’re trying to teach a monkey to recite Shakespeare from a pedestal, it’s clear what the more challenging problem is. We should figure out how to train the monkey, even though bikeshedding would have us create the pedestal.

The 'Monkeys and Pedestals' framework helps mitigate our natural tendency to build the pedestal first, even when we know it's rendered useless without the Shakespeare-reciting monkey.

The Eisenhower Matrix

The Eisenhower Matrix is a general prioritization tool, so you may have already used this method for prioritizing problems or tasks.

For decision making, it helps us focus on the most important decisions, delegate relatively trivial decisions (to combat bikeshedding), and set schedules for the important decisions that often fall through the cracks.

Why do we need to prioritize decisions?

The speed at which we make decisions is heavily impacted by how we prioritize them. Decision velocity tends to suffer due to various biases that impact how we prioritize:

  • Our pursuit of perfect information: This is the analysis paralysis trap where we tend to gravitate towards certainty. Due to zero-risk bias, we tend to delay decisions or even choose worse options in an effort to reduce risk - even if the expected value is higher for riskier decisions.

  • We don’t recognize non-decisions: We err towards doing what we’ve always done and we’re typically blind to this tendency. Every moment we continue to invest in one thing over something else is an implicit decision, but we tend not to see it that way. Because sticking with the status quo takes less energy, status quo bias makes it difficult to challenge existing decisions.

  • And of course, bikeshedding: We’re allergic to complexity and tend to focus on menial topics over hard, ambiguous ones.
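To make the zero-risk bias point concrete, here's a toy expected-value comparison. The payoffs and probabilities are made up purely for illustration; nothing here comes from the research cited above.

```python
# Toy illustration of zero-risk bias (all numbers are hypothetical).
# We often pick the certain option even when a riskier one has a
# higher expected value.

def expected_value(outcomes):
    """Sum of payoff * probability over an option's possible outcomes."""
    return sum(payoff * prob for payoff, prob in outcomes)

safe = [(100, 1.0)]             # a guaranteed $100
risky = [(300, 0.5), (0, 0.5)]  # 50% chance of $300, 50% chance of nothing

print(expected_value(safe))   # 100.0
print(expected_value(risky))  # 150.0 - higher EV, yet 'safe' feels better
```

The risky option is worth $50 more in expectation, but zero-risk bias pulls us toward the guaranteed payoff anyway.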

The Eisenhower Matrix for decision making

“It is often said that a wrong decision taken at the right time is better than a right decision taken at the wrong time.” ― Pearl Zhu

It's important to call out that there are slight variations to the classic Eisenhower Matrix when used outside of task prioritization. A classic Eisenhower Matrix typically labels the ‘neither urgent nor important’ quadrant ‘delete’. You’ll notice that for decision making, we’ve labeled this quadrant ‘communicate’.

This Eisenhower Matrix still uses the four quadrants, with urgency on the x-axis and importance on the y-axis. What you might notice when using this method for decision making is that unlike tasks, decisions that aren't important or urgent enough to start a dialog around still typically take the form of questions that need answering.

For that reason, we don’t believe you really end up with decisions that get ‘deleted’ like tasks you wouldn’t do. Identifying, documenting, and communicating these open questions in a central decision log ensures that the answer is automated in the future.

The quadrants

With that clarified, here is the modified Eisenhower Matrix for decision making with the four quadrants:

[Image: The Eisenhower Matrix for decision making]

Urgent and Important | Decide: This should immediately enter the decision making process. Though the speed at which we make the decision will vary with the level of complexity, it should be visible and top of mind. To keep a manageable level of focus, it’s best to think of these as truly the critical decisions that need to be made ‘this week’ or ‘this month’. If it’s beyond that horizon, the decision is likely Important, but not urgent.

Important, Not Urgent | Schedule: If the decision does not need to be in active dialog on a very near horizon, then it belongs in this quadrant. These decisions should be quickly triaged to determine a date when they will move to the ‘urgent and important’ quadrant. A decision can always be rescheduled, but it’s important that it resurfaces for review.

Urgent, Not Important | Delegate: This quadrant is typically relative to the decision maker (or group). Decisions usually land here because someone else has the authority to make the decision (e.g. a manager pushing a decision to an individual contributor) or it is relevant to another group (e.g. we need to decide whether or not we will allow image uploads, but a different team needs to decide if that breaks our security policy). Notice this starts to uncover dependencies that often fall through the cracks.

Neither Urgent, nor Important | Communicate: Again, this quadrant is modified from the original matrix, where you would typically delete irrelevant tasks. In this case, you will often see decisions end up here because they’ve already been decided and haven’t been made explicit. For that reason, these decisions should be quickly documented and made visible in a decision log. Much like design teams create a design system to answer questions that often repeat (e.g. what font should we use? Do we use round buttons?), this becomes a similar repository to automate answers to decisions that are queried often.
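The four quadrants above can be sketched as a tiny triage function. This is only an illustrative sketch of the modified matrix as described here; the function name and labels are my own, not from the Blueprint.

```python
# A minimal sketch of the modified Eisenhower Matrix for decisions.
# The quadrant actions follow the four descriptions above; the
# function name itself is hypothetical.

def triage_decision(urgent: bool, important: bool) -> str:
    """Map a decision's urgency and importance to a quadrant action."""
    if urgent and important:
        return "decide"       # enter the decision making process now
    if important:
        return "schedule"     # pick a date for it to resurface as urgent
    if urgent:
        return "delegate"     # route to whoever has the authority/context
    return "communicate"      # document the answer in the decision log

print(triage_decision(urgent=True, important=False))  # delegate
```

Note that the fall-through case returns 'communicate' rather than deleting anything, matching the modification to the classic matrix.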

“I do believe that an improved understanding of the multiple irrational forces that influence us could be a useful first step toward making better decisions.” ― Dan Ariely, Professor of psychology and behavioral economics at Duke University

I hope this post was helpful and interesting! Have feedback? Just reply to this email - it would be great to get in touch!
