🔮 Applying Decision Hygiene to yield better judgment

Noise-cancelling headphones for the organization

Good morning!

At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 2,000+ leaders like you who read this newsletter!

In case you missed it, last week we talked about patterns of Decision Architecture.

Upcoming talk:

Next week, Chris Butler will be talking about 360 Strategies: integrating inclusive, exclusive, proven, and aspirational strategies.

Applying Decision Hygiene to yield better judgment

Organizations make decisions within their unique decision architecture. The decision architecture moves out of the shadows when the organization explicitly “decides how to decide.”

In any setting, a decision can be broken down into two parts: a prediction, followed by a judgment.

When we make judgments, as human beings, in groups or organizations, we can make errors: errors in prediction, and/or errors in judgment.

In some contexts, these errors are easy to see: the prediction is immediately exposed as wrong, or the judgment triggers actions that don’t produce desired results. But in other contexts, it can be devilishly difficult to even know whether the decision was a good one.

Sometimes we fall for “resulting”, where we (unfairly and incorrectly) associate the quality of the decision (i.e. the prediction and the judgment) with the (positive or negative) outcome. But there are other times when there is a delay before any visible results, or when the chain of causality is so muddled that we can’t get decent hindsight or feedback on the choices we’ve made. These are the “low-validity” environments where intuition is less valuable.

So where can we start, in a study of decision quality, to better understand the possible errors that can result?

We can start here: Decisions rely on good judgment. 

In the landmark book, “Noise: A Flaw in Human Judgment” by Daniel Kahneman, Olivier Sibony, and Cass Sunstein, the authors establish that good judgment can be hampered by two kinds of error: bias and noise.

They use an archery analogy to illustrate the difference:

From “Noise: A Flaw in Human Judgment”

Four teams shoot arrows at a target. These are their results. Note the consistent difference of the “biased” team and the inconsistent differences of the “noisy” team. In our organizations today, we resemble the “biased and noisy” team.
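The equation behind this picture is that overall error (mean squared error) decomposes exactly into squared bias plus noise, where noise is the variance of the judgments. A minimal Python sketch of that decomposition, with illustrative numbers of our own (not from the book):

```python
import statistics

def error_decomposition(judgments, true_value):
    """Split mean squared error into bias^2 + noise^2 for one set of judgments."""
    bias = statistics.fmean(judgments) - true_value
    noise_sq = statistics.pvariance(judgments)  # variance of the judgments
    mse = statistics.fmean((j - true_value) ** 2 for j in judgments)
    return bias, noise_sq, mse

# Illustrative judgments of a quantity whose "true" value is 100:
biased_team = [112, 110, 113, 111]  # consistent overshoot, tight cluster
noisy_team = [85, 118, 96, 104]     # scattered around the target

for name, team in [("biased", biased_team), ("noisy", noisy_team)]:
    bias, noise_sq, mse = error_decomposition(team, 100)
    print(f"{name}: bias={bias:.2f}, noise^2={noise_sq:.2f}, MSE={mse:.2f}")
```

Run it and you can verify that MSE equals bias squared plus noise squared for both teams; the biased team’s error is almost all bias, while the noisy team’s error is almost all noise.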

How can an organization improve the judgments its professionals make? By seeking ways to minimize the errors that result from bias and noise.

But first: What do we mean by a “professional judgment”?

I like to think in terms of two classes of decisions or judgments: 

  1. Recurring judgments made in high-validity environments that allow for rapid feedback and learning. These tend to be found in our day-to-day efforts, when we are “running the business”.

  2. One-time judgments that struggle to be evaluated with effective hindsight. These tend to be found in our strategic efforts, when we are “changing the business”.

Judgment, with minimal error, is important for both classes. But the decision architectures can, and should, differ across the two. Much of the discussion of noise is centered around the first class, with examples focusing on domains and tasks like: 

  • Courtroom judges, setting sentences 

  • Insurance adjusters, valuing claims

  • Radiologists, making diagnoses

Strategic decision making falls in the second class. For these kinds of judgments, errors will occur, but while bias is something you can “see” and explain with a name, noise is an unpredictable error that is much harder to detect and explain.

For example, think about a typical ideation or intake process for projects, in an organization. The portfolio leaders will make assessments of the project ideas, and usually offer a “go/no-go” decision. But across the company, there will be different leaders, running different portfolios, with different decision architectures.

How scattered will the arrows be? Likely, quite a bit. Think of the variety here.

Do their judgments rely on some kind of business case? Some treatment of expected value and expected cost? Some concept of payback period or cost of delay? Some risk assessment? Comparison to some threshold of ROI or payback period? Are there shared norms for what constitutes a “good” project, across the company? Is that even important?

There’s probably some noise in there. Like, it’s already loud.

So, how can an organization reduce judgment noise? 

It starts with an attempt to see it. Conduct a Noise Audit, when the decision architecture supports repeatable assessments (i.e. it’s easier for the first class of judgments above). If the audit exposes some noise, as a source of error in decision making, then we start to look for ways to reduce it.
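At its core, a noise audit is a small computation: give several judges the same cases, and measure how much their assessments of each case disagree. A minimal sketch with hypothetical numbers (the structure is the point, not the data):

```python
import statistics

# Hypothetical audit: five assessors value the same three claims (in $k).
audit = {
    "claim A": [102, 95, 130, 88, 110],
    "claim B": [60, 62, 58, 61, 59],
    "claim C": [200, 150, 240, 175, 210],
}

for case, assessments in audit.items():
    mean = statistics.fmean(assessments)
    spread = statistics.pstdev(assessments)
    # Relative spread across judges for the same case: one simple noise signal.
    print(f"{case}: mean={mean:.0f}, judge-to-judge spread={spread / mean:.0%}")
```

In this made-up audit, claim B would come out quiet while claims A and C would come out loud, which tells you where to aim the hygiene measures that follow.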

Replacing human judgment with rules and algorithms would eliminate noise altogether, but rules and algorithms have their own problems and are rarely a universal substitute for judgment.

So instead, the authors introduce the idea of “decision hygiene”, as an effective way to combat noise. The concept acknowledges how hard it is to “see” noise. While we might seek to “treat” a known bias risk, we can only “sanitize” our way past the noise:

“Strategies for noise reduction are to de-biasing what preventive hygiene measures are to medical treatment: the goal is to prevent an unspecified range of potential errors before they occur.”

Daniel Kahneman, Olivier Sibony, Cass Sunstein: “Noise”

And they warn us, it won’t feel as good as nudging our way around biases:

“Just like handwashing and other forms of prevention, decision hygiene is invaluable, but thankless. Correcting a well-identified bias may at least give you a tangible sense of achieving something. But the procedures that reduce noise will not. They will, statistically, prevent many errors. Yet you will never know which errors. Noise is an invisible enemy, and preventing the assault of an invisible enemy can yield only an invisible victory.”

So, how can an organization get out the soap and start “washing its hands” more thoroughly, to improve the judgments its professionals make? In “Noise”, the authors offer these ideas:

Find (or develop) better judges

Professionals who are skilled in judgment are experienced, smart, and open-minded, that is, they are willing to learn from new information. This suggests that an organization can seek to build a workforce that is well trained, has more intelligence than the competition, and brings the right cognitive style to the challenge of judgment.

Building this capability is easier when the needed expertise is something that can be verified; that is, good judgments are easy to validate. In these contexts, we can look at an individual’s judgments from the past, and determine how “right” they have been.

In contexts where this is not possible, like complex domains where causality is tricky or impossible to nail down, a different kind of expert emerges. In “Noise”, the authors call this a “respect expert”, someone in whom confidence is given “entirely based on the respect they enjoy from their peers.”

This depends on having a peer group with shared norms, or some kind of professional doctrine that gives them “a sense of which inputs should be taken into account, and how to make and justify their final judgments”. Much of the dialog and debate we see in newsletters like this one and on LinkedIn is about setting these kinds of professional norms, and establishing this kind of respect.

Note too that in the absence of these shared norms, a complex domain can easily fall back into the trap of “resulting”, where decision quality is (mistakenly) associated with positive outcomes.

Use a “decision observer” to spot bias

Bias is relatively easy to see, retroactively. But efforts to de-bias decision making practices have been disappointing. It’s one thing to discuss the different kinds of cognitive biases, and a whole other thing to build methods that prevent them from happening.

In “Noise”, the authors recommend creating a distinct role in the decision architecture, called a “decision observer”, whose job is to search for biases as the decision making is happening. People in this role would be “trained to spot, in real-time, the diagnostic signs that one or several familiar biases are affecting someone else’s decisions or recommendations”.

The key here is that they act as an outside observer, with no skin in the game. They are appointed (not self-selected), and use a standard checklist, customized for the context, to diagnose the presence of known biases. This helps get past the “bias blind spot”, where people in the middle of decision making are unable to see the influence of bias, despite having good knowledge of the bias and its effects.

Sequence information to avoid early judgment

When forming a judgment, our brains want to find coherence in the information, and finish the job early. Our brains are lazy that way. So we latch onto some ideas sparked by the first bits of information we sift through, and the confirmation bias bends our interpretation of the rest of the information to support the initial idea.

This is the risk of “early judgment” that decision makers face. It gets worse in meetings, where information cascades can supply that initial bit to the whole group, and everyone locks in on the same idea together (i.e. groupthink).

To combat this tendency, we can be thoughtful about how we explore the evidence, or data. Maintaining independence of thought is crucial here as well; if others’ opinions are presented as additional evidence, they will sway our judgment. We can also hold off, or delay, our “final” judgment until after we have expressed opinions on the significance of all the pieces of evidence or data.

Aggregate multiple, independent judgments 

An extension of that idea of sequencing information is to keep others’ judgments or assessments from influencing your own. When an organization seeks multiple assessments or judgments, it can leverage the “wisdom of crowds”. When it builds a decision architecture that keeps those multiple people independent of one another, the wisdom is more varied, and stronger as a whole.
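The statistical payoff of independence can be sketched directly: if each judge’s error is independent, averaging n judgments shrinks the noise roughly by a factor of the square root of n. A small simulation under that assumption (the true value and noise level are illustrative):

```python
import random
import statistics

random.seed(7)
TRUE_VALUE = 100.0   # illustrative "correct" judgment
NOISE_SD = 20.0      # each judge's independent error (illustrative)

def one_judgment():
    """A single judge: unbiased here, but noisy."""
    return random.gauss(TRUE_VALUE, NOISE_SD)

def averaged_judgment(n):
    """Aggregate n independent judgments by simple averaging."""
    return statistics.fmean(one_judgment() for _ in range(n))

# The spread of the aggregate shrinks roughly as 1/sqrt(n):
spreads = {}
for n in [1, 4, 16]:
    estimates = [averaged_judgment(n) for _ in range(2000)]
    spreads[n] = statistics.pstdev(estimates)
    print(f"n={n:2d} judges -> spread of the averaged judgment: {spreads[n]:.1f}")
```

Note the assumption doing the work: the judges must actually be independent. If everyone hears the first opinion before forming their own, the errors correlate and the averaging buys you much less.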

Establish guidelines for judgment 

Noise is the variability in judgments made by different individuals, for the same case. So one approach to reduce the “between-judge” variability is to create guidelines that offer boundary conditions, guardrails, or heuristics to decision makers. A simple checklist can serve as an effective aid that steers judgment without mandating a choice. This is the challenge for decision architects: Finding some influence over decision makers, to reduce noise, without compromising their decision authority.

Use a shared scale

Many decision architectures use gradations or scales to help communicate opinions across options (like a 1-10 scale to convey differences in the range from “poor” to “excellent”). When a judge or decision maker uses a scale like this, they have to translate their general impression of the option into the units of the scale. When different judges use different scales, or even interpret the scale differently, there will be noise.

Scales that ask for judgment of options relative to each other, instead of against absolute criteria, tend to be less noisy. This makes stack-rankings a better approach than strict ratings against fixed criteria.

Even better: find a way to create a scale that asks for relative judgment with an “outside view”, that is, comparisons that use base rates of similar options across wider populations (i.e. compare against a wider range of options, either historical or industry-wide). Also recognize that different judges will interpret scale levels like “good” or “great” differently, so to reduce noise, consider providing concrete examples of “good” and “great” to anchor the relative evaluation.
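One way to see why relative judgments are less noisy: converting raw impressions into ranks strips out each judge’s personal use of the scale, so a lenient judge and a harsh judge who agree on the ordering produce identical output. A minimal sketch with hypothetical proposals:

```python
# Hypothetical raw impressions from one judge (units don't matter;
# only the ordering is used).
raw_impressions = {"proposal A": 6.5, "proposal B": 8.1, "proposal C": 5.9}

# Rank the options against each other instead of reporting absolute scores.
ranked = sorted(raw_impressions, key=raw_impressions.get, reverse=True)
for position, name in enumerate(ranked, start=1):
    print(f"{position}. {name}")
```

A judge who scored the same three proposals 3.0, 4.5, and 2.0 would emit exactly the same ranking, even though their absolute ratings disagree with this judge’s on every option.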

Structure complex judgments 

When a decision or judgment requires thoughtful consideration from multiple perspectives, vantage points, or lenses, we can get lost in the variety of assessments. Which angle should steer the final judgment?

Earlier, we talked about how we should try to delay that final judgment until we’ve had a chance to spin through all assessments. One way to help with this is to break things down, so that one decision can be considered as a set of component assessments or judgments. This decomposition allows us to evaluate, without jumping to final judgment prematurely (which our brains really, really want to do…).

The trick here is to find orthogonal criteria, or factors, that support those varied perspectives, across one or more contexts, such that each criterion or factor has enough depth to warrant an independent assessment.

It’s like that classic parable of the blind men and the elephant; we choose to make ourselves blind to the other assessments, in order to form a partial opinion. Hopefully, we can reconcile our own distinct assessments better than the blind men in the story, though.
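The decomposition idea can be sketched as data: score each criterion independently first, and only then compute an overall judgment. A toy illustration (the criteria and scores are hypothetical, and a plain average stands in for whatever aggregation rule a team actually adopts):

```python
import statistics

# Hypothetical component assessments for a project decision: each criterion
# is scored independently (say, 1-10) before any overall view is formed.
assessments = {
    "market fit": 7,
    "technical feasibility": 4,
    "team capacity": 6,
    "strategic alignment": 8,
}

# Only after every component is scored do we form the final judgment.
overall = statistics.fmean(assessments.values())
print(f"component scores: {assessments}")
print(f"deferred overall judgment: {overall:.2f} / 10")
```

The discipline is in the sequencing, not the arithmetic: the aggregate is trivial to compute, but it is only meaningful if each component was assessed before anyone committed to a final answer.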

In “Noise”, the authors provide a method for decision making that combines these ideas into a single approach, called the Mediating Assessments Protocol. They introduce the method with an example where the leadership team must decide whether to pursue an acquisition of another company, then present it to their board. 

The story makes a compelling argument for how these ideas can support strategic decision making, in the presence of uncertainty. As a whole, “Noise” makes for a challenging (cognitively heavy) read, but Chapter 25, which introduces this method with the illustrative story, can be read on its own. I recommend finding a copy, and checking it out.

These gurus of decision making (Kahneman, Sibony, and Sunstein) also know what you are thinking at this point (lol):

“No doubt this emphasis on process, as opposed to the content of decisions, may raise some eyebrows. … Content is specific; process is generic. Using intuition and judgment is fun; following process is not. Conventional wisdom holds that good decisions - especially the very best ones - emerge from the insight and creativity of great leaders. (We especially like to believe this when we are the leader in question.) And to many the word process evokes bureaucracy, red tape, and delays.”

Daniel Kahneman, Olivier Sibony, Cass Sunstein: “Noise”

But they add, “Decision hygiene need not be slow and certainly doesn’t need to be bureaucratic. On the contrary, it promotes challenge and debate, not the stifling consensus that characterizes bureaucracies.” 

So… roll up your sleeves, and use these hygiene tips to seek new ways to make your decision architecture a little more resilient in the face of noise.
