Applying Decision Hygiene to yield better judgment
Noise cancellation headphones for the organization
Good morning!
At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 2,000+ leaders like you who read this newsletter!
In case you missed it, last week we talked about patterns of Decision Architecture.
Upcoming talk:
Next week, Chris Butler will be talking about 360 Strategies: integrating inclusive, exclusive, proven, and aspirational strategies.
You can sign up for the virtual talk here!
Applying Decision Hygiene to yield better judgment
Organizations make decisions within their unique decision architecture. The decision architecture moves out of the shadows when the organization explicitly “decides how to decide.”
In any setting, a decision can be broken down into two parts: a prediction, followed by a judgment.
When we make judgments, as human beings, in groups or organizations, we can make errors: errors in prediction, errors in judgment, or both.
In some contexts, these errors are easy to see: the prediction is immediately exposed as wrong, or the judgment triggers actions that don’t produce desired results. But in other contexts, it can be devilishly difficult to even know whether the decision was a good one.
Sometimes we fall for “resulting”, where we (unfairly and incorrectly) associate the quality of the decision (i.e. the prediction and the judgment) with the (positive or negative) outcome. Other times, there is a delay before any visible results, or the chain of causality is so muddled, that we can’t get decent hindsight or feedback on the choices we’ve made. These are the “low-validity” environments where intuition is less valuable.
So where can we start, in a study of decision quality, to better understand the possible errors that can result?
We can start here: Decisions rely on good judgment.
In the landmark book, “Noise: A Flaw in Human Judgment” by Daniel Kahneman, Olivier Sibony, and Cass Sunstein, the authors establish that good judgment can be hampered by two kinds of error: bias and noise.
They use an archery analogy to illustrate the difference:
From “Noise: A Flaw in Human Judgment”
Four teams shoot arrows at a target. These are their results. Note the consistent difference of the “biased” team and the inconsistent differences of the “noisy” team. In our organizations today, we resemble the “biased and noisy” team.
How can an organization improve the judgments its professionals make? By seeking ways to minimize the errors that result from bias and noise.
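The archery picture has a simple quantitative counterpart: in the book’s “error equation”, mean squared error decomposes into bias squared plus noise squared. Here is a minimal Python sketch of that decomposition, using invented scores for a hypothetical “biased” team and “noisy” team:

```python
import statistics

def error_components(judgments, true_value):
    """Decompose judgment error into bias and noise.

    Bias: the average error (how far off the shots are, on average).
    Noise: the scatter of judgments around their own average (std dev).
    Per the error equation in "Noise": MSE = bias**2 + noise**2.
    """
    errors = [j - true_value for j in judgments]
    bias = statistics.fmean(errors)
    noise = statistics.pstdev(judgments)
    mse = statistics.fmean(e ** 2 for e in errors)
    return bias, noise, mse

# Hypothetical teams aiming at a target value of 100:
biased_team = [110, 111, 109, 110]   # consistently off, tightly clustered
noisy_team = [90, 108, 115, 87]      # centered on average, widely scattered

for name, team in [("biased", biased_team), ("noisy", noisy_team)]:
    bias, noise, mse = error_components(team, true_value=100)
    print(f"{name}: bias={bias:.1f}, noise={noise:.1f}, mse={mse:.1f}")
```

The biased team’s error is almost entirely the bias term; the noisy team’s error is entirely the noise term, even though its average shot is on target.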
But first: What do we mean by a “professional judgment”?
I like to think in terms of two classes of decisions or judgments:
Recurring judgments made in high-validity environments that allow for rapid feedback and learning. These tend to be found in our day-to-day efforts, when we are “running the business”.
One-time judgments that are difficult to evaluate with effective hindsight. These tend to be found in our strategic efforts, when we are “changing the business”.
Judgment, with minimal error, is important for both classes. But the decision architectures can, and should, differ across the two. Much of the discussion of noise is centered around the first class, with examples focusing on domains and tasks like:
Courtroom judges, setting sentences
Insurance adjusters, valuing claims
Radiologists, making diagnoses
Strategic decision making falls in the second class. For these kinds of judgments, errors will occur. While bias is something you can “see”, and explain with a name, noise is an unpredictable error that is much harder to see and explain.
For example, think about a typical ideation or intake process for projects in an organization. The portfolio leaders will make assessments of the project ideas, and usually offer a “go/no-go” decision. But across the company, there will be different leaders, running different portfolios, with different decision architectures.
How scattered will the arrows be? Likely, quite a bit. Think of the variety here.
Do their judgments rely on some kind of business case? Some treatment of expected value and expected cost? Some concept of payback period or cost of delay? Some risk assessment? Comparison to some threshold of ROI or payback period? Are there shared norms for what constitutes a “good” project, across the company? Is that even important?
There’s probably some noise in there. Like, it’s already loud.
So, how can an organization reduce judgment noise?
It starts with an attempt to see it. Conduct a Noise Audit when the decision architecture supports repeatable assessments (i.e. it’s easier for the first class of judgments above). If the audit exposes noise as a source of error in decision making, then we start to look for ways to reduce it.
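To make the idea concrete, here is a toy noise audit in Python. The cases, judge names, and scores are all invented for illustration: several judges independently score the same cases, and the spread across judges on each case is the noise we are trying to surface:

```python
import statistics

def noise_audit(assessments):
    """A toy noise audit: several judges independently score the same cases.

    `assessments` maps case -> {judge: score}. For each case, the spread
    (standard deviation) across judges measures the system noise there.
    """
    report = {}
    for case, scores in assessments.items():
        values = list(scores.values())
        report[case] = {
            "mean": statistics.fmean(values),
            "noise": statistics.pstdev(values),
        }
    return report

# Hypothetical go/no-go scores (0-10) from three portfolio leaders:
assessments = {
    "project-a": {"alice": 8, "bob": 7, "carol": 8},
    "project-b": {"alice": 9, "bob": 3, "carol": 6},
}
for case, stats in noise_audit(assessments).items():
    print(case, stats)
```

In this made-up audit, “project-b” is where the arrows scatter: the leaders roughly agree on one project and disagree wildly on the other, and that disagreement is invisible until you put the judgments side by side.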
Replacing human judgment with rules and algorithms would eliminate noise altogether, but rules and algorithms have their own problems and are rarely a universal substitute for judgment.
So instead, the authors introduce the idea of “decision hygiene” as an effective way to combat noise. The concept acknowledges how hard it is to “see” noise. While we might seek to “treat” a known bias risk, we can only “sanitize” our way past the noise:
“Strategies for noise reduction are to debiasing what preventive hygiene measures are to medical treatment: the goal is to prevent an unspecified range of potential errors before they occur.”
And they warn us, it won’t feel as good as nudging our way around biases:
“Just like handwashing and other forms of prevention, decision hygiene is invaluable, but thankless. Correcting a well-identified bias may at least give you a tangible sense of achieving something. But the procedures that reduce noise will not. They will, statistically, prevent many errors. Yet you will never know which errors. Noise is an invisible enemy, and preventing the assault of an invisible enemy can yield only an invisible victory.”
So, how can an organization get out the soap and start “washing its hands” more thoroughly, to improve the judgments its professionals make? In “Noise”, the authors offer these ideas:
Find (or develop) better judges
Professionals who are skilled in judgment are experienced, smart, and open-minded, that is, they are willing to learn from new information. This suggests that an organization can seek to build a workforce that is well trained, has more intelligence than the competition, and brings the right cognitive style to the challenge of judgment.
Building this capability is easier when the needed expertise is something that can be verified; that is, when good judgments are easy to validate. In these contexts, we can look at an individual’s judgments from the past, and determine how “right” they have been.
In contexts where this is not possible, like complex domains where causality is tricky or impossible to nail down, a different kind of expert emerges. In “Noise”, the authors call this a “respect expert”: someone in whom confidence is placed “entirely based on the respect they enjoy from their peers.”
This depends on having a peer group with shared norms, or some kind of professional doctrine that gives them “a sense of which inputs should be taken into account, and how to make and justify their final judgments”. Much of the dialog and debate we see in newsletters like this one, and on LinkedIn, is about setting these kinds of professional norms, and establishing this kind of respect.
Note too that in the absence of these shared norms, a complex domain can easily fall back into the trap of “resulting”, where decision quality is (mistakenly) associated with positive outcomes.
Use a “decision observer” to spot bias
Bias is relatively easy to see, retroactively. But efforts to de-bias decision making practices have been disappointing. It’s one thing to discuss the different kinds of cognitive biases, and a whole other thing to build methods that prevent them from happening.
In “Noise”, the authors recommend creating a distinct role in the decision architecture, called a “decision observer”, whose job is to search for biases as the decision making is happening. People in this role would be “trained to spot, in real time, the diagnostic signs that one or several familiar biases are affecting someone else’s decisions or recommendations”.
The key here is that they act as an outside observer, with no skin in the game. They are appointed (not self-selected), and use a standard checklist, customized for the context, to diagnose the presence of known biases. This helps get past the “bias blind spot”, where people in the middle of decision making are unable to see the influence of bias, despite having good knowledge of the bias and its effects.
Sequence information to avoid early judgment
When forming a judgment, our brains want to find coherence in the information, and finish the job early. Our brains are lazy that way. So we latch onto some ideas sparked by the first bits of information we sift through, and the confirmation bias bends our interpretation of the rest of the information to support the initial idea.
This is the risk of “early judgment” that decision makers face. It gets worse in meetings, where information cascades can supply that initial idea to the whole group, and everyone locks in on the same idea together (i.e. groupthink).
To combat this tendency, we can be thoughtful about how we explore the evidence, or data. Maintaining independence of thought is crucial here as well; if others’ opinions are presented as additional evidence, they will sway our judgment. We can also hold off, or delay, our “final” judgment until after we have expressed opinions on the significance of all the pieces of evidence or data.
Aggregate multiple, independent judgments
An extension of that idea of sequencing information is to prevent others’ judgments or assessments from influencing your own. When an organization seeks multiple assessments or judgments, it can leverage the “wisdom of crowds”. When it builds a decision architecture that keeps those multiple people independent of one another, the wisdom is more varied, and stronger as a whole.
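A quick simulation shows why independence pays off. Assuming each judge is unbiased but noisy (an idealization, with invented numbers), the noise of the averaged judgment shrinks roughly with the square root of the number of independent judges:

```python
import random
import statistics

random.seed(7)
TRUE_VALUE = 100.0
JUDGE_NOISE = 10.0  # assumed per-judge noise (std dev), for illustration

def aggregate_judgment(n_judges):
    """Average n independent judgments, each unbiased but noisy."""
    return statistics.fmean(
        random.gauss(TRUE_VALUE, JUDGE_NOISE) for _ in range(n_judges)
    )

# Noise of the aggregate shrinks roughly like 1/sqrt(n):
for n in (1, 4, 16):
    estimates = [aggregate_judgment(n) for _ in range(2000)]
    print(f"{n:>2} judges: noise ~ {statistics.pstdev(estimates):.2f}")
```

The catch, which the book stresses, is the word independent: if the judges talk first, their errors correlate and the averaging benefit largely evaporates.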
Establish guidelines for judgment
Noise is the variability in judgments made by different individuals, for the same case. So one approach to reduce the “between-judge” variability is to create guidelines that offer boundary conditions, guardrails, or heuristics to decision makers. A simple checklist can serve as an effective aid that steers judgment without mandating a choice. This is the challenge for decision architects: finding some influence over decision makers, to reduce noise, without compromising their decision authority.
Many decision architectures use gradations or scales to help communicate opinions across options (like a 1-10 scale to convey differences in the range from “poor” to “excellent”). When a judge or decision maker uses a scale like this, they have to translate their general impression of the option into the units of the scale. When different judges use different scales, or even interpret the same scale differently, there will be noise.
Scales that ask for judgment of options relative to each other, instead of against absolute criteria, tend to be less noisy. This makes stack-rankings a better approach than strict ratings against fixed criteria.
Even better: find a way to create a scale that asks for relative judgment with an “outside view”, that is, comparisons that use base rates of similar options across wider populations (i.e. compare against a wider range of options, either historical or industry-wide). Also recognize that different judges will interpret scale levels like “good” or “great” differently, so to reduce noise, consider providing concrete examples of “good” and “great” to anchor the relative evaluation.
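One way to operationalize that outside view is to score an option by its percentile among comparable past options, rather than against an absolute threshold. The historical ROI figures below are invented purely for illustration:

```python
from bisect import bisect_left

# Hypothetical ROI figures for comparable past projects (the "outside view"):
HISTORICAL_ROI = sorted([0.02, 0.05, 0.08, 0.11, 0.15, 0.22, 0.30, 0.45])

def relative_rating(roi):
    """Rate a project by where it falls among comparable past projects,
    rather than against an absolute cutoff. Returns a percentile in [0, 1]."""
    rank = bisect_left(HISTORICAL_ROI, roi)
    return rank / len(HISTORICAL_ROI)

print(relative_rating(0.20))  # 0.625: above 5 of the 8 historical projects
```

Two judges who disagree about what counts as a “good” ROI in the abstract can still agree that a project sits in the top third of everything the company has shipped before; the shared base rate absorbs much of the scale-interpretation noise.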
Structure complex judgments
When a decision or judgment requires thoughtful consideration from multiple perspectives, vantage points, or lenses, we can get lost in the variety of assessments. Which angle should steer the final judgment?
Earlier, we talked about how we should try to delay that final judgment until we’ve had a chance to spin through all assessments. One way to help with this is to break things down, so that one decision can be considered as a set of component assessments or judgments. This decomposition allows us to evaluate without jumping to final judgment prematurely (which our brains really, really want to do…).
The trick here is to find orthogonal criteria, or factors, that support those varied perspectives, across one or more contexts, such that each criterion has enough depth to warrant an independent assessment.
It’s like that classic parable of the blind men and the elephant; we choose to make ourselves blind to the other assessments, in order to form a partial opinion. Hopefully, we can reconcile our own distinct assessments better than the blind men in the story, though.
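As a sketch, that deferral can even be enforced mechanically: record every component assessment before any overall score exists to anchor on. The assessment names and the simple average are hypothetical choices, not the book’s prescription:

```python
# Hypothetical decomposition of one "go/no-go" decision into independent
# component assessments, each scored (1-10) before any final judgment.
ASSESSMENTS = ("market_size", "technical_risk", "team_fit", "strategic_fit")

def final_judgment(scores):
    """Combine component scores only once all of them are recorded."""
    missing = [a for a in ASSESSMENTS if a not in scores]
    if missing:
        raise ValueError(f"withhold final judgment; still to assess: {missing}")
    return sum(scores[a] for a in ASSESSMENTS) / len(ASSESSMENTS)
```

The structural point is the guard clause: a half-finished set of assessments refuses to produce a number, so there is no premature “final” score for confirmation bias to latch onto.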
In “Noise”, the authors provide a method for decision making that combines these ideas into a single approach, called the Mediating Assessments Protocol. They introduce the method with an example where a leadership team must decide whether to pursue an acquisition of another company, then present it to their board.
The story makes a compelling argument for how these ideas can support strategic decision making in the presence of uncertainty. As a whole, “Noise” makes for a challenging (cognitively heavy) read, but Chapter 25, which introduces this method with the illustrative story, can be read on its own. I recommend finding a copy and checking it out.
These gurus of decision making (Kahneman, Sibony, and Sunstein) also know what you are thinking at this point (lol):
“No doubt this emphasis on process, as opposed to the content of decisions, may raise some eyebrows. … Content is specific; process is generic. Using intuition and judgment is fun; following process is not. Conventional wisdom holds that good decisions - especially the very best ones - emerge from the insight and creativity of great leaders. (We especially like to believe this when we are the leader in question.) And to many the word process evokes bureaucracy, red tape, and delays.”
But they add, “Decision hygiene need not be slow and certainly doesn’t need to be bureaucratic. On the contrary, it promotes challenge and debate, not the stifling consensus that characterizes bureaucracies.”
So⌠roll up your sleeves, and use these hygiene tips to seek new ways to make your decision architecture a little more resilient in the face of noise.