An Argument for Probabilistic Thinking
Reframing the challenge as getting from less sure to more sure
Good Morning!
At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 1,800+ leaders like you who read this newsletter!
In case you missed it, in our last post we talked about 'How to Reduce Uncertainty in Product and Startup Traction'.
An Argument for Probabilistic Thinking
This is going to be an oversimplification, but bear with me:
Leaders are only interested in these four questions:
"What do we see out there?"
"How are we doing?"
"What can we do?"
"What should we do next?"
That's it.
Any conversation boils down to some flavor of one of these four questions.
And that's a good thing! These are great questions!
Where it goes off the rails is when these same leaders expect an unrealistic level of certainty in the answers they get back. When the answer is either (1) an overly confident statement delivered with (unsupported) conviction, or (2) a shrug and an "it depends"... in either case, decision making will suffer.
The world exists in shades of gray. But when we limit our palette to just black and white, we lose the ability to really see.
So what can we do instead?
We can start by prefacing our answers to any of those four questions with, "Well, I'm not sure, but…"
This is the acknowledgment of uncertainty, one of the core principles here at The Uncertainty Project. It's still surprising how difficult this is to say in most of our business cultures.
From there, an enlightened leader can lean in and ask for help exploring the uncertainty. What do we know, with some certainty? What are the most important questions that we don't yet have answers to? What unknowns are "keeping you up at night"? The severity of these questions (i.e., how scary they are) is what establishes the initial level of uncertainty. These kinds of discussions bring a laser focus to the risks you are carrying.
Leading with questions, when done in a thoughtful (and not flippant) way, can help frame discovery efforts, experimentation, or a conversation you need to have with an expert on your team (or another team).
But most importantly, it allows us to express our answers to these questions in shades of gray. That is, in probabilistic terms.
"What do I see?"
"Well I think I'm seeing pricing pressure in the market due to new entrants, but there might be other forces at play as well, so I'm only 60% confident that the pricing pressure is real..."
"How are we doing?"
"This project has missed the last two milestones and lost a key member of the team last week. I'm dropping the likelihood of us delivering on the desired outcomes by the target date from 80% to 65%..."
"What can we do?"
"We are starting to hear about this pain point in the problem space, but it's only from a couple of existing customers right now. We don't yet have a theory on how solving for this pain can move the needle on much of anything. I'm capturing this opportunity with just a 40% chance of being valuable to both our customer base and the business..."
"What should I do next?"
"We've identified these five things as our top risks, and we think we can mitigate the biggest one if we drive this change quickly. We believe this change would drop the probability of this risk from 80% to 50%, which would avoid a big negative impact. The change only has a 70% chance of producing the desired outcome, though, given what we know so far. But that's solid enough to commit some resources with a decision."
These examples show how these questions open the door to navigating uncertainty in different realms, and how probabilistic statements (expressed as a %) can enrich the dialog.
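To see how the numbers in that last answer might combine, here is a rough back-of-the-envelope sketch. The dollar figures are hypothetical, added only to make the arithmetic concrete:

```python
p_risk_now = 0.80        # current probability the risk materializes
p_risk_mitigated = 0.50  # probability if the mitigation lands as hoped
p_change_works = 0.70    # chance the change produces the desired outcome
risk_impact = 500_000    # hypothetical cost if the risk materializes
change_cost = 60_000     # hypothetical cost of making the change

# If we attempt the change, the risk probability becomes a blend of two outcomes:
# the change works (risk drops to 50%) or it doesn't (risk stays at 80%).
p_risk_after = p_change_works * p_risk_mitigated + (1 - p_change_works) * p_risk_now

expected_benefit = (p_risk_now - p_risk_after) * risk_impact

print(f"Risk probability after attempting the change: {p_risk_after:.0%}")
print(f"Expected reduction in exposure: ${expected_benefit:,.0f} vs. a cost of ${change_cost:,.0f}")
```

Framing it this way makes the trade explicit: the mitigation is worth pursuing only if its expected reduction in exposure comfortably exceeds its cost.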
When we ask, "What do I see?"...
…we are exploring our external (or internal) environments or context, and expressing uncertain beliefs about what we are sensing. When two people see the same circumstance, but arrive at different views, they can express their relative confidence to convey how supported they feel in their viewpoint. Is it a hunch? Or is it based on a pattern they see, derived from years of experience? Or maybe both?
When we ask, "How are we doing?"...
…the question is relative to some concept of success. The real question (always) is "Are we going to be successful in this area?" Of course, that question is easier to tackle when the definition of success is clear. When desired outcomes are stated and connected, to show expected support and causality, then our ability to think about our actions, choices, or decisions, and tie them to some future conditions, is enhanced. It's still just a prediction, but it will be more crisp.
When we ask, "What can we do?"...
…we are surveying the possibility space for opportunities. Some opportunities present stronger connections to future desired outcomes than others. This is the conversation we want. When we ask, "What is the likelihood that this opportunity for change will make an impact (somewhere)?", we are unfolding our speculative map of the future terrain, and checking the navigability of possible paths.
When we ask, "What should I do next?"...
…we are recognizing that we have finite resources, and that we will have to make decisions (that allocate these scarce resources) without the luxury of certainty. Moving forward with one choice absorbs the opportunity cost of another. These opportunity costs can be expressed with a magnitude, and more importantly, with a confidence interval. We can use algorithms to help us sort and stack, but we should know where the inputs are shaky.
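One way to keep that shakiness visible when sorting and stacking is to carry ranges instead of single numbers and let a quick simulation show how stable the ranking really is. This is just a sketch; the opportunity names, probabilities, and value ranges below are made up:

```python
import random

# Each opportunity carries a probability of success and a value range,
# rather than a single point estimate (all numbers hypothetical).
opportunities = {
    "Opportunity A": {"p_success": 0.70, "value_range": (100_000, 200_000)},
    "Opportunity B": {"p_success": 0.40, "value_range": (150_000, 600_000)},
    "Opportunity C": {"p_success": 0.90, "value_range": (50_000, 120_000)},
}

# Sample each opportunity many times to see how often it would top the stack.
wins = {name: 0 for name in opportunities}
trials = 10_000
for _ in range(trials):
    draws = {
        name: o["p_success"] * random.uniform(*o["value_range"])
        for name, o in opportunities.items()
    }
    wins[max(draws, key=draws.get)] += 1

for name, count in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ranked first in {count / trials:.0%} of trials")
```

If one option ranks first in nearly every trial, the ordering is robust; if the top spot keeps flipping between two options, the ranking is only as good as the shaky inputs behind it.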
So expressing uncertainty as a % confidence is a powerful way to enhance dialog in decision making, and to make sure that the supporting information is understood for what it really is.
But that's not even the strongest advantage of adding this layer to your decision architecture.
The best part about this is that the % values can (and will) change over time. They change as you learn. They change as you watch plans get executed. They change as you observe market conditions shift. They change as you see competitors (or a key technology) debut something unexpected.
This ability to talk about how the probabilities are changing is the key to unlocking adaptability, or business agility, in your organization.
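One disciplined way to move those percentages as you learn is a Bayesian update; the sketch below uses made-up likelihoods, starting from the earlier 60% pricing-pressure belief:

```python
# Bayes' rule: revise a belief in light of new evidence.
# The likelihoods below are hypothetical, chosen only to show the mechanics.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the updated probability after observing a piece of evidence."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# "I'm only 60% confident that the pricing pressure is real..."
belief = 0.60

# Two new observations: (description, P(observation | real), P(observation | not real))
observations = [
    ("Competitor announces a cheaper tier", 0.8, 0.4),
    ("Win rate holds steady this month",    0.3, 0.6),
]

for description, p_if_true, p_if_false in observations:
    belief = bayes_update(belief, p_if_true, p_if_false)
    print(f"After '{description}': {belief:.0%} confident")
```

Note that the belief moves both ways: the first observation pushes it up, and the second, which fits the story poorly, pulls it back down. That is the behavior you want from an adaptive organization.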
How can you convince people that this probabilistic framing is valuable? Maybe start with a picture.
A great way to think about this evolution of uncertainty over time, for a specific value or result, is the "cone of uncertainty":
You can draw one of these for any key variable you are tracking, like target dates, planned costs, monthly users, or new signups. Variables tied to risks or assumptions are good places to start.
As time progresses, you hope to learn enough to reduce the uncertainty around the impact of your ongoing efforts.
"In principle, the basis for assessing the value of information for decisions is simple. If the outcome of a decision in question is highly uncertain and has significant consequences, then measurements that reduce uncertainty about it have high value."
We learn by collecting new information - for example, via measurements.
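To make the quote above concrete, here is a back-of-the-envelope "expected value of perfect information" calculation for a single go/no-go decision; all of the figures are hypothetical:

```python
# How much could reducing the uncertainty be worth? (Hypothetical figures.)

p_success = 0.5      # current belief that the initiative pays off
payoff = 400_000     # gain if it works
loss = -300_000      # loss if it doesn't

# Expected value of committing today, under current uncertainty.
ev_commit_now = p_success * payoff + (1 - p_success) * loss

# With the uncertainty resolved, we would only commit when it works,
# and walk away (value 0) when it wouldn't.
ev_with_perfect_info = p_success * payoff + (1 - p_success) * 0

# Upper bound on what any measurement or experiment could be worth.
value_of_perfect_info = ev_with_perfect_info - max(ev_commit_now, 0)

print(f"Expected value of deciding now:               ${ev_commit_now:,.0f}")
print(f"Expected value with the uncertainty resolved: ${ev_with_perfect_info:,.0f}")
print(f"Most we should spend 'buying information':    ${value_of_perfect_info:,.0f}")
```

The more uncertain and consequential the decision, the larger that last number gets, which is exactly the point of the quote.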
These occasional learnings are the oft-referenced-yet-elusive insights that drive adjustments to estimates, plans, forecasts, expectations, or roadmaps. An overlay of the cone of uncertainty can set some expectations around how, over time, we should be "buying information" to reduce the uncertainty around the results that are most important to us.
It also suggests that insights should be happening at some frequency over time, to fuel the narrowing of the cone. If a few weeks go by, and nothing is changing - no new insights that support a reduction in uncertainty - then it's time to pause and ask why.
In this way, the cone can be used as a set of control limits to this effort of "buying information". That is, if some time passes, and your confidence interval still exceeds the boundaries of the narrowing cone, then you may be "out of control" in terms of your ability to reduce the uncertainty.
Ask: "Why aren't we more confident by now?" and "Are we applying our efforts in the best places to reduce the uncertainty around success?". If you've done your best, but the confidence is still lacking, maybe it's time to revisit other opportunities, since the expected value of this one is still wavering.
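As a rough illustration of that control-limit idea, here is a sketch that assumes a simple linear cone; in practice the cone's shape would come from your own history with similar efforts, and the numbers below are made up:

```python
# Using the cone of uncertainty as a control limit on our estimates.
# The linear narrowing is an assumption for the sake of the sketch.

def expected_interval_width(week, total_weeks, initial_width, final_width):
    """Width of the cone we expect at a given week, narrowing linearly."""
    progress = min(week / total_weeks, 1.0)
    return initial_width - progress * (initial_width - final_width)

# Tracking a delivery-date estimate over a 12-week effort (widths in weeks).
total_weeks = 12
initial_width = 8.0   # roughly +/- 4 weeks of uncertainty at kickoff
final_width = 1.0     # roughly +/- half a week expected by the end

week = 6
current_interval_width = 7.0  # our estimate still spans about 7 weeks

allowed = expected_interval_width(week, total_weeks, initial_width, final_width)
if current_interval_width > allowed:
    print(f"Week {week}: interval of {current_interval_width} weeks exceeds the "
          f"cone's {allowed:.1f}-week limit. Why aren't we more confident by now?")
else:
    print(f"Week {week}: within the cone ({current_interval_width} <= {allowed:.1f} weeks).")
```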
And this perspective will be different, depending on your role in the organization. As a product manager, you care about results related to adoption, usage, and retention. As a project manager, you care about results related to delivering changes with a set of resources by a target date. In both cases, you are interested in applying new information to evaluate the probabilities of success. But the definition of success will be local. And those definitions of success are interrelated. This is where belief networks can model the way one uncertain result can shape the uncertainty around a related result.
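A minimal sketch of that interrelation, using two hypothetical results and made-up probabilities (a full belief network generalizes this to many linked results):

```python
# Two linked uncertain results: the project manager's (on-time delivery)
# shapes the product manager's (hitting the adoption target).
# All probabilities are hypothetical.

p_delivered_on_time = 0.65       # project manager's current belief
p_adoption_given_on_time = 0.70  # product manager's conditional beliefs
p_adoption_given_late = 0.40

# Law of total probability: the product manager's overall confidence.
p_adoption = (p_delivered_on_time * p_adoption_given_on_time
              + (1 - p_delivered_on_time) * p_adoption_given_late)
print(f"P(adoption target met) = {p_adoption:.0%}")

# When the delivery belief drops (say a milestone slips), the adoption
# belief moves with it, which is the interrelation described above.
p_delivered_on_time = 0.45
p_adoption = (p_delivered_on_time * p_adoption_given_on_time
              + (1 - p_delivered_on_time) * p_adoption_given_late)
print(f"After the slip: P(adoption target met) = {p_adoption:.0%}")
```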
So the cone of uncertainty can be a powerful visual to help leadership teams bring probabilistic thinking into their dialog.
Compare this probabilistic approach to the typical way we try to answer "How are we doing?" for a project. When we say that a project is "90% done", it's a reflection of the execution of changes to produce different outputs. But if the 10% that remains is the most crucial to the valued changes, then we are in trouble, right?
If, at the same point, we ask, "What is the probability of success of this project?", I bet that when the progress bar shows 90%, we don't feel that success is 90% likely. A question that focuses on probability of success (however it is defined) is a better opener for an honest conversation about risk.
But the cone of uncertainty is just another visualization of a learning loop. And when an organization sponsors multiple concurrent efforts (which is always the case), then those four questions fold into a quarterly cycle like this:
The aim each quarter is to find answers to those questions that land like insights, and feed learning. When they string together in this repeating sequence:
"How are we doing?"
"What can we do?"
"What should we do next?"
"What do we see out there?"
And when we update our probabilities as we learn, then we improve our ability to navigate uncertainty.