
# 🔮 Bayesian Thinking and Product Risks

## Skip the math, but keep the model

Good Morning!

At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 1,800+ leaders like you who read this newsletter!

In case you missed it, last week we talked about exploring causality and complexity in strategy.


When we start to acknowledge uncertainty, our language evolves. Instead of saying, "I will do this, which will produce this result," we frame ideas more in the syntax of a hypothesis:

*"I believe that if I do this, then this result will happen."*

This opens the door for more nuanced statements that include our confidence in the belief, a confidence which can (and should) change over time.

In the statement above, there are multiple clauses where we can bring a more probabilistic way of thinking:

- How confident are you that the thing will happen? (i.e. that you will observe it)
- How confident are you that the result will happen? (i.e. that you will observe it)

When we capture our confidence in our beliefs in probabilistic terms, it helps communication within our team. We can start to have a dialog on the "missing information" that we *wish* we had (or capabilities we wish were stronger), to reduce the uncertainty, and drive our confidence up. The dialog will often expose new information (since we're drawing from the whole group) and counter individual biases that might slant the confidence levels (if done alone).

Most importantly, when we capture our initial confidence as probabilities, we can more effectively apply new information, or new evidence, to adjust our probabilities over time. As we learn, those probabilities can (and should) change.

Sometimes we observe things that drive our confidence up. Sometimes we observe things that drive our confidence down. This is the whole point of feedback loops, right?

Bayesian thinking brings a more structured, disciplined approach to this idea of updating our hypotheses when new evidence emerges. These initial probabilities are explicitly captured, to get a baseline of what we think we know.

Also, we model the relationships between a variable event, like the "thing done" above, and the "result" we want to observe. Specifically, we refine our confidence model, up front, with these probabilities:

**What's our confidence (right now) that this thing will occur?**

Probability that the event will occur = 70%

*(thinking... mostly under our control)*

**What's our confidence (right now) that the result will occur? (i.e. our hypothesis)**

Hypothesis: I believe that if I do this thing, this result will happen.

Probability that this hypothesis is true = 40%

*(thinking... lots outside our control)*

**What's the likelihood that we did this thing, if the result actually happens?**

This is a hypothetical, backward-looking probability that assesses the strength of the relationship between the event and the result.

Probability that the event will occur, given that our hypothesis turned out to be true = 60%

*(thinking... we likely influenced that result, but luck might have had something to do with it too)*

Why go to all this trouble? Well, there's some math we can use to bring more discipline to how we connect observation (i.e. of new evidence or information) to our revised confidence in success.

And thatās important, right?

While there are tools that package this math in algorithms, my intent here is to just present it as a thinking model. The point is that observations and events should trigger adjustments to your confidence in your beliefs. The numbers just help drive the point home.

Bayes' Theorem takes those probabilities we captured and surfaces them (as our "prior" view of the world) when we observe something new - something that we had placed a probability against earlier.

In this example, we had previously said that we had 40% confidence that the result will happen. We also said that if the result DID happen, it was 60% likely that the event occurred. And we also previously captured that we thought the probability of the event occurring was 70%.

When the event actually occurs, we have a new observation - some new information - that justifies revisiting our confidence in our hypothesis. Bayes' theorem essentially says,

*"Well, this event occurred, sure, but you were already pretty sure it would (70%), and you weren't even that sure that, hypothetically, the event would have occurred if (at some point in the future) the result happened (60%), so I took that to mean the ties were weak. So given all that, I think you should have even less confidence in the result now (34%) than you did before (40%)."*

It's a little counter-intuitive, but hey... that's statistics, right?
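The arithmetic behind that adjustment is just Bayes' rule. Here's a minimal sketch in Python, using the 40%, 60%, and 70% figures from the example above:

```python
def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# prior:      confidence the result will happen (our hypothesis) = 40%
# likelihood: chance the event occurred, given the result happened = 60%
# evidence:   confidence the event would occur at all = 70%
posterior = bayes_update(prior=0.4, likelihood=0.6, evidence=0.7)
print(round(posterior, 2))  # 0.34 - confidence drops from 40% to ~34%
```

Because the event was already expected (70%) and the tie to the result was weak (60%), observing it actually lowers our confidence slightly.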

Consider a different version of that story, where our initial probability on the event occurring was only 50%:

*"Hey, this event, that you said was 50/50 (50%), actually happened. So given that you thought it was 60% likely that you'd see this event if the result happened, I think you should feel more confident now (48%) than you had been (40%)."*

In this second case, since you previously felt the event occurrence had longer odds, then, when it occurs, it has more significance for your adjustment. [I'm not sure the numbers in the math in the example pass the "smell test", but it's the mechanism we're after here.]
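We can check that second scenario with the same arithmetic; pricing the event at 50% instead of 70%, while keeping the 40% hypothesis and 60% likelihood, the same observation now pushes confidence up (to 48% by this math):

```python
def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Same hypothesis (40%) and likelihood (60%), but the event was only 50/50.
surprising = bayes_update(prior=0.4, likelihood=0.6, evidence=0.5)
expected   = bayes_update(prior=0.4, likelihood=0.6, evidence=0.7)
print(round(surprising, 2), round(expected, 2))  # 0.48 0.34
```

The less likely you thought the event was, the more its occurrence moves your posterior - that's the mechanism.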

So how could we apply these ideas from Bayesian thinking into our strategic decision making? I think there are a few key lessons:

- It is important to quantify the uncertainty in our beliefs (e.g. big strategic hypotheses) in probabilistic terms, to support later adjustments when new information and evidence comes in.
- It is important to put a probability on events that we deem to be related (can you tell I'm being really careful not to use the word "causal" here?) to the outcomes in our hypotheses.
- We can show relationships between variables in graphical representations, and eventually even refine them into causal diagrams, if warranted (again, to support communication and a shared understanding of beliefs around causality, and to bring in other models).

Let's build an illustration in a specific domain. Here's a rough sketch for product managers that highlights some variables involved in key hypotheses, and how they might be related. It's derived from Marty Cagan's "Four Big Risks".

When we devise a product strategy, we develop hypotheses that connect these four variables. But do we make the relationships visible? Do we make our confidence visible in probabilistic terms? Not that often.

Here's how to read the probabilities above. [Note that they would be provided by your "experts", up front, perhaps as part of a risk exercise.]

- Engineering experts said that they are 80% confident they can build it.
- Design experts said that they are 60% confident that users will be able to use it, given that we are able to build it.
- Marketing experts said that they are 60% confident that the offer is unique and valuable to customers.
- Business experts said that they are 90% confident that they can meet business goals with it, given that customers will buy it and users keep using it.

These probabilities in the diagram capture our context-specific impressions of our chances at success. And this example is structured, more or less, by what we (unfortunately) have as silos today: engineering tackling feasibility, design tackling usability, marketing tackling value, and the business leaders looking at viability.
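As a rough, hypothetical illustration - treating the four risks as a simple chain of conditional probabilities is an assumed simplification, not necessarily what the diagram encodes - we can see what those four confidences imply when combined:

```python
# Experts' confidences from the example (the chaining itself is an assumption).
p_feasible = 0.80  # engineering: "we can build it"
p_usable   = 0.60  # design: "users can use it", given we build it
p_valuable = 0.60  # marketing: "the offer is unique and valuable"
p_viable   = 0.90  # business: "it meets goals", given customers buy and use it

overall = p_feasible * p_usable * p_valuable * p_viable
print(round(overall, 2))  # 0.26 - four "pretty confident" estimates compound fast
```

Even with each silo fairly confident, the compounded confidence in the end-to-end hypothesis is only about 26% - exactly the kind of cross-team conversation this model is meant to provoke.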

A constant refrain is that these efforts are not synced and coordinated enough. Could this thinking model help, at least to drive dialogs around risk and how risks are changing over time?

Conceptually, it seems to hold up. We are constantly monitoring the new information coming out of engineering efforts, discovery efforts, marketing explorations around new pricing models, product usage metrics, and (of course) sales figures. Today, all this new information drives our intuition around these variables, and the associated risks we are accountable for monitoring.

For better monitoring of what drives those risks (the ones we care the most about), we could "double-click" into any of these four variables shown above, and build out a model one layer down, with new variables and new relationships that influence the ones up top.

For example, we could:

- Treat a project milestone, or working software at a product demo, as an event that drives an adjustment in the probability of feasibility.
- Treat signals from A/B testing or bug trends as events that drive adjustments to the usability probability.
- Treat findings from market research and pricing experiments as events that drive adjustments to the probability of having a unique value proposition.
- Treat changes in external market conditions and competitor actions as events that should drive adjustments to the probability of business results.
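For instance, a successful product demo could update the feasibility estimate via Bayes' rule. The 80% prior below comes from the engineering estimate above; the two demo probabilities are invented purely for illustration:

```python
def bayes_update(prior: float, likelihood: float, evidence: float) -> float:
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

p_feasible         = 0.80  # prior: engineering's confidence we can build it
p_demo_if_feasible = 0.90  # hypothetical: chance the demo works if it's truly feasible
p_demo             = 0.75  # hypothetical: overall chance the demo works

# The demo succeeded - revise the feasibility risk upward.
p_after_demo = bayes_update(p_feasible, p_demo_if_feasible, p_demo)
print(round(p_after_demo, 2))  # 0.96
```

A working demo was reasonably likely anyway, so the jump from 80% to 96% is meaningful but not a certainty - which is the honest read.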

It seems that this is what we already do, intuitively.

While it won't resolve into simple formulas, Bayesian thinking can help us evolve our thinking models to do a better job of closing our learning loops when presented with new evidence and insights.

It all starts with a deliberate attempt to capture your current beliefs, along with your current confidence levels, to set up that "prior" that you will reflect upon later. And that's relatively easy, right? Maybe just try that, with your leadership team, and skip the math for now :)

### Shoutout to the Long Game Project! 🎲

Games are a fascinating, fun, and disarming approach to strategic conversations. Dan Epstein is a doctor and avid D&D player turned tabletop game designer; he and Sanjana Kashyap founded the project to help organizations simulate strategic scenarios through games.

One of the creative and useful free resources they offer is a library of thought-provoking scenarios. They also offer templates for game types that can be run asynchronously through email chains. Check them out here!

"Strategy, leadership and decision making are learnable skills that should be practiced. Tabletop exercises are structured, collaborative activities designed to test and enhance an organisation's preparedness for real-world events or scenarios."

*We do not run paid ads, but we do like to shoutout interesting and useful projects related to strategic decision making!*
