🔮 Revisiting subjective probabilities

Pivoting from a focus on Bayesian priors to a focus on reference narratives

Good morning!

At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 2,000+ leaders like you who read this newsletter!

It’s a beautiful yet disorienting thing when you come across a great argument that forces you to rethink something you had started to take for granted.

Since the beginning of The Uncertainty Project, I’ve thought that capturing subjective probabilities, also known as personal probabilities (i.e. “I think there is a 70% chance that will be true…”), was a solid building block for expressing uncertainty. It seemed to foster good communication by forcing our conversations about the “fog of uncertainty” out into the open. It also marked uncertainty with a specific number, so we could revisit our assumptions later (after learning a bit more) and move the marker accordingly. These are real advantages when the alternative is sweeping the uncertainty under the rug.

And then, when you finally get a handle on Bayes’ Theorem, and grasp how Bayesian Belief Networks can build inference engines to support decision making... Well, it’s all quite intellectually stimulating, isn’t it?
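For anyone hazy on the mechanics being celebrated here, the core move of Bayesian updating is a simple one. This is a minimal sketch, not from the book; the prior and the 80%/40% likelihoods are invented purely for illustration:

```python
# Minimal illustration of Bayesian updating (invented numbers):
# revise a prior belief about a hypothesis H after seeing evidence E.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H | E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Prior belief: 30% chance the feature ships this quarter.
# Evidence: the team hits its sprint goal, which (we guess) happens
# 80% of the time when on track to ship, 40% of the time otherwise.
posterior = bayes_update(0.30, 0.80, 0.40)
print(round(posterior, 3))  # 0.462
```

Feed the posterior back in as the next prior and you have an inference engine. The catch, as we’re about to see, is where those likelihood numbers come from.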

But John Kay and Mervyn King pop that balloon and make it clear: it’s kind of a sham.

In “Radical Uncertainty”, they give examples of appropriate (and inappropriate) uses of probabilistic thinking, while assailing the pseudo-quantification of uncertainty via subjective probabilities. 

To do this, they draw a distinction between two kinds of domains:

| “Small Worlds” | “Large Worlds” |
| --- | --- |
| Solvable Puzzles | Unknowable Mysteries |
| Games of chance | Real world |
| Rules-based, model-driven | No fixed rules, no grand model |
| Stationarity in processes | Complex adaptive systems |
| Probabilities based on frequencies | Probabilities based on… guessing? |

These differences frame their concept of radical uncertainty: it’s what you face when you admit that you live in a “large world” (i.e. the real world), not the “small worlds” of models, theories and games.

“In games of chance, … everything is either known or unknown, deterministic or random. But that dichotomy does not exist in most real worlds. We know something, but never enough. That is the nature of radical uncertainty.”

They challenge the use of subjective probabilities in these “large worlds” on a number of counts:

  • It’s improperly framed - most times we’re asked for a probability, it’s for an event that will happen only once (which is different from asking about a coin flip). It’s better framed as a subjective, relative “likelihood” assessment.

  • It’s dishonest - we’re really just guessing, since the future is unknowable. But giving our answer as a number implies precision.

  • It’s impossible to apply the theory, realistically - are we supposed to find all possibilities? Then list them all out and assign subjective probabilities to each one? Then make sure all the probabilities add up to 1? Yeah, right. Good luck with that.

  • It’s a hindrance to the “real work” of truth-seeking - Capturing numbers cues the siren song of spreadsheets, and casts the illusion of mathematical rigor on top of a shaky foundation (guesswork). We’d be better off spending this “spreadsheet time” out seeking new information, and thereby reducing the uncertainty.

They imagine two people talking about a horse race, to illustrate arguments made by proponents of subjective probabilities, and to throw cold water on the practicality of it all. 

The conversation would go something like this:

Jimmy Smalls: “Do you think ‘Bayes Watch’ will win the Kentucky Derby this year?”

Mr. Big: “I don’t know.”

Jimmy: “Well, would you bet on that horse if I gave you 5:1 odds?”

Mr. Big: “Umm, no.”

Jimmy: “20:1 odds?”

Mr. Big: “Let me check. Um, yes, I’d take that bet.”

Jimmy: “What about at 15:1?”

Mr. Big: “I guess I’d take that also.”

(…and so on for a while, until the “willing-to-bet” odds are narrowed…)

Jimmy: “So it appears that you think ‘Bayes Watch’ has about a 6.7% likelihood of winning.”

Mr. Big: “Okay, I guess so.”

Jimmy: “Now let’s do this for the other horses in the field.”

Mr. Big: “I really don’t have time for this. And I don’t know much about these horses…”

Jimmy: “But aren’t you rational? This is how rational people think! Don’t worry, I’ll keep track of it all - to see if the probabilities across all the horses add up to one…”

Mr. Big: “This isn’t worth my time.”

[And once Jimmy figures out whether Mr. Big’s subjective probabilities add up to one, he can exploit any inconsistency by taking (or placing) bets at Mr. Big’s expense. And Mr. Big probably sensed that was what was going on all along… because there’s no benefit in it from his point of view.]
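The bookkeeping Jimmy is running in the background is simple arithmetic. A sketch of it, with a field of horses and odds invented purely for illustration: convert each “indifference” odds quote into an implied probability, then check whether the probabilities sum to one.

```python
# Jimmy's bookkeeping, sketched (all horses and odds are made up):
# convert each indifference point to an implied win probability.

def implied_probability(odds_against):
    """Being indifferent at odds of x:1 against implies P(win) = 1/(x+1)."""
    return 1.0 / (odds_against + 1.0)

indifference_odds = {    # odds-against at which each bet looks fair
    "Bayes Watch": 14,   # 14:1 -> 1/15, about 6.7%
    "Prior Commitment": 4,
    "Posterior Motion": 2,
    "Dutch Booker": 1,
}

probs = {h: implied_probability(o) for h, o in indifference_odds.items()}
total = sum(probs.values())
print(f"total implied probability: {total:.3f}")  # 1.100
# If the total isn't exactly 1, the beliefs are incoherent - and a
# bookmaker can set prices that guarantee himself a profit no matter
# which horse wins (a "Dutch book").
```

The total of 1.1 here is exactly the inconsistency Jimmy is hunting for, and exactly why Mr. Big is right to walk away.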

Kay and King use this thought experiment to highlight how economists construct models of “rational actors” and how these actors (supposedly) think about uncertainty - with probabilities. To Kay and King, it “reveals the absurdity of the suggestion that people act as if they attach probabilities to every conceivable event.” 

So consider your own thinking models. If you’re like most humans, you don’t address uncertainty by trying to list every possible outcome (like a decision tree might) and assigning subjective probabilities to each possible outcome. This is a fool’s errand. You can’t anticipate every possibility in an unknowable future! 

Not to mention that assigning those subjective probabilities is pure guesswork.

So what should we do instead?

The authors recommend that you change your tooling, so to speak. Swap that spreadsheet out for a document. Replace that false precision of probability values with a well-crafted story. Take a step back, and tell the story (with you as the hero, if you’d like) about the path you’re on, a little about how you got there, and the realistic expectations you’ve got about the future.

They call this a reference narrative, and, as with subjective probabilities, we’re allowed to update it as we learn new things (or change our minds). The fundamental question that a reference narrative seeks to answer is:

“What’s going on around here?”

It covers your current trajectory, and where you think it’s taking you - into the future. You’ve already got this story (in your head), so this is (hopefully) just a matter of writing it down. Then you can share it with others, to tap into our evolutionary superpower of working our thoughts out collaboratively. 

Kay and King paint a picture of what this might look like in a corporate setting:

“For (a large bank), the overarching reference narrative is one in which the bank continues profitable growth. A large corporation will have many strategies for achieving that overarching objective in particular areas of its business and there will be a reference narrative relating to each business unit. Some of these business unit reference narratives may be very risky, but the corporation may tolerate such risks provided they do not endanger the reference narrative of the organization as a whole.” 

Constructing a reference narrative isn’t the same thing as constructing a vision (e.g. “a postcard from the future”) or a “future backwards” document like a PRFAQ. There is less focus on the details of that future state, and more on the assumptions in the chain of events that leads to that future.

It also captures supporting evidence, rationale, and appropriate references to research or learning. But it’s not about “getting it right”:

“The selection of relevant narratives is problem- and context-specific, so that the choice of fictions, numbers, and models requires the exercise of judgment in relation to both problem and context. The narratives we seek to construct are neither true nor false, but helpful or unhelpful.”

A reference narrative can then support an exploration of risk. In this view, risk is the failure of a reference narrative to be realized. Or in their words:

“Risk is the failure of a projected narrative, derived from realistic expectations, to unfold as envisaged.”

So what makes a reference narrative risky? It’s when - after a focused, collaborative discussion - we realize that parts of our narrative are not sufficiently robust and resilient (in the face of some potential futures that we can imagine).

After you’ve documented your reference narrative, you can explore risk by:

  1. Identifying a few significant future factors (usually these are trends that you’re already paying attention to)

  2. Spinning up a few (plausible) future scenarios from them

  3. Working with others to assess the robustness and resilience of your reference narrative against each of those scenarios 

  4. Capturing risks where you have exposed weaknesses via the exercise 

In this exercise, you’ve been trying to poke holes in the status quo, as defined in your reference narrative. A good next step is to write up a couple of alternative narratives to compare against the (active) reference narrative. This can support a strategic exploration of your possibility space.

These alternative narratives might (or might not) look more promising than your status quo. In a way, these narratives are competing against each other. Drawing on insights and learnings, evaluate and compare these stories, relative to each other. 

But think carefully about how you conduct the comparative evaluation:

Bad: Build a pros/cons analysis in a spreadsheet and assign numeric weights and assessments for the criteria that make a story robust and resilient. [This would appeal to your inner math geek again, but you’re selling false precision.]

Better: Stage a mock courtroom setting, with one person “representing” the reference narrative and a different person “representing” one of the alternative narratives. Have the accountable leader serve as judge. [You know it’s going well if someone says, “You want the truth? You can’t handle the truth!”]

Good leaders embrace this sort of exercise: 

“The mark of the first-rate decision maker confronted by radical uncertainty is to organize action around a reference narrative while still being open to both (1) the possibility that this narrative is false, and (2) that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating.”

One of the most compelling (and relatable) parts of “Radical Uncertainty” highlights the way we build up business cases for investment decisions, and, more specifically, how we handle the inherent uncertainty in these analyses.

Think about the last spreadsheet exercise you completed. Maybe you built a model with an expected Net Present Value (NPV) that quantified the expected benefits or return on an investment (ROI). What did it feel like, facing some radical uncertainty? 

Kay and King know what it feels like, and it feels dirty: “Modelling exercises rely on filling in gaps in knowledge by inventing numbers, often in immense quantities.”
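To see how readily invented numbers turn into a precise-looking answer, here is a tiny NPV sketch. The discounted-cash-flow formula is standard; every input below is a made-up guess of the kind these models demand:

```python
# A minimal NPV sketch: standard discounted-cash-flow arithmetic,
# fed entirely with invented inputs. Note how guesses in produce a
# number precise to the cent out.

def npv(rate, cash_flows):
    """Net present value of yearly cash flows; year 0 comes first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Year 0 investment, then five years of "projected" returns - guesses.
guessed_flows = [-1_000_000, 150_000, 300_000, 450_000, 500_000, 500_000]
guessed_rate = 0.10  # the discount rate: also a guess

print(f"NPV at 10%: ${npv(guessed_rate, guessed_flows):,.2f}")
# Nudge one guess and the "precise" answer swings by six figures:
print(f"NPV at 15%: ${npv(0.15, guessed_flows):,.2f}")
```

The spreadsheet happily reports both figures to the cent; nothing in the output warns you that the inputs were conjured from thin air.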

This desire for certainty, when there is none, makes us do bad things.

“Good strategies for a radically uncertain world avoid the pretense of knowledge - the models and bogus quantification which require users to make up things they do not know and could not know.”

And even the models themselves are flawed (not just the numbers). All models are wrong (to cite George Box’s popular aphorism), but “useful” is not the same as “accurate”. When we build “small world” models and apply them to the real world, they might be useful, but they are brittle.

Why are they brittle? No surprise - it’s in their assumptions. Usually, their biggest assumption is that some stable, bounded process governs the behaviors of these (rational) actors marching boldly into the future. Some processes do have “stationarity”, like our models of planetary motion that allow us to send rovers into space and land them on Mars months later. Contrast that with the flimsy process models that forecast the future in our spreadsheets. We operate in complex adaptive systems, which offer very little “stationarity” in the modelled processes. The assumptions and simplifications we make in our spreadsheets, about how future events will unfold, are our Achilles’ heel.

Kay and King emphasize, “These exercises necessarily assume, almost always without justification, stationarity of the underlying processes. In the absence of stationarity, these modeling exercises have no means of accounting for uncertainty and there is no basis for the construction of probability distributions, confidence intervals, or the use of tools of statistical inference. The opinions of different people about the value of a parameter, or the same consultant’s different estimates of the value of that parameter, do not constitute either a frequency or a probability distribution.”

So these expressions of probability disguise uncertainty rather than resolve it, by sprinkling in a pretense of knowledge that just isn’t there.
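A toy simulation makes the stationarity point concrete. All the numbers below are invented: we fit a “stable” model to a year of history, then the underlying process quietly changes, and the model’s confidence interval becomes confidently wrong.

```python
# Toy non-stationarity demo (invented numbers, not from the book):
# estimate a process from history, then watch the estimate fail when
# the process itself changes.
import random
import statistics

random.seed(42)

# "History": weekly demand averaging 100. The model assumes this
# process is stationary - that the future will resemble the past.
history = [random.gauss(100, 10) for _ in range(52)]
mu, sigma = statistics.mean(history), statistics.stdev(history)

# "Future": the regime shifts - demand now averages 60 per week.
future = [random.gauss(60, 10) for _ in range(52)]

# A 95%-style interval built from history covers almost none of it.
inside = sum(mu - 2 * sigma <= x <= mu + 2 * sigma for x in future)
print(f"future weeks inside the model's 95% band: {inside}/52")
```

The interval was computed correctly; it fails anyway, because the assumption beneath it (stationarity) failed first. That is Kay and King’s point.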

So a better approach is to put our “small world” models in the service of our assessments of the reference narratives:

“Deploy simple models to identify the key factors that influence an assessment… The useful purpose of modelling is to find ‘small world’ problems which illuminate part of the large world of radical uncertainty.”

“In the end, a model is useful only if the person using it understands that it does not represent the ‘world as it really is’, but is a tool for exploring ways in which a decision might or might not go wrong.”

“Models should not be judged by the sophistication of the mathematics - in itself neither good nor bad - but by the insights which that model provides into a particular problem that we are trying to solve… So the test of a model is therefore whether it is useful in making the decisions which need to be made in government, business and finance, and in households, in a world of radical uncertainty.”

So how could “small-world” models help with assessments of our narratives? Help make sense of the real world? 

The authors suggest that navigation apps like Google Maps or Waze offer an analogy. When asked for directions, these apps:

  1. Collect real-time info (i.e. from the front-lines) 

  2. Make suggestions (i.e. paths to desired outcomes) 

  3. Provide indications of the consequences of different choices (i.e. risks, potential roadblocks) 

  4. Suggest alternatives (i.e. monitor alternate paths)

“Waze is useful precisely because it uses data not to build general models but to provide rapid access to information suggesting the location of problems and possible solutions.”

This analogy really gets you thinking about how our tools could do a better job of combining data, “small-world” models, and our current intent to play a starring role in our strategic dialogues (and decision making) when uncertainty is present.

Coming soon: Navigation apps for the strategic landscape?
