🔮 Revisiting subjective probabilities
Pivoting from a focus on Bayesian priors to a focus on reference narratives
Good morning!
At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 2,000+ leaders like you who read this newsletter!

It's a beautiful yet disorienting thing when you come across a great argument that forces you to rethink something you had started to take for granted.
Since the beginning of The Uncertainty Project, I've thought that capturing subjective probabilities, also known as personal probabilities (i.e. "I think there is a 70% chance that will be true…"), was a solid building block to help express uncertainty. It seemed to foster good communication by forcing our conversations about the "fog of uncertainty" out into the open. It also marked acknowledged uncertainty with a specific marker, so we could revisit our assumptions later (after learning a bit more) and then move the marker accordingly. These are real advantages when the alternative is sweeping the uncertainty under the rug.
And then, when you finally get a handle on Bayes' Theorem, and grasp how Bayesian Belief Networks can build inference engines to support decision making... Well, it's all quite intellectually stimulating, isn't it?
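For anyone who hasn't been down that rabbit hole: the machinery is genuinely elegant, and a minimal update is easy to sketch. All the numbers below are invented purely for illustration (a 70% prior belief, one piece of new evidence):

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Illustrative numbers only - none of these come from the book.
prior = 0.70            # P(H): "I think there's a 70% chance that's true"
p_e_given_h = 0.60      # P(E|H): how likely the new evidence is if H holds
p_e_given_not_h = 0.30  # P(E|~H): how likely the evidence is otherwise

# Total probability of the evidence, then the posterior belief.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"belief moves from {prior:.2f} to {posterior:.2f}")  # 0.70 -> 0.82
```

Crisp, satisfying arithmetic - which is exactly why the critique that follows stings.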
But John Kay and Mervyn King pop that balloon and make it clear: it's kind of a sham.
In "Radical Uncertainty", they give examples of appropriate (and inappropriate) uses of probabilistic thinking, while assailing the pseudo-quantification of uncertainty via subjective probabilities.
To do this, they draw a distinction between two kinds of domains:
| "Small Worlds" | "Large Worlds" |
| --- | --- |
| Solvable Puzzles | Unknowable Mysteries |
| Games of chance | Real world |
| Rules-based, model-driven | No fixed rules, no grand model |
| Stationarity in processes | Complex adaptive systems |
| Probabilities based on frequencies | Probabilities based on… guessing? |
These differences frame their concept of radical uncertainty: it's what you face when you admit that you live in a "large world" (i.e. the real world), not the "small worlds" of models, theories and games.
"In games of chance, … everything is either known or unknown, deterministic or random. But that dichotomy does not exist in most real worlds. We know something, but never enough. That is the nature of radical uncertainty."
They challenge the use of subjective probabilities in these "large worlds" on a number of counts:
It's improperly framed - most of the time we're asked for a probability, it's for an event that will happen only once (unlike a coin flip). It's better framed as a subjective, relative "likelihood" assessment.
It's dishonest - we're really just guessing, since the future is unknowable, but giving our answer as a number implies precision.
It's impossible to apply the theory realistically - are we supposed to find all possibilities? Then list them all out and assign subjective probabilities to each one? Then make sure all the probabilities add up to 1? Yeah, right. Good luck with that.
It's a hindrance to the "real work" of truth-seeking - capturing numbers cues the siren song of spreadsheets, and casts the illusion of mathematical rigor over a shaky foundation (guesswork). We'd be better off spending this "spreadsheet time" out seeking new information, and thereby reducing the uncertainty.
They imagine two people talking about a horse race, to illustrate arguments made by proponents of subjective probabilities, and to throw cold water on the practicality of it all.
The conversation would go something like this:
Jimmy Smalls: "Do you think 'Bayes Watch' will win the Kentucky Derby this year?"
Mr. Big: "I don't know."
Jimmy: "Well, would you bet on that horse if I gave you 5:1 odds?"
Mr. Big: "Umm, no."
Jimmy: "20:1 odds?"
Mr. Big: "Let me check. Um, yes, I'd take that bet."
Jimmy: "What about at 15:1?"
Mr. Big: "I guess I'd take that also."
(… this continues for a while, until the "willing-to-bet" odds are narrowed…)
Jimmy: "So it appears that you think 'Bayes Watch' has about a 6.7% likelihood of winning."
Mr. Big: "Okay, I guess so."
Jimmy: "Now let's do this for the other horses in the field."
Mr. Big: "I really don't have time for this. And I don't know much about these horses…"
Jimmy: "But aren't you rational? This is how rational people think! Don't worry, I'll keep track of it all - to see if the probabilities across all the horses add up to one…"
Mr. Big: "This isn't worth my time."
[And after Jimmy figures out whether Mr. Big's subjective probabilities add up to one, he'll exploit that to take (or place) bets at Mr. Big's expense. And Mr. Big probably sensed that was what was going on all along… because there's no benefit in it from his point of view.]
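The bookkeeping Jimmy volunteers to do is simple to sketch. Odds of a:1 against a horse imply a win probability of 1/(a+1); the horse names and odds below are invented for illustration:

```python
# Convert "willing-to-bet" odds into the implied probabilities Jimmy tracks.
def implied_probability(odds_against: float) -> float:
    """Fair win probability implied by odds of `odds_against`:1 against."""
    return 1.0 / (odds_against + 1.0)

# Hypothetical quotes elicited from Mr. Big (odds against each horse).
quotes = {"Bayes Watch": 14, "Prior Commitment": 4, "Posterior Motive": 2}

probs = {horse: implied_probability(odds) for horse, odds in quotes.items()}
total = sum(probs.values())

print(probs)            # Bayes Watch at 14:1 implies ~0.067
print(f"{total=:.2f}")  # coherent beliefs would sum to exactly 1.0
```

Here the implied probabilities sum to 0.6, not 1.0 - the "incoherence" a bookmaker can exploit with a Dutch book of bets that profits no matter which horse wins. Which is why Mr. Big is right to walk away.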
Kay and King use this thought experiment to highlight how economists construct models of "rational actors" and how these actors (supposedly) think about uncertainty - with probabilities. To Kay and King, it "reveals the absurdity of the suggestion that people act as if they attach probabilities to every conceivable event."
So consider your own thinking models. If you're like most humans, you don't address uncertainty by trying to list every possible outcome (like a decision tree might) and assigning subjective probabilities to each one. This is a fool's errand. You can't anticipate every possibility in an unknowable future!
Not to mention that assigning those subjective probabilities is pure guesswork.
So what should we do instead?
The authors recommend that you change your tooling, so to speak. Swap that spreadsheet out for a document. Replace the false precision of probability values with a well-crafted story. Take a step back, and tell the story (with you as the hero, if you'd like) about the path you're on, a little about how you got there, and the realistic expectations you've got about the future.
They call this a reference narrative, and as with subjective probabilities, you're allowed to update it as you learn new things (or change your mind). The fundamental question that a reference narrative seeks to answer is:
"What's going on around here?"
It covers your current trajectory, and where you think it's taking you - into the future. You've already got this story (in your head), so this is (hopefully) just a matter of writing it down. Then you can share it with others, to tap into our evolutionary superpower of working our thoughts out collaboratively.
Kay and King paint a picture of what this might look like in a corporate setting:
"For (a large bank), the overarching reference narrative is one in which the bank continues profitable growth. A large corporation will have many strategies for achieving that overarching objective in particular areas of its business and there will be a reference narrative relating to each business unit. Some of these business unit reference narratives may be very risky, but the corporation may tolerate such risks provided they do not endanger the reference narrative of the organization as a whole."
Constructing a reference narrative isn't the same thing as constructing a vision (e.g. "a postcard from the future") or constructing a "future backwards" document like a PRFAQ. There is less focus on the details of that future state, and more on the assumptions in the chain of events that lead into that future.
It also captures supporting evidence, rationale, and appropriate references to research or learning. But it's not about "getting it right":
"The selection of relevant narratives is problem- and context-specific, so that the choice of fictions, numbers, and models requires the exercise of judgment in relation to both problem and context. The narratives we seek to construct are neither true nor false, but helpful or unhelpful."
A reference narrative can then support an exploration of risk. In this view, risk is the failure of a reference narrative to be realized. Or in their words:
"Risk is the failure of a projected narrative, derived from realistic expectations, to unfold as envisaged."
So what makes a reference narrative risky? It's when - after a focused, collaborative discussion - we realize that parts of our narrative are not sufficiently robust and resilient (in the face of some potential futures that we can imagine).
After you've documented your reference narrative, you can explore risk by:
Identifying a few significant future factors (usually these are trends that you're already paying attention to)
Spinning up a few (plausible) future scenarios from them
Working with others to assess the robustness and resilience of your reference narrative against each of those scenarios
Capturing risks where you have exposed weaknesses via the exercise
In this exercise, you've been trying to poke holes in the status quo, as defined in your reference narrative. A good next step is to write up a couple of alternative narratives, to compare against the (active) reference narrative. This can support a strategic exploration of your possibility space.
These alternative narratives might (or might not) look more promising than your status quo. In a way, these narratives are competing against each other. Drawing on insights and learnings, evaluate and compare these stories, relative to each other.
But think carefully about how you conduct the comparative evaluation:
Bad: Build a pros/cons analysis in a spreadsheet and assign numeric weights and assessments for the criteria that make a story robust and resilient. [This would appeal to your inner math geek again, but you're selling false precision.]
Better: Stage a mock courtroom setting, with one person "representing" the reference narrative and a different person "representing" one of the alternative narratives. Have the accountable leader serve as judge. [You know it's going well if someone says, "You want the truth? You can't handle the truth!"]
Good leaders embrace this sort of exercise:
"The mark of the first-rate decision maker confronted by radical uncertainty is to organize action around a reference narrative while still being open to both (1) the possibility that this narrative is false, and (2) that alternative narratives might be relevant. This is a very different style of reasoning from Bayesian updating."
One of the most compelling (and relatable) parts of "Radical Uncertainty" highlights the way we build up business cases for investment decisions, and, more specifically, how we handle the inherent uncertainty in these analyses.
Think about the last spreadsheet exercise you completed. Maybe you built a model with an expected Net Present Value (NPV) that quantified the expected benefits or return on an investment (ROI). What did it feel like, facing some radical uncertainty?
Kay and King know what it feels like, and it feels dirty: "Modelling exercises rely on filling in gaps in knowledge by inventing numbers, often in immense quantities."
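To see how few invented numbers it takes, here's a toy NPV model - the cash flows and candidate discount rates are all made up for illustration. The "answer" is hostage to whichever rate we guess:

```python
# Net present value of a hypothetical project: invest 1000 today,
# then (we guess) earn 300 per year for five years.
def npv(rate: float, cashflows: list[float]) -> float:
    """NPV of cashflows, where cashflows[0] lands today (t = 0)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

project = [-1000, 300, 300, 300, 300, 300]

# The discount rate is itself an invented number - watch it drive the result.
for rate in (0.08, 0.10, 0.12, 0.15):
    print(f"discount rate {rate:.0%}: NPV = {npv(rate, project):7.1f}")
```

The same project swings from an NPV of roughly +198 (at 8%) to barely positive (at 15%) on the strength of one guessed parameter - precision in, pretense out.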
This desire for certainty, when there is none, makes us do bad things.
"Good strategies for a radically uncertain world avoid the pretense of knowledge - the models and bogus quantification which require users to make up things they do not know and could not know."
And even the models themselves are flawed (not just the numbers). All models are wrong (to cite the popular quote from George Box), but a model being "useful" doesn't make it "accurate". When we build "small world" models and apply them to the real world, they might be useful, but they are brittle.
Why are they brittle? No surprise - it's in their assumptions. Usually, their biggest assumption is that there is some stable, bounded process that governs the behavior of the (rational) actors marching boldly into the future. Some processes have "stationarity", like our models of planetary motion that allow us to send rovers into space and land them on Mars months later. Contrast that with the flimsy process models that forecast the future in our spreadsheets. We operate in complex adaptive systems, which offer very little "stationarity" in the modelled processes. We make assumptions and simplifications in our spreadsheets about how we shape future events, and this is our Achilles' heel.
Kay and King emphasize, "These exercises necessarily assume, almost always without justification, stationarity of the underlying processes. In the absence of stationarity, these modeling exercises have no means of accounting for uncertainty and there is no basis for the construction of probability distributions, confidence intervals, or the use of tools of statistical inference. The opinions of different people about the value of a parameter, or the same consultant's different estimates of the value of that parameter, do not constitute either a frequency or a probability distribution."
So these expressions of probability disguise uncertainty rather than resolve it, by sprinkling in a pretense of knowledge that just isn't there.
So a better approach is to put our "small world" models in the service of our assessments of the reference narratives:
"Deploy simple models to identify the key factors that influence an assessment… The useful purpose of modelling is to find 'small world' problems which illuminate part of the large world of radical uncertainty."
"In the end, a model is useful only if the person using it understands that it does not represent the 'world as it really is', but is a tool for exploring ways in which a decision might or might not go wrong."
"Models should not be judged by the sophistication of the mathematics - in itself neither good nor bad - but by the insights which that model provides into a particular problem that we are trying to solve… So the test of a model is therefore whether it is useful in making the decisions which need to be made in government, business and finance, and in households, in a world of radical uncertainty."
So how could "small world" models help with assessments of our narratives? Help make sense of the real world?
The authors suggest that navigation apps like Google Maps or Waze offer an analogy. When asked for directions, these apps:
Collect real-time info (i.e. from the front-lines)
Make suggestions (i.e. paths to desired outcomes)
Provide indications of the consequences of different choices (i.e. risks, potential roadblocks)
Suggest alternatives (i.e. monitor alternate paths)
"Waze is useful precisely because it uses data not to build general models but to provide rapid access to information suggesting the location of problems and possible solutions."
This analogy really gets you thinking about how our tools could do a better job of combining data, "small world" models, and our current intent, to play a starring role in our strategic dialogs (and decision making) when uncertainty is present.
Coming soon: Navigation apps for the strategic landscape?
