Thinking more like an economist
Exploring decisions and uncertainty with Paul Krugman
Good morning!
At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 2,000+ leaders like you who read this newsletter!
In case you missed it, last week we talked about applying decision hygiene to yield better judgement.
Thinking more like an economist
We hear a lot about trying to "think like a scientist" as we develop a growth mindset and learn how to navigate uncertainty. But I'm starting to think we might be better off trying to "think like an economist."
Establishing a laboratory-like setting, testing a null hypothesis, controlling for variables, crunching the data… those core capabilities of science still feel a little "over-the-top" for most of corporate strategy and product development. Creating a control group? Randomized tests? Conceptually, sure. But in reality, not so much.
But there is a group of professionals out there who have learned to answer questions in a complex space in which they have little direct control. A space where it's less about physics and more about human behavior. Less physical nature, more human nature. And it affects our happiness, so it's not trivial.
So maybe we should observe how economists approach their work?
The "dismal science" of economics has been forced to build models and explore hypotheses under complex and uncertain conditions since its birth. Economists operate within robust decision architectures that help them build and test beliefs about behaviors and trends in complex domains, like the world economy.
What's really interesting is how transparent much of their work is.
This week, I read a weekly newsletter from economist (and Nobel-prize winner) Paul Krugman that delivered a master class in good decision architecture. Now, granted, he's more of an observer/pundit/analyst/advisor than a decision-maker here, but still, I found myself thinking that if we could bring this mindset to our corporate settings, it would go a long way toward improving our organizational decision-making capabilities.
His newsletter, titled "Stumbling into Goldilocks", provided an updated analysis of the current state of the US economy, tracking the efforts of the Fed as it aims to fight inflation while avoiding a recession. For a couple of years now, economists (including Krugman) have been very vocal about their (competing) hypotheses on the best course of action. This post was Krugman's check-in on the debate, and an assessment of what new evidence could teach us about past beliefs and decisions.
What follows is a breakdown of what I observed in Krugman's analysis, and a discussion of how we could aim to do something similar in our organizational contexts.
In the article, I saw several behaviors worth emulating. Let's take a closer look at what he was doing:
Explore alternative metrics to better seek truth
"Stumbling into Goldilocks", Paul Krugman.
Metrics derive from models, and all models are wrong but sometimes useful. So it's important to keep an open mind to additional metrics that can paint a richer picture of the fuzzy, emergent truth.
In this case, Krugman is questioning the standard metric, and using a comparison of two metrics to assess which one offers a better model of the true context. No one metric can tell the whole story, and the more we use our defined measures, the more we see their weaknesses. This isn't a flaw; it's a benefit of learning.
Ask "Were the decisions good, or lucky?"
"Stumbling into Goldilocks", Paul Krugman.
You couldn't ask for a better example of "outcome fielding". Here, Krugman is acknowledging that the decisions led to a desired outcome (i.e. bringing inflation down without triggering a recession), but still he wonders… is there clear causality here? Were the hypotheses behind the decisions and actions validated? Or did we just get lucky with the outcome? This is the main question that frames the article.
Even better, he presents his evaluation as "my take"... just an opinion, not fact. When you field an outcome, you're making a second bet (to paraphrase Annie Duke) on whether your decision actually caused (or contributed to) the result. It's just another bet, with another set of uncertainties.
Revise a causal theory, with hindsight
"Stumbling into Goldilocks", Paul Krugman.
To revisit the decision quality, we should remind ourselves of the information that we had available at the time the decision was made (hint: it was incomplete). We should also build (or revise) our story of causal connections to include events that played out after the decision.
Here, Krugman gives his interpretation of causes and effects that produced the inflationary conditions, which is critical to discuss, if we want to assess whether the decision accurately modeled the context, conditions, and domain.
At the time of the decision, with incomplete information, they were all guessing. With hindsight, we can build a better narrative of causality (with new facts) and answer the question: "If I knew then what I know now, would I make the same decision again?"
Revisit the risks and identified possibilities
"Stumbling into Goldilocks", Paul Krugman.
When decisions are made under conditions of uncertainty, risks will be identified and alternative possibilities will be considered. The risks (hopefully) get tracked and mitigated over time, but it's also worth reflecting back on them after the probability of the risk has dropped to insignificant levels. We can ask:
Was the risk valid?
Did we misunderstand something?
Did we learn anything from watching the risk evaporate?
What beliefs should be challenged?
In this example, there was a healthy dialog and debate amongst leading economists on the best course of action. Different possibilities were explored. When a choice was made and one action taken, the alternative possibilities were distilled into risks against the chosen path.
This highlights that when you make a difficult - but ultimately good - decision, it's important to remember that you might easily have chosen the other path. Revisiting the risks is one way to explore this "bizarro" alternative universe (as Chris Butler might say), or counterfactual, to revisit the beliefs you had at the time.
Conduct a retrospective analysis of forecasts
"Stumbling into Goldilocks", Paul Krugman.
When you get lucky, you achieve your outcome, but your forecasts were probably wrong in some ways. Consider holding retrospectives on your forecasts (it will be humbling).
When we publish forecasts, it's like making side bets, and we can use these to test our beliefs along the way.
In this example, the side bets exposed how the models (behind the rationale for the decision) did not hold up. Instead, they got lucky with a couple of offsetting factors, yielding the desired outcome, but in a different way than they had forecasted.
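One concrete way to run such a forecast retrospective is to score your published probability forecasts against what actually happened. The Brier score (mean squared error of probability forecasts) is one standard choice; note this is my suggested technique, not one named in the article, and the forecasts below are made-up examples. A minimal Python sketch:

```python
def brier_score(forecasts):
    """Mean squared error between probability forecasts and outcomes.

    forecasts: list of (probability, outcome) pairs, where outcome is
    1 if the event happened and 0 if it didn't. Lower is better;
    always guessing 0.5 scores 0.25.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Hypothetical side bets pulled from a year of decision memos
past_forecasts = [
    (0.9, 1),  # "90% we hit the target" -- we did
    (0.7, 0),  # "70% this risk materializes" -- it didn't
    (0.6, 1),
    (0.2, 0),
]
print(brier_score(past_forecasts))  # 0.175
```

Scoring a year of side bets this way makes the "it will be humbling" part quantitative: confident forecasts that missed dominate the score.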
Show humility: being happy to be proven wrong
"Stumbling into Goldilocks", Paul Krugman.
When you commit to seeking the truth, being wrong feels different (in a good way). When you find out that your model, beliefs, assumptions, or hypotheses were wrong, it's just a step toward being more right (overall). This takes a healthy chunk of humility. You still must work hard to build a strong conviction, but then you hold it loosely, which makes it easier to toss out when the evidence doesn't support it.
Here Krugman shows that humility, as he did throughout the debate - where he framed the two schools of thought as "Team Transitory" and "Team Persistent" (like a game or match), instead of a dogmatic "I'm right, so you're wrong" argument.
The whole commentary in the newsletter exhibits a kind of Bayesian thinking that we should aim to bring to more of our corporate decision architectures.
This kind of thinking requires us to:
Document our prior beliefs, probabilistically. So at the time of the decision, we acknowledge the uncertainty by saying something like, "Based on what I know right now, I think this belief or hypothesis is 60% likely to be true."
Think about the conditions that would surface (as evidence) if this possibility becomes reality. We ask ourselves, "If the belief does turn out to be true, then what's the probability that we'd see evidence for this condition?" We would capture our thoughts on this probability up-front as well.
Think about the current (up-front) probability of the evidence appearing. Is it a long shot? A coin toss? Let's capture this as well.
Challenge our beliefs when the evidence appears. Know when we've got enough to re-assess our initial beliefs (i.e. the probability in #1). When a long shot hits (i.e. the odds in #3 were low), and we had previously said that this evidence was likely to be present if our belief is true (i.e. the odds in #2), then Bayes' Theorem tells us it's appropriate to increase our probability that the belief is true. After all, the long shot hit, and we had said that it would be good evidence if it did!
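The four steps above can be sketched numerically. Here is a minimal Python sketch of the update in step 4; the 60% prior echoes step 1, but the two likelihoods are purely illustrative assumptions, not numbers from Krugman's article:

```python
# Step 1: prior belief, stated probabilistically at decision time
prior = 0.60                 # P(belief is true)

# Step 2: how likely is this evidence if the belief IS true?
p_evidence_if_true = 0.70    # assumed for illustration

# How likely is the same evidence if the belief is NOT true?
p_evidence_if_false = 0.10   # assumed for illustration

# Step 3: up-front probability of seeing the evidence at all
# ("is it a long shot? a coin toss?")
p_evidence = (p_evidence_if_true * prior
              + p_evidence_if_false * (1 - prior))

# Step 4: the evidence appears, so update via Bayes' Theorem
posterior = p_evidence_if_true * prior / p_evidence

print(f"P(evidence) = {p_evidence:.2f}")  # 0.46 -- fairly unlikely up front
print(f"posterior   = {posterior:.2f}")   # 0.91 -- belief strengthened
```

Because the evidence was much more likely under the belief than without it, watching it appear moves us from 60% to roughly 91% confidence - exactly the "increase our probability" move described in step 4.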
In Krugman's update, he doesn't go so far as to cast probabilities like this, but he does:
Identify the new evidence (i.e. lower inflation, solid growth)
Re-surface the initial beliefs (and their risks) at the time of the decision
Revisit the causal models that were in play at the time of the decision
Enhance causal models for what has transpired since the decision
In your organization, what would it look like if you did a retrospective on the most significant decision you've made over the last 12 months?
Ask yourself, could you:
Compare actual outcomes to the desired outcomes from last year?
Re-surface the beliefs behind that decision, and the risks that emerged?
Revisit the causal arguments made in the rationale for the choice made?
Improve the models for how your actions can drive results, based on what you've learned in the last 12 months?
If you can, then I'm eager to read your newsletter.