šŸ”® Rationality and biases in complexity and uncertainty

Exploring ideas at the crossroads between the rationality discourse, notions of complexity, and radical uncertainty

At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 1,500+ leaders like you who read this newsletter!

In case you missed it, last week we talked about bringing an economic view to decision architectures.

This week we have a guest post from Marco Valente. Based in Malmö, Sweden, Marco is a partner and member of the executive team at Cultivating Leadership, where he provides teams with formats to improve their capacity to make sense of, and act skillfully on, complex challenges.

His work is informed by complexity theories, a decade of experience in facilitation, and over five years as a university lecturer, having taught sustainability science and leadership to over 300 master's students.

Rationality and biases in complexity and uncertainty

In this post, we will explore some ideas at the crossroads between the rationality discourse and notions of complexity and radical uncertainty.

An exploration of this kind is relevant because we are surrounded by a narrative about how biased and irrational our minds are, and as that narrative has entered the boardroom we need to make sense of what it could mean for our ways of deciding and thinking, especially in the face of the inevitable complexity and uncertainty we now clearly see. Not only is the world hopelessly complex; on top of that, we are hopelessly irrational, too! What chances do we have to make sound decisions? My stance is not that pessimistic, and this post will explore six concepts at the intersection between rationality, biases, and complexity.

Claims of competence and incompetenceĀ 

What I don't know. I am not a psychologist and have not carried out original experimental research myself. I may be making simplified arguments and quick assumptions of the kind "has no one thought about X?" when I have not read the book where someone thinks about X. I would appreciate corrections to my claims and further references.

What I do know. I have read a dozen books on the subject of decision making and biases, and I work with complexity a lot, helping people and teams navigate complexity and causal opacity, a subject matter where I do have some knowledge.

My hope is that these six ideas can shed light on topics that need further exploration. Does the notion that we are hopelessly biased even make sense in complexity? Let's jump right in.

First idea.

Causality in complexity: what does it even mean to be biased?

In a complex world, the notion of causality is radically different from that of a simpler world. If you lived in a laboratory, under controlled conditions, exploring events for which the link between cause and effect is known or knowable, you could clearly ascertain causes by observing effects and even predict effects from known causes. But complexity shows us a different world, one which is un-ordered, as Snowden explains cogently with the Cynefin framework. In the words of Karl Popper, it is a world of propensities: patterns that tend to happen given certain constraints and environmental contingencies, but whose occurrence we cannot reliably predict. Against this backdrop, what does it even mean to say that someone is biased in a decision or an assessment about a specific future outcome, when two equally rational agents could be wrong or right about a certain hypothesis due to luck and chance, and not the skill of their judgment? When can we reliably tell that someone is biased (vs. rational) if they are making conjectures about an unknown future? There will be times when people can hold on to a certain "reference class" to gauge the probability of an event. But in some situations we don't even have those lifeboats, and we navigate completely unknown territory. Which brings us naturally to our next point.
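To make the luck-versus-judgment point concrete, here is a minimal simulation (all numbers are made up for illustration): many agents make the same well-calibrated forecast, yet over a handful of one-off events their track records diverge widely through chance alone.

```python
import random

random.seed(42)

TRUE_PROB = 0.6    # assumed propensity of the event (illustrative)
N_FORECASTS = 10   # each agent judges only a handful of one-off situations
N_AGENTS = 1000    # equally rational agents with identical judgment

# Every agent assigns the "right" 60% probability; only the outcomes differ.
hit_rates = []
for _ in range(N_AGENTS):
    hits = sum(random.random() < TRUE_PROB for _ in range(N_FORECASTS))
    hit_rates.append(hits / N_FORECASTS)

print(f"best track record:  {max(hit_rates):.0%}")   # looks like a sage
print(f"worst track record: {min(hit_rates):.0%}")   # looks like a fool
# Identical judgment, divergent outcomes: the track record alone cannot
# tell us who was "biased" and who was merely unlucky.
```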

References: Kay and King, Radical Uncertainty. Snowden's Cynefin framework. Popper, A World of Propensities. Alicia Juarrero, Dynamics in Action. Ann Pendleton-Jullian, Design Unbound.

Second idea.

The all-seeing eye: is there an all-knowing scientist who holds the answers?

You have heard theĀ bat and ball storyĀ a dozen times. Kahneman recounts the experiment of asking participants how much each item costs, and people consistently give a quick but often wrong answer. Given enough time to do the math, people can easily see their mistake. Contrast that with the following episode. On 28 February 2020 one of the proponents of the biases school, Cass Sunstein, wroteĀ an editorialĀ explaining to the rest of us how irrational it was for people in the US to be fearful of the new corona virus, given that at that time the number of known cases was abysmally small. These two examples are fundamentally different. In the bat-and-ball case you can imagine an all-knowing experimenter who holds the right answer, whereas in the corona virus prediction the columnist made a prediction about a future outcome that was unknown to everybody in the US, himself included, that turned out to be hopelessly flawed (we would have been better off worrying more -not less- about the corona virus). These events are radically different in their nature. 1) There are events where an answer is knowable and known (often to the researchers who are assessing how biased the participants of a study are); 2) There are events that are unknown to all, researchers included, partly due to conditions of high complexity explained above. When I read books about irrationality and cognitive biases, often the authors conflate the two types of events as if they could be treated the same way. Except for a few authors (Kay & King), the rationality debate runs the risks of treating all events as if there were always an ā€œall-seeing-eyeā€ that knew better, and from that watchtower was thus able to judge the irrationality of us biased mortals. The case of rationality researchers who dismissed some people as "irrational" because they "panicked" for being scared of the corona virus in February 2020 is a sobering reminder. If there is no such thing as a known-in-advance answer available to some, it is more helpful to see us all as navigating the same causal fog. The next question is, at what cost?Ā 
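For completeness, the arithmetic behind the puzzle: writing $b$ for the price of the ball in dollars,

$$b + (b + 1.00) = 1.10 \quad\Rightarrow\quad 2b = 0.10 \quad\Rightarrow\quad b = 0.05,$$

so the ball costs five cents; the intuitive answer of ten cents would bring the total to $1.20. This is exactly the kind of problem where an all-knowing experimenter can mark an answer as wrong.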

References: Kay and King, in their book Radical Uncertainty, make that useful distinction. Snowden's Cynefin framework is again super helpful. Prof. Gigerenzer has also written about heuristics that have been shaped by evolution to help us navigate a necessarily unknowable world, and he is not as pessimistic about considering us all hopelessly biased. I recommend you read his articles and books.

Third idea.

Material consequences of being wrong are not the same (Taleb's Fat Tony problem).

Imagine we are on a somewhat equal playing field where no one has the answer. Uncertainty comes as a companion to our less-than-perfect knowledge of the world, and risk is closely connected to uncertainty. There are domains in which the uncertainty that comes with our imperfect knowledge of the world is immaterial. I predicted a green light at the next junction, but it's red now. Does it matter? Often it doesn't: I will arrive home sixty seconds later. In other situations, our inaccurate predictions and our biased thinking can cost a lot. Taleb talks about the asymmetries we are exposed to, both positive ones (if we start many companies, do we increase our chances to hit the jackpot?) and negative ones (if we get rewarded $10,000 every time we play Russian roulette and survive, is it "rational" to play? And how often?). According to Taleb's pragmatic character, the investor Fat Tony, it does not matter to have the perfect picture of the world, because we can't have one anyhow. What matters more is to take the safest route: expose oneself to positive asymmetries and steer away from negative ones in the face of tail risks. Returning to the corona debate, people on all sides have been wrong, but to equate the wrong predictions of the Great Barrington Declaration advocates with those of the covid-zero advocates is simply bad faith. If I predict that herd immunity via infection will be reached swiftly and with minimal losses to society, and you predict that covid will kill three times as many people in the UK as it actually did, our predictions may be equally wrong, but staying on the safe side would have saved more lives, whereas believing an optimistic prediction in conditions of imperfect knowledge (and risk!) cost us a lot of suffering. Taleb and his character were right: everybody is biased; what ultimately matters even more are the consequences. And it was wise of Robert Louis Stevenson to remind us that "sooner or later, everybody sits down to a banquet of consequences."
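A minimal sketch of that negative asymmetry, using the figures from the thought experiment above (the numbers of rounds are my own illustrative assumptions): the one-shot payoff looks attractive, but the probability of surviving repeated play collapses.

```python
# Taleb-style negative asymmetry: $10,000 for each round of Russian
# roulette survived, with a one-in-six chance of ruin per round.
PAYOFF = 10_000
P_DEATH = 1 / 6

# The naive per-round expected payoff ignores the absorbing barrier of ruin.
print(f"naive per-round expected payoff: ${(1 - P_DEATH) * PAYOFF:,.0f}")

# Survival probability over repeated play shrinks geometrically.
for rounds in (1, 10, 50):
    p_survive = (1 - P_DEATH) ** rounds
    print(f"chance of surviving {rounds:>2} rounds: {p_survive:.2%}")
# 1 round: 83.33%; 10 rounds: 16.15%; 50 rounds: 0.01%. The "rational"
# one-shot bet becomes near-certain ruin when repeated.
```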

References: Taleb's Incerto is a great series of books that explores the consequences of our uncertainty.

Fourth idea.

Rationality? It depends on the level of analysis.

Something that is deemed irrational at one level of analysis can be rational, and even worth doing, at another level. Where do these considerations apply in the biases and rationality debate? Imagine I want to start a new tech business in Malmö, Sweden. Given the complexity and uncertainty we talked about, my chances are not clearly known. However, as Kahneman showed us over the years, we can use reference class forecasting to get a sense of how competitive the environment around me is. Say that from city statistics I learn that 80 out of 100 similar-enough businesses in Malmö fail within the first three years. (I made up the numbers, but entrepreneurs know how difficult it is.) Then why would a reasonable entrepreneur do such an irrational thing as starting a new company? The individual player has incentives in the positive asymmetries at play: if my business fails, I will never sleep under a bridge (thanks, Swedish social democracy), but if I win, I could hit it big. Now consider other levels of analysis: is it rational for a city to invest in entrepreneurship? At a bigger level, it benefits a city, a region, and an industrial ecosystem to support startups, for instance with business incubators, because many parallel attempts are all trying to succeed, and even if the individual can be seen as irrational in not weighing their chances accurately, the collective has much higher chances to innovate and better society. In conditions that require innovation, lateral thinking, and a lot of diversity of thought, the notion of reducing "noise" and zeroing in on reducing biases for everybody may not be that helpful, and may even be counterproductive. An investor could be irrational, an entrepreneur could be foolish, and a team may be creating a wacky prototype that does not hold promise, but many investors experimenting with multiple portfolios and many teams innovating in novel product areas scan the system with a wider array of attempts, and even if some individual attempts can be dismissed as "irrational", at a higher level it makes a lot of sense to let these experiments run (as long as we can learn from them).
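The arithmetic behind that difference in levels is simple; here is a minimal sketch using the made-up 80% failure rate from above: each individual attempt remains a long shot, while the chance that a portfolio of parallel attempts produces at least one success climbs quickly.

```python
# Individual vs. collective odds, using the made-up figure from above:
# 80 of 100 comparable startups fail, so one attempt succeeds ~20% of the time.
P_SUCCESS = 0.20

for n_attempts in (1, 5, 10, 30):
    p_at_least_one = 1 - (1 - P_SUCCESS) ** n_attempts
    print(f"{n_attempts:>2} parallel attempts -> "
          f"{p_at_least_one:.0%} chance of at least one success")
# A founder's single attempt stays a 20% long shot, yet a city hosting
# thirty such attempts sees at least one success almost surely (~99.9%).
```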

References: To be fair, Kahneman's Thinking, Fast and Slow recounts an example in which people faced different incentives: while each manager was afraid of taking a risk, the CEO wanted all of them to try, as it would have been beneficial for the organization as a whole. I try to think of larger levels through the lens of complexity theories (cities, regions, ecosystems, etc.). Under which circumstances does the notion of reducing noise do more harm than good?

Fifth idea.

What are the boundary conditions of System 1 and System 2?

If we had better information or more time to deliberate, our biases should go away. That, at least, is a tenet of the System 1 and System 2 model. Kahneman taught us a great deal about our biased minds. For simple problems it may well be that people can easily spot the error in their reasoning and correct their view accordingly. For more complex matters, especially strongly held opinions in which people invest a lot of their identity, it seems unclear to me that a bit more time and analysis will lead people to understand how erroneous their beliefs are. If that were the case, it would be hard to explain how people who create elaborate conspiracy theories spend hours connecting dots out of thin air, drawing unicorns out of stars that are not even remotely aligned. There are at least two reasons why the notion of System 2 does not hold up so well in situations of complexity. For one thing, often there is no such thing as a definite, final answer, as complex systems lend themselves to multiple and at times equally coherent interpretations, unless we can subject those interpretations to some sort of severe test. But in complexity, as Max Boisot said and as Snowden and Klein clearly explained, sensemaking is not merely about connecting the dots or figuring out the riddle: there are so many dots that one can conjure almost any idea, no matter how implausible. The second reason is that we may invest a lot of our identity in certain "biases", and we will hold on to those beliefs much more strongly than to a simple math mistake we admit we made. For instance, research on our "tribal" and "political" minds seems to suggest that people with a high level of education, and with a lot of free time to investigate facts accurately, do not necessarily come to less biased conclusions about the world; there is robust evidence showing just the contrary. We may get trapped by simple stories in complexity through motivated reasoning or through wanting to protect our sense of identity, in spite of all the time and counter-evidence available to us.

My research question: under what conditions can we easily, and without substantial cost, make our biases go away? In situations where our identity is at stake, my guess is that we dedicate more time and deliberate reasoning to creating confabulated explanations, not less.

References: Kahneman's Thinking, Fast and Slow holds the view of System 1 and System 2. Snowden and Thagard speak about the notion of coherence in complexity, in Snowden's blog posts (search for coherence, Thagard, or sense-making) and in Thagard's work on coherence here. There is literature on motivated reasoning which provides evidence against the notion that System 2 reliably corrects our biases. Jennifer Garvey Berger's book Mindtraps explains many ways in which we can be inclined to create simple stories in complexity to protect our sense of self.

Sixth idea.

It's not only what the irrational belief is, but what the irrational belief does.

The heuristics and biases school of thought brought forward by prominent researchers such as Kahneman, Ariely, and Pinker seems rooted in a worldview in which epistemic rationality contributes to our wellbeing. This "traditionalist view", as Prof. Bortolotti calls it, holds that we cannot be happy and well-functioning if we hold on to incorrect beliefs about the world. We said that not all biases are born alike in terms of the material consequences of holding an incorrect or irrational belief. Furthermore, some biases can shape action in ways that can even be beneficial. Take, for instance, optimistic biases about our health, our romantic relationships, and our chances of succeeding at something. There is empirical evidence that such irrational beliefs not only hold some psychological and epistemic benefits, but can also contribute to our motivation and, under certain conditions, fuel a self-fulfilling prophecy. While we can still hold the view that these beliefs are clearly false or inaccurate (and in some conditions we can judge them as such), Prof. Bortolotti convincingly argues that there are boundary conditions in which optimistically biased beliefs shape our self-esteem, our agency, and our actions in a way that creates future behavior, so much so that we can even close the gap between our incorrect assessment and that future reality. For instance, a person may be overconfident about his prospects of finding a job for which he is underqualified. Research suggests that in some conditions this overconfidence can shape his motivation to such an extent that his pursuit of the job becomes resistant to setbacks, even to the point that his goal becomes objectively more likely over time. Audaces fortuna iuvat: fortune favors the bold, even when our initial "audacity" would be deemed objectively irrational by some.
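A toy model of that gap-closing dynamic (every number here is my own illustrative assumption, not a claim from Bortolotti's research): an inflated belief sustains effort, and sustained effort slowly raises the objective odds.

```python
# Toy model of self-fulfilling optimism. All parameters are illustrative
# assumptions chosen to show the feedback loop, not empirical estimates.
belief = 0.6        # overconfident estimate of landing the job
objective = 0.2     # actual initial chance, given the qualification gap
EFFORT_GAIN = 0.05  # how much a round of belief-driven effort lifts the odds

for month in range(1, 7):
    effort = belief  # motivation (and thus effort) tracks the inflated belief
    objective = min(1.0, objective + EFFORT_GAIN * effort)
    print(f"month {month}: objective chance = {objective:.2f}")
# The "irrational" 0.6 belief ends up less wrong than it started: the
# objective chance climbs from 0.20 toward it, month after month.
```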

References: Bortolotti holds nuanced and very rich views on this, and does not claim that unwarranted optimism is always a good thing. I recommend her great little book The Epistemic Innocence of Irrational Beliefs, especially chapter 6.

This post has explored six simple notions that problematize the ideas of rationality, biases, and irrationality in situations of complexity and uncertainty. I hope there is some added value in these questions, and that they can spark a much-needed conversation.

I would love to hear your ideas, references, and comments.


Join us for the upcoming Decision Architecture discussion series!

We've set up a series of 5 live sessions to cover topics around Decision Architecture. It's free and exclusive to Uncertainty Project subscribers, but we will have a limited number of spots (just to make sure we can facilitate/manage actual discussion). Sign up for the waitlist if you're interested!
