🔮 Autonomous agents in decision making

Exploring the role of human and machine

Good morning!

At the Uncertainty Project, we explore models and techniques for managing uncertainty, decision making, and strategy. Every week we package up our learnings and share them with the 2,000+ leaders like you who read this newsletter!

In case you missed it, last week we talked about 'Getting Started, Navigating Uncertainty'.

Autonomous agents in decision making

Based on today's hype, you'd think all decision making will be taken over by autonomous agents in the coming weeks, perhaps even restarting the discussions about sentience and self-awareness.

Let's take a step back and a deep breath. There are lots of interesting and fantastic innovations taking place. However, there is still plenty of work required to productionize them.

I'm particularly interested in the intersection of decision making, organizational dynamics, and autonomous agents. Over the last year, I've been doing my thinking through my work, side projects like the employee manual of the future, various posts, and talks.

What is the reality of having autonomous agents as part of our decision making processes? Where would they help? Where would they make things worse?

Worst of both worlds

What makes automation bad for people? First, we should point to important prior work by Data & Society, the Distributed AI Research Institute, and many others on the risks of automating decision making. Second, we can point to recent tests like the one showing that OpenAI GPT sorts resume names with racial bias. Automation bias brings significant risk.

When people can't disagree, escalate, or opt out, we create systems that put them in horrible situations. We shouldn't do that.

A recent paper that really struck me was AI, Algorithms, and Awful Humans. It makes the case that we need to ensure humans and machines each handle the parts of decision making they do best. Instead, we get systems that try to take over too much from humans and end up in dire straits. It is a great paper, so I'd recommend reading the whole thing.

What we don't want is systems that remove the parts of decision making that humans do really well:

  • Using emotion, intuition, and tacit knowledge that is hard to quantify.

  • Making exceptions when we are trying to be empathetic or compassionate.

  • Revising or adding to the criteria we need to "satisfice" rather than assuming we get them perfect at first.

  • Only putting in enough effort to get something done - we are good at being lazy and great at "half-assing" things in just the right way.

Today's systems are good at many things, and we should let them do those things:

  • Fitting a function to the data we have collected to infer things we can't write explicit rules for (a minimal sketch follows this list).

  • Taking huge data sets and sifting through them based on known criteria.

  • Encoding biases that humans have into systems - for good and bad.

  • Generating content that may or may not be right but could be an extension of what we might have considered - this includes provocations and just enough confusion.
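Here is the minimal sketch of that first point - the difference between writing a rule by hand and fitting a function to collected data. The feature names, numbers, and the use of scikit-learn are illustrative assumptions only:

```python
# Hypothetical example: predicting whether a request needs a formal decision
# process. The feature names and data below are invented for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [cost_in_thousands, teams_affected, reversible (1 = yes)]
past_requests = [
    [0.5, 1, 1],
    [20,  4, 0],
    [1.5, 1, 1],
    [90,  6, 0],
    [0.3, 2, 1],
    [45,  3, 0],
]
needed_formal_process = [0, 1, 0, 1, 0, 1]  # what we observed after the fact

# An explicit rule someone might write by hand:
def rule_based(cost_k, teams, reversible):
    return cost_k > 10 and not reversible

# A function fit to the collected data instead of written by hand:
model = LogisticRegression().fit(past_requests, needed_formal_process)

new_request = [30, 2, 0]
print("hand-written rule says:", rule_based(*new_request))
print("fitted model says:", bool(model.predict([new_request])[0]))
```

The fitted function can pick up patterns we never articulated as rules, which is exactly what makes it both useful and risky.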

There are overlaps but not many. There are clear differences in how we should utilize people and automations in decision making processes.

Cognitive artifacts

Don Norman started talking about "cognitive artifacts" that help humans in their work back in 1991. Most things we do with technology end up mediating the world to us through these cognitive artifacts.

Then David Krakauer took this further to distinguish between complementary and competitive cognitive artifacts (quoted from Competitive Cognitive Artifacts and the Demise of Humanity: A Philosophical Analysis):

  • Complementary Cognitive Artifacts: These are artifacts that complement human intelligence in such a way that their use amplifies and improves our ability to perform cognitive tasks and once the user has mastered the physical artifact they can use a virtual/mental equivalent to perform the same cognitive task at a similar level of skill, e.g. an abacus.

  • Competitive Cognitive Artifacts: These are artifacts that amplify and improve our abilities to perform cognitive tasks when we have use of the artifact but when we take away the artifact we are no better (and possibly worse) at performing the cognitive task than we were before.

While Krakauer has said he is not making a normative judgment and is simply trying to point out that there is a difference, I feel the terminology pulls us toward concluding that a tool that is "competitive" with humans is probably bad...

Yes, there will be tools that take over a certain capability, and we will no longer be good at it. But if that allows for higher-level abstractions that lead to even better outcomes for people, isn't that a good thing? Isn't specialization and automation "good" if it helps people?

This gets even murkier with autonomous agents. At a certain point they are no longer just an extension of the person (like a hammer becomes) but potentially agents that will go off on their own to take actions on that person's behalf.

How is this different from when we have a team of people and delegate particular work to them? Even though the manager of the group can't do the work being delegated, is that bad?

The key is that we should be wary of handing off aspects that make us lose touch with the important parts of decision making, and we should automate (or delegate) that which extends what we are capable of. Simon Wardley recently wrote about how critical decision making is something that doesn't change when we use these new systems.

This ends up being a spectrum between a tool that is an extension of our bodies (or a mediator) and another agent. But it doesn't change the need for decision making.

The reality of automation is that we will utilize tools across the spectrum. 

Where are these systems best?

In a recent talk for ProductCamp Chicago I took a quick look at where in our meta-decision making process we might include AI, ML, generative AI, autonomous agents, and other automations. 

I've started to think through the ways we might task humans and automation for each of these steps (a rough code sketch of the Identification split follows the breakdown below):

Identification
  • Goal: Minimize the total number of times you get into the decision loop.
  • Human's job: Decide whether the "blast radius" of a decision is larger than their own team, and whether there is an intuition that the cost of the process is worth it.
  • Automation's job: Using past criteria observed to benefit from a more formal decision process, kick one off automatically.

Discourse
  • Goal: Maximize the number of viewpoints and options.
  • Human's job: Ask a lot of questions about the potential impact, outcomes, options, etc.
  • Automation's job: Collecting viewpoints and simulating them; simulating option outcomes.

Decision
  • Goal: Minimize the number of people and agents involved.
  • Human's job: Make the decision. Maybe consider whether it should be automated.
  • Automation's job: Applying decisions to new cases that are the same, with known characteristics.

Communication
  • Goal: Maximize the ability for people to contextualize; minimize the number of people who get spammed.
  • Human's job: Crafting a message that conveys the sentiment of the decision accurately while leaving enough ambiguity for people to contextualize it.
  • Automation's job: Disseminating and contextualizing information.

Feedback
  • Goal: Maximize learning about the process; minimize concern about the outcome.
  • Human's job: Build more tacit knowledge of the process and handle escalations.
  • Automation's job: Tripwires and theme collection across decisions.
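Here is the rough sketch of how the Identification split might look in practice. The criteria, threshold values, and field names are assumptions for illustration, not a prescription:

```python
# Hypothetical sketch of the Identification step. Automation kicks off a formal
# decision process when past criteria suggest it would help; the human still
# judges "blast radius" and whether the cost of the process feels worth it.
# All criteria and values here are invented for illustration.

KICKOFF_CRITERIA = {
    "crosses team boundaries": lambda d: d["teams_affected"] > 1,
    "hard to reverse":         lambda d: not d["reversible"],
    "large budget":            lambda d: d["cost_in_thousands"] > 25,
}

def automation_should_kickoff(decision: dict) -> bool:
    """Automation's job: compare the decision against criteria that were
    observed in the past to benefit from a more formal process."""
    return any(check(decision) for check in KICKOFF_CRITERIA.values())

def human_should_kickoff(decision: dict, gut_feel_worth_it: bool) -> bool:
    """Human's job: is the blast radius bigger than the team, and does
    intuition say the cost of running the process is worth it?"""
    return decision["teams_affected"] > 1 and gut_feel_worth_it

decision = {"teams_affected": 3, "reversible": False, "cost_in_thousands": 40}

if automation_should_kickoff(decision) or human_should_kickoff(decision, True):
    print("Kick off the formal decision process (next step: Discourse).")
```

Note that neither check replaces the other: the automation widens the net based on what has been observed before, while the human still owns the judgment call.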

A key question is whether a given decision is worthwhile and "good" to automate. In general, if there are edge or special cases that would be severely impacted, then it isn't a great idea.

I think Roger Martin said this well in a recent piece about LLM/AI use in organizations:

If you want to discover the median, mean, or mode, then lean aggressively into LLM/AI. For example, use LLM/AI if you want to determine average consumer sentiment, create a median speech, or write a modal paper. If you are looking for the hidden median, mean, or mode, LLM/AI is your ticket - and that will be the case plenty of the time. Being an LLM/AI denier is not an option.

He continues about special cases:

However, if your aspiration is a solution in the tail of the distribution, stay away. This is the enduring problem for autonomous vehicles. Tail events (like the bicyclist who swerves in front of you to avoid being sideswiped by another car) are critical to safety - and still aren't consistently handled by automotive AI.

If we are aware of when we should split the job between humans and automation, we can make better decisions in setting up our systems. If we aren't, we will create systems that hurt people and make worse decisions overall.
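One way to act on that awareness - purely a sketch, and assuming the automation can report some kind of confidence score - is to let it decide only in the middle of the distribution and queue anything that looks like a tail case for a human:

```python
# Hypothetical guard: automation handles typical cases, humans handle the tail.
# The threshold and the idea of a single confidence score are assumptions.

TAIL_THRESHOLD = 0.9  # below this confidence, treat the case as a tail event

def decide(case, confidence: float, automated_decision, human_queue: list):
    """Apply the automated decision only when the case looks typical;
    otherwise queue it for a person and make no decision yet."""
    if confidence >= TAIL_THRESHOLD:
        return automated_decision   # median / mean / mode territory
    human_queue.append(case)        # tail of the distribution
    return None

queue = []
print(decide("routine refund", 0.97, "approve", queue))          # approve
print(decide("unusual refund request", 0.62, "approve", queue))  # None
print(queue)                                                      # tail cases
```

The point is not the particular threshold but that the hand-off to a human is designed in from the start, rather than bolted on after something goes wrong.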

Going forward

We should not forget that a wise person at IBM put this statement in a manual back in 1979: "A computer can never be held accountable. Therefore a computer must never make a management decision."

We, as humans, need to continue to hone the skill of asking good questions and the critical thinking that leads to decisions. From there we can start to leverage automation that minimizes the errors of human decision making while maximizing the human-ness of those decisions.

How was this week's post?

We'd love to know what you think!


Join us this week for the Decision Architecture discussion series!

We have 3 more live sessions to cover topics around Decision Architecture. It's free and exclusive to the Uncertainty Project (just to make sure we can facilitate and manage an actual discussion). Join us this week if you're interested!
