About our guest

Dr. Brian Klaas is Professor of Global Politics at University College London, an associate researcher at the University of Oxford, and a contributing writer for The Atlantic. He was recently named one of the 25 “Top Thinkers” globally by Prospect Magazine.

Brian is the author of five books, including Fluke: Chance, Chaos, and Why Everything We Do Matters (2024) and Corruptible: Who Gets Power and How It Changes Us (2021). Klaas writes the popular The Garden of Forking Paths Substack and created the award-winning Power Corrupts podcast, which has been downloaded over three million times.

In this episode of In Other Words, Brian speaks with host Jason Hemingway and Phrase CEO Georg Ell about how unpredictable forces shape and influence our world, and why traditional approaches to control and optimization may need rethinking in today's complex business environment. Drawing on stories from his book Fluke, Brian shares how seemingly small decisions alter history, why leaders should prioritize resilience over control, and how businesses can foster cultures of experimentation to thrive amid uncertainty.

Episode transcript

[00:00:00] Brian Klaas:  What you're basically saying with we control nothing but influence everything is that you are never going to be able to fully forecast or understand chaotic systems. I think social systems are, by their very nature, chaotic because 8 billion people are interacting, and the noise of one person's life becomes the signal of someone else's. It's these tiny decisions that somebody makes to send a text message at an inopportune time and then someone dies because they get hit by a car, or the ripple effects of our ancestors, all these sorts of things. They're constantly happening.

[00:00:33] Jason Hemingway:  Welcome to In Other Words, the podcast from Phrase, where we speak to business leaders shaping how organizations grow, adapt, and connect with their customers around the world. I'm your host, Jason Hemingway, CMO of Phrase, and I'm really excited about today's guest, because a few weeks ago we hosted Brian Klaas at our company all-hands meeting and his talk left a lasting impression on the team and me. He spoke about chaos, uncertainty, and how small events can shape massive outcomes. And it struck a chord across our team, so we've invited him to the podcast to explore these ideas further. Just to give you a little background on Brian: he's a political scientist and author, and someone who has spent years studying complex systems, randomness, and decision making. His latest book, Fluke, is all about how the world is shaped by unpredictable forces we often can't see, and how we can still navigate that world with both intention and, I guess, influence. So today, we're gonna get into some of the ideas behind Fluke and explore chaos theory, pattern detection, resilience, AI, and many other things. And perhaps why letting go of control might be one of the smartest things a leader can do. So, it should be interesting. We've also got Georg Ell, our CEO, who first met Brian at LocWorld, an event we went to earlier this year. So, Brian, Georg, welcome.

[00:01:51] Brian Klaas:  Thanks for having me on the show.

[00:01:52] Jason Hemingway:  That's an absolute pleasure, Brian. So let's get into it then. In your session at our recent company all-hands, you talked about Fluke. And for listeners who haven't read Fluke, can you start us off with the big idea and the central message behind the book, and what made you want to explore chaos and unpredictability in the first place? And I should point out before you answer that question that my background is political science as well. So, I'm not gonna be testing you on your knowledge here; hopefully, that's all common ground.

[00:02:23] Georg Ell:  I studied it too at university, for my sins, as did my wife, but she loves to remind me she got a better grade than I did. So...

[00:02:29] Jason Hemingway:  Oh, brilliant. Well, no, Brian. Let’s get over to you. We’re not gonna talk about political theory. Let’s talk about this big idea behind the book, Fluke.

[00:02:36] Brian Klaas:  Yeah. So I think, you know, the sort of assumption that most people have about how the world works is that big events have big causes. Right? That if there's a major thing that happens, it must have a major driver. And so the book is trying to debunk that by arguing that very often small changes can have really outsized effects. And it sort of draws on some of the previous work in this field, Nassim Nicholas Taleb, who wrote The Black Swan, and so on, and some of these ideas about how there are long tails to our behavior. And so what I'm effectively doing is taking the analogy of chaos theory, which argues that small changes can have really big effects over time, and trying to apply it to social systems. And that's the core message of the book.

[00:03:19] Jason Hemingway:  What drew you to that kind of thinking at the beginning?

[00:03:22] Brian Klaas:  Well, it was in political science, actually, because what I studied in my PhD was political violence. So I was analyzing how rigged elections provoke mass-scale political violence in terms of coups and civil wars, so, you know, large casualty events and so on. And these are really rare. They're really idiosyncratic, and sometimes coups fail or succeed on the smallest details. So just very briefly, one of the stories that I tell in the opening part of Fluke is from my research on coups. It was in Zambia, and I interviewed a coup plotter who basically tried to kidnap the army commander in the middle of the night. And the army commander ran outside and tried to climb up the wall of the compound outside his house. And the soldier I interviewed grabbed his trouser leg, and the guy slipped through his fingers because he just didn't grab quite quickly enough. And he ended up alerting the government to the coup plot, and it failed. And it's one of these cases where, you know, it's highly likely that if he had had a better grip, if he'd been a second earlier, etcetera, the Zambian government would have fallen. And so, you know, when I was studying that, I was like, how do I put this into a model? Because models have big drivers for big events. They assume that there's a linear relationship between things and so on. The nerdy way of saying it is that we don't have the tools to take chaos theory seriously in social change. So, I was trying to develop a way of thinking where you actually take the small details as important.

[00:04:49] Jason Hemingway: Fascinating. And Georg, to bring you in a little bit: you introduced Brian at the LocWorld event I mentioned earlier. What was it about that message that made you want to bring it to a wider conversation?

[00:05:03] Georg:  Well, I think it's the expansion of the same idea that Brian was just talking about into business as well. In business, we also look for narrative, storytelling, and evidence-based forecasting. We want big ideas that are gonna drive big changes. And yet, you know, we could mention any number of political events, from the Arab Spring to the fall of the Berlin Wall to the rise of ISIS to the collapse of the regime in Afghanistan, all of these things happened differently, or faster, or in unexpected ways relative to what all of the analysts in all of the world thought was going to happen. And I think that's something in business that we struggle with a little bit. And since in our world here at Phrase we deal with these incredibly complex systems, and, you know, the whole world is dealing with AI, and we're, of course, riding that wave as well, I thought it was really, really pertinent. I thought Brian had a very neat way of explaining something that is instinctively difficult to understand, and I love that model, that framework, which we'll explore today on the podcast as well.

[00:06:06] Jason Hemingway:  Let's get into it a little bit more then, Brian, into the chaos theory angle. You tell two stories in particular, I know you've just told one, but there are two stories that I think really bring it to life. One involves a decision during World War II. Can you walk us through what I'd call the story of Kyoto, the moment that everything, like you say, hinged on something small, as small as a vacation in this case? But I'll let you tell the story.

[00:06:33] Brian Klaas: Yeah. It's a story from 1926 in Kyoto, Japan, with an American official and his wife who went on a fact-finding mission/vacation to Japan. And they stopped in Kyoto. And in my research, I found the hotel they stayed at, the Miyako Hotel, this little hotel in Kyoto, and their ledger, you know, their signatures in the ledger, signing into the hotel. And you look at this and it's sort of, you know, who cares about this? But this signature, this little mark of their presence in Kyoto, really changed the world, because nineteen years after they stayed in Kyoto, where they developed this soft spot for the city, they fell in love with the culture, the temples, and the beauty of the place, that official, his name is Henry Stimson, ended up as the secretary of war in 1945. And so, he's the chief civilian deciding where to drop the new weapon, the atomic bomb. And they basically put together something called the target committee, which is mostly scientists and generals, and they draft different proposals of where to drop the bomb. And what was interesting is that Kyoto was agreed upon by everyone as the top choice. There are various reasons for this, but one of them was that it had an airplane factory, one was that it wasn't substantially damaged previously, so you could show the full scale of destruction, all these sorts of things, but they all agreed this was a good target. And Stimson, you know, gets this report and basically is horrified. And part of the reason he's horrified is because of the cultural heritage that he saw, the soft spot he had from this vacation and so on, and the firsthand knowledge of the city. So, he got two face-to-face meetings with President Truman in 1945 and eventually convinced Truman not to drop the bomb on Kyoto. And so this is the origin story of August 6th, 1945, and the first atomic bomb being dropped on Hiroshima. The target for that day was going to be Kyoto, and instead it was Hiroshima. And the second target, interestingly, was supposed to be a place called Kokura, but there was cloud cover that briefly obscured the bomber's sight of the target zone. And so they didn't drop the bomb there. They circled for a while, and then they eventually went to the secondary target, which is Nagasaki. So, you know, the reason this story spoke to me is because it's the long trajectory of seemingly insignificant events. I think it's highly unlikely that Kyoto would still exist in its current form, that it would have been spared the atomic bomb, had this couple not gone on vacation there in 1926. I mean, there's a million other things that would have had to happen too, if the Germans had gotten the bomb first or if the Battle of Midway hadn't happened, but the point is that there are a near infinite number of causes, and some of them are really pivotal on tiny details. That's why I opened the book with that story, because I think it just shows you that 200,000 people live or die in different cities based on a seemingly unimportant, forgotten detail from one couple's vacation history.

[00:09:33] Jason Hemingway: Yeah. And when you look at it at face value, it's a really difficult detail to uncover, isn't it, that correlation? And it really hits hard, that opening story. Georg, I remember it hitting hard when you first heard it. What went through your mind when Brian first shared it?

[00:09:52] Georg:  I think it's an emotionally resonant story. And, therefore, for the purposes of explaining the chaotic nature of these systems, I think it's a very effective story. And for me, even more than the holiday one, it's the cloud cover one, because that was just a physical natural phenomenon that was utterly outside of anybody's control; it just obscured the target. And that changed the trajectory of, like, hundreds of thousands of people's lives, and their descendants', in a way that they never even knew they'd been affected by, and that was pretty telling. The second story that Brian also tells is around weather prediction from that period of time. And I'd love to invite you, Brian, to tell that one, about how the data science approach really, is it fair to say, invented or uncovered chaos theory from the point of view of weather prediction?

[00:10:42] Brian Klaas: Yeah, it completely invented it. And interestingly, I didn't show this in the talk I gave, but, bizarrely, my grandfather on my mom's side was a weatherman based in London during part of World War II. He was doing early meteorological forecasting tied to D-Day and so on, and the meteorological forecasting that existed in World War II was pretty weak. I mean, you know, they would have to make a guess, and it was largely based on what's happening right now, or reports from people who are a little bit west of you and what's coming, and so on. The point is that this guy, Edward Norton Lorenz, who's the scientist who discovered chaos theory, was involved in forecasting weather patterns in the Pacific theater. So, and I looked really hard and couldn't find details on this, but it may have been the case that he was actually involved in the Kokura incident, where the bomb went to Nagasaki instead. It's certainly plausible, because he was a meteorologist at the time. Anyway, the point is that after the war, he thinks as a scientist, we can do better than this, and there were very early computers that had started to exist, very rudimentary ones, a thousand times weaker than even the most basic computer today, but they could do simple modeling. And so he put together this very basic computer model for the weather. It had 12 variables, the sort of stuff you'd expect: temperature, air pressure, wind speed, all that stuff. And he ran a series of simulations trying to figure out if they could get better at forecasting the weather. One day, he decided to rerun a simulation but didn't want to go all the way back to the beginning, because it would take a massive amount of time. So, he decided to plug in the variables from the halfway point. And the logic of this made complete sense. This would never have been objectionable to anyone, because if you have the exact same numbers in the exact same computer model, it should spit out the exact same results. And it didn't. It was radically different. The thing that was really bizarre was that, you know, two or three weeks into the future, you were getting really, really different weather patterns. You know, storms instead of clear skies and so on. And so he figured that he had inputted the data wrong, double checked it, triple checked it, turned out the data was all correct, and eventually had this eureka moment when he realized that the computer printout he was using for the data inputs was truncating the values after the third decimal place. So if a number was 1.23456789, it would just read on the computer printout as 1.234. And you would think this wouldn't matter. Right? But it turned out it really did. These rounding errors were the things that were changing weather patterns in dramatic ways, and this would have affected the lives of everyone who thought it was going to be a clear-sky day and instead it's a thunderstorm, and so on. And so this was the origin story of chaos theory. It's where the butterfly effect idea comes from, which is that a butterfly flapping its wings in one part of the world can cause a storm in another part. That's more of an analogy than reality, but it's a rough way of saying that small changes over time can have really big effects.
The relevance of this origin story for all the things we’re going to be talking about today as well is that there is actually a hard scientific limit on some forecasting. That’s the lesson of chaos theory, that unless you have genuinely perfect data, which is impossible, you will never be able to fully solve forecasting. And so, that’s the basic problem of chaos theory. It suggests to us that this thing that we keep on chasing, whether it’s with computer data, more rigorous models, now AI, actually has an unsolvable problem at its center and that we’re not going to basically invent our way out of that.
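
To make that truncation effect concrete, here is a minimal Python sketch; it is our illustration rather than anything from the book. It uses the textbook three-variable Lorenz-63 system instead of Lorenz's original 12-variable weather model, and the starting values are invented; the point is simply that two runs differing only beyond the third decimal place end up producing completely different weather.

```python
# A minimal sketch (illustrative, not from Fluke): rerun a chaotic model
# from inputs truncated after the third decimal place, as Lorenz's
# printout did, and watch the two "identical" runs diverge.
import math

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one step with 4th-order Runge-Kutta."""
    def deriv(s):
        x, y, z = s
        return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

    k1 = deriv(state)
    k2 = deriv(tuple(s + dt / 2 * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + dt / 2 * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

full = (1.23456789, 2.3456789, 20.12345678)               # "exact" restart values (made up)
trunc = tuple(math.trunc(v * 1000) / 1000 for v in full)  # what the printout kept

for step in range(2001):
    if step % 500 == 0:
        print(f"t = {step * 0.01:4.0f}   separation = {math.dist(full, trunc):12.6f}")
    full, trunc = lorenz_step(full), lorenz_step(trunc)
# The separation starts around 0.001 and grows exponentially until the two
# runs sit on entirely different parts of the attractor: storms instead of
# clear skies, from a sub-thousandth rounding difference.
```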

[00:14:22] Georg:  And that, by the way, we’ll return to. So listeners, keep that in mind.

[00:14:26] Jason Hemingway: Yeah. And I think it segues nicely into the next question I had. You use the phrase, in a very real sense, "you control nothing but influence everything." Can you explain how that fits into chaos theory and what you've just said, and what it means for how we live and lead?

[00:14:46] Brian Klaas:  Yeah, so, this quote riffs on an idea from a complex-systems scientist named Scott Page at the University of Michigan. And, you know, what you're basically saying with we control nothing but influence everything is that you are never going to be able to fully forecast or understand chaotic systems. And I think we live in a chaotic system. I think social systems are, by their very nature, chaotic because 8 billion people are interacting, and the noise of one person's life becomes the signal of someone else's. It's these tiny decisions that somebody makes to send a text message at an inopportune time and then someone dies because they get hit by a car, these sorts of things, or the ripple effects of our ancestors, all these sorts of things, they're constantly happening. And so, when you think about that, my interpretation is that everything we're told we're supposed to do in society is assert control, and it's what makes us feel most natural, that we feel like we have control. And I think there's a mentality shift that's beneficial for good decision making, which understands that there's a limit. There are some things we can control. They're in short time periods, in very controlled circumstances; I mean, we can understand what a chemical reaction is going to do in very controlled lab conditions, etcetera. But most of life is not like that, and so if you switch your mentality to thinking about influence, it's not just more scientifically accurate. It's also, I think, A, a more uplifting way of thinking about the world, and, B, a more effective way of strategizing. One of the things that is really difficult for people in the 21st century is that they keep getting walloped by this lack of control. It's really depressing, and you keep searching for something that's never there. And the corollary of that is that a lot of people feel really interchangeable in modern work. You have a lot of people who think, okay, if I don't show up to work, somebody else would do the exact same thing. And the point of the control nothing, influence everything mentality is that every little detail matters. Right? The vacation history of a government official 19 years later radically changes the lives of 200,000 people plus their descendants. All sorts of things like this. I mean, it's not usually so dramatic, hopefully, and so tragic, but I think this is the antidote to the sense of meaninglessness that a lot of people struggle with in professional lives and personal lives. And it's also, I think, a good way of thinking about how each decision actually is going to have ramifications, so there's no throwaway choice. It twins two things very usefully: empowerment and good strategizing. It's rare that a theory both makes us feel good and makes us smarter. So, that's the power of it to me.

[00:17:16] Jason Hemingway: I would imagine that for a leader, that's quite a tough mindset shift. It's a change for sure. So, Georg, what stands out to you when you think about that idea?

[00:17:25] Georg:  Well, it's interesting. I was gonna use the exact same word, empowerment, and I like uplifting, because of this idea that what you do as an individual can really matter, that all the things you do matter. Whether you send that follow-up message, whether you smile at someone as you pass them in the corridor, whether you thank someone, whether you take the time to give someone feedback that is uncomfortable to give, or you just send someone a note to say I'm thinking of you. I'm talking about business, but I suppose it applies in personal life too. All of these things have ripple effects that we can't completely control, but they can have an incredible impact on how we're perceived and whether we get a callback and all that kind of stuff. I think it's actually an incredibly empowering message that all of us matter. We talk a lot at Phrase about how leadership is a behavior, not a job title, and I think that's twinned with this idea. If you just act in a proactive leadership manner, regardless of your role, the whole company can be really uplifted by that. So, I do think it's very empowering. I think what's hard is when you try to take it back to a spreadsheet, when you try to abstract people and roles and conversion rates in a spreadsheet; you're trying to create models of a complex system. And then we all know that they're a bit wrong, but we don't like that fact. And what I find empowering about chaos theory is that it's academic but also rooted in physics, inescapable, and therefore, you know, you don't have to try and conquer it. The reality is that you can never perfectly know the outcome, because you can never know all the variables in a complex system to all the decimal points.

[00:19:10] Jason Hemingway: I think you're right. I think it is empowering. And the opposite of that is when you try to over-detect patterns, which is what we wanna get into next. You can wind yourself up in knots trying to detect patterns. And you speak a lot about this, Brian, how humans over-detect patterns to our detriment. We see that from a marketing point of view; we're always looking for patterns in the data, there's the whole debate over attribution and all those kinds of things in customer experience. And as Georg says, in leadership, you see it everywhere. So why is that the case, Brian? What problems does it create when you're trying to decide things in those complex environments?

[00:19:47] Brian Klaas:  The basic idea here is that our brains evolved to over-detect patterns because it used to help us survive. So, think about prehistoric humans, which is when almost all of the brain's evolution happened. I mean, we are this tiny little slice at the end of the most recent chapter of the human story, but 99.9% of the species' existence was in hunter-gatherer tribes, basically. Right? Small groups of hunters and gatherers, and that's how our brains were formed. And in that sort of environment, the example I like to give is that if you were to hear some rustling in the grass, you could either presume that it's nothing or that it's a predator. If you presume that it's nothing and it turns out to be a predator, you'll die. But if you presume that it's a predator and it turns out to be nothing and you run away, you'll just waste a few calories; the cost is low. So under-detecting patterns might have had catastrophic consequences, whereas over-detecting patterns would often be slightly costly but mostly harmless. This is where pattern detection was, over time, really honed into our cognition. And I think the problem is there's something called an evolutionary mismatch, which exists today: in the past, the kinds of problems people were trying to solve were very local, they were very simple, and the cause and effect was really obvious. If there's rustling in the grass, okay, there's maybe a predator you have to run away from. Now it's like, how do you avoid risk? Okay. Well, there are 8 billion people, there are 192-plus countries in the world. There are all these different leaders. Some of them wake up in a bad mood. They do things that might create serious problems for the planet, for the world, etcetera. Economies crash for unknown reasons. You have stock market problems. Try to make a simple story about anything that happens in the modern world, and you end up underplaying the complexity. And we still make simple stories; it's the way our brains have evolved, so the most effective communicators reduce that complexity down to straightforward cause and effect, because it's how our brains work. And I have a footnote in Fluke where I'm like, look, I understand that I am doing this. I am giving you a simple version of stories because it's the only way I can convince you this is true, because your brain and my brain are both the same. We both have this evolution. So, you sort of have to do that. What was particularly funny to me, since you were just talking about marketing, Jason, was when I was in a marketing meeting for the book, and it was like, how are we gonna make this take off? And I was like, look, the lesson of the book is, I don't know. Why did Sapiens sell 40 million copies? Who knows? This guy was previously unheard of, and all of a sudden it's in every airport bookshop in the world. The guy's richer than you can imagine, and he's like this intellectual guru. I'm not knocking the book. I'm just saying there's not a good reason that in this period the world is crying out for this exact book. There's a million factors, tipping points, you know. So I think that's the issue: what you can do is overlearn past successes. One of the best examples actually comes from my previous book, Corruptible, where I was talking about decision making and leadership.
It's from the Challenger explosion. Before it happened, the same O-ring that eventually caused the explosion was in place, and all of the audits of decision making were just about the outcome. Right? They're like, everything was perfect. All of the Challenger launches were fine. They were all good. What was actually the case is that there were whistleblowers saying, look, I think this is a problem, and if it gets too cold, this is gonna blow up. And if you look at the actual decision making, it's really poor. But because the Challenger didn't blow up, the lesson they learned is that NASA is really good at managing risk. And so this is where I say the problem with over-detecting patterns is that you end up focusing on outcomes rather than process. If a product takes off or a book sells well or whatever it is, you just assume that it must have been done perfectly. You know, there are cases where really bad marketing campaigns produce best sellers, and there are also cases where really flawless execution of a business idea flops. That's why I say you audit the decision making. You don't necessarily audit the outcome every time.
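
Brian's rustling-grass logic can be written out as a short expected-cost comparison. The numbers below are invented purely for illustration; the asymmetry between the two costs is what does the work.

```python
# A toy expected-cost calculation (made-up numbers) for the
# rustling-grass example: over-detecting patterns is cheap,
# under-detecting them is occasionally fatal.
p_predator = 0.001                # rustling is actually a predator 0.1% of the time
cost_false_alarm = 1              # a few wasted calories
cost_missed_predator = 1_000_000  # effectively fatal

always_flee = cost_false_alarm * (1 - p_predator)  # pay a small cost almost every time
never_flee = cost_missed_predator * p_predator     # rarely pay, but catastrophically

print(f"always flee: {always_flee:.3f}   never flee: {never_flee:.0f}")
# always flee: 0.999, never flee: 1000. Fleeing at every rustle is about a
# thousand times cheaper in expectation, so brains that over-detect win out.
```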

[00:24:11] Jason Hemingway:  That is very interesting. Georg, what do you think about that? Do you ever catch yourself making those mistakes, explaining something with too little data, or inferring something that's a bit more complicated than it appears at first glance?

[00:24:29] Georg: I think it's one of the distinguishing features of people at different levels of business maturity. We all have this natural, inbuilt desire to tell a story about a situation and extrapolate out from it. But if you're a really mature business thinker, you can take an experience you've just had, a call with a customer, for example, and say, what's the implication of this for the bigger picture? What should I be learning from this? What should I be noting simply as a data point? Right? A really mature businessperson can say, that's a data point. What I need to do is find out if there are others, and then, can I draw some patterns? Or what's the implication? In the less mature businessperson, you'll see a little bit more of that instinctive desire to say, well, this is the truth, and this must now apply everywhere. Being able to control that natural tension is important because, as Brian said, it is nevertheless important that we are very effective storytellers, so we can do that when we're trying to explain our value proposition, for example. We have to be able to tell those stories in a way that people can consume, whilst critically analyzing them when we encounter them. That's what we call a polarity: two things, in tension, both true.

[00:25:43] Jason Hemingway:  And to further that, Brian, is there any way we can know when we're making those kinds of leaps? Or any advice you'd give?

[00:25:53] Brian Klaas:  You can't really undo the evolutionary patterns of your brain, but what you can do is be aware of them. That's the main reason I'm highlighting this, because decision makers who understand these cognitive biases are more attuned to them, and they make more rational judgments when they understand that they may instinctively draw the wrong conclusion. It's the kind of basic stuff: what is the devil's advocate point of view when you're making a decision? How would somebody argue against this? What are the weaknesses in my argument? And we don't like to do this. Right? We like to say, here are all the great reasons why this is the right decision. And oftentimes, someone has decided something and then justifies it. I think the best decision makers are those who expose themselves to the weaknesses in their arguments, to audits of the process of their argumentation when they're making a decision. Did you get the right people involved? Did you have the right data? It's an unavoidable problem. Right? There's no way you can solve this completely, but people who blindly go in and just assume that the past is going to be predictive of the future are going to make serious mistakes. And one of the things that's particularly a problem, from my point of view, with the way businesses often think about success is they assume there's a static, normal time that things operate in. So it's like, okay, the pandemic happened, but let's go back and look at 2019 and see what was working then. But now the world's different. What worked in 2019 might not work in 2025, and you don't have data from 2026. Right? You only have data from the past, and you might have to throw a bunch of it out because it's weird, the weird period when the pandemic happened. So you end up with a choice between no data; flawed data, because it's from a really strange time that obviously no longer has any coherence with the present; and old data, which may not have coherence with the present because of technological and social change. And that's it. You don't have another option. So the point is not that you ignore data. It's that you understand that all your sources of past patterns are potentially imperfect. People who constantly use past patterns to predict the future get burned quite a lot, but people who don't learn from the past get burned even more. It's a paradox that you can't really resolve. You just have to have the intellectual humility to understand that that problem exists.

[00:28:16] Jason Hemingway:  There are phrases like "strong opinions, lightly held," or one I use from time to time: I reserve the right to change my mind based on new information. Which I think is not a bad thing, you know; that's probably the way to think about it.

[00:28:27] Georg:  I think it goes to the resilience and experimentation point that we’ll come on to as well.

[00:28:31] Jason Hemingway:  Let's talk a little bit about how this impacts something that's a big topic today, which is, you know, AI and all the ramifications of AI. But let's start that discussion around another idea that you talk about, the distinction between closed and open systems, and why AI poses a greater risk in the latter, open systems. So, if you'll just explain the distinction and why it matters so much.

[00:28:58] Brian Klaas:  The way I think about it is that when you have a closed system, you have a system in which an AI agent, or any sort of AI system, cannot really affect what's going on in some other, distant, totally separate system. Okay? So if you have AI that only operates on identifying X-rays and trying to diagnose hairline fractures in a bone, that is not going to affect the US economy, because it's not integrated with another system. It's supposed to be contained. It's not supposed to do anything besides its one optimization task, which is the diagnosis of a bone. It's sort of the archetypal low-risk AI, and I think that kind of usage of AI will be really transformative in lots of ways. The open systems are where you can get chains of AIs operating on each other, where they can affect behavior outside of the system. ChatGPT can be an open system if it's interacting with people. Right? If it's giving people ideas that they then go and act on. Whereas "is this photo showing a hairline fracture in a bone" is not likely to affect, you know, the behavior of a conspiracy theorist in another part of the world. So, that's how I think about it. Basically, what you have is a gradient of risk, where the lowest-risk systems are the closed ones, because if things go wrong in them, there's less risk of spilling over into other things that can quickly spiral out of control. And the highest-risk ones are rapidly changing open systems that involve economics and politics, which is basically a lot of the business world, because you can really rapidly change how the world operates without totally understanding either the speed or the process by which this occurs with AI.

[00:30:53] Jason Hemingway: And I think that's something, Georg, you've been thinking about, haven't you? How does that risk level play out, especially as more decision making is handed over to AI tools?

[00:31:01] Georg:  This is, I guess, where I'd like to address those of our listeners in the industry that Phrase operates in, around language technology. And I have been thinking about it. Actually, something Brian just said made me think about it even more. I was trying to consider this definition of open and closed systems, and I had thought about, like, evolving and non-evolving systems. So human anatomy: not evolving, certainly not on a timeline that matters; versus, say, business, culture, society: constantly changing, an evolving system. But, actually, what you just said about AI interaction suggests another dimension: isolated systems and interlinked systems. Where you've got an isolated system, AI is operating on a closed dataset; but in an interlinked system, you've got multiple AIs all, it's like throwing pebbles in the pond from many different directions, and actually the pond is now linked to other ponds via canals. Right? So it's a hugely complex system. The whole subject of today's talk is something I've really been thinking about in terms of the implications for businesses, because a lot of people are asking, what is AI gonna do to our business? It doesn't matter whether you're in insurance underwriting or self-driving cars or, I don't know, pick something else, or language. And to me, the big "aha" moment from everything I've heard from Brian, at the conference and in our internal talk and even today, it's like every time I talk to Brian I get a deeper understanding, so I've really enjoyed these sessions, is that the fundamental truth from physics is what chaos theory says: no matter how many decimal points you understand all the variables in a complex system to, it's never enough, because the decimal points you don't capture still have massively disproportionate impacts on the outcome. Which is why in weather, even though compute power has improved by bazillions of times since the 50s, our predictive ability has only gone from, like, two days to nine or ten days. It is just completely uncorrelated to the compute power we can throw at it. And I think that's true also when you think about AI and its implications for business. Yes, AI is gonna get vastly more powerful. But in these highly complex systems, when you have 8 billion people talking to each other, talking to companies, talking to brands, and society, current events, politics, weather, sporting events, people's personal interests are all changing all the time as well, if you think AI is gonna magically get to some incredible point where it can deliver predictable outcomes at scale, you're essentially trying to overcome the boundaries of chaos theory. This is physics. This isn't like biology or chemistry. Right? This is physics. You can't do that. So, ultimately, it's my belief that in our industry, AI is gonna be a massive tailwind. We already see that. But our expectations may simply be too high if we think it's magically gonna solve everything. Chaos theory says we cannot eliminate unpredictability in large systems, and we're going to need to put some boundaries around that. And so there's hope for human beings still being, you know, at the levers of control. There's a need, a necessity, for rules and data and enterprise software to do some predictable things around the edges of that and try and control it.
And I think that's a very important realization for many people in our industry, or for anyone trying to use AI in business generally. It doesn't matter if you go five years into the future or a hundred years into the future. Chaos theory will still exist.

[00:34:27] Brian Klaas:  Yeah. I mean, this is why, when I launched the book, I did a TV interview in New York, and one of the first questions was, won't AI just solve this? And I was like, okay, uh, no. In fact, in some ways, it'll probably amplify the risks of some of these changes and so on. There are other things, too, that are different from the weather. So, like, there's what some AI researchers call dogfooding, where you have AI learning from other AI. If that ends up being the case, that's not what weather systems do. Right? Weather modeling actually uses real-world weather. It doesn't just constantly learn on itself, and as the internet gets awash with AI, you may have models that are learning on AI-generated content, exacerbating some of these feedback loops and so on. I think there is a series of things that AI will do extremely, extremely well. The idea that it's going to solve everything is really misguided and dangerous, frankly. And there are parallels throughout history. The most obvious one is the Internet boom and the dot-com bust in the late 1990s and early 2000s, where it was, oh, this will solve everything, it will radically change everything. I think AI is going to change the world in ways that lots of previous technological revolutions have not, because of the nature of it. I just think that it's also going to amplify some of the catastrophic risks that we worry about, and it will not solve the problems that are fundamental to what I've been talking about today.

[00:35:48] Jason Hemingway:  Let's go into that a little bit more. At first glance, you could be quite worried as an individual or a leader and go, okay, so it's really complicated, I'm never gonna know everything, there's a lot of risk in what I'm doing, and all these tools coming out may even give me more risk. Which is where that idea you talk about, and Georg, you alluded to earlier, comes in: resilience, and the argument that resilience now becomes much more important than optimizing processes and systems. What does that look like? Give us your thoughts on that, Brian, if you will.

[00:36:22] Brian Klaas:  I argue that there's a trade-off between resilience and optimization at high levels of efficiency. Right? So one thing I have to quickly say is that I'm in favor of efficiency generally; there's no value in being grotesquely inefficient. But just for the sake of argument, let's imagine you have a system that's, like, 90% efficient, and you dial that up to 95% efficiency, and then you keep dialing it up. At some point, there's going to be a trade-off where it becomes over-optimized. And at that point, it will become brittle. It'll become fragile. The example I love to use is from the Suez Canal in 2021, where you have these hyper-optimized global supply chains. They're really efficient, and one boat getting stuck in one canal wreaked havoc on the global economy. Right? It cost $50 billion in economic damage, from a single boat being stuck in a canal. That system did not have redundancy. It did not have slack. It didn't have the ability to absorb a problem when it arose. The general principle I suggest is that you want a system with enough redundancy, resilience, slack, however you want to call it, that when something unexpected happens, it's not catastrophic. There are some systems where this isn't really required, because the cost of an unforeseen disaster is pretty low. If I leave for work an hour and a half early and there happens to be a traffic jam, I don't need to also have a train booked, because I know I've got enough slack. It'll be fine. So when you think about how these principles operate: if there's a high level of potential harm when something goes wrong, and if the system is potentially over-optimized, you should dial down the optimization slightly and dial up the resilience. And a lot of it, as I say, is redundancy. It's planning for things that could go wrong so that you're not dealing with an emergency with no contingency plans in place. You know, I think I said this in the meeting as well: this is how nature works. I have a side interest in evolutionary biology, and the species that over-optimize die, because the environment changes; and the species that have the ability to repurpose junk DNA, some of the DNA that seems to do nothing, until a mutation comes up with an ingenious solution to a new environment, they thrive. There are certain species, like sharks, that have been around longer than trees have been on the planet, and there are other species that last a very short period of time, and a lot of that is chalked up to how well they can adapt to changing environments. Which is, you know, a very good parable for the business world, because the economy is radically different from how it was when I was born. And the smart businesses are those that are resilient and adapt.
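
The trade-off Brian describes has a well-known analogue in queueing theory, sketched below as an illustration with assumed numbers, not as anything from Fluke: in a simple M/M/1 queue, the average wait grows as 1/(1 − utilization), so each step toward 100% utilization buys a little more throughput and a lot more fragility.

```python
# A minimal queueing-theory sketch (our analogy, not from the book) of why
# over-optimized systems become brittle: in an M/M/1 queue, average time in
# the system is service_time / (1 - utilization).
def mean_wait(utilization, service_time=1.0):
    """Average time in an M/M/1 queueing system at a given utilization."""
    return service_time / (1.0 - utilization)

for u in (0.80, 0.90, 0.95, 0.99):
    print(f"utilization {u:.0%}: average wait = {mean_wait(u):6.1f}x service time")
# 80% -> 5x, 90% -> 10x, 95% -> 20x, 99% -> 100x. The last few points of
# "efficiency" remove exactly the slack that absorbs shocks, which is how
# one stuck ship can turn into tens of billions of dollars of damage.
```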

[00:38:58] Jason Hemingway: That’s an interesting point, isn’t it? And if you could just expand on that a little bit. You talk about experimentation a lot, which is, you know, if you take that evolutionary biology thing of the mutations, they are basically experimentation, I suppose. Can you just explain to us a little bit more about that kind of forced experimentation that you talk about?

[00:39:14] Brian Klaas: The wisdom of nature is, basically, you have some slack in the genome, some extra bits of your DNA that you don't currently use. You then experiment, which is what mutations are, and new traits emerge, new skills and so on, and that's how animals evolve. And those principles apply to humans really, really well. If you have a little bit of slack, you have the capacity to experiment; a perfectly optimized system never experiments. It's just perfect. Right? You don't need a car to experiment. A car is supposed to be fully optimized as much as possible; at least, you know, a normal human-driven car is supposed to just do its job. You want it to work. But in businesses, in social systems, you need enough space that you can experiment, so there has to be enough slack for that, and then the experimentation is, hey, maybe there's a better way of doing things. One of the stories I told in the meeting as well, the classic one from my perspective, is the Tube strike in London, where all these people were forced to find a new way to get to work, and researchers did something very clever. They looked at the mobile phone data to see the pathways and how they changed over time. And what they found was that 5% of the people who came up with a new pathway to work, because they were forced to, stuck with it for years afterwards. In other words, hundreds of thousands of people found a better commute because they were forced to try something new. Now what that tells you is that the system was potentially over-optimized from their perspective, but actually under-optimized from an objective perspective, because they had a better option they were not taking. And that's where you get better outcomes through experimentation when environments change.
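
The Tube-strike finding can be mimicked as a tiny forced-exploration simulation. Every parameter below is assumed, the route times, the noise, the strike length; it simply shows how a forced experiment lets a minority of commuters discover a genuinely better option they would never have tried voluntarily.

```python
# A hypothetical forced-exploration sketch of the Tube-strike effect
# (invented parameters, not the actual study data).
import random

random.seed(42)

def simulate_commuter(strike_days=5):
    """One commuter: a habitual route plus one never-tried alternative."""
    habitual = random.gauss(40.0, 3.0)     # current commute, minutes
    alternative = random.gauss(49.0, 5.0)  # usually worse, occasionally better
    # The strike forces them onto the alternative for a few days; they keep
    # it afterwards only if the forced trial beat their old commute.
    trial = sum(random.gauss(alternative, 4.0) for _ in range(strike_days)) / strike_days
    return trial < habitual

kept = sum(simulate_commuter() for _ in range(10_000))
print(f"{kept / 100:.1f}% of simulated commuters kept the new route")
# With these made-up numbers a small minority (roughly 5-10%) end up on a
# faster commute, the same shape as the 5% the mobile phone study found.
```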

[00:41:00] Jason Hemingway: And, Georg, you know, to get your perspective on that, you know, does that reflect what you’ve seen in your experience?

[00:41:06] Georg:  100%. I think the companies that will succeed in the years to come are the companies that are good at the process of getting better. They're good at the process of experimentation and are able to do some of that forced experimentation. I've experienced some of that in a variety of different contexts, where leadership took a decision that was pretty universally unpopular and caused some stress in the organization; nevertheless, new ways of working were discovered. And then as the world perhaps came back a little bit to how it was before, some of that stuck. We saw that through COVID, and I've seen it in different contexts, like rounds of redundancies. When I was at Tesla, Elon famously would say, you've got to make some pretty drastic reductions. Very painful at the time, but, you know, the organization adapts. And that kind of forced experimentation, by the way, I'm not in favor of large-scale forced redundancies, but that kind of experimentation, for example, to say, I don't know, let me make one up: we're going to eliminate all one-to-one meetings for a month; all meetings have to have a minimum of three people in them. So now you're gonna have conversations that are trilateral rather than bilateral for a month, and see what happens. I think that kind of forced experimentation could be fascinating. It'll be deeply unpopular but nevertheless maybe beneficial. And that's an interesting tension for leaders to navigate.

[00:42:24] Jason Hemingway:  And those forced experiments, Brian, in your opinion, how should leaders think about them? The meetings example is low stakes. Right? We can think about that. But when the stakes are high, when it's an experiment that has high stakes, how should business leaders think about framing that?

[00:42:42] Brian Klaas:  Yeah. It's a great question. The way I would approach it is that the best experiments are probably not forced. Right? Because even though forced experimentation reveals the fact that experiments can be useful, it would have been better if people had figured out a better way to get to work on their own, rather than through the Tube strike. If you put this analogy to business, obviously, it would have been better if people had figured out a work-life balance that suited both the organization and its people, and not just through a giant global pandemic. It's also that when you get experiments wrong in crises, the consequences can be bad if your experimentation is really poor. So what I would say is you want parts of the business that are constantly experimenting. You know, in some companies a lot of the innovation sits in a research and development wing whose whole thing is experimentation all the time. And the problem with that is that you then think, oh, the rest of the organization doesn't have to do any experimentation. So it's about making sure that experimentation is part of the daily culture in a business. It's not just something you do when a crisis happens. It's not just something you do as a gimmick. It's a space for people to try new things, especially when the risk is manageable, because then you can say, hey, we did this on a small scale with a small amount of money at risk, and it worked really well. Can we scale this up? That's the sort of model you want to replicate, as opposed to just waiting for a pandemic and then, oh, we managed to survive. How great. Right? It's more about adapting proactively and also looking for environmental changes that might be subtle. I mean, everyone knows the world changed during the pandemic, but the world is constantly changing. So it's being really attuned to things that are shifting slightly in culture, in politics, in economics, and so on, and then trying to experiment around those changes as well.

[00:44:40] Jason Hemingway:  That's really interesting, Brian. So if you could leave listeners with one principle for navigating this sort of world, one they can't control and need to experiment in, what would it be?

[00:44:50] Brian Klaas:  The main advice I give to everyone is to prioritize resilience more. And I think it's one of those things on a personal level and a professional level. The world is genuinely changing faster than it ever has in human history, and that's when the dislocation is most abrupt and most potentially harmful. So when that's the case, the resilience dial has to go up. And I think the problem is, I'm very much arguing against the herd in this, because every podcast, YouTube channel, etcetera, is all about optimization. My argument is, yes, optimize certain things in life; by all means, try to maximize whatever it is that you want to optimize, whether it's your health or whatever it is. But in general, I think experimentation and resilience are better guiding principles in a rapidly, rapidly changing world. The people who over-optimize get caught out; they're the ones who have midlife crises, or whose companies collapse. It's just a better way to be. It makes you more survivable over the long term, and I think that's one of the main principles that businesses should think about.

[00:46:05] Jason Hemingway:  I love it. Brian, thank you so much. I think we're gonna close it now. I'll just hand over to Georg for a final thought: one idea from this conversation, you said there's always an idea when you talk to Brian, that you're still thinking about.

[00:46:19] Georg: I think every section of this conversation merits listening to a couple of times, if I can be that presumptuous to our listeners. We talked about chaos theory, the illusion of control, the illusion of the ability to forecast in very large systems at scale. We talked about over-detecting patterns, balanced against storytelling. We talked about the importance of resilience and experimentation. I think there's a huge amount there for personal lives and for business. But one of the things I'm still thinking about, and I think a lot of people in our industry, and in technology broadly, are thinking about, is how to explain to business leaders that complex systems can't be magically solved with a simple solution called AI. It's something business leaders struggle with, because they have this magical experience on their phone, but that is such a tiny little system. It doesn't represent the complexity that many people in technology are trying to deal with at enormous scale. So for me, this whole series of conversations with Brian has given me a fantastic framework for understanding chaos theory at scale, the inability to overcome the laws of physics, and for building a narrative and a framework to tell a story around that, so business leaders can have realistic expectations around AI. And I think that's really valuable. So, again, thank you, Brian. Really appreciate your time and insights.

[00:47:49] Brian Klaas:  Thank you so much for having me on the show.

[00:47:51] Jason Hemingway:  Thank you so much, Georg. Thank you for your time and your thinking as well. I mean, what a fascinating discussion. So, we’ll speak to you again soon, hopefully, Brian. And thanks again.

[00:48:01] Jason Hemingway:  Well, that's it for another episode of In Other Words, a podcast from Phrase. I've been your host, Jason Hemingway. If you enjoyed today's episode, be sure to subscribe to In Other Words on Spotify, Apple Podcasts, or your favorite podcast platform. You can also find more conversations on leadership, growth, and what it really takes to scale globally at phrase.com. Thanks for tuning in, and see you next time.
