Sir David Spiegelhalter is a British statistician and Emeritus Professor at the University of Cambridge, known for his work on risk communication and the public understanding of statistics. He is the author of The Art of Statistics, a former President of the Royal Statistical Society, and was knighted in 2014 for his services to medical statistics. He has also presented BBC documentaries and chaired the Winton Centre for Risk and Evidence Communication at Cambridge.
The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk and Luck
Episode Description
News stories are filled with tales of risk and uncertainty. We're told the probability of a weather event or the likelihood that we might contract an illness. There's an art to telling stories of uncertainty in a way that provides the context and nuance that are so often missing. That is the focus of this episode of Stats+Stories with guest David Spiegelhalter.
Timestamps
Inspiration Behind the Book (1:11)
Defining Uncertainty and Its Impact (4:14)
Storytelling and Examples in the Book (7:48)
Probability and Communication (12:54)
Trustworthy Communication (17:34)
Application of Trustworthy Communication Principles (19:14)
Deep Uncertainty and Imagination (27:42)
Full Transcript
Rosemary Pennington: News stories are filled with tales of risk and uncertainty. We’re told the probability of a weather event or the likelihood that we might contract an illness. There’s an art to telling stories of uncertainty in a way that provides the context and nuance that are so often missing. That’s the focus of this episode of Stats + Stories, where we explore the statistics behind the stories and the stories behind the statistics. I’m Rosemary Pennington. Stats + Stories is a production of the American Statistical Association in partnership with Miami University’s Departments of Statistics and Media, Journalism, and Film. Joining me, as always, is regular panelist John Bailer, emeritus professor of statistics at Miami University. Our guest today is Sir David Spiegelhalter. Spiegelhalter is emeritus professor of statistics in the Centre for Mathematical Sciences at the University of Cambridge. He also serves as a non-executive director of the UK Statistics Authority. He has presented BBC documentaries and published several books, including Sex by Numbers, The Art of Statistics, and COVID by Numbers. He’s here to talk about his new book, The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk, and Luck. David, thank you for joining us for your fourth time.
David Spiegelhalter: Do I get a prize?
John Bailer: Yes — the opportunity for a fifth.
David Spiegelhalter: I’m delighted to be back on the show.
John Bailer: So, your new book — What inspired you to write it?
David Spiegelhalter: Once The Art of Statistics became really successful — translated into 11 languages and selling hundreds of thousands of copies — my publisher essentially said they’d take anything I wrote. So I saw a great opportunity to write about things I’ve been interested in my entire career: uncertainty, probability, chance, and risk. I was pleased they accepted the proposal, and I even got a good advance. The book is long, but I had so much I wanted to put in. I tried to keep the math out and make it driven by stories — finding a story to illustrate each point and the lessons I was trying to convey.
Rosemary Pennington: There are so many different understandings of what uncertainty and risk mean — many not statistical. As you were writing this book, what audience were you considering? And how did you imagine it engaging with other understandings of risk and uncertainty?
David Spiegelhalter: I wanted to write for people who are interested in risk and uncertainty. The word “risk” is almost meaningless — it can mean anything to anyone — so I try to avoid it. But many people work in areas related to uncertainty: risk management, risk assessment, modeling, and bureaucracies that rely on these ideas. I hoped to raise the level of discussion about what we mean when we talk about the chance or probability of something happening, how we assess it, and how we weigh subjective judgment versus modeling. I view risk as just one subset of uncertainty. Uncertainty encompasses luck, coincidences, chance — the things we talk about in life. I use a definition I borrowed from somewhere: uncertainty is the conscious awareness of ignorance. I love that because it brings the concept back to the relationship between us and the world. It’s not necessarily a property of the outside world (except maybe at the quantum level). In our daily lives, we are exposed to concerns about the future based on ignorance. We don’t know what’s going to happen. Maybe life is predetermined or the will of God, but regardless, we are uncertain. We live in a tiny bubble of knowledge surrounded by an ocean of ignorance. I think that’s wonderful. I can’t imagine anything worse than knowing exactly what will happen. Writing the book actually changed my life.
Rosemary Pennington: Wait — it changed your life? I want to know how.
David Spiegelhalter: Right at the beginning of the book, I talk about how my grandfather barely survived World War I. If a shell had landed just a little closer to him, I wouldn’t exist. And there are other unlikely events that led to my existence. My mother was captured by pirates in the South China Sea in the 1930s. My father nearly died of tuberculosis after World War II. There are all these moments where I so easily might not have been born. Then I researched the circumstances of my own conception — something most people avoid thinking about because it involves imagining their parents doing certain things. My parents have passed away, so I felt I could face it. It would have been in November 1952, in an unheated stone cottage in North Devon. I looked up the weather — that month was one of the coldest in years. A huge cold snap. So they were likely shivering there, maybe trying to get warmer — and here I am. This is what’s called existential luck — the fact that I exist at all is the result of a chain of micro-contingencies that so easily could have gone differently. Thinking through this put my existence in perspective: I am nothing more than the product of chance in a sequence of events. It changed my philosophy of life.
John Bailer: Well, this show is all about philosophy of life, so that’s perfect. You’ve done a great job of bringing stories into your descriptions. I wasn’t even surprised that Casanova was a probabilist.
David Spiegelhalter: Yes — I have a chapter on probability, and I thought I had to include permutations, combinations, all that stuff. But I hate it. It’s just counting, not probability. It bores people, yet it shows up in probability classes. I needed it because it lets us calculate probabilities assuming randomness, but I wanted a story to make it interesting. I had read Stephen Stigler’s book about Casanova’s lottery. Before the French Revolution, Casanova offered to design a lottery for the French state that wouldn’t guarantee a profit on each draw, but would guarantee a profit in the long run, because he could compute the odds. People chose combinations of numbers, and he worked out payouts using basic combinatorics. The lottery had a decent payout — better than the UK National Lottery today — but still made huge money. At one point, it generated around 4% of the French state’s revenue. It was halted during the Revolution, then started again afterward. I thought it was a great story. I kept it short — stories shouldn’t drag — but it’s fascinating.
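For readers who want to check the arithmetic, the loterie Stigler describes drew five numbers from a pool of ninety, and the odds of each bet type follow from simple counting. The sketch below is not from the book: the payout multipliers are illustrative values close to the historical ones, and the formula is the standard hypergeometric count.

```python
from math import comb

DRAWN, POOL = 5, 90  # the French loterie drew 5 numbers out of 90

def p_match(k: int) -> float:
    """Probability that all k numbers a player picks are among the 5 drawn."""
    # ways the draw can contain the k picks / ways to choose k from the pool
    return comb(DRAWN, k) / comb(POOL, k)

# Illustrative payout multipliers for the classic bets (extrait, ambe, terne);
# these exact figures are an assumption, not quoted from the book.
payouts = {1: 15, 2: 270, 3: 5_500}

for k, pay in payouts.items():
    p = p_match(k)
    print(f"pick {k}: odds 1 in {1 / p:,.1f}, "
          f"expected return per unit staked = {p * pay:.2f}")
```

Every expected return comes out below 1, which is exactly Casanova's point: any single draw could lose the state money, but the long-run margin was assured.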
Rosemary Pennington: How did you decide which stories to include? In one chapter you talk about the Bay of Pigs. I wasn’t expecting that.
David Spiegelhalter: I think I’ve been collecting these stories in my mind for years. I wanted an example of failure caused by using words instead of numbers. I found a book by Charles Wyler that analyzed the Bay of Pigs invasion. The CIA planned it: 1,500 Cuban exiles invading Cuba in 1961. Kennedy learned about it only after he became president. The Joint Chiefs of Staff investigated and concluded it had about a 30% chance of success — 70% chance of failure. But when the report went to Kennedy, the numbers were removed and replaced with the phrase “a fair chance.” Disastrous. That phrase may not be the only reason things went wrong — there was groupthink and other factors — but that wording mattered. Kennedy approved the invasion, and it was a fiasco. Intelligence agencies learned from this. In the UK, if something has a 30% probability, it must be labeled “unlikely.” There’s an official scale. I even got a mug with it when I gave a talk at MI5. It shows the categories — what counts as “likely,” “unlikely,” and so on. Another example is the hunt for Osama bin Laden in 2011 — exactly 50 years after the Bay of Pigs. Multiple teams estimated the probability he was in the Abbottabad compound. Obama said later that some put it at 80–90%, some at 30–40%. He thought it was roughly 50–50. But he decided to go ahead — and it turned out to be correct. These are powerful examples. They show how important it is to express uncertainty clearly.
Rosemary Pennington: You’re listening to Stats + Stories, and we’re talking with Sir David Spiegelhalter about his new book, The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk, and Luck.
John Bailer: I really appreciated those examples. I didn’t know the backstory of the Bay of Pigs, or that “a fair chance” replaced 30%. Would I bet on something with a 30% chance? And you also talk about mapping probabilities to words, like the Intergovernmental Panel on Climate Change (IPCC) does — terms like likely, unlikely, very likely. That mapping is so subjective.
David Spiegelhalter: It’s not as subjective if you have an agreed scale. The IPCC is a leader here. They provide a clear mapping of verbal terms to numerical probability ranges. So when they use a word like “likely,” they show what they mean. Many groups have different scales — there’s even a NATO document called Variants of Vague Verbiage — but the IPCC scale is becoming widely used. These scales don’t force people to produce precise probabilities, which would be unreasonable, but they reduce misunderstandings. Instead of vague language, they provide ranges, like “likely” meaning, say, 55%–80%.
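To make such a scale concrete, here is a minimal sketch of a probability-to-words mapping, using the published IPCC likelihood scale; the function and the choose-the-narrowest-range rule are illustrative choices, not something taken from the interview.

```python
# IPCC likelihood scale (AR5 uncertainty guidance). The ranges overlap
# (e.g., "very likely" sits inside "likely"), so we report the narrowest
# term whose range contains the stated probability.
IPCC_SCALE = [
    ("virtually certain",      0.99, 1.00),
    ("very likely",            0.90, 1.00),
    ("likely",                 0.66, 1.00),
    ("about as likely as not", 0.33, 0.66),
    ("unlikely",               0.00, 0.33),
    ("very unlikely",          0.00, 0.10),
    ("exceptionally unlikely", 0.00, 0.01),
]

def ipcc_term(p: float) -> str:
    """Return the most specific IPCC term whose range contains p."""
    candidates = [(hi - lo, term) for term, lo, hi in IPCC_SCALE if lo <= p <= hi]
    return min(candidates)[1]  # smallest range width = most specific term

for p in (0.30, 0.50, 0.70, 0.95):
    print(f"p = {p:.2f} -> {ipcc_term(p)!r}")
```

On this scale, the Joint Chiefs' 30% assessment would be reported as "unlikely," a very different message from "a fair chance."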
John Bailer: That reminds me of work I encountered as a graduate student. A cognitive psychologist was studying what “likely” means in military communication — like “you’re likely to encounter an enemy.” That’s a difficult concept to map from language to numbers. So would you say that this kind of mapping is best practice when reporting uncertainty?
David Spiegelhalter: Yes. If you’re making subjective assessments, it’s valuable to support them with numbers — even rough ones. This also ties into work on superforecasting. In forecasting competitions, people assign quite precise probabilities — not vague ranges. Skilled forecasters can distinguish between, say, a 73% and a 75% likelihood. They become expert in calibration. The key point I emphasize is that these probabilities are constructed — they’re judgments, not truths. But we can evaluate how good they are using scoring rules, especially squared error loss, introduced by Glenn Brier in 1950 for weather prediction. That’s where the “probability of precipitation” came from. Weather forecasters have used it for 75 years. The same scoring rules are used for superforecasting. In the book, I include a quiz to help readers measure their own probability judgments — essentially, a measure of ignorance. I do this in talks too. I’ll ask audiences questions like: Which Cambridge has a larger population, Cambridge in the UK or Cambridge in Massachusetts? People must choose and indicate their level of certainty on a scale. With squared error scoring, if you say you’re 10 out of 10 certain and you’re wrong, you get penalized heavily. It punishes overconfidence. I’m pleased to raise the profile of this method. Judgments may be subjective, but they can still be numerical and testable.
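As a concrete illustration of Brier's quadratic rule, the sketch below scores a handful of hypothetical quiz answers; the response format is an assumption for illustration, not the format of the book's quiz.

```python
def brier(prob: float, correct: bool) -> float:
    """Quadratic (Brier) penalty for one judgment: (outcome - prob)^2, lower is better."""
    outcome = 1.0 if correct else 0.0
    return (outcome - prob) ** 2

# Hypothetical responses: (stated probability that your chosen answer is
# right, whether it actually was).
responses = [(0.9, True), (1.0, False), (0.6, True), (0.5, False)]

scores = [brier(p, ok) for p, ok in responses]
print([round(s, 2) for s in scores])                          # [0.01, 1.0, 0.16, 0.25]
print(f"mean Brier score: {sum(scores) / len(scores):.3f}")   # 0 is perfect

# Claiming certainty (prob = 1.0) and being wrong costs the maximum penalty
# of 1.0; hedging at 0.5 can never cost more than 0.25. That asymmetry is
# how the rule punishes overconfidence.
```

Because the penalty is quadratic, honestly admitting ignorance at 50:50 is always safer than bluffing certainty, which is the lesson of the quiz.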
Rosemary Pennington: For the record, I missed most of the questions on that quiz.
David Spiegelhalter: They’re supposed to be difficult. The real issue is whether you recognized that you didn’t know. If you guessed with high certainty and got a low score, that shows overconfidence — and the scoring rule trains you out of it.
Rosemary Pennington: I definitely thought I knew more than I did. And your point is that the scoring method penalizes that.
David Spiegelhalter: Exactly. It’s meant to reveal and reduce overconfidence.
John Bailer: You’ve spent much of your career promoting good communication about risk. This is particularly important when we consider issues like misinformation and disinformation. Could you explain how you distinguish between those, and how that affects how we communicate what we know?
David Spiegelhalter: Simplistically, I view disinformation as deliberately misleading — knowingly telling lies to manipulate people. Misinformation is when someone spreads something misleading without knowing it’s wrong — maybe an honest mistake, or simply forwarding something they believe. Much of social media misinformation comes from people who genuinely believe what they share.
Rosemary Pennington: In your book, you write about trustworthy communication, which feels especially critical when we talk about uncertainty. Could you walk us through the principles you identify, maybe applying them to a real case?
David Spiegelhalter: I feel strongly about this. I’ve spent a lot of time working with psychologists and communication experts on what it means to communicate in a way that earns trust. The key insight comes from philosopher Onora O’Neill. As scientists, we often say we want people to “trust us.” But that’s the wrong question. Our goal should not be to gain trust; it should be to demonstrate trustworthiness. Trust, if it comes, should be a byproduct of acting in honorable and transparent ways. Our research shows that when communicators demonstrate trustworthiness, they do become more trusted, but that cannot be the main aim. When assessing trustworthiness, O’Neill says we should look for honesty, competence, and reliability. Trustworthy communication also has specific features, especially where uncertainty is involved. We wrote a paper in Nature identifying five principles I discuss in the book. First, be clear about whether you are informing or persuading. Some people fool themselves into thinking they’re just informing when they’re actually trying to influence attitudes or behavior. One exception: it’s fine to persuade people to be interested. Second, be engaging but not manipulative. There’s no point being trustworthy if you’re dull. Stories can help, and good communication skills matter, but they shouldn’t distort the information. Third, present a balanced view of benefits and harms. If you discuss vaccines, for example, you must discuss both the benefits and the potential harms. Fourth, acknowledge uncertainty and the quality of the evidence. How confident are you in your data and models? That needs to be explicit. Fifth, preempt misunderstandings. Don’t wait to correct misinformation after the fact; pre-bunk potential misinterpretations right away, and say clearly what the data do not mean. These principles are increasingly accepted. In the UK’s Office for National Statistics and in the UK government’s communication service, pre-bunking is now standard practice — explaining up front what data should not be used to infer. And please, never bury warnings in small text like “Use with caution.” That’s useless. Warnings must be up front and clear. I also have a strong dislike for the slogan “vaccines are safe and effective.” It’s manipulative. A more trustworthy statement would acknowledge that vaccines are safe enough and effective enough for certain people under certain conditions. Our research showed that when you present a balanced view, skeptical audiences actually trust you more. A slogan like “safe and effective” reduces trust in the very groups you most need to reach.
John Bailer: That’s such an important point. These principles don’t only guide scientists — they give the public a way to evaluate what they’re hearing. If communicators don’t show uncertainty, don’t present pros and cons, don’t explain confidence levels, we should demand it. It becomes a checklist for journalists as well. If a press release can’t answer these questions, it’s incomplete.
David Spiegelhalter: Exactly. When listening to someone present data, we should ask: What are the pros and cons? What’s uncertain? How strong is the evidence? What don’t these data mean? These are simple but powerful prompts. Another situation where trustworthy communication matters is during crises — like the early days of COVID-19. Everyone is scrambling, knowledge is limited, and decisions must be made under deep uncertainty. I use work by John Krebs, former head of the UK’s Food Standards Agency. During his tenure, there were crises like foot-and-mouth disease and mad cow disease — awful for agriculture. He developed a communication playbook that I carry in my mind and use to evaluate others. His guidance: start with what you do know, and be clear about what’s known with confidence. Then state what you don’t know, acknowledging uncertainty immediately rather than burying it. Explain what you’re doing to find out more — the experiments, monitoring, or investigations underway. Tell people what they can do in the meantime, offering actions based on current knowledge. Finally, commit to come back with updated advice, and say clearly that recommendations may change. Everything said now is provisional, and you must be willing to revise guidance. That last step — explicit provisionality — is crucial. It’s the one politicians often refuse to take. They feel compelled to sound absolutely certain, and when they later change their minds, people accuse them of contradiction or failure. During COVID, we saw people still scrubbing surfaces long after scientists knew surface transmission was negligible. Politicians had delivered early guidance with absolute certainty, and they couldn’t backtrack. If they had acknowledged provisionality from the start, they could have adapted their recommendations more easily.
John Bailer: So what’s next for you?
David Spiegelhalter: I feel like I’ve put everything I know into this book — all my areas of interest: luck, randomness, coincidences, risk, uncertainty, and communication. It ranges from philosophical questions to practical decision-making. Right now, I’m especially interested in deep uncertainty. In classrooms, we teach risk analysis as if you can list all options, know their outcomes, assign probabilities, and compute expected values. In real life, that’s nonsense. You don’t know all possible futures. You don’t know the values. You don’t know the probabilities. We’re living in a profound age of uncertainty — maybe not as extreme as the 1930s, but close. In deep uncertainty, standard analytic tools struggle. You have to be imaginative. The UK Ministry of Defence even hires science fiction writers to help envision possible futures. They’re explicitly not predictions — just stories that stretch our imagination. They also encourage red teaming — deliberately challenging dominant assumptions, as Obama did before the bin Laden raid. Red teams look for what might go wrong. They counter groupthink. Good superforecasters often function as “one-person red teams” in their own minds. They check their own biases and seek counterevidence. We can all do that individually. We should each have an internal red team.
Rosemary Pennington: That’s all the time we have for this episode of Stats + Stories. David, thank you so much for joining us today. Stats + Stories is a partnership between the American Statistical Association and Miami University’s Departments of Statistics and Media, Journalism, and Film. You can follow us on Spotify, Apple Podcasts, or wherever you get your podcasts. If you’d like to share your thoughts on the program, send us an email at statsandstories@amstat.org or visit us at StatsAndStories.net. And be sure to listen for future episodes, where we discuss the statistics behind the stories — and the stories behind the statistics.