Countering Vaccine Skepticism | Stats + Stories Episode 383 Pt. 2

Dr. Jeffrey Morris is the George S. Pepper Professor of Public Health and Preventive Medicine and Director of the Biostatistics Division in the Department of Biostatistics, Epidemiology, and Informatics at the Perelman School of Medicine at the University of Pennsylvania. He has been actively involved in scientific communication efforts on social media and with various media outlets. He is also a distinguished research fellow at the Annenberg Public Policy Center.

Episode Description

Three hundred and thirty-two days: that was the international statistic of the year in 2020, as identified by the Royal Statistical Society. That was the length of time between scientists publishing the genetic sequence of SARS-CoV-2, the virus that causes COVID-19, on the 11th of January 2020, and an effective vaccine being administered on the 8th of December. This vaccine was an integral part of the world's pandemic response. Vaccines aren't new. In a World Health Organization report describing the history of vaccines, Dr. Edward Jenner is credited with the world's first successful vaccine, for smallpox, in 1796. In the last 100 years, vaccines were developed for yellow fever, pertussis, polio, hepatitis B, measles, mumps, rubella, and more. So how do we know vaccines are safe and effective? And why do some people argue against using vaccines? That's the topic of this episode with guest Dr. Jeffrey Morris.

Full Transcript

John Bailer
In early January 2026, the U.S. Centers for Disease Control and Prevention announced changes to the childhood immunization schedule, reducing the number of vaccines recommended for children. This change made the U.S. an outlier in the vaccines required for children.

One reason people express concern about vaccines is fear of adverse reactions. It may surprise many that there is a comprehensive system in place to monitor adverse outcomes.

In the second part of our conversation with Dr. Jeffrey Morris, we explore how vaccine safety is studied. While safety is a story of accumulating evidence across different studies and systems, we also consider how cherry-picking individual studies can lead to claims that mislead and misinform about vaccine safety.

I’m John Bailer. Stats + Stories is a production of the American Statistical Association, as well as Miami University’s Departments of Statistics and Media, Journalism, and Film. I’m joined in the studio by my colleague Rosemary Pennington, chair of the Department of Media, Journalism, and Film.

Our guest today is Dr. Jeffrey Morris. Morris is the George S. Pepper Professor of Public Health and Preventive Medicine and director of the Biostatistics Division in the Department of Biostatistics, Epidemiology, and Informatics at the Perelman School of Medicine at the University of Pennsylvania. He has also been actively involved in scientific communication efforts on social media and with various media outlets and is a distinguished research fellow at the Annenberg Public Policy Center.

Jeff, it’s a delight to continue our conversation about vaccine safety with you today.

In your recent report for the Annenberg Public Policy Center, you provided a comprehensive review of the U.S. vaccine safety monitoring system. Can you start by giving us a quick overview of the separate components before we dig into each in more detail?

Jeffrey Morris
Sure, yes. Vaccine safety monitoring is very challenging, and the system is complex, with many components involved.

First, there are pre-licensure clinical trials, which evaluate efficacy, basic safety, and immunogenicity endpoints. However, they have limitations in detecting rare events. That's why post-approval safety monitoring is required after a vaccine is authorized, and why we have a multicomponent system in place.

One component is a passive reporting system called VAERS, where individuals or doctors can report events that occur after vaccination. These reports are recorded in a database and investigated to determine whether any events appear more frequently than expected and might represent safety signals. This is called passive safety monitoring because reporting is voluntary, and it's not always clear who is reporting.

Active monitoring systems are then used to follow up on these signals. These typically involve medical record systems, where researchers examine whether events are actually occurring more often after vaccination than in appropriate comparison groups. For example, this includes the CDC’s Vaccine Safety Datalink and the FDA’s PRISM system. There are also other government medical record systems that can be used to validate signals identified through passive reporting.

There’s also a third component: the Clinical Immunization Safety Assessment (CISA) Project. This focuses more on clinical consultation and in-depth investigation of individual cases. While passive and active systems are primarily epidemiological—looking for patterns across populations—this component allows for deeper exploration of individual cases, which can generate additional insights or hypotheses.

Rosemary Pennington
Jeff, I know that for some people who are concerned about vaccines, the clinical trial process can feel confusing or unclear. Before we move into post-market monitoring, could you walk us through what the clinical trial process for vaccines looks like?

Jeffrey Morris
Yes, absolutely. The exact details can vary depending on the context, but typically, for a vaccine to be approved, there need to be randomized controlled trials demonstrating efficacy and a basic level of safety.

If the vaccine is for a new disease with no existing vaccine, these trials will often use a placebo control. If there’s already an existing vaccine, that may be used as the control instead. Researchers evaluate efficacy endpoints, basic safety endpoints, and immunogenicity endpoints.

For vaccines that have been studied previously—such as updated formulations like the annual flu vaccine—smaller trials may be used, focusing primarily on immunogenicity to ensure the expected immune response is generated.

John Bailer
And one thing you mentioned in describing these clinical trials is their limitation—they just can’t be large enough to detect very rare adverse outcomes. Even with thousands or tens of thousands of participants, they don’t have that level of sensitivity.

Jeffrey Morris
That’s right. I talk about this in more detail in the white paper I wrote for Annenberg. For example, the phase three trials for the COVID-19 vaccines were double-blind, placebo-controlled trials with about 20,000 participants per arm, which is considered quite large.

But I showed that, with that design, an adverse event would need to occur at a rate of roughly 1 in 1,000 or higher before the trial could detect a statistically significant excess. Many of the adverse events we're concerned about—especially for vaccines given to healthy populations—are much rarer than that.

For example, myocarditis associated with mRNA COVID-19 vaccines occurred at rates around 1 in 50,000 to 1 in 100,000 in the general population. Even in the highest-risk subgroup—males receiving a second dose in their late teens or early twenties—the rate may have been around 1 in 3,000 to 1 in 8,000.

Even at those higher rates, these events would not have been detected in the clinical trials. That’s why post-approval safety monitoring systems are essential—for detecting rare events, delayed effects, or outcomes that occur under specific conditions, such as cumulative dosing.
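
As a rough illustration of that detection threshold, here is a minimal simulation sketch (our construction with assumed inputs, not the calculation from the white paper): two arms of 20,000 participants, an assumed background rate of 1 in 1,000 for the event in both arms, and a one-sided Fisher's exact test at the 0.05 level.

```python
# Minimal power sketch: how often would a 20,000-per-arm trial flag an
# excess of a given adverse event? Background rate and alpha are assumptions.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n = 20_000               # participants per arm (roughly COVID-19 phase 3 scale)
background = 1 / 1_000   # assumed baseline rate of the event in both arms
n_sims = 2_000           # simulated trials per scenario

for excess in [1 / 1_000, 1 / 5_000, 1 / 10_000, 1 / 50_000]:
    detections = 0
    for _ in range(n_sims):
        vax = rng.binomial(n, background + excess)  # events in vaccine arm
        pbo = rng.binomial(n, background)           # events in placebo arm
        table = [[vax, n - vax], [pbo, n - pbo]]
        _, p = fisher_exact(table, alternative="greater")
        detections += p < 0.05
    print(f"excess risk 1 in {round(1 / excess):>6,}: power ~ {detections / n_sims:.2f}")
```

Under these assumptions, power is reasonable for an excess risk near 1 in 1,000 but collapses for rarer events, consistent with the myocarditis rates described above.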

Rosemary Pennington
So you mentioned one of those systems—VAERS—as a way of monitoring safety. Could you walk us through what that is, and maybe some of its pros and cons?

Jeffrey Morris
Yes. The Vaccine Adverse Event Reporting System, or VAERS, is a passive reporting system. Individuals, family members, or healthcare providers can submit reports of events that occur after vaccination.

These reports are not verified, and anyone can submit one. There’s no determination of causation—only that an event occurred and was reported.

One strength of VAERS is its openness. Anyone can report, and any type of event can be included. This means it can potentially capture unexpected or previously untracked events. For example, myocarditis may not have been a primary concern before COVID-19 vaccine trials, but signals in VAERS and similar systems helped identify it as a potential issue.

Another strength is transparency—the database is publicly available, allowing anyone to access and analyze it. However, this also creates challenges. When people don’t fully understand the data, it can lead to misinterpretation and the spread of misinformation, which we saw during the pandemic.

There are also important limitations. First, because it’s a passive system, reporting is incomplete and inconsistent. We don’t know how many people who experienced an event actually reported it, and reporting rates can vary widely depending on the event, its severity, and timing. Because of this, VAERS cannot provide reliable estimates of how often events occur in the population.

Second—and perhaps more importantly—there is no control group. Many events, such as heart attacks, strokes, or even deaths, occur at a baseline rate in the population. Some of these will happen after vaccination purely by coincidence. Without a control group, it’s very difficult to determine whether the number of reported events is higher than expected.
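
To make the background-rate problem concrete, here is a back-of-envelope calculation with illustrative inputs (the mortality rate and cohort size below are assumptions, not figures from the episode):

```python
# Expected coincidental deaths shortly after vaccination, under the null
# hypothesis that the vaccine causes none. All inputs are illustrative.
vaccinated = 100_000_000         # people vaccinated (assumed)
annual_death_rate = 9 / 1_000    # rough U.S. all-cause mortality per person-year
window_days = 7                  # look for deaths "within a week of a dose"

expected = vaccinated * annual_death_rate * (window_days / 365)
print(f"Deaths expected within {window_days} days of a dose, by chance alone: "
      f"{expected:,.0f}")        # about 17,000 with these inputs
```

Without an expected count like this to compare against, raw report totals say nothing about whether vaccination raised the risk.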

For this reason, VAERS is best understood as a hypothesis-generating system. It helps identify potential safety signals, but those signals must be evaluated through more rigorous studies to determine whether they truly represent increased risk following vaccination.

John Bailer
I thought it was interesting that, in the report you generated, some of the outcomes reported after vaccination included things like sunburn, toothache, X-rays of limbs, and abnormal dreams—things that clearly have no connection to vaccines.

You mentioned that the system is passive in the sense that people engage with it voluntarily, but it’s also open for anyone to explore. Others can access this information and analyze it, correct?

Jeffrey Morris
Exactly right. Yes—it’s an open system. You can download the data and analyze it yourself, and some people have even created dashboards to make it easier to access.

John Bailer
That seems like it really leaves the system open to misuse, especially if it’s used naively—not as a hypothesis-generating tool, but as a way to confirm what someone already believes about the risks associated with vaccines.

Jeffrey Morris
Absolutely. Transparency is critical, but the downside is that there’s no quality control over how people analyze the data. In today’s social media environment, people can make claims that aren’t vetted by scientists, and those claims can go viral and reinforce existing beliefs.

That dynamic drives a lot of misinformation, and VAERS is probably one of the biggest sources of that.

A major issue is that people don’t understand the concept of background event rates—bad things happen all the time, regardless of vaccination. Getting a vaccine doesn’t mean you won’t experience unrelated medical events.

People often look at raw counts and assume all reported events were caused by vaccines. Then they take it a step further, assuming underreporting and inflating the numbers—sometimes dramatically. For example, a commonly cited figure is that only 1% of events are reported, leading some to multiply reported events by 100 and claim that as the true number of vaccine-caused events. That’s completely misleading.

There’s even a widely circulated claim that the measles vaccine has caused more deaths than measles itself. What’s actually happening is that people are counting all deaths reported after measles vaccination in VAERS and labeling them as vaccine-caused, then comparing that to actual measles deaths. Since measles is now rare due to high vaccination rates, the comparison is misleading—but it spreads widely nonetheless.

Rosemary Pennington
So how does V-safe differ from VAERS?

Jeffrey Morris
V-safe is interesting because it was introduced during the COVID-19 pandemic, and I hope they continue to develop it. It’s also a reporting system, but it works differently.

When people received their COVID vaccine, they could sign up for V-safe. The system would then prompt them to report any symptoms or events, either through structured questions or open-ended responses.

Because of this, you can estimate incidence to some degree, since you know how many people enrolled in the system. That said, you still don’t know how many people consistently respond, so there are still survey-related limitations.
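
That known denominator is what separates V-safe from VAERS analytically. Here is a minimal sketch, with hypothetical counts (not actual V-safe data), of the kind of crude incidence estimate it allows:

```python
# Crude incidence estimate from an enrolled cohort like V-safe.
# Counts are hypothetical; real analyses must also model non-response.
from scipy.stats import binomtest

enrolled = 500_000   # hypothetical number of V-safe enrollees
reported = 1_250     # hypothetical count reporting a given symptom

ci = binomtest(reported, enrolled).proportion_ci(confidence_level=0.95)
print(f"estimated incidence: {reported / enrolled:.3%} "
      f"(95% CI {ci.low:.3%} to {ci.high:.3%})")
```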

During the pandemic, V-safe focused mainly on short-term outcomes, so it wasn’t as useful for identifying rare or serious adverse events. However, it was very effective at characterizing common short-term reactions—like missed work or typical side effects.

I think it has real potential as part of the broader system because it provides some sense of the denominator—the population being monitored. In principle, you could also collect demographic information and adjust for biases in who chooses to participate.

John Bailer
You’re listening to Stats + Stories, and we’re talking with Jeff Morris about vaccine safety.

Jeff, with all of these systems in place, how can there still be such strong assertions that vaccines are unsafe or linked to conditions like autism or other serious effects?

Jeffrey Morris
If you listen to those making those arguments, there are a few common themes.

First, they claim that vaccine safety hasn’t been studied, which can lead people to believe there’s no evidence supporting safety.

Second, they point to specific studies that suggest possible connections between vaccines and adverse outcomes, without acknowledging that these are often cherry-picked. They may ignore stronger studies or rely on research with serious methodological flaws.

Another tactic involves carefully worded claims—for example, stating that none of the vaccines currently given to children were tested in placebo-controlled randomized trials. Framed that way, the claim can sound compelling, but it's used to imply that we know nothing about vaccine safety.

That implication feeds into a broader narrative—that such studies weren’t done because they would reveal that vaccines are dangerous. It taps into a conspiratorial mindset.

By emphasizing uncertainty or gaps in knowledge, it becomes easier to suggest that information is being suppressed. Then, claims can be inserted—such as linking vaccines to autism—and framed as “not disproven,” which is presented as evidence. That framing can make the argument seem reasonable, even when it isn’t supported by the broader body of evidence.

Rosemary Pennington
As you were talking about this, it made me think of a term you shared with us—the Nirvana fallacy—when it comes to discussing vaccines and vaccine studies. Could you walk us through what that is? It feels very related to what you’ve been discussing.

Jeffrey Morris
Yes, I think it is related.

The Nirvana fallacy is the idea that because something is imperfect, it is therefore worthless. You see this in arguments about COVID vaccines—for example, people saying, “Well, the vaccines didn’t completely stop transmission, so they didn’t work.” That’s a classic example. Even though the vaccines clearly reduced the risk of infection and severe outcomes, because they weren’t perfect, they’re dismissed as failures.

In this context, the fallacy shows up in claims that if a vaccine hasn’t been studied in a placebo-controlled randomized trial, then we don’t know anything about its safety. Some go further and argue that we need long-term placebo-controlled studies—lasting five to seven years—to observe outcomes like autism or ADHD.

They may also suggest that we need trials examining every possible combination of vaccines and their interactions. These are essentially impossible standards.

So the argument becomes: because we don’t have a perfect study that answers every question, we have no knowledge about safety. At the same time, they dismiss observational studies by saying we don’t have “pure” vaccinated versus unvaccinated comparisons.

By doing this, they ignore the extensive body of evidence on vaccine safety, which is actually quite large and well-established. The Nirvana fallacy, in this case, sets an unattainable standard and then uses that to claim safety hasn’t been demonstrated.

John Bailer
Gosh, I’m trying to think if I’ve ever seen a perfect study in any context. Every study has some limitation—something that could be improved or strengthened.

It seems like part of the issue is the expectation that a single study will answer all questions. That’s what I find compelling about having a system of safety monitoring that spans the entire lifecycle—from initial vaccine development through post-market use, with passive systems, active systems, and case investigations.

That seems comprehensive. How do you help people understand that they shouldn’t rely on just one individual study?

Jeffrey Morris
That was really the main theme of the white paper I wrote, and one of the reasons I wrote it.

I think the public is generally aware of VAERS—though not always fully understanding it—but many people don’t see how all the components of the system work together. Each piece complements the others.

Even well-designed, placebo-controlled clinical trials have limitations. They can’t answer every question, so we need multiple approaches. Each component provides different types of evidence.

What I often say—especially on social media—is that we need to critically assess all available evidence across different study types, using scientific principles. That allows us to determine what we can confidently say is true, what we can confidently say is false, and where uncertainty remains.

Identifying those gaps is important so that additional studies can be conducted to build knowledge. But the key point is that only by looking at the full body of evidence can we get a complete picture.

Cherry-picking a single study—or one part of the system—while ignoring the rest prevents an objective and comprehensive assessment.

Rosemary Pennington
Jeff, journalists often fall into the trap of reporting on individual studies as they come out. As someone who has done science and medical reporting, I know it can be difficult—not just reporting on single studies, but also communicating uncertainty.

Statisticians understand uncertainty, but explaining it clearly to a general audience is challenging. And I do think journalists have sometimes contributed to the spread of misinformation by giving platforms to voices that shouldn’t have them.

What advice would you give journalists who are trying to cover vaccine safety and monitoring responsibly and ethically?

Jeffrey Morris
One thing I’ll say is that, before the pandemic, I didn’t interact much with the media. Since then, I’ve worked with many journalists—especially in scientific media—and I’ve been very impressed.

I think many journalists genuinely want to understand these issues and communicate them clearly. So I don’t think it’s helpful—or fair—to broadly criticize the media. Instead, we should see this as an opportunity for partnership.

The scientific and statistical communities have an important role to play, especially when it comes to explaining evidence and uncertainty. These are concepts statisticians think about all the time, but they’re not always intuitive to others.

One example is the phrase “trust the science.” That can give the impression that science is a collection of fixed truths being handed down to the public. But that’s not what science is.

Science is a process—a set of methods for learning from data. Through that process, we accumulate evidence. Sometimes that leads to strong consensus; other times, uncertainty remains.

People can exploit that uncertainty, suggesting that because something isn’t absolutely proven, it isn’t true. That’s a misunderstanding of how science works.

So my advice to journalists is to avoid reinforcing overly simplistic, black-and-white narratives. Instead, try to communicate the nuance—the process, the uncertainty, and the strength of evidence.

I would also encourage journalists to engage more with statisticians, epidemiologists, and other data experts—especially when a story involves complex analysis or interpretation. Subject-matter experts are important, but they may not always fully address the nuances of evidence and uncertainty.

Stronger collaboration between journalists and data experts can help improve how these topics are communicated to the public.

John Bailer
That’s all the time we have for this episode of Stats + Stories. Jeff, thank you so much for joining us today.

Jeffrey Morris
Thank you very much for having me.

John Bailer
Stats + Stories is a partnership between the American Statistical Association and Miami University’s Departments of Statistics and Media, Journalism, and Film.

You can follow us on Spotify, Apple Podcasts, or wherever you listen to podcasts. If you’d like to share your thoughts, email us at statsandstories@amstat.org or visit statsandstories.net.

Be sure to listen for future editions of Stats + Stories, where we discuss the statistics behind the stories and the stories behind the statistics.