Reading Racy Research | Stats + Stories Episode 371 / by Stats Stories

Regina Nuzzo is an award-winning science journalist and Gallaudet University professor who talks with audiences around the world about communicating stats creatively. She's written for such outlets as Nature, the New York Times, Scientific American, New Scientist, and ESPN The Magazine. She also served as a guest panelist on Stats and Stories in 2022.

Kristin Sainani is a Stanford professor and science journalist. She teaches the popular Coursera course Writing in the Sciences, available in 22 languages, and offers an online medical statistics certificate program through Stanford Online. She also wrote a beauty column for Allure.


Description

Have you ever wondered if what you eat is aging you, or whether women in red really are sexier? In addition to turning to Reddit for the answers to those questions, you can now tune into a new podcast. Normal Curves focuses on sexy science and serious statistics, and it's the focus of this episode of Stats and Stories with guests Regina Nuzzo and Kristin Sainani.

Full Transcript

Rosemary Pennington
Have you ever wondered if what you eat is aging you, or whether women in red really are sexier? In addition to turning to Reddit for the answers to those questions, you can now tune into a new podcast. Normal Curves focuses on sexy science and serious statistics, and it's the focus of this episode of Stats and Stories, where we explore the statistics behind the stories and the stories behind the statistics. I'm Rosemary Pennington. Stats and Stories is a production of the American Statistical Association in partnership with Miami University's Departments of Statistics and Media, Journalism and Film. Joining me is regular panelist John Bailer, emeritus professor of statistics at Miami University. The two hosts of Normal Curves are joining us on the show today. Regina Nuzzo is an award-winning science journalist and Gallaudet University professor who talks with audiences around the world about communicating stats creatively. She's written for such outlets as Nature, the New York Times, Scientific American, New Scientist, and ESPN The Magazine. She's also served as a guest panelist on Stats and Stories. Kristin Sainani is a Stanford professor and science journalist. She teaches the popular Coursera course Writing in the Sciences, available in 22 languages, and offers an online medical statistics certificate program through Stanford Online. She also wrote a beauty column for Allure. For 10 years, Nuzzo and Sainani have collaborated on a number of things, including a statistics column for a physical medicine and rehabilitation journal, and now on the Normal Curves podcast. Thank you both so much for joining us today. What made you guys decide to create a podcast?

Nuzzo
It was our students, actually, our students at Stanford. When Kristin and I first started collaborating, co-teaching this course, we decided that we wanted to have video lectures, but do them video-podcast style. So instead of just lecturing the students with some slides, we talked with each other, and we did this through case studies, through stories. So we were telling stories and teaching statistics at the same time, and the students loved it, didn't they?

Sainani
Yeah, one of them told us that they were watching it on Friday night instead of Netflix. We didn't know if we were just getting buttered up, but we took that as a great compliment.

Rosemary Pennington
I don't know that anyone has ever chosen to listen to us over Netflix on a Friday night, so kudos to you.

Bailer
I'm just now picturing, instead of Netflix and chill, Normal Curves and chill. I mean, what's the next step with this? Oh, that's brilliant. So how do you decide on some of the topics that you tackle, some of the case studies that you explore?

Sainani
A lot of them, to start with, are things that we had written about. So I wrote about health for Allure, and Regina actually wrote a column about the science of sex and relationships for the LA Times for a while. So for our first few episodes, we were drawing on things we'd written about before and wanted to revisit in more depth, because, you know, we have very little space in a journalism article, so here we really got to do deep dives. And some of the more recent ones are actually just things that we've seen in the news. So it's now tempting to say, oh, a paper just came out, so let's pick apart the ChatGPT study from MIT, for example.

Rosemary Pennington
So when I was in grad school, there was a researcher who used to host these things called sex salons. She was a sex researcher; she's actually been on our show a few times. But she would go out into the public and talk about sex research in a bar, like a dive bar in Bloomington, just to kind of socialize some of this information, make it more accessible. I remember talking to her about it one time, and how she was always sort of walking this tightrope around what was going to be okay to say, and how to frame things, and making sure she wasn't framing things in a way that felt overly salacious, even though this is already a kind of salacious topic. So I wonder, as you guys are preparing for your episodes, there was one on the average size of a male's penis. How do you talk through and prepare to tell these stories and tear apart these studies in a way that's informative and fun, but also doesn't, I guess, tip over into salaciousness?

Nuzzo
Yes, we want to push the envelope, but only so far. We don't want to rip the envelope. We consider this to be PG-13, and we keep in mind that that is our audience, and that is the level. So we can make some jokes here and there. But with the average penis size, we figured this is actually a scientific research topic, as we talked about. So this is not how-to advice. It's a

Rosemary Pennington
different podcast.

Nuzzo
Yes, different podcast.

Bailer
You know, can you just describe some of the components of the podcast? I mean, there are some features, and I don't want to steal your thunder in describing them. So can you talk about some of the unique elements that you've introduced into your show?

Sainani
Sure. We always try to have a claim, so that it's structured around an actual scientific claim, like: sugar is aging you if you eat too much of it. That's one of our claims. That gives us something to focus on, and then we look at that claim through the lens of research studies. Sometimes I've gone on, you know, 30,000 papers' worth of research studies; sometimes we try to focus on one paper, which is easier. And then at the end, that allows us to actually rate the evidence for the claim. Part of that is trying to convey that statistics is uncertain, that there's uncertainty. So we have our smooch rating scale, which we can talk about. And we also like to give methodological morals at the end: what's the takeaway from the statistician's point of view, kind of like an Aesop's fable?

Bailer
So I went through, and I want to hear more about the smooch scale. By the way, I did an analysis of all of your smooch scale ratings for the 12 episodes that have been released. In my descriptive analysis, my claim was that you would be equally likely to rate one higher than the other. But on these 12 episodes, Regina actually rated lower on eight of the 12. Wow. So I'm just wondering, is it a harder sell with Regina than it is with Kristin? But tell me a little bit about what the smooch scale is, and how you decide where you fall on it.

Nuzzo
The smooch scale is very non-scientific, and it reflects our own personal view of the evidence. We actually like it when we disagree, because I feel like that just shows that everyone has a different threshold for what evidence they consider to be convincing or strong or not. And we are also, I think, very transparent about our biases. We'll say, I really wanted this one to be true, so I'm going to give it an extra smooch because of that. And that's one of the things we want to bring to this podcast: showing that we are humans, and we have biases, and this is okay.

Bailer
The other thing I'd mention is that the largest difference was only two on the smooch scale, and that's a five-point scale. And by the way, there was only one time that you agreed, which I thought was interesting as well. I just thought it was interesting. So it's one to five, right? What does one mean?

Sainani
One is little or no evidence for the claim; five is very strong evidence. Occasionally I've needed a martini in the face, which means I'm off the scale in the negative direction. So, negative.

Bailer
I noticed that there was one where you said, gee, why don't we have less than zero? That was a pretty strong statement there.

Rosemary Pennington
So how do you guys decide who's going to tackle which topics? Has it been a free-for-all?

Nuzzo
It has been, it has been. Somehow it has fallen out that I've tended to take more of the relationship, the sex, the dating, kind of the weird, fun things, and Kristin has taken on very serious topics. If you have not listened to the two episodes on vitamin D, I strongly recommend them. She did a lot of investigative reporting and statistical sleuthing to really look at this claim that there is an epidemic of vitamin D deficiency, and the results really surprised me. So she's doing that, and I'm choosing one on penis length.

Sainani
Regina is more comfortable with some of the sex and dating topics, having written about them before. And I like to do the deep dives. So my challenge is to not pick a topic that's, you know, 80,000 papers' worth, that I then have to figure out how to weed through.

Rosemary Pennington
What drew you to the vitamin D story? As someone who actually has a vitamin D deficiency, I was like, oh, let me listen to these episodes. What drew you to them?

Sainani
So I've been following that topic for a while. I wrote about it for my health column back at Allure, and I was very skeptical even then, especially because that was when the Hawaiian surfer study came out, I think it was 2007 or 2008, which said, hey, Hawaiian surfers are all low in vitamin D, or half of them are low in vitamin D. That got my, you know, spidey-sense, skeptical brain going, and so I'd always wanted to do a deeper dive for some kind of journalistic endeavor and hadn't gotten around to it. I was skeptical to begin with, but I was really surprised at what we found with some of the papers related to Michael Holick and some of the, you know, just blatantly misleading statistics he's thrown around.

Bailer
Yeah. Another feature that you mentioned that's kind of unique to your show, and I've really liked these, is the methodological morals. You have a couple of gems that you introduce at the end of each episode. I'd like to hear your favorites. Is there a particular favorite that you would share, and why was it suggested by the episode?

Nuzzo
First of all, I wanted to talk about why we are doing methodological morals in the first place, and I think this really gets to the purpose of the podcast and its stories and statistics. Sometimes we joke that we are encouraging people to come for the sex, or come for the science, and stay for the statistics. And although we rate the strength of evidence for the science, we also want people to come away thinking about the statistics. It's not just about the science; at the end of the day, it's about the tools and how we're evaluating the evidence. So we wanted to bring it back to this moral, this takeaway, in some fun sense. That was methodological morals, and we have a lot of fun with those. Sometimes we come up with a whole list and have to winnow it down. Kristin, what would you say your favorite is? I love yours.

Sainani
Yeah, one of my favorites, and I think this one might have been one of yours, actually, Regina, was: statistical errors are like cockroaches; where there's one, there's many. That was from the vitamin D episode. You uncover one thing that's wrong in the paper, and it's some little thing where the numbers don't add up, and then you start to, well, I described it as unraveling a ball of yarn. The whole thing starts to unravel.

Nuzzo
I think that one just came up spontaneously, Kristin, when you and I were talking about vitamin D, and you said, oh, it may be pedantic to focus on just one statistical error. And I said, oh no, no, where there's one, there's many, just like cockroaches, right? There's never just one; you've got to stamp them all out. Yes, I have a number of them. Maybe one of my favorites is a recent one, from when we talked about the ChatGPT paper; it was a preprint, actually. Kristin and I were very curious about this, and we were surprised to discover that it was not as robust, let's say, as we had expected. That's an understatement. That is an understatement. And somewhere in the episode I made the analogy that a preprint is a little bit like a blind date, and you need to take it seriously; this is your first impression to the world. So I think my moral was something like: treat your preprint like a blind date. Show up showered, with your teeth brushed, you know, clothes ironed.

Rosemary Pennington
You're listening to Stats and Stories, and we're talking about the Normal Curves podcast with hosts Regina Nuzzo and Kristin Sainani. I have a question about the stats part. It feels like a lot of these studies are pulling on things that are going to make people interested, right? As someone who has a vitamin D deficiency, I was like, oh, let me listen to that. The red dress thing, red's my favorite color, so I was like, let me listen. How do you approach talking about the statistical part of these studies in a way that remains accessible to the audience that's tuning in?

Sainani
Yeah, that's a great question, because this is very deliberate. We're trying to draw people in with a little, you know, PG-13, or topics of interest to general audiences, and then we say, hey, but now we're going to take a little statistical detour. So we actually probably spend the most time thinking about those statistical detours: what are the statistical topics we're going to focus on? We're picking things out of the papers, or out of the general topic, to focus on, and then we're probably spending the most time figuring out how to deliver that content. Because if I go off on a little tangent that gets too much like a lecture, Regina has to rein me back in and say, okay, that's too many numbers, that's too much, let's pare it down a little. And we have listeners who don't have a statistics background, and they want to learn. So we're trying to make it accessible, for sure.

Bailer
Yeah, I really appreciate the effort that's also gone into the support materials on the website for the podcast. I mean, the listing of statistical topics, along with the methodological morals, and even technical appendices and perhaps data. You certainly celebrate open-source data and open analyses that have been shared, so I think that's been a real success. So one thing I started thinking about is, how might you recommend that a journalism class or a statistics class use your podcast, or integrate it as part of what's going on in their class?

Nuzzo
That's a great question. You have the answer, Kristin. Maybe you could tell us.

Sainani
We are thinking about that; that's, you know, why we're picking out those statistical topics and kind of highlighting the statistical detours. We haven't done it yet, but we have in our grand plans maybe clipping out some of the statistics detours so a classroom could use them. I'm certainly thinking about ways to incorporate that into my classroom. You could assign people to listen to the podcast; that's why we're keeping it PG-13, so that maybe high schoolers and undergraduates could listen as well. That way you're learning some statistics, learning some of the terms, what's important to think about, and how to think critically, but in a fun way. So yeah, we definitely envision this as teaching material.

Bailer
So one thing I thought about, looking at your list of topics: I could imagine a spreadsheet of episodes, with dimensions like producing data, describing data, drawing conclusions from data, or modeling, where you have the topics described. Because if I were sampling from the episodes you have, for use as supplemental material in a class, I'd say, oh, wait a minute, in the age gaps episode they're going to talk about effect sizes and linear regression and logistic regression and odds ratios. So if I'm teaching logistic regression, maybe this would be an episode I would encourage a class to listen to. So I think finding some meta-construct for being able to see all of the episodes in one glimpse, with the topics, would be pretty slick. I could see it being a pretty useful tool.

Rosemary Pennington
You're both seasoned science communicators and have been working in various media for a long time. What have you learned about science communication by doing this podcast?

Nuzzo
For me, all the science communication I did before this was in writing, except for one comic that I did. It had been writing, or giving lectures, where maybe I have the benefit of slides. And now, all of a sudden, we cannot see our audience. It's very linear; people need to listen. They're listening while they're doing other things, when they're going for a run, at the gym, or doing errands, and they cannot see what we're referring to. And we need to keep it light. We need to keep a narrative going the entire time. We need to bring a lot of metaphors and analogies for these abstract statistical concepts, which

Sainani
we've had a lot of fun with.

Bailer
You know, one thing I'm curious about: in some of the episodes I've listened to, there's been some serious critique of work that's been done. I mean, the example you gave of the ChatGPT study and some of the analysis of that, there was a lot of really good feedback that you derived as part of this critique. Have you heard back from any of the authors of work that you've critiqued on the show?

Sainani
Not yet. Sometimes authors are smart to stay quiet, because sometimes when you push back, it escalates. And we don't always call out authors' names or anything. The only specific person we've talked about, I think, is Michael Holick, because he played such a big role in the vitamin D story; the whole vitamin D epidemic narrative is largely due to him. So we're not trying to pick on anybody, and even when we're being critical, we do try to balance it with saying, you know, we don't know what his intentions were. We're trying to balance it out and give some positive feedback too.

Bailer
So, as you look into the future for episodes, can you give us the trailer, the preview of coming attractions that's kind of on your mind? What are some of the things you think you might talk about in the future, though they may ultimately be in the past once this releases?

Nuzzo
We have a long running list. Kristin will constantly text me papers she's reading, or headlines, and say, what about this one? What about this one? So we have a huge backlog of ideas. I know I have one coming up on: does fear make you horny? It's going to be a Halloween episode. Oh, perfect. There you go.

Sainani
Yeah, and we're going to do one on p-values at some point, too; I know that's a definite one coming up. We may also start one on silly headlines in the news and what the studies behind them are. So we have some ideas like that.

Rosemary Pennington
Yeah, I wonder if you've given much thought to the larger environment that your podcast, and I guess our podcasts, exist in, where it seems like people have become increasingly skeptical of expertise and of knowledge production. How do you guys tussle with that and think about it in relation to the work that you're doing, and what feels like the importance of it, which is, again, trying to help people understand how to think through scientific research and statistics?

Sainani
Yeah, I think that is actually at the forefront of our minds: giving people tools to not just accept something that's on, you know, I don't want to pick on anybody, but, like, the Huberman podcast, or what we call the bro podcasts, the broadcasts, but to actually know how to look at a paper and think skeptically. And in today's environment, with all the misinformation, we think it's important to be honest: yeah, there's some bad science out there, but here is how you differentiate bad science from good science. That's probably why, even on the smooch scale, we're deliberately trying to pick some good studies as well and call out what's good, not just criticize the ones that are bad.

Nuzzo
Skepticism is good, in the right amount, a healthy amount. It's not that all of science is bad. Science is self-correcting, and what we are doing is what should be done in science. A paper has gone through peer review, fine. It's put out in the world, and then we're just taking a deeper look at it, using our own statistical lens to evaluate it and decide how we would change our behavior based on it, or how strong we think it is. And this is okay. The fact that there is uncertainty, that it's not perfect, that people are putting out things that are not perfect, that's okay. Being deliberately sloppy, the statistical malpractice, not good. But again, it's that balance, and that is the attitude that Kristin and I bring. So we're hoping to model that attitude for listeners as well.

Bailer
Yeah, in a number of the episodes I listened to, I really liked the celebration of scientific thinking, the idea of applauding investigators who were willing to change their minds based on evidence to the contrary. I thought that was an important message that we have to keep telling. How do we help people embrace that? Can you give an example or two of episodes where that was featured?

Sainani
I think we talked about that in the red dress episode, the red dress effect. The researchers put out a study with, I think, 23 college students, and that got a lot of press, and they were willing later to update that, collect more evidence, and work with adversarial collaborators who were actually questioning their work. We called that a Cinderella story, because they changed all of their methods: they started doing sample size calculations, pre-registration, all the things we want people to do. So that was one episode. And I think the backfire effect episode is another, where the researchers had been very invested in this idea that if you correct misinformation when you're fact-checking, you could accidentally put the idea in people's minds; this is the backfire effect. Those researchers were very invested in that, but they also changed their minds as new research came out and people tried to replicate it and it didn't replicate. So I think those are two of our Cinderella stories.

Bailer
You know, since Rosemary asked the stats question, I'll ask a journalism question. One thing I thought was pretty interesting about the story about your brain and ChatGPT was the attention that was generated by a preprint. I'm wondering what advice you'd give journalists about taking early-draft research and pushing it to headline level.

Nuzzo
Normally, preprints do not get such attention. They are filed on arXiv and they languish there. I don't know if the research team here put out a press release, if they contacted reporters, if they publicized this preprint. I'm disappointed that more journalists did not bring a more skeptical view. They did talk to outside researchers, but it was a very dense paper, and if you give an outside researcher the preprint and say, okay, I'm going to interview you in two hours for your comment on this paper, it's very difficult to do it justice at that point. So I wish they had held on and delayed writing their stories. But in journalism, you can't afford to do that. There's a headline cycle, there's a news cycle, and you can't come in three days later and say, oh, well, here's my take on it. It doesn't work well.

Sainani
I wish the journalists had read the paper a little more carefully, because it was not hard to pick out the statistical errors we found. We're not talking about needing to be a statistician. Like the first bar graph: there were something like 54 people in the study, and they were giving the degrees people had, and there were, like, 64 high school degrees in the bar graph. So, basic. There was another bar graph where the first bar went up to 100; we didn't know if it was a percentage, because the y-axis wasn't labeled, but there was a bar going up to 100, which we assume was percent, and then a bunch of other bars that added up to more than 100 percent. So very, very easy to pick out. Sometimes journalists just need to open up the paper and really look a little bit to see that there were pretty obvious errors here.

Nuzzo
Yeah, this is something that Kristin and I keep in mind when we are doing the podcast, and also in our other education work: empowering journalists to feel comfortable opening up the paper and looking at it. I think it's very easy for journalists to sometimes say, well, I'm not a scientist, I'm not a statistician, how can I understand this? I'm just going to go straight to the experts. And what Kristin and I do a lot is walk through examples and show that it's not always the sophisticated things that you find flaws in. It's just bringing a skeptical eye, feeling empowered to find these things, and then asking the authors.

Sainani
I'll share a fun one from the vitamin D episode. There was a statistic in a paper that said up to 100% of healthy adults in Europe are vitamin D deficient. So I was like, where did that statistic come from? Well, there was a citation, so I looked it up, and guess where that 100% came from? It was a study of Italian centenarians, 96 to 104 years old. So we just got a laugh out of that, because, yes, they were all vitamin D deficient. They were also not able to walk. And they were quote-unquote healthy, but we think that means they were, like, still breathing, right?

Nuzzo
But also, how do you have an upper confidence bound of 100? Up to 100% of adults in Europe are vitamin D deficient? These are the sorts of things we would love journalists to look at. Kristin and I talk about our spidey sense saying, now wait a minute, that doesn't make any sense. Where did that come from?

Rosemary Pennington
Well, that's all the time we have for this episode of Stats and Stories. Regina and Kristin, thank you so much for joining us. Thanks a ton. Thanks for having us. Thanks. Stats and Stories is a partnership between the American Statistical Association and Miami University's Departments of Statistics and Media, Journalism and Film. You can follow us on Spotify, Apple Podcasts, or other places where you find podcasts. If you'd like to share your thoughts on the program, send your email to statstories@amstat.org or check us out at statsandstories.net, and be sure to listen for future editions of Stats and Stories, where we discuss the statistics behind the stories and the stories behind the statistics.