To P, or Not to P, That is the Question | Stats + Stories Episode 194 / by Stats Stories

Robert Matthews is a visiting professor in the Department of Mathematics at Aston University in Birmingham, UK. Since the late 1990s, as a science writer, he has been reporting on the role of null hypothesis significance testing (NHST) in undermining the reliability of research for several publications, including BBC Focus, and working as a consultant on both scientific and media issues for clients in the UK and abroad. His latest book, Chancing It: The Laws of Chance and How They Can Work for You, is available now.

His research interests include the development of Bayesian methods to assess the credibility of new research findings, especially "out of the blue" claims; a 20-year study of why research findings fade over time and its connection to what's now called "the replication crisis"; investigations of the maths and science behind coincidences and "urban myths" like Murphy's Law ("if something can go wrong, it will"); applications of decision theory to cast light on the reliability (or otherwise) of earthquake predictions and weather forecasts; and the first-ever derivation and experimental verification of a prediction from string theory.

Episode Description

For years now, the utility of the p-value in scientific and statistical research has been under scrutiny, the debate shaped by concerns about the seeming over-reliance on p-values to decide what's worth publishing or what's worth pursuing. In 2016 the American Statistical Association released a statement on p-values, meant to remind readers that "the p-value was never intended to be a substitute for scientific reasoning." The statement also laid out six principles for how to approach p-values thoughtfully. The impact of that statement is the focus of this episode of Stats and Stories with guest Robert Matthews.

Full Transcript

Rosemary Pennington
For years now the utility of the p-value in scientific and statistical research has been under scrutiny, the debate shaped by concerns about the seeming over-reliance on p-values to decide what's worth publishing or what's worth pursuing. In 2016, the American Statistical Association released a statement on p-values meant to remind readers that, quote, "the p-value was never intended to be a substitute for scientific reasoning," end quote. The statement also laid out six principles for how to approach p-values thoughtfully. The impact of that statement is the focus of this episode of Stats and Stories, where we explore the statistics behind the stories and the stories behind the statistics. I'm Rosemary Pennington. Stats and Stories is a production of Miami University's Departments of Statistics and Media, Journalism and Film, as well as the American Statistical Association. Joining me are our regular panelists, John Bailer, chair of Miami's Statistics Department, and Richard Campbell, Professor Emeritus of Media, Journalism and Film. Our guest today is Robert Matthews. Matthews is a visiting professor in the Department of Mathematics at Aston University in Birmingham, UK. Since the late 1990s, as a science writer, he's been reporting on the role of NHST in undermining the reliability of research for a number of publications, including BBC Focus, as well as working as a consultant on both scientific and media issues for clients in the UK and abroad. His latest book, Chancing It: The Laws of Chance and How They Can Work for You, is available now. Some of his research interests include the development of Bayesian methods to assess the credibility of new research findings, what's been called the replication crisis, and the math and science behind urban myths. Matthews also recently wrote a piece for Significance magazine looking back at the 2016 p-value statement. Robert, thank you so much for joining us today. Before we talk about your article and the ASA statement, I wonder if you could remind listeners of the context that helped produce that original statement, and maybe what some of the early responses were to it.

Robert Matthews
The thing that catalyzed that ASA report was the emergence of what's now been called the replication crisis. That is, a series of studies which attempted to replicate often very highly cited research studies in important areas of research, re-investigating them using as closely as possible the original setup. And the results basically failed the standard tests of statistical significance more often than they were expected to. The typical threshold for a statistically significant result is a p-value of one in 20, which, in a hand-waving way, people think means that only about one in 20 of these positive results is actually the result of a fluke. It doesn't actually mean that, but that's another issue we'll doubtless go into. And these replications were failing, like 30, 40, 50, 60 percent of the time in some fields of research. That sparked concern about whether we are using the right ways of analyzing data emerging from scientific studies. And the ASA board asked Ron Wasserstein, the executive director of the ASA, to set up a committee to look into the whole issue of p-values, with a view to putting out some recommendations. And that's what emerged in March 2016. The initial response amongst the statistics community was, well, tell us something we didn't know; this debate over the unreliability of p-values as a way of deciding whether a result should be taken seriously or not goes back decades, in fact to just a few years after p-values started to really catch on, which was in the 1920s and 30s. So there was nothing new there for the statistics community. It came as a bit of a shock to a lot in the scientific community, though, and it also raised a question amongst them, which was: okay, you've convinced us there's something wrong with p-values, or so you're telling us; we don't quite understand what it is, but anyway, tell us what we should be doing. And answer came there none from the raw content of the ASA's public announcement. But that was soon rectified by Ron Wasserstein and his colleagues, who set up a famous colloquium which pulled together all these different ways of going beyond p-values, which is where we are now.
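As a rough illustration of Matthews' aside that a one-in-20 threshold does not mean only one in 20 positive results is a fluke, here is a minimal Python sketch. The prevalence of true effects and the statistical power used below are illustrative assumptions, not figures from the episode.

```python
# Minimal simulation: how often are "significant" results false alarms?
import numpy as np

rng = np.random.default_rng(0)

n_studies = 100_000
prevalence = 0.10   # assumed fraction of tested hypotheses that are genuinely true
power = 0.50        # assumed chance a real effect yields p < 0.05
alpha = 0.05        # the conventional significance threshold

truly_real = rng.random(n_studies) < prevalence
detected = np.where(truly_real,
                    rng.random(n_studies) < power,   # real effects detected with probability = power
                    rng.random(n_studies) < alpha)   # null effects "detected" with probability = alpha

false_positives = detected & ~truly_real
print("Share of 'significant' results that are flukes:",
      round(false_positives.sum() / detected.sum(), 2))
# With these assumptions roughly 45-50% of significant findings are false alarms,
# far more than the naive one-in-twenty reading of the 0.05 threshold suggests.
```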

John Bailer
You know, I think that part of what's going on is that people are trying to understand how to think about this. Within the ASA discussions, there were discussions of the fact that these tests are more than just a test of hypothesis; embedded in this are assumptions about models, assumptions about the structure of data, all of which, when a test is done, might result in some outcome. So all of that is kind of being juggled when this is considered. I often find that there are a number of things that happen. One is, how quickly can you teach the nuance of doing something like this? If you have some scientists who have only one bite at the apple, taking one intro stat class, and they're going to be employing this for the rest of their careers, what kind of messages can you convey? One part of the suggestions that I would take away from thinking about this was the idea of effect size estimation. If there was one message that I would be communicating, it's don't stop at just doing a simple thumbs up, thumbs down on some hypothesis you've evaluated; say how big the effect is. If that were the routine expectation, that seems like it would be a tremendous step forward, and it's a relatively simple recommendation. What do you think about that, Robert?

Robert Matthews
I think it's a great idea. It's one that was taken up in medical journals quite...

John Bailer
Epidemiology journals; a lot of them have gone this route.

Robert Matthews
Yeah, and it manifests itself in confidence intervals, which are still open to misinterpretation (it's a more subtle misinterpretation they're open to), but they do more than p-values do. P-values throw away so much information that's lurking in the data. You don't get any idea about effect size; the whole thing is subsumed into this single number, and if it's less than a certain value then there's something going on, and if it's greater than a certain value then there isn't, which is such a waste. So confidence intervals do more: they basically tell you what's called the point estimate, which is the most likely value of the thing you're interested in from the data you've collected, plus a measure of the uncertainty. And you can tell so much more than that. I've developed some techniques that allow these confidence intervals to be unpacked to tell you all sorts of things about the credibility of the results that you've found in the light of what we already know. So undoubtedly, just making that small extra step from p-values to confidence intervals is a huge improvement.
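To make the contrast concrete, here is a small sketch, with made-up numbers, of the kind of reporting Matthews prefers: a point estimate of the effect plus a 95% confidence interval, rather than a bare pass/fail verdict on a p-value.

```python
# Point estimate plus 95% confidence interval for a difference in proportions.
import math

# Hypothetical two-arm trial: events / patients in each arm (illustrative numbers)
events_a, n_a = 42, 200   # treatment arm
events_b, n_b = 60, 200   # control arm

p_a, p_b = events_a / n_a, events_b / n_b
diff = p_a - p_b                                   # point estimate of the risk difference
se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
lo, hi = diff - 1.96 * se, diff + 1.96 * se        # Wald 95% confidence interval

print(f"Risk difference: {diff:.3f}  (95% CI {lo:.3f} to {hi:.3f})")
# The interval conveys both the likely size of the effect and the uncertainty,
# information a thumbs-up/thumbs-down p-value throws away.
```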

Richard Campbell
Robert, there are a lot of suggestions in the Significance article about the phrase "statistically significant." I'll see really good newspapers, like the New York Times, use that phrase in their reporting, and we've talked about how statistically significant doesn't necessarily mean proof. What I'm interested in is, how should journalists report on this? If they are going to abandon the term statistically significant, how should they be communicating some of these ideas to the general public?

Robert Matthews
Yeah, it's a difficult question, because a lot of journalists have problems getting their news editor to accept an interpretation of some finding that goes beyond what's said in a prestigious journal, like the Journal of the American Medical Association or the New England Journal of Medicine or something like that. And it's sort of playing with fire for a lot of journalists to say, well, it was statistically non-significant, but the effect size was clearly showing a benefit here. Statisticians have a little problem with that, because they know that, basically, the problem is with the design: the trial didn't have enough patients to give a nice clear-cut result that stood above the noise. I think one of the most important things that journalists can do is basically to say that it is statistically non-significant, but nevertheless the research has found some evidence for a genuine effect going on, a genuine beneficial effect or a genuine harm. They can say that because there is some evidence, and that's the thing that so often gets thrown away in using p-values and things like that. If it's non-significant, they say there is nothing; if there's statistical significance in the result, they say we've demonstrated this drug works. And neither of those statements is true.

John Bailer
You know, I like what you were talking about, particularly in your Significance piece, when you were reviewing and revisiting this topic: the idea that there needs to be a way, perhaps, of inoculating researchers against the most pernicious effects of NHST. I like the way you've described this and how it plays out. So can you talk about what kinds of things might be done to help with this inoculation, as you describe it?

Robert Matthews
Yeah, well, as I say, accepting that very often with clinical trials, for example, the big problem with all clinical trials is that you have to have a guess at how big the effect is likely to be, to be able to know how many patients to recruit. In actual practice, it all goes the other way: you start with the budget, work out how many people you can get into the trial for that, and then assume an effect size, which is usually far too big, much bigger than you would expect to see, because basically you haven't got that many patients to play with. So the whole thing is sort of backwards. But in terms of inoculating people, it's this thing of just going beyond pass-fail, the true-false dichotomy. It is so tempting for people to want that degree of black and white, and they need to get used to the idea that very few studies are ever definitive; the best that most studies can do is to contribute some evidence in the direction of some conclusion or another. It's only when we either persuade somebody to carry out a huge randomized controlled trial, which we've seen examples of during this COVID pandemic to test whether certain drugs work, or we pull all these little findings together in a meta-analysis to pool all the signals and hope they rise above the noise, that we're likely to know for sure. We need to persuade journal editors to stop treating the scientific research that they publish like a newspaper article, where we show what happened to some celebrity or whatever. It's rarely like that in science. The best we can do is just to accumulate data and accumulate evidence one way or the other; so, to wean people off this true-false attitude of "yeah, we've got something here, and it's just the one study, so that's that mystery cleared up." Just stop them behaving like journalists, which is what so many of the big journals basically do.
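A hedged sketch of the sample-size arithmetic behind Matthews' point: the number of patients a two-arm trial needs depends sharply on the effect size you assume, which is why budget-first planning tempts researchers into assuming implausibly large effects. The event rates, power, and significance level below are illustrative assumptions.

```python
# Approximate patients per arm for comparing two proportions (normal approximation).
import math

def n_per_arm(p_control, p_treatment):
    """Patients per arm for a two-sided 5% test with 80% power."""
    z_alpha, z_beta = 1.96, 0.84
    p_bar = (p_control + p_treatment) / 2
    effect = abs(p_control - p_treatment)
    return math.ceil(
        ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar)) +
          z_beta * math.sqrt(p_control * (1 - p_control) +
                             p_treatment * (1 - p_treatment))) / effect) ** 2
    )

# Assuming a 30% event rate in the control arm and various assumed benefits:
for assumed_benefit in (0.15, 0.10, 0.05):     # absolute risk reductions (assumptions)
    print(assumed_benefit, "->", n_per_arm(0.30, 0.30 - assumed_benefit), "patients per arm")
# Halving the assumed effect roughly quadruples the required sample size, so a fixed
# budget pushes planners toward assuming effects much bigger than they expect to see.
```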

Rosemary Pennington
You're listening to Stats and Stories, and today we're talking to Aston University visiting professor Robert Matthews. Robert, as we're having this conversation, I was flashing back to grad school. One of my research methodology classes was taught by a sort of fairly big-name figure in my field, who's a quantitative scholar, and I remember him talking us through this issue of p-values and how, you know, point-oh-five is generally what you would look for, but that doesn't mean you should throw the data away if it's not that, right? But then I think about the reality of being a young scholar who is working in this environment, where that seems to be the threshold: if you're submitting to a journal, and sometimes even conferences, and it's not that number, it's hard to get anyone to really pay attention to your research. So I wonder if you have advice for young researchers who are in this environment now, where p-values are more visibly under discussion than they've been, but there's still this edifice of scientific practice where the p-value has been seen as this gatekeeping device, right? Where if it's point-oh-five we let you in, and if it's not, we're going to knock you out and not even review you. I wonder what advice you have for young scholars trying to navigate all of this.

Robert Matthews
Yeah, I would say learn some Bayesian methods, which allow you to go beyond this dichotomy and also to set any finding you make in the context of what's already known, so as to add value to the results that you're finding. If a particular journal can't cope with that (and increasingly they will; Bayesian methods aren't quite the pariah they used to be even 20 years ago), then go to some other journal that will accept it. Bayesian methods allow you to extract much more insight out of a given set of data and, as I say, to set it in context, so nobody can complain that you're hiding something or not adding value by using these methods. So I would advise everybody who wants to move on beyond p-values to do some Bayes. When I started, it was actually quite hard to find a relatively simple textbook on Bayesian methods, but now they really are starting to emerge, especially in medical statistics. So I'd recommend that.

Richard Campbell
Robert, to follow up on Rosemary's question: how much of an obstacle are the prestigious journals that require this? I mean, there's a whole system set up here. Would the change, if it comes, have to be driven by the journals themselves and their editorial boards, or is the change going to come from statisticians, from the bottom up, or from scientists? How's that going to work?

Robert Matthews
Yeah, I think the big journals have a real issue here, because in that Significance article that you mentioned I cite the example, from 2019, of a randomized controlled trial of an approach to treating patients with sepsis, which is a condition that has become all too familiar during the COVID pandemic; it's basically where the body's immune system overreacts, and you need to step in very quickly. So a randomized controlled trial was set up to investigate two ways of judging whether a patient needed a certain type of treatment to combat sepsis. This is really important stuff. And they found that there was an eight to nine percent benefit from using one particular indicator to guide treatment rather than the other. The problem was the p-value, which just slipped over the nought-point-nought-five threshold. So they thought: this is an important issue, and clearly there's evidence this method works; the method is widely applicable, even in countries with relatively modest health care facilities and things like that; this is important, so let's submit it. They submitted it to the Journal of the American Medical Association, and it got bounced, or was going to be bounced, because although they showed a fairly meaningful nine percent improvement, which when you're dealing with large numbers of patients is worth having, they were told the p-value had failed the test. They said, well, we still think there's something in it, and they were told: look, you can either say you didn't find anything, or you can clear off and give it to another journal. So what do you think they did? They published their findings, and you can see them in the paper; it's called the ANDROMEDA-SHOCK trial, published in JAMA in 2019. And they said the results point to there not being a difference here between the two approaches. Yet you can see there's this eight to nine percent improvement, and if you sketch it out on a graph you don't need p-values or anything to see that there's actually a benefit here. And yet it went down in JAMA as being a fail. Then other researchers took it up and used Bayesian methods to analyze these findings, and they showed that it's around 90 percent likely that there is a benefit from using this approach, this life-saving approach. So yeah, and I could cite you loads of other examples where stuff has been just fundamentally misinterpreted by the referees of these journals. I don't know where they get them from.
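For readers who want to see what "used Bayesian methods to analyze these findings" can look like in practice, here is a hedged sketch of a simple reanalysis of a result that just misses significance. The numbers are illustrative stand-ins, not the actual ANDROMEDA-SHOCK data, and the weakly informative prior is an assumption.

```python
# Normal-normal Bayesian update of an effect estimate whose 95% CI just crosses zero.
from math import sqrt
from statistics import NormalDist

# Illustrative trial summary on the risk-difference scale (made-up numbers):
estimate = -0.085          # observed absolute risk reduction of 8.5 percentage points
se = 0.045                 # standard error chosen so the 95% CI just touches zero

# Weakly informative prior centred on "no effect" (an assumption):
prior_mean, prior_sd = 0.0, 0.15

# Conjugate normal-normal update
post_prec = 1 / prior_sd**2 + 1 / se**2
post_var = 1 / post_prec
post_mean = post_var * (prior_mean / prior_sd**2 + estimate / se**2)

prob_benefit = NormalDist(post_mean, sqrt(post_var)).cdf(0.0)   # P(risk difference < 0)
print(f"Posterior probability of benefit: {prob_benefit:.2f}")
# With these assumptions the probability of some benefit comes out well above 0.9,
# even though the frequentist result would be filed as "not significant".
```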

John Bailer
Well, they get them from their scientific community. In some sense, what they're doing is replicating their own very basic, primitive understanding of the data and the analyses they do. I think you raise some really interesting points, and I want to follow up on a couple of things. One is Rosemary's comment about gatekeeping. The thing that's beautiful about a simple number for gatekeeping is that it's a simple number. If you're looking for just a simple checkmark, pass go and collect $200 or do not pass go, do not publish, it's a simple rule that people understand. But in some ways that understanding is a fundamental misunderstanding of science. Your comment that no single study stands alone resonates; the scientific method is based on the idea of a series of studies providing context for understanding whether or not these hypotheses are meaningful. And, by the way, I'm not an apologist for any particular perspective on statistics, nor for hypothesis testing; I'm a big effect size guy, in the sense of being a proponent of summarizing study results. But maybe this is a fundamental problem of not knowing what science is. If you're so wedded to the idea that a single study with a single number that passes some single arbitrary benchmark is definitive, then you don't know what this means in terms of the scientific enterprise.

Robert Matthews
Yeah. You know, when I came into science journalism in the early 80s, there seemed to be a bit more of a nuanced view. Then sometime around the late 80s, early 90s, something started to happen, and it became clear that a lot of results were being sliced and diced and just thrown out there by people who weren't that bothered about whether they'd actually done a good bit of science; whether they'd got a publishable unit was the metric of merit. And we've just seen this get worse and worse. It's led to all sorts of malfeasance, like p-hacking by people who have failed to get a statistically significant p-value, and so they run around in the data until they find something, anything. And then it's given to a referee who doesn't understand anything and just looks for the p-value, and, like some chimpanzee that's been taught using reward methods, just sees: oh yeah, okay, that p-value hasn't got below point-nought-five, okay, reject. What is this? I mean, this is just a parody of science.

John Bailer
Well, you know, that description... I'm glad you mentioned p-hacking, because I think part of what's happened is that it's a lot easier to test lots of hypotheses, given some of the tools that we have access to now, and it makes it easier to torture your data till it confesses. And it's going to confess to some crime, whether or not it's a crime it committed. So all of a sudden there's this sense of the need to think about registries for hypotheses, of specifying in advance what you're doing, and, I guess, David Spiegelhalter weighed in on one of your pieces and talked about whether a study was exploratory or confirmatory. Do you think these are some important dimensions of thinking about this?
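A small simulation makes the point about testing lots of hypotheses concrete: run enough tests on pure noise and something will "confess." The number of outcomes and the group sizes below are arbitrary choices for illustration.

```python
# Probability of at least one p < 0.05 when testing many unrelated null hypotheses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_experiments, n_outcomes, n_per_group = 10_000, 20, 30

hits = 0
for _ in range(n_experiments):
    # Two groups that genuinely differ in nothing, measured on 20 outcomes
    a = rng.normal(size=(n_outcomes, n_per_group))
    b = rng.normal(size=(n_outcomes, n_per_group))
    pvals = stats.ttest_ind(a, b, axis=1).pvalue
    hits += (pvals < 0.05).any()

print("Chance of at least one 'significant' finding from pure noise:",
      round(hits / n_experiments, 2))
# Around 64% (1 - 0.95**20), which is why pre-registering hypotheses and labelling
# analyses as exploratory or confirmatory matters.
```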

Robert Matthews
Yeah, absolutely. The thing that is so frustrating is what the public make of it when they see these results. I mean, this is what put me onto this whole issue in the first place as a science journalist working for national newspapers, first The Times and then The Sunday Telegraph in the UK. I started to realize that all these stories about, you know, coffee increasing your risk of heart disease, oh no it doesn't, actually it's fine, and the breast cancer scares and things like that, and the pill, and the way that the research was yo-yoing this way and that... you can sense from your readers' letters and stuff like that: another load of nonsense from the science correspondent. And I was just reporting what was in the journals. I started to think there's something screwy going on here, because my understanding of science is that it converges towards reality by successive approximation. When it yo-yos around, that's a sign of noise; it just doesn't converge. And it's that that actually led me to get interested in statistics and get involved in trying to find ways of improving the situation. It's bringing the whole of science into disrepute, because the public think, you know, just wait a couple of weeks and they'll be telling us that eating bacon is good for us, it's all nonsense. And of course it isn't all nonsense. It's just that the stuff that gets in the journals is often nonsense. It's very frustrating.

Richard Campbell
John, you can jump in here too. I'm going to pretend to be a general member of the public here. You've offered Bayesian analysis as one sort of solution. So, first of all, why isn't it more accepted? I don't understand it, first of all, but I also don't understand why, if this is a solution, it isn't more accepted.

Robert Matthews
Right, okay. So the great thing about the way that statistics is typically done with p-values is that it gives the illusion of objectivity. In other words, you've just got the data, you feed it into some mathematical formula, and then if you get a value that's less than point-nought-five, you've got a real result; if it's greater than that, then it's a load of baloney fit only for the bin. It's an illusion. It was created by Ronald Fisher, who was undoubtedly a genius and one of the founders of modern statistical methods. But he had a problem with the alternative, which he knew about and dispensed with in a single paragraph in his famous textbook, and that alternative was Bayesian methods, which are much older; they date back to the 18th century. The thing about Bayes is that it's a rule for saying: well, you've carried out a study and it's given you some data, and that data should change whatever your current belief in the hypothesis you're testing is, by this amount. It should either increase it or decrease it, and sometimes it will pretty much leave it alone. But whatever you used to believe up to now, you can change that, and here's the rule for doing it. The problem is that you have to start with some prior belief. And this led to the myth that, well, this means that somebody who believes in ESP can say: I'm already totally convinced of the existence of telepathy or clairvoyance, so I only need to test one person to see if they can correctly guess the color of these playing cards, and then that's it, I've proved it, because my prior belief was so strong in the first place that it didn't take much extra evidence to take me over the edge to, like, 99 percent conviction that I've got something. And then a skeptic comes along and does the same thing and says: well, I don't believe in ESP, so I'm starting from a prior probability of, you know, ten to the minus ten that there's anything in this; I've just carried out a massive trial, and it's increased the evidence by a factor of ten to the four, but I'm still only at ten to the minus six, so it's still baloney as far as I'm concerned. So Bayes gives the impression that it might turn science into anarchy, that anybody can reach any result they like. And that's just nonsense, because there are so many areas of research now where we have a pretty good feel for what's reasonable and what isn't, and so we can have reasonable insights into prior levels of belief, into what is plausible and what isn't, before we collect the data. And then, if we have a nice big study, frankly what your prior beliefs were becomes pretty much immaterial, because they are completely blown apart by the strength of evidence you've got from this really impressive study that you've done. An approach that I've been working on, another way of solving this problem of "isn't Bayes just anarchy, allowing you to draw any conclusion you like," is to actually reverse Bayes and to say: okay, we've carried out this study and it's given us this level of evidence; what level of belief would someone already have to hold, what level of skepticism, to say, well, this result might be statistically significant, but you know what, I still don't believe it? And if the evidence is quite weak, you don't need much skepticism to knock it on the head.
Or if the study is very strong, if it's based on lots and lots of data, then you are going to have to be ludicrously, almost irrationally, skeptical to knock it on the head. So that's another way of using Bayes: it's a way of using Bayes in reverse, where you ask yourself, what would I have to believe in order to knock this on the head? But it's been really difficult for Bayes to shed this idea, which Fisher did nothing to alleviate, that if you let Bayes in the door it's just going to be anarchy and people are just going to be making stuff up. It's not true.
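The odds form of Bayes' rule captures both directions Matthews describes: the ESP parable of forward updating, and the reverse-Bayes question of how skeptical you would have to be to stay unconvinced. The sketch below is generic prior-odds-times-Bayes-factor bookkeeping, not the exact formulas from Matthews' published analysis-of-credibility work; the factor of ten to the four is taken from his example, everything else is illustrative.

```python
# Forward Bayes and reverse Bayes in odds form.

def posterior_prob(prior_prob: float, bayes_factor: float) -> float:
    """Forward Bayes: update a prior probability with a Bayes factor (evidence strength)."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * bayes_factor
    return post_odds / (1 + post_odds)

def skeptics_prior(bayes_factor: float, max_posterior: float = 0.5) -> float:
    """Reverse Bayes: the largest prior probability under which, even after seeing
    this evidence, the claim still ends up more likely false than true."""
    post_odds = max_posterior / (1 - max_posterior)
    prior_odds = post_odds / bayes_factor
    return prior_odds / (1 + prior_odds)

# The ESP parable: a true believer vs a hard-nosed skeptic, same evidence
bf = 1e4                                   # evidence multiplies the odds 10,000-fold
print(posterior_prob(0.99, bf))            # believer: still essentially certain
print(posterior_prob(1e-10, bf))           # skeptic: about 1e-6, still baloney to them

# Reverse Bayes: how skeptical would you have to be to stay unconvinced by this evidence?
print(skeptics_prior(bf))                  # any prior below ~1e-4 keeps the posterior under 50%
```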

Unknown Speaker
I have a question, and John, I'd love to hear your thoughts on this too. But I wonder: how much has the publish-or-perish environment of scientific research perhaps fed this point-oh-five situation?

Robert Matthews
Completely.

John Bailer
Yeah, what he said. I think that's it; it's your pass-go, it's your gotcha.

Robert Matthews
Yeah. When, you know, your tenure, your next grant proposal, rests on what the kitchen scales say when you stick all your papers on them, then yeah, of course it matters to get stuff published, and journals say, oh well, because we're pushed for space we're only interested in positive results; we're not interested in knocking another hypothesis on the head. Any number of the hypotheses out there are baloney, we already know that; what we need to know is the stuff that stands up. That's the problem you're faced with. The best and most encouraging statistical development that can deal with this, I think, is the idea of registered reports, where journals say: you know what, we'll take your research results, whether they're statistically significant or not, and we will publish them, but you have first got to convince us that the way you are proposing to investigate this hypothesis is rock solid. And that puts the emphasis on whether something gets published or not exactly where it needs to be, which is: how credible is this approach to this question? So all the referees' attention is focused on that, and if it passes muster and then it still comes out statistically non-significant, or significant, well, that's a contribution to the evidential base that we can rely on, because we know it passes muster as a design of experiment. I think it's a great development.

John Bailer
Yeah, I'm going to weigh in on one comment that came up earlier, and then I have a question. One is that, in terms of Bayesian methods, I think that while that might have been a holy war when I was finishing up my graduate study, now it's viewed as just another tool in the toolbox of practicing statisticians that you should be considering. Whether it has much penetration into introductory classes, though, probably not. I mean, the difference that you see with statistics is that a scientist says, oh, if I'm going to be a biologist I need to have four semesters of chemistry to even start thinking about it, but you only need one semester of statistics to start thinking about doing this. So really, if you're going to set a higher bar for this kind of competence and nuance, then you've got to set a higher bar in terms of what you'd expect for preparation. So that's kind of weighing in on that part of it. The second thing is, Robert, in terms of the inoculation, when you've been discussing this, I have a challenge here: every one of these journals needs to do a multiple case study collection, where they have two articles that they analyze by three different methods, like you were doing with that one trial. In essence, it has to be kind of this compare and contrast, first-year composition style. There has to be an opportunity to say: okay, here's the kind of data that we often see in our work; let's approach this from a couple of different strategies, and let's compare the results. And if you have dramatically different answers depending upon the method, then you might worry some, or maybe not, maybe it's clear what should be done. But part of it is there has to be this demonstrated practice where it matters to a scientist, or where it matters to that community. So I think that what you are starting and discussing with Significance is a take-off point, where you could say: okay, cognitive psychology, social psychology, sociology, gerontology, biology, whatever, you pick the discipline; media, journalism and film, I'll talk to my friends in Media Studies, if you're doing quantitative research in Media Studies. Having a series of comparisons where the same data set is approached from multiple strategies, with the insights that are gleaned, might be one of the ways to help with this customized inoculation.

Robert Matthews
Yeah, there could well be something in that. Although, if we do struggle to see the benefit of assessing whether a result is "statistically significant" or not, then we could drop that. What we could do instead is make p-values work harder, which is something that a number of statisticians, notably Sander Greenland, have been working on: you work out p-values according to a number of different hypotheses, so it's not just a threshold; there's a whole range of p-values that allow you to assess the compatibility of your finding with a whole range of hypotheses. Which I think is a fascinating idea, and you don't have to know anything more than some pretty basic stats to be able to do that calculation. It's so much more informative. The amount of information that has been wasted over the decades by this insistence on p-value thresholds, it just defies belief.
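Here is a brief sketch of the "make p-values work harder" idea attributed to Sander Greenland above: instead of one p-value against "no effect," compute p-values against a whole range of hypothesized effect sizes (a p-value, or compatibility, curve). The estimate and standard error below are illustrative.

```python
# A p-value (compatibility) curve across a range of hypothesized effect sizes.
from statistics import NormalDist

estimate, se = -0.085, 0.045        # illustrative effect estimate and standard error
z = NormalDist()                    # standard normal

def p_value_against(hypothesized_effect: float) -> float:
    """Two-sided p-value testing the data against a given hypothesized effect size."""
    return 2 * (1 - z.cdf(abs(estimate - hypothesized_effect) / se))

for h in (-0.15, -0.10, -0.085, -0.05, 0.0, 0.05):
    print(f"effect = {h:+.3f}  ->  p = {p_value_against(h):.2f}")
# The curve peaks at the point estimate and shows which effect sizes are highly
# compatible with the data and which are not, rather than a single pass/fail verdict.
```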

Rosemary Pennington
That's all the time we have for this episode of Stats and Stories. Robert, thank you so much for being here today. Stats and Stories is a partnership between Miami University's Departments of Statistics, and Media, Journalism and Film, and the American Statistical Association. You can follow us on Twitter, Apple Podcasts, or other places you can find podcasts. If you'd like to share your thoughts on the program, send your email to statsandstories@miamioh.edu or check us out at statsandstories.net, and be sure to listen for future editions of Stats and Stories, where we discuss the statistics behind the stories and the stories behind the statistics.