Monitoring Health Data | Stats + Stories Episode 164


Glen Wright Colopy completed his DPhil at the Oxford Institute of Biomedical Engineering in 2018. His primary research interests are in probabilistic modeling, time series analysis, and stochastic optimization. He has been doing research in healthcare since 2011, and his primary machine learning goal is to provide presentations that people can enjoy and learn from. His most recent public project is the "Philosophy of Data Science" series, which investigates the role of scientific reasoning in practical data science.

Episode Description

When an individual is admitted to a hospital they are quite often hooked up to a panoply of monitoring devices, all designed to help the doctors and nurses caring for them meet their medical needs. Increasingly, hospitals are exploring how machine learning can help them better monitor patient vital signs, and that's the focus of this episode of Stats and Stories with guest Glen Wright Colopy.

+Full Transcript

Rosemary Pennington: When an individual is admitted to a hospital they are quite often hooked up to a panoply of monitoring devices, all designed to help the doctors and nurses caring for them meet their medical needs. Increasingly, hospitals are exploring how machine learning can help them better monitor patient vital signs, and that's the focus of this episode of Stats and Stories, where we explore the statistics behind the stories and the stories behind the statistics. I'm Rosemary Pennington. Stats and Stories is a production of Miami University's Departments of Statistics and Media, Journalism and Film, as well as the American Statistical Association. Joining me are regular panelists John Bailer, Chair of Miami's Statistics Department, and Richard Campbell, former Chair of Media, Journalism and Film. Our guest today is Glen Wright Colopy. Colopy is a machine learning scientist who's worked in the healthcare, biomedical, and pharmaceutical industries since 2010. His primary research interests are in probabilistic modeling, time series analysis and optimization. Colopy has been doing research in healthcare since 2011, and his primary machine learning goal is to provide presentations that people can enjoy and learn from. Glen, thank you so much for joining us today.

Glen Wright Colopy: Well thank you for having me, that was very impressive Rosemary I think I’m going to have to steal you for my own podcast.

John Bailer: Hey no way man, keep your hands off.

Colopy: Who knows how long John has left you know?

[Laughter]

Pennington: This is starting well already. So Glen, before we get started talking about the patient monitoring stuff: when people talk about machine learning, I don't know if there's always a great grasp of what that is exactly, especially when it gets talked about in relation to, you know, AI and these other things. Can you take a moment to explain how you define machine learning in your work, so we have a grounded understanding?

Colopy: Yeah, definitely. And I think you've hit on something that's important, because one thing I've noticed is that a lot of statisticians and data scientists feel very uncomfortable with these terms- they're at home with science, statistics, and things like that, but when you mention things like machine learning, they all of a sudden start to wonder: do these topics apply to me? And I believe the answer is yes. Machine learning to me means a few things. One, it means that many of the tasks that statisticians or data scientists or general analysts perform now have components that need to be automated. So essentially a machine is in charge of one or more components of the data analysis task, and in the simplest case this could be just processing the data stream and outputting some summary statistic. In my case what it typically means is that the machine is responsible for taking the data, automatically cleaning it, specifying the model from a selection of models, and of course parameterizing and performing inference on those models, and then the final step, which is, if that model says something interesting, letting the human being know. So for me, in the machine learning bit, the machine has to do some of the learning for you, and that typically means model selection, inference, and then letting people know what it knows.

Bailer: So Glen, one of the things I really enjoyed when I attended a talk you gave earlier this year was just thinking about patient monitoring in a different way. So many times you'd look over time- whether it's growth curves for kids, where you ask whether you go outside particular bounds based on the population at various age points, or whether your FEV is less than some percentage of normal- so there are historical patterns by which we monitor these vital characteristics of patients. Can you give us a little bit of the foundation of what was done in terms of patient monitoring, and some of the ways you've been thinking about tools to customize and personalize it?

Colopy: Yeah, so what was done- and I think this is also what is done currently- is that a doctor or a nurse is trying to identify whether or not a patient is deteriorating. Patients in hospitals are typically there because they are not healthy to begin with, but whether or not they are acutely deteriorating is another issue. So already we're having to say that the segment of the population we're looking at is not the normal healthy population, but not all of those unhealthy people die or have a cardiac arrest, and so the question is who's going to have those more extreme adverse events, and are they going to have them tomorrow or in the next hour? The easiest way to do that is obviously to examine the physiology. We know from medical dramas the bedside monitors that show you things like heart rate, respiratory rate, blood oxygen saturation, and blood pressure, and clinicians look at those as an individual snapshot in time and decide: is this patient abnormal? And the reference is typically, well, what do other patients on the ward look like? So the simplest way would be- we'll keep it super simple.
Just imagine a nice big old bell curve over heart rate, and it asks: does this patient look like they're in the tails, and are they therefore more acutely abnormal, or are they more towards the center, where at least this metric is not something from which we'd infer deterioration? And that's obviously a very sensible heuristic- if you did not have computational resources at your disposal, that is a very good heuristic. And just as a quick note, one thing nurses in particular are good at is that they front-run this intellectual process, in the sense that they start noticing, for example, what patients on a specific ward look like; a machine won't do that by default, but the nurses are quite effective at it. So I think that describes pretty well what is currently done. Now, the reference population might change a little bit from one clinical setting to another, but effectively we have a single reference population, and that's how we decide whether or not a patient is abnormal. Now the question is: can we do better than that? And there are quite a few things. For example, looking at an individualized range. One of the first things I'd ask someone popping into this field to do is break up all your data by patient, look at each patient's individual range, sort the patients by their medians, and you'll see this really nice sigmoidal distribution across different patients. The issue there is that patients occupy different ranges- the population-based range is nothing like an individual patient-based range- and from there we can go into more sophisticated things, but I think that's the first motivation.
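To make that contrast concrete, here is a minimal Python sketch on purely synthetic data (every number below is hypothetical, not from any study Colopy describes): a single pooled "bell curve" range over all patients versus each patient's own range.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: 200 patients, each with their own heart-rate
# baseline, observed 100 times. Baselines vary far more across patients
# than any one patient varies around their own baseline.
baselines = rng.normal(75, 8, size=200)
obs = baselines[:, None] + rng.normal(0, 3, size=(200, 100))

# Population-based rule: flag anything outside the pooled central 95%.
lo, hi = np.percentile(obs, [2.5, 97.5])
print(f"pooled population range: [{lo:.1f}, {hi:.1f}] bpm")

# Per-patient ranges are much tighter than the pooled range.
pt_lo = np.percentile(obs, 2.5, axis=1)
pt_hi = np.percentile(obs, 97.5, axis=1)
print(f"median per-patient width: {np.median(pt_hi - pt_lo):.1f} bpm, "
      f"pooled width: {hi - lo:.1f} bpm")

# Sorting patients by their medians traces out the spread of baselines
# (the sigmoidal curve Colopy mentions, when plotted as a CDF).
medians = np.sort(np.median(obs, axis=1))
print(f"patient medians span {medians[0]:.1f} to {medians[-1]:.1f} bpm")
```

A reading can sit comfortably inside the pooled range while being far outside a particular patient's own range, which is exactly the motivation for individualized monitoring.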

Richard Campbell: Glen, in your work you probably have to talk to doctors and nurses, so how hard is it to explain what you’re doing to medical people who don’t understand machine learning? And is that part of what you do?

Colopy: Uh, yeah- it's quite easy to explain if you don't, like, attack them with equations. I think that's the ground rule.

Colopy: Everyone needs one equation, but I think it does help that, as John was saying, I invest very heavily in the visual aspects when I present. And that is pretty much a direct product of the fact that they won't follow your math, because the math, frankly, is irrelevant to the output they consume, but they have really good intuition when you show them your machine learning in action. So when you show them a Gaussian process- they're not going to understand what the Gaussian process is, but they can certainly see that you fit a time series with a mean and a standard deviation band around the stretch you're forecasting. They are experts in their domain, and that expertise gives them very good intuition; you just need to meet them halfway in using it. I've found that even when they don't understand a model per se, they can very much understand why you wanted to create that model. For example: we want this to be flexible, because I don't think a patient's time series is a linear function- yes, they can understand that intuitively. And they can also understand when your machine learning model flags something, for example a rapid drop or rapid escalation. They don't need equations, but they can intuitively understand why the machine learning would find that, and that's somewhere you can certainly meet them, on the application side. One other caveat: as many of us know, statistical education is part of medical training. However, it is typically in the area of, you know, traditional regression analysis and Cox proportional hazards models. So they are statistically trained, and some of the best clinicians I've worked with are effectively Jedi when it comes to really understanding and applying specific statistical models to problems they're familiar with; it's only getting them to reach that next level that is really needed.
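As a rough illustration of the picture Colopy describes showing clinicians, here is a short sketch using scikit-learn's Gaussian process regressor on a synthetic vital-sign series; the kernel choice and all numbers are illustrative assumptions, not the models from his research.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Hypothetical heart-rate series sampled every 10 minutes over a day.
t = np.linspace(0.0, 24.0, 145)[:, None]            # hours
hr = 70 + 5 * np.sin(2 * np.pi * t.ravel() / 24) + rng.normal(0, 2, 145)

# RBF kernel for smooth physiological variation, WhiteKernel for
# measurement noise.
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=4.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, hr)

# Forecast a little past the data; the mean and +/- 2 SD band is the
# shaded picture a clinician can read without any of the math.
t_new = np.linspace(0.0, 26.0, 200)[:, None]
mean, sd = gp.predict(t_new, return_std=True)
lower, upper = mean - 2 * sd, mean + 2 * sd         # plot as a shaded band
```

New readings falling outside the shaded band are the candidates worth a clinician's attention, which is the intuition the visualization trades on.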

Bailer: Since you've called out the Jedi, I am wondering who the Sith are. But we don't need to go there.

Colopy: I think it was a very unlucky grad student who misinterpreted some statistics. You know the Sith was more like when Anakin came in- oh never mind.

Bailer: We're going in an entirely different direction than any of us anticipated. Thank you for that. So let me ask you: as you're thinking about these efforts and these endpoints, can you give some examples where you think these proposals have been remarkably superior to past practice- where you're referencing some general population and indices within a general population- and maybe some times when it's been harder to make work? So just a couple of explicit examples.

Colopy: Yeah, and actually I think one example- and I'll say upfront, I was not involved in this example, but it is too good of an example not to share. This goes back to around 2008, to doctoral students who preceded me. It was a collaboration between the University of Pittsburgh Medical Center and Oxford University- and just for those who aren't as familiar with critical care monitoring, the University of Pittsburgh Medical Center is a premier research center in this regard, and Oxford, I'm sure, is pretty well known too- so it was a very good collaboration between these two groups. What the engineering department came up with was a very simple kernel density estimate. Essentially, they didn't do any of the time series analysis stuff that I did; they simply modeled, more appropriately, the actual distribution you'd expect from the population, and they did some data display and things like that. And when the trial was done, the nurses and clinical staff refused to stop using the algorithm to help them monitor patients. It was just too useful. I wish my success stories were as good as that one, but you couldn't pry it out of their hands because it was so successful. And statistically the modeling wasn't complex- the kernel density estimate is something I think was developed in the 1950s, which just means it's so good that people still use it- but I think the main benefit was that it simply helped the clinicians visualize what components of each vital sign were contributing to the anomaly score. You gave them enough interpretation that they could view it and evaluate it; it wasn't black-boxing anything, so they could appreciate it. It wasn't capturing odd dynamics; it was simply better describing the population. I think that's probably one of the best stories, and something I'd like to aspire to in my own career. Now, since I'm here, I'm also going to wave my own flag on something. My focus has been on personalized models, and I think I have two good examples where the value was fundamentally that they were, from a machine learning perspective, fairly modest things- nothing too complex, as long as personalizing a Gaussian process in real time isn't considered complex- but both were highly visualizable. One of them was identifying, effectively, step changes or erratic dynamics in the time series. Essentially what we're doing is a classical change detection problem in statistics: we monitor a patient's time series and try to identify rapid deterioration from the expected trend. Doctors liked that because it was very interpretable: the package would show you a picture of every time the alert went off, and more often than not it actually looked like something where someone would say, "oh yes, actually I know when this happened," or "this was a time when the patient was sedated and that's why we see this step change," or "I'm actually glad you brought this to my attention." So while they wouldn't really care about the underlying modeling mechanisms, they could intuitively see why it was valuable. And the other was creating dictionaries of healthy patients.
Also, this is one that Oxford University actually patented. What I did was make use of a huge number of healthy patients. As we know, in most data the stuff you care about most is what you have the fewest examples of: critically ill patients, on whom we don't have too much data, whereas for patients who do not deteriorate you usually have plenty. So what I did was make use of that huge swath of healthy-patient data. I went through the data and automated this process where I would take chunks of time series at hundreds of points in time across hundreds of patients. I would fit each chunk with a Gaussian process, make sure it was cleaned up, and then summarize it with an information-theoretic- I would almost say metric, but that's not technically correct- a measurement. And that would allow us to quickly sort through and see: does a new patient look like the healthy patients, and if they don't, flag them. I think that was an example, again, that was very visually intuitive, because I could show them the dictionary, I could show them the process, I could show them the comparison, and with a few clicks I could even show them the closest healthy patient to their current patient. Obviously that's helpful, and it builds the trust that is so often missing.
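The patented pipeline itself isn't spelled out in the episode, but a toy version of the idea might look like the following sketch: summarize each healthy chunk by a fitted mean and variance (standing in here for the Gaussian process fits), and compare a new patient's chunk to the dictionary with KL divergence, an information-theoretic quantity that, as Colopy notes of his measurement, is not a true metric. All names and numbers here are hypothetical.

```python
import numpy as np

def gauss_kl(mu0, var0, mu1, var1):
    """KL(N0 || N1) between univariate Gaussians; asymmetric, so an
    information-theoretic measurement rather than a true metric."""
    return 0.5 * (np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0)

rng = np.random.default_rng(2)

# Hypothetical dictionary: 500 healthy-patient chunks, each reduced to
# a (mean, variance) summary of a model fit.
dictionary = [(rng.normal(75, 6), rng.uniform(4.0, 16.0)) for _ in range(500)]

def closest_healthy(chunk, dictionary):
    """Find the dictionary entry most similar to a new patient's chunk."""
    mu, var = float(np.mean(chunk)), float(np.var(chunk))
    divs = [gauss_kl(mu, var, m, v) for m, v in dictionary]
    best = int(np.argmin(divs))
    return best, divs[best]

# A new chunk with a suspiciously elevated heart rate sits far from
# every healthy exemplar, which is the cue to flag the patient.
new_chunk = rng.normal(95, 5, size=60)
idx, div = closest_healthy(new_chunk, dictionary)
print(f"closest healthy exemplar: #{idx}, divergence {div:.2f}")
```

The "few clicks" demonstration then amounts to retrieving and plotting exemplar `idx` next to the current patient's trace.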

Pennington: You're listening to Stats and Stories, and today we're talking to Glen Wright Colopy about patient health care. Glen, one of the things that I'm interested in- because I do study data technologies, different from what you're doing- is the issue of bias feeding into the algorithms and the architectures that we use, right? There's a lot of debate around the way particular things are measured, and whether you're bringing biases you don't recognize into the machines that you're using. So I wonder what advice you would have for someone who is interested in pursuing machine learning about ways of avoiding importing those biases that sometimes shape the way we measure things about ourselves.

Colopy: Yeah definitely, and I can give you an example of what you're talking about first, and then I'll give some hints for avoiding this, because I think there's a really good way to actually circumvent these issues. The example: there was a study done at my former group at Oxford quantifying this difference- so, as we talked about before, these reference distributions are obviously drawn from a population, and you can probably already guess where I'm going with this, but patients who go into critical care are typically not, you know, healthy 25-year-olds. They are by and large on the older side of the population, and typically less healthy to begin with. So a simple question arises: how much of your metric is effectively just being determined by the age of the people? Will you get better- I'm not sure "treatment" is the correct word- but is this metric effectively biased toward helping the older population, and not well suited for a younger population? Older people, for example, have different arteries, so the question is to what extent the care you're getting is a function of being compared against an older population, and that is fairly quantifiable. And so now the question is what do you do about it? The simple answers are: well, you start stratifying by things like age, sex, and clinical condition- here's a crazy one, why don't we just stratify by what the actual clinical condition of the person is at the time and use that as the reference? Now here's the fun workaround: when you look at personalized modeling, you are effectively circumventing some of that work, because you are literally just modeling that one person. It doesn't matter that you are a 25-year-old male; it's already embedded in your time series. You don't have to ask what the average person's heart rate over time is; you don't even have to ask what the average 25-year-old's heart rate over time is, because you have the resting heart rate right there and you can measure changes against that. The expectation is already embedded, and the rest is a matter of stratifying your data and working through it, which I think is very promising. Now the challenge that comes with that, of course, is that you have effectively ignored a large amount of other data. A lot of that other data doesn't matter once you have the person's own data, but at the same time you are flying blind with regard to those large amounts of data, so what do you do with that? That's where the machine learning comes in- how do you provide principled inference on that? That's why, for example, Bayesian nonparametric methods are popular in our field: flexibility, and they provide a framework with which to incorporate the knowledge that we'd like to [inaudible]. So yeah, it is an interesting question. Obviously clinical research is heavily invested in properly stratifying these different issues; I think it's very different from the big-tech-type stuff, where- I don't know if they've really thought about it; I don't think they have- whereas subgroup analysis is very much embedded in medical statistics.
Medical statistics practically is subgroup analysis, so I think that issue is already much more at the front of this field, and this just builds further on it.
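Both remedies are easy to picture in code. Here is a small pandas sketch, on invented data, of the two approaches side by side: stratified reference ranges per subgroup, and a personalized score against the patient's own running baseline, where demographics are implicitly "embedded".

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical observation table: one row per vital-sign reading.
df = pd.DataFrame({
    "patient": rng.integers(0, 50, 2000),
    "age_band": rng.choice(["18-40", "41-65", "66+"], 2000),
    "sex": rng.choice(["F", "M"], 2000),
    "hr": rng.normal(78, 10, 2000),
})

# Stratified reference: one central-95% range per (age_band, sex)
# subgroup instead of a single pooled population range.
strata = (df.groupby(["age_band", "sex"])["hr"]
            .quantile([0.025, 0.975])
            .unstack())

# Personalized alternative: score each reading against the patient's
# own expanding median, so age and sex never enter explicitly.
# (Using the full-series std per patient is a simplification.)
baseline = df.groupby("patient")["hr"].transform(lambda s: s.expanding().median())
spread = df.groupby("patient")["hr"].transform("std")
df["personal_z"] = (df["hr"] - baseline) / spread
```

Note that the rows within each patient would need to be in time order for the expanding median to make sense; the invented table above glosses over that detail.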

Bailer: You know, as you're talking about this, I love this idea of patient monitoring where the subject is their own control- that you're essentially monitoring my trajectory and looking for deviations from what you would predict as normal given my record. It sounds like that gives you complete avoidance of the issue of constructing a reference population that would be relevant to me, or to anyone else, as a patient. I'm curious whether this kind of formulation has been tremendously facilitated by all of the internet of things devices as well- all of the monitoring and measuring devices that we can now have routinely in our own lives, as well as those integrated into medical contexts.

Colopy: Uh, yeah, definitely. The first thing I'll say- and I think this is very important for the very eager people in machine learning- is that we aren't throwing out the old metrics. The old baseline univariate threshold-based methods- you're not throwing those out; they're still there. You know, if you have a heart rate of 200, I don't need a personalized metric for that: you're either at the end of a 10K and you're an Olympic 10K athlete, or you're in trouble, and it's one of those two things. So we're not throwing those out, and one of the very important things machine learning people need to remember, especially in medical fields, is that our goal isn't to supplant, it's to supplement. I've always tried to position my work as: here's a method that is a useful supplement to either current practice or other algorithms that are out there. Even in machine learning we always like our battles of the algorithms, saying this one now overthrows that one, like King Kong beating Godzilla- when really it's King Kong, Godzilla and, I don't know, the Empire State Building; they're all just going to get along together and they all supplement each other. So now, answering your question about the internet of things: I think this is interesting for two reasons, and maybe a call-out to the statistical audience on this issue- a lot of these benefits have come from the hard work of software engineers and hardware engineers, and while, yes, we do get wrapped up in the data that comes out of these things, don't forget there are other STEM professionals out there really laying that foundation. Frankly, if that foundation were not there, no one would care what a statistician had to say, because the data wouldn't be there- even the data pool wouldn't be there. I'll admit I'm a bit biased in this regard, because I actually worked at a medical device startup- one I think you're going to hear about, if you haven't already, over the next few years- and what I think was very important was that while, yes, it was very exciting to have data scientists with degrees and that type of stuff, what can be forgotten is that a very important medical device that people can take home, and that will reliably- reliably is the key word- wirelessly monitor them over the course of their day, was a huge engineering feat, separate from anything to do with data science. It's very nice for data science to be able to sit on top of that process. So I feel like I got a little off topic there, but the IoT- internet of things- and the wearables are very promising, and I think this is where most people are going to start appreciating- I've actually talked about this a little- that there are odd physiologies out there, and weird physiological mechanisms that we haven't learned yet because we haven't yet observed them. They are out there.
From some of the friends I've talked to, we've noticed- I know this sounds vague- some very odd physiologies: in ways we thought these measurements were uniform within a patient, there's evidence that they are not, even on a second-by-second basis, and that, for example, our data channels can actually be a fusion of multiple other mechanisms over time. I think that's where the scientific discovery aspect that data science offers is very important. I know that in machine learning we get very interested in the prediction and the automation and things like that, but we shouldn't forget that there is a scientific discovery process we can contribute to, and I think that's probably the most interesting thing that's going to come out of this. I hope the rest of the field is equally interested, so that they can pursue it as opposed to [inaudible]

Campbell: Hey Glen, speaking of off-topic, this is kind of a broader, general question about living in this kind of scary time where there's a resistance to science, a resistance to data, that's spreading across cultures- not just in the States, of course. How do you go about explaining the importance of your work, and the importance of data science in general, to the general population that doesn't understand- or to an idiot journalist like myself? Maybe a kind of primer on the kind of work you're doing and why it's important. Could you talk a little bit about that?

Colopy: Uh yeah well first of all I don’t view most people as idiots so even in my most condescending voice I don’t have it-

Campbell: That’s a self-description in a way.

Colopy: But yeah, no- a few things, so I'll just address them in points. One: I think that as a society we sometimes underappreciate how much scientists are still appreciated. There have been a number of contentious hot topics where people are either ignoring scientific intuition or ignoring a set of data they ought to be paying attention to, but at the end of the day we are in a culture that by and large does appreciate science. No one is refusing to get on airplanes because they're worried about them falling out of the sky. So I think that baseline level of appreciation for science is always there. How do I sell the value of my work to the population? Honestly, I just tell them I help computers monitor people, and that's a really easy sell. Computers are basically like spiders: they can just sit there and wait for something to shake the web. So in that respect it's a very easy sell- an easier sell than saying, you know, I helped design clickbait on Facebook or something like that. But I think there is one issue that is challenging, and something I'm still trying to sort out in my head, where there's a conflict between two things. We say "why don't people listen to scientists more?" versus "why don't we listen to the data?" The problem with that black-and-white comparison is that the data isn't always, well, correct, and the fact is we do understand some physiological and biological mechanisms, and the data can often conflict with those intuitions. One of the biggest challenges is when we have disagreeing information at an epidemiological level versus a clinical level. A quick example: there was a time when COVID was rolling out and someone said, you know, there's no evidence that masks work- and I'm pushing this back to around February or March- and the thing is, at that time, yeah, I bet there wasn't any evidence; there was no real data. But mechanistically there was no reason to believe masks wouldn't be beneficial: you literally have a physical structure that prevents exhaled viral particles from reaching somebody else, and another person only gets infected if a certain viral load reaches their system. So you could say, yes, there's probably no data about this right now- certainly not at the epidemiological, population-based level, and certainly not at the patient-specific level- but at the same time the mechanism is understood well enough that we should go with the mechanism. And I think that's something we should really be trying to extract. I know people like the term "data-driven science"- I'm actually not a big fan, even on the data journalism side. I think we need theory-driven science, and then we can make sure the data is organized around that. But there is an interplay there; the data doesn't tell you everything.
Theory obviously doesn't tell you everything either- there are a lot of mechanisms we don't understand, like what I was describing- but I think this conflict, where people ask "why aren't they listening to this piece of data? It seems fundamental," or "why aren't they listening to this expert?", exists because there is a series of conflicts between the mechanistic understanding and the empirical data that's showing up. That, and sometimes it's just fun to ignore people. But still, there's a conflict here that is being under-described. There's a reason why chemistry isn't just applied physics, there's a reason why epidemiology isn't just applied biology, and there's a reason why clinical medicine isn't just applied biology. I think across healthcare we're coming to a crux where a lot of this subject matter has reached these mechanistic and evidential conflicts between different studies, and when you have something like a pandemic, that's when some of these subjects come together. I hope that sort of answers your question.

Campbell: Sounds to me like you’re in the right job.

Pennington: Before we go you mentioned a podcast at the top of this do you want to give a shout out? I’m making you talk about your podcast because I am scared of pronouncing the name. So, I’m going to make you do that.

Colopy: Ah, yeah. So it's called the Pod of Asclepius- obviously a pun on the rod of Asclepius- and the idea is that it's a data science healthcare podcast. It's effectively the kind of conversation we were just having: the issues we're talking about, the difference between our mechanistic understanding of a phenomenon versus the data-driven understanding of it. Back when I was at Oxford, when I got my first successful patent, my supervisor thought, well, he can think scientifically and he's useful- and when you have a doctoral student you like who can do something you like, you start having them talk to other doctoral students, to start that conversation with other people. And many of the same conversations happened over and over again, covering the same topics and issues: what do you actually know versus what are you inferring from the data? What do you logically know about the mathematics or the algorithm that would mean this algorithm wouldn't work from the get-go? You know, data scientists are handsomely paid people, and we can easily waste expensive hours, so there's a lot of going off the tracks that others have already gone down. But to get back to it: I wanted to start having those conversations where more people could hear them. I think there's a difference- you know, take Andrew Gelman: there's a difference between reading Andrew Gelman's book and hearing him talk in a presentation, and I'd bet anything there's a difference between hearing him give a presentation and having a one-on-one conversation with him. I think that's true of these topics a lot- there's a lot of intuition and a lot of institutional knowledge that's lost through our medium of communication. Very quickly I actually doubled down on that subject, and we're currently doing the "Philosophy of Data Science" series, because I want to get back to the topic of scientific reasoning in data science- because, no offense to our field, I think a lot of it has been lost. There's so much going on: we have to keep up with the math on new models, we have to keep up to date with the news, we have to keep up to date with the data, and I think we've lost a little bit of our scientific reasoning foundations. So I want a podcast dedicated to that, including the basics- inductive reasoning, deductive reasoning, abductive reasoning- and how they actually play out in data science, because I think there are elements of critical reasoning that we don't appreciate because we've overlooked them. My own biggest flaw is that I tend to overlook, for example, mathematical guarantees of algorithms- I tend not to value them because I think the assumptions are violated so quickly that they never play out- whereas one of our guests' first big contribution was essentially offering a new deductive guarantee for an algorithm.
She has a lot to share on that- it's something she's very strong in, and she's strong in a lot of things- and of course it's a blind spot for me, so it's helpful to have that conversation. Trying to have those conversations about the blind spots in data scientists' current perspective is important. And I guess one final example before we run out of time: I think a lot of early-career data scientists think they need to learn a huge number of models- "you have to understand the most complex models, and I've got to learn that one next"- and I don't think that's the most valuable use of their time. It might be valuable some of the time, but there's a lot else at play that can help you add value without making you feel like you constantly need to play catch-up to the experts in your field. You can quickly become an expert in your field as long as you're a good, reasonable scientist- and I think you can become a scientist faster than you can become a statistician.

Pennington: Well Glen thank you so much for taking time to talk to us today.

Colopy: Thank you so much for having me on- I enjoyed the conversation.

Campbell: Thanks, Glen.

Pennington: Stats and Stories is a partnership between Miami University’s Departments of Statistics and Media, Journalism and Film, and the American Statistical Association. You can follow us on Twitter, Apple Podcasts, or other places where you can find podcasts. If you’d like to share your thoughts on the program send your emails to statsandstories@miamioh.edu or check us out at statsandstories.net and be sure to listen for future editions of Stats and Stories, where we explore the statistics behind the stories and the stories behind the statistics.