Big Data and Big Laughs | Stats + Stories Episode 157 / by Stats Stories


Harkness writes and presents BBC Radio 4 documentaries including the series FutureProofing and How To Disagree, and Are You A Numbers Person? for BBC World Service. She formed the UK’s first comedy science double-act with neuroscientist Dr. Helen Pilcher, and has performed scientific and mathematical comedy from Adelaide (Australia) to Pittsburgh PA with partners including Stand Up Mathematician Matt Parker and Socrates the rat. 

Her latest solo show, Take A Risk, hit the 2019 Edinburgh Festival Fringe with randomized audience participation and an electric shock machine. A fellow of the Royal Statistical Society, she’s a founder member of their Special Interest Group on Data Ethics. Timandra’s book Big Data: does size matter? was published by Bloomsbury Sigma in 2016.


Episode Description

Statistics is generally a field not known for its humor, at least among the broad public. That's a shame, because humor is a way to make complicated subjects, like statistics or big data, accessible to general audiences. The intersection of humor and stats is the focus of this episode of Stats and Stories with guest Timandra Harkness, coming to you from the annual meeting of the Royal Statistical Society with guest host Brian Tarran.

Full Transcript

Rosemary Pennington: Statistics is generally a field not known for its humor, at least to the broad public, although I will say John Bailer has been an exception in my life.

John Bailer: That’s because you laugh at me.

Pennington: It's a shame though, because humor is a way to make complicated subjects like statistics or big data accessible to general audiences. The intersection of humor and stats is the focus of this episode of Stats and Stories, coming to you from the annual meeting of the Royal Statistical Society. I'm Rosemary Pennington. Stats and Stories is a production of Miami University's Departments of Statistics and Media, Journalism and Film, as well as the American Statistical Association. Joining me as panelists are John Bailer, chair of Miami's Department of Statistics, and Brian Tarran, editor of Significance Magazine. Our guest is writer, comedian, and presenter Timandra Harkness. Harkness writes and presents BBC Radio 4 documentaries, including the series FutureProofing and How To Disagree, and Are You A Numbers Person? for BBC World Service. I, frankly, am not. She formed the UK's first comedy science double-act with neuroscientist Dr. Helen Pilcher and has performed scientific and mathematical comedy from Australia to Pennsylvania with partners including Stand Up Mathematician Matt Parker and Socrates the Rat. Her latest solo show, Take A Risk, hit the 2019 Edinburgh Festival Fringe with randomized audience participation and an electric shock machine. A fellow of the Royal Statistical Society, she is a founding member of their Special Interest Group on Data Ethics. Timandra's book Big Data: Does Size Matter? was published by Bloomsbury Sigma in 2016. Timandra, thank you so much for being here today.

Harkness: It's a pleasure.

Pennington: I'm just going to ask what I think is the obvious question: how does a comedian come to take on technology and math and science as the focus of her work?

Harkness: That’s a relief because I thought you were going to ask about the electric shock therapy.

[Laughter]

Pennington: I do want to know about that though.

John Bailer: That was my question, Timandra; I'm going to ask that next.

Harkness: I may be the only fellow of the Royal Statistical Society who likes firing electric shock machines. Well, interestingly, there are a lot of people now who use comedy as a way of getting across their particular subject, whether it's science or math or something else, and I came in the other direction. I was already a professional stand-up comedian, and so was Helen Pilcher, although she had a day job at the time, and we met at a meeting at the Royal Society on stem cells, because I was trying to write something about it. We bumped into each other in the coffee room, and I was really surprised, because I'd only ever seen her in rooms above pubs making jokes about beer bellies, and there she was looking smart with a badge on. So I sidled over and went, what are you doing here? And she said, I'm a stem cell scientist, that's my day job, what are you doing here? And so we went, oh, we should do some comedy about science. Because we were both getting really bored with the things that comedy was always about. It was always about the differences between men and women, about drugs, about sex, about alcohol, and we just wanted to do some comedy about something more interesting. Although, ironically, when I look back at the things I've done comedy about, I have now actually done comedy about the differences between men and women and sex and drugs, but from a scientific and mathematical point of view. So really, for me, and I then went on to do a degree in mathematics and statistics, it was comedy that reignited my curiosity about science and mathematics and statistics. It's the other way around for me. It's less, why do you use comedy to talk about mathematics, and more, how did you end up in mathematics having started out in comedy?

Bailer: You know, I think there's an element of, you have to change hearts before you change heads, and the comedy is opening people up to the message. It's engaging, and it generates excitement and interest. And if you can get the interest, then the messaging can also be connected to it.

Harkness: Yes, all of that is true, and I think a lot of people do use it for that, but genuinely for me it was the other way around. I like doing comedy because I like making people think. That's absolutely true; I always have. I've always been more interested in the kind of comedy where people laugh and then go, oh, that's interesting, why did I laugh at that? Because it opens people's minds up a bit. It catches them unaware, and it's also enjoyable, which is always a plus. And then it was my curiosity about science and mathematics that I came to in that direction, and I thought, well, if I find it interesting, why wouldn't anybody else find it interesting? And it does make a change from talking about the same old things. Because this was back in 2000-2001. Now there are a lot of good people doing good comedy about science, statistics, and mathematics, but at the time we genuinely were the first two people in the UK; I think there were a couple of guys in Australia doing it.

[Laughter]

Harkness: The electric shock machine, I first got it when I did a show about sex differences, costarring Socrates the Rat. His job was to be male and a rat, and risk-taking is one of the differences that psychologists find, on average, between men and women. I wanted something that would let me demonstrate this very graphically to the audience, preferably with audience participation. So I asked a psychologist friend, is there a civilian version of the equipment that you use that I could buy, to do, you know, harmless pain on an audience member? And he said, this is great timing, I'm about to relocate to Singapore. I have an electric shock machine; I don't want to take it with me; it's yours. And so he gave me this laboratory machine with all the safety instructions; it's got a seven-page risk assessment and everything. And in the show about sex differences, I would invite people in the audience to get up and basically gamble: take a 50-50 bet, and if they lost the bet I got to give them an electric shock, and if they won the bet they got to give me an electric shock, and I gave them some money. And I have to say, whoever was flipping the coins, which was another audience member, let's just say I looked back at the end of the tour and I was well down on money and electric shocks, so I don't think there was fair action going on there. And then when I went to do a show about risk, the machine was the obvious thing to use, and again I basically used it for gambling, to let people in the audience think about their own decision-making around risk. And your previous guest, Tim Harford, has probably looked at this: it's never a purely mathematical calculation; there are always psychological elements. It's never just about going, on average I will win if I do this, because you might say, well, I'm prepared to take quite a large risk of a very small electric shock, but I'm not prepared to take even a very small risk of a very large electric shock, because there's a maximum amount of pain that I'm prepared to risk. So I would get people randomly selected from the audience and offer them a chance to take this gamble about whether to get an electric shock or not, as a way of saying that whenever we make these decisions, it's not just about whether you can do arithmetic in your head. It's always in the context of much wider decisions that we make.

Pennington: I love that you phrased it harmless pain. I would not do that because any pain to me feels harmful.

Harkness: Well, there is actually; I had to get people to sign a consent form, because for people with a pacemaker, for example, it's very dangerous, and for certain other medical conditions. Also, it really ups the ante on stage when an audience volunteer has to read a consent form. It ups the fear level, which makes the whole thing more dramatic. And it also gives them a point where they can elegantly back out; you know, if they're having second thoughts, they can say, oh well, no, I've got a medical condition, so I can't do this.

Bailer: You know, in reading through your Big Data book, I really liked the historical tour of thinking about data and society and statistics, and also about computing and how that emerged. And then you have this organizing device of DATA, where you touch on these different components. Would you summarize, for folks who haven't read it, how you've organized your thinking about big data?

Harkness: Oh, my backronym.

Bailer: Your backronym?

Harkness: Backronym. Yes, I thought everybody knew this word backronym, which is where you want an acronym, a word that spells out your ideas, but you reverse engineer it to get the word that you wanted. So I thought I would do this so that I could get DATA, D-A-T-A. Now, obviously big data is partly big; there being a lot of it is part of it. But I thought, it's not just that there is more of it than there used to be, it's also these other things, and I did manage to get D-A-T-A. So the first D is for diverse, or dimensions if you want to get a bit technical: the idea that you can have different types of data, and when you combine them you get a multidimensional picture, whether it's of an individual or of something that you're studying. The example that stuck with me came from a brain scientist called Professor Paul Matthews, who said, I have brain scans, but if you have lots of brain scans, that's just large data. Big data is when you combine the brain scans, the patient records, the postcodes where the patients live, the weather records for those postcodes, and then you put them all together and ask a different question from the one the data were collected for. In this case, he wanted to know how many hours of sunshine the patients had had, and did that correlate with the progression of their illness? So that's D for diverse. A is for automatic, because so many things we do now just automatically generate data, so it's almost collected by default. T is for time, because things are pretty much collected in real time, which lends itself really well to making a time series, and you can project that into the future and see how things are going to change. And then the other A is for AI, artificial intelligence, because the processes used to analyze the data very much are what you might call artificial intelligence. I don't want to make grand claims for what it can do, but there's an element of machine learning, where instead of saying, follow every step in this program, you say to the computer, I want you to separate these into sick and well, and I'm going to give you one dataset that's presorted, and I'm going to let you work out the rules that you need to follow to sort the rest of the data. So that's diverse, automatic, time, and AI.
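The "diverse" idea Harkness describes, joining datasets collected for different purposes on a shared key and then asking a question none of them was gathered to answer, can be sketched in a few lines. This is a minimal illustration with entirely hypothetical data and column names, not Professor Matthews' actual study:

```python
# Minimal sketch of the "diverse data" idea: join datasets collected for
# different purposes on shared keys, then ask a new question.
# All data, column names, and the resulting correlation are hypothetical.
import pandas as pd

scans = pd.DataFrame({"patient_id": [1, 2, 3],
                      "lesion_volume_ml": [4.2, 7.9, 2.1]})       # brain scans
records = pd.DataFrame({"patient_id": [1, 2, 3],
                        "postcode": ["OX1", "CB2", "OX1"],
                        "progression_score": [1.3, 2.8, 0.9]})     # patient records
weather = pd.DataFrame({"postcode": ["OX1", "CB2"],
                        "sunshine_hours_per_year": [1550, 1620]})  # weather by postcode

# Combine the three sources into one multidimensional picture per patient.
combined = scans.merge(records, on="patient_id").merge(weather, on="postcode")

# Ask a question none of the original collectors had in mind:
# does sunshine exposure correlate with disease progression?
print(combined["sunshine_hours_per_year"].corr(combined["progression_score"]))
```

The point is less the correlation itself than that the question (sunshine versus progression) only becomes askable after the join.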

Tarran: On the subject of algorithms and AI, I think I saw, was it a tweet, where you said you'd become an overnight expert in algorithms?

Harkness: Well, I think this is a classic case, really, where you would think the obvious starting point would be to take the grades that the teachers have given, often based on previous exams the kid has taken. But they didn't do that. They went, what's really important to us is that the overall pattern of the grades closely resembles the previous three years. So what we're going to do is, for each school, we will take the results for the previous three years, we'll get an average of those, and we'll say, okay, those are the grades that your school is going to get this year; this pattern, you know, so many As, so many Bs, and so on. And then, okay, how are we going to decide which kid gets which one? Well, we'll get the teachers to rank them in order from best to worst, and then we've already got this set box of grades that we've decided your school is getting, and we'll give them out in that order from top to bottom. And that was what they did. The only role that the actual kids' previous exam results played in the algorithm was as a whole class: if the class had done spectacularly better or worse than previous years, the results would be adjusted upwards or downwards. Or if they were in a very small subject group, say eight or ten kids in a class, they'd say, okay, it's probably not reliable to just allocate from previous years, so in that case we will take them into account. But I just thought it was, a, an astonishing decision, and, b, horribly typical of the way a lot of algorithms that make decisions about us work: minimally based on anything we do or are or have done, and very largely based on what the population of people deemed to be like us have done in the past.

Harkness: Well, yes and no. I mean, I think we are a bit more aware of these things, but yes, it is a bit astonishing to see the whole juggernaut, if you like, rumble on the same way. In fact, that's the thing I'm interested in looking at now.

Harkness: Human beings are the ones who built this stuff; human beings decide what data to collect; this is all human beings doing this. The question really is, what is it about us? What is it about human beings, here and now at this point in history, that makes us so very keen to hand over decisions to algorithms? No matter how many times we see how flawed, how biased, how incomplete they can be, there is still this urge to hand over human judgment and decision-making to an algorithm.

Pennington: You're listening to Stats and Stories, recorded at the annual meeting of the Royal Statistical Society. Our guest is writer, comedian, and presenter Timandra Harkness. You've talked about how we need to step back a little from our trust in algorithms. I guess the question I wanted to pose is why you felt compelled to write about big data in the first place. There are a lot of people writing and publishing about big data. What was it that made you feel you had to write that book?

Harkness: It actually started a few years before, with me getting into statistics, and I doubt that Brian remembers this, but the first thing I ever wrote for Significance magazine was an article called "Seduced by Stats?", which was probably around the time my partner and I were doing the show about the maths of death at the Fringe. And it was because I was confused. Because, you know, I really like math; that's why I went back and studied it again. I've always liked it. But I've also always realized that this is a kind of minority sport, really; most people don't like mathematics, and they'd be very happy never to have to look at it again. And yet those same people were getting really excited about statistics. They were getting really excited about infographic displays in newspapers, the sort of thing your previous guest Tim Harford was talking about, and I thought, well, this is odd, because I like statistics, I'm quite excited about what you can do with them, but I know for a fact that all of you people really hate mathematics, so why are you getting so excited about some graphs? It's as if you think it's some magical oracle of objective truth; that in a difficult time, where nobody really knows what's going on, you can at least look at the numbers, and the numbers will appear in shining light and tell you what to do. And then as things evolved, I started to see people talk about big data in the same way, and I was thinking, well, again, on the mathematical side, this is really exciting. Can you really do all this stuff just by collecting loads of data and applying mathematical processes to it? Because that's really exciting if you can; if you can do all the things you're claiming, this could really transform things. But on the other hand, are the same people who got really excited about infographics in newspapers now really excited about big data because it's big and shiny and "I don't understand it, so maybe it's really clever"? And in fact, I talked to an American scholar called Christine Rosen, who was looking at this, and I said to her, have you got a definition of big data? This was when I was making a radio program about it. And she said, yes: it's an oracle. People look at it and they think it's going to give them all the answers. So it was that, really. Part of it was my mathematical interest, me going, look, isn't this clever, you get all this data and you do this to it and it tells you this thing that you never knew before, and I do still find that really exciting. But then the other bit of me, me as a citizen if you like, was going, why are we so convinced that all these quite difficult, messy, complicated human problems can be solved if you just collect enough data and put it in a big enough computer?

Pennington: I'm going to pull in a question from the audience, and this is a reminder: if you have questions for Timandra, we will try to get to them throughout the rest of the show and certainly at the end. Someone just posted the question, whose decisions do you think are more biased, algorithms' or people's? And it felt like a nice question to scoop in there.

Harkness: That's a brilliant question. That is a big question. I mean, I think that partly underlies the appeal of algorithms: you think, well, I know that I'm biased, I'm full of all these shortcuts and loyalties and emotions, so maybe an algorithm could step back from that and be more objective. Well, I think there are two things at play. One of them is that algorithms are made and designed by people. They are as flawed and imperfect as the people who build them. The advantage they have, if you like, is that by building an algorithm you have to build assumptions into it, and that does help you be more aware of what the assumptions are that you're building in. But you can't have a fair algorithm in an unfair world. For example, to go back to the A-level results algorithm: the truth is that in a normal year, where the kids took exams, a lot of them would find that their exam results were lower than the teachers had predicted. So this does tell us something about the unfairness of the school system, probably. But in a normal year the kids get to sit the exam themselves, so at least they get to affect their own outcome, and this year they didn't. So you can't actually have an algorithm that is going to dish out a completely fair result, because the world is not fair. What you can do is say, okay, we need to be explicit about what kind of fairness we're trying to achieve. Are we trying to achieve everybody going in on an equal basis, in which case we know that what comes out will be unfair, because it's not a fair world? Or are we going to say we want things to come out looking fair by some other measure, in which case maybe we have to make adjustments and not treat people equally on the way in? I mean, Ofqual were very defensive. They said, well, we have tested our algorithm by all these measures: are poor children going to be disadvantaged? No. Are boys or girls going to be disadvantaged? No. Are these different ethnic groups going to be disadvantaged? No. All these subpopulations are going to come out in roughly the same state as they would if they'd taken the exams. So, on a population level, they said, we've been totally fair; look, every subpopulation has been treated fairly, as if that meant every individual had been treated fairly. So I think it can be good that building an algorithm makes you decide, well, what does fairness look like, what kind of fairness do you want? And also, by the way, it reveals to us what other unfairnesses there are in the world. But the thing is, there's also this slightly underlying assumption that people are basically all biased and prejudiced and awful, and I think you have to remember that the difference between algorithms and people is that a person can reflect on themselves and go, oh, I just caught myself assuming that all boys were like this. But actually, you know, even if the data says that on average more boys are like this, I shouldn't assume that of every boy that I meet, and therefore I'm going to change my attitude in future and deliberately try not to think that, and deliberately set myself up so that I don't slide into this habit. Whereas an algorithm has no moral sense. It won't reflect that what it's doing is wrong; an algorithm is going to do exactly what you programmed it to do.
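Harkness's distinction between group-level and individual-level fairness is easy to demonstrate numerically. The toy example below uses invented numbers, not Ofqual's actual checks: the subgroup average is preserved perfectly while most individuals get the wrong grade.

```python
# Toy illustration: group-level averages can match exactly while individual
# results are badly wrong. All numbers are invented.
import statistics

# "Deserved" grades (what each student would have earned) versus what an
# algorithm awarded, for one hypothetical subpopulation of six students.
deserved = [9, 7, 5, 5, 3, 1]
awarded  = [5, 5, 5, 5, 5, 5]   # everyone flattened to the group average

# Population-level check of the kind cited in defense: the group looks fine.
print(statistics.mean(deserved), statistics.mean(awarded))   # 5 vs 5

# Individual-level check: most students got the wrong grade.
errors = [a - d for d, a in zip(deserved, awarded)]
print(errors)                                                # [-4, -2, 0, 0, 2, 4]
print(sum(e != 0 for e in errors), "of", len(errors), "students misgraded")
```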

Bailer: So, you know, one thing about algorithms: I love this idea, as you phrased it, of turning over human judgment to algorithms. But I also wonder if it's about how people sell algorithms and their results; perhaps they sell them as if they have a level of precision that they really don't have. They oversell the precision and undersell the uncertainty and variability that are baked in.

Harkness: Yeah, I think that's a point very well made. There is a very basic thing where, especially if you're a corporate entity designing an algorithm, you go, hey, our magic algorithm will help you do this. And you go, you've just given me two decimal places there, so you're basically making this up; I'm not going to take you seriously at all. And the problem, sometimes, is that when you want to question how they got those outcomes, especially if they're private companies, they go, we can't tell you, it's a secret, it's our commercial secret. But the other thing is that uncertainty question, which I think is a much bigger question. I think we look to things that have numbers attached for certainty. I think that's one of the great deep appeals at the moment of statistics and data and numbers: the world is very uncertain, it feels unpredictable, it feels risky, even though actually it's safer than any other period of history; even in spite of the pandemic it's still a very safe period. But because it's hard to make sense of, because it's a world that's changing socially and politically as well as everything else, I think people feel very insecure. They feel fearful about the future, and they hope that numbers and data will give them something very definite. So you may know that the future is going to be awful, but at least you'll know it's going to be awful with mathematical precision. Whereas of course, as all statisticians know, approximately 95% of your job, give or take two or three percent either side, is actually just quantifying uncertainty: saying, well, we think it's probably within this range, but the more you narrow that range down, the less certain you can be about it. You could easily look at the whole of Britain and go, well, we're certain London is in there somewhere, but the smaller the area you pull out, the less certain you can be that it contains London. So I think being more upfront about uncertainty would really help in a lot of cases. We all need to learn to accept risk, not just in the sense of going out on your bicycle and getting in a terrible accident like poor Tim did, but risk in the sense that you don't know what the future is going to be, and sometimes you don't even know things about the present. You know, we don't really know how many people have coronavirus. We can make an estimate by various methods; we can have various figures and go, okay, well, these differ, but they give us a ballpark figure. But we don't know, and we probably never will know. What we have to do is become better at making decisions while accepting that we don't know things for certain, and all we can really do is get an idea of roughly what something is and how uncertain that is.
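The trade-off Harkness describes, where narrowing the range makes you less certain it contains the truth, is exactly how confidence intervals behave. A quick sketch with simulated data (scipy's t-distribution interval is assumed here):

```python
# Width-versus-confidence trade-off: from one sample, a 50% interval is
# narrow, a 99% interval is wide. Data are simulated, not real measurements.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=100, scale=15, size=50)   # hypothetical measurements
mean, sem = sample.mean(), stats.sem(sample)

for level in (0.50, 0.80, 0.95, 0.99):
    lo, hi = stats.t.interval(level, df=len(sample) - 1, loc=mean, scale=sem)
    print(f"{level:.0%} confidence: ({lo:6.1f}, {hi:6.1f})  width {hi - lo:5.1f}")
```

The same estimate supports all four statements; only the claimed certainty and the claimed precision trade off against each other.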

Pennington: So, we have Tim Harford still with us and I believe he has a question for you.

Tim Harford: So, Timandra, you've raised a couple of times the puzzle that we put so much trust in algorithms, and I wanted to ask you about that a little bit more. The A-level predictions thing is a really stark example. This is a situation where, if you put it baldly, the government said: we are going to cancel your exams, it is not safe, and then a computer will give you the grade that you would have got if you had sat the exams. Which of course, when you put it like that, cannot possibly be true. How is it that they managed to fool themselves into thinking that it might be true? How did the rest of us nod and accept it, like, oh yeah, I suppose that'll do? And is there anything that we can do to have a more realistic view of what algorithms can and can't achieve? Because they've got their place, of course.

Harkness: Well, exactly. I mean, one small hope is that the fact we had teenagers out on the streets with signs saying things we probably can't say on the podcast, but which were very rude about what the algorithm does, will actually sink in, and people will go, yeah, a lot of this is just hyperbole; how could an algorithm ever possibly know that? I do think it's less a sign of how powerful data are and more a sign of how much we're lacking in the human fields: politics, economics, philosophy even. We do have a government in the UK at the moment that is quite technocratic. Dominic Cummings, one of the chief advisors, is really, really keen on data, prediction, algorithms, and getting more people into government who understand data, which, you know, at one level would be great. It would be great if more of them understood stats and data. But there is a slight air of, well, you get enough clever people and enough data and that will give us all the answers, and I rather want to say, you are the government, you should have ideas. You should have policies; you should have principles. You should have a vision of where you want to take this country, and that's what's going to get us through. Data and algorithms, however good they are, can only be a means to help us get there. They can possibly give us a better idea of where we are, and a better idea of the outcomes if we do different things, but they don't get to tell us what we should do. And I do think it's that: I think it's the lack of direction, the lack of vision, and the lack of self-confidence that leads us to put far too much confidence in algorithms.

Pennington: That feels tied into a question we got from an audience member, who asks if we're spending enough time scrutinizing the questions we're trying to have big data answer for us.

Harkness: No, I don't think we are, and that's a really good question exactly for that reason. I think if you formulate the question right, then finding the answer is often the easier part, and if you ask a lot of statisticians, they'll say their job is to go in early and help people formulate the right question in the right way. You know, I would still say that I'm more a writer than a statistician, and I always say that if I can ask the right questions, I consider it a job well done, rather than giving the answers for somebody else.

Pennington: So, we have two more comments that came through. One is just from someone who said they attended your show last year, found it very entertaining and instructive, but did not volunteer for the shocks. And, kind of related, someone is asking if you would mind telling your favorite statistical joke.

Harkness: Well, they might have heard it before, because it is my favorite and I do tell it all the time. But: why should you never tell a statistician he is average? Because it's mean.

Pennington: That sounds like one that John Bailer might have actually said.

Bailer: I have to tell you, Timandra, my family thought it was an impossibility that there could be someone who has humor and statistics as part of their life. But I have a worse one. It may be a bit UK specific; I don't know if this will make sense for an American audience. What is a statistician's favorite sandwich filling? Correlation chicken.

Harkness: See, I don't know if you even have coronation chicken in America. American listeners are going, huh?

Pennington: I am familiar with it. Before we go: I was stalking you this morning as I did my preparations, and I saw that you tweeted that you have a new piece out in Significance, which I figured Brian would like me to ask about. It's about John Graunt, if I'm pronouncing that right, and you seem a bit obsessed with this superhero of stats. Why?

Harkness: Yeah, well, because, now, he was born 400 years ago this year, as I discovered writing this piece for Significance. He makes a tiny appearance in my book, because I try to get over the ideas of statistics in the book by telling the story of the person who first thought of them, because then they make more sense. And he lived through the English Civil War, fought in the English Civil War on Parliament's side, and lived through waves of plague, because he was in London. He was a founder member of the Royal Society, in spite of being just a humble haberdasher. But he wrote this one book, about the bills of mortality, which were the death records of what people had died of, and in this book he just invented all these concepts, which he needed to try to get information out of the data; basically raw data from about 50 or 60 years of mortality records. He went through and said, well, you know, you can see these patterns; if you do this, you can see that pattern. He came up, for example, with the idea of excess deaths. He looked and said, well, this year we say is a plague year because there are this many plague deaths listed. But hang on: if we look at deaths from other diseases in the years before and after, they were about seven or eight thousand, and in this year it's eighteen thousand. So where did these ten thousand other deaths come from? There must have been more plague deaths than were written down as plague. So many ideas; you know, he didn't have the language for it, but he basically invented a lot of statistical ideas, and yet there's not a statue, there's not even a little plaque, to say where he lived.
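Graunt's excess-deaths reasoning is simple enough to reproduce directly: establish a baseline from surrounding years, then see how far the plague year overshoots it. The figures below are the rough numbers from the anecdote, not Graunt's raw records:

```python
# Graunt-style excess-deaths calculation, using the rough figures Harkness
# quotes; the baseline years are illustrative, not Graunt's actual data.
baseline_years = [7000, 7500, 8000, 7500]   # burials in nearby non-plague years
plague_year_total = 18_000                  # burials recorded in the plague year

baseline = sum(baseline_years) / len(baseline_years)   # ~7,500 expected deaths
excess = plague_year_total - baseline                  # ~10,500 unexplained deaths

# Graunt's inference: the excess must largely be plague deaths recorded
# under other causes, so the bills understate the plague.
print(f"expected ~{baseline:,.0f}, observed {plague_year_total:,}, excess ~{excess:,.0f}")
```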

Bailer: There should be.

Harkness: There should be. I'm going to start campaigning; there are a few fans, actually, so I might just start a fan club. There's no statue of John Graunt. And then he lost everything in the Great Fire of London, and then he was persecuted because he had converted to Roman Catholicism, which at the time was very unpopular, and he basically died in poverty aged only 53. I know, his life is a roller coaster, and he invented all these statistical ideas. There should be a Hollywood movie about it. If there are any Hollywood producers listening, write to me.

Bailer: That’s our biggest listener audience segment, Timandra, that’s clearly who we’re appealing to in this series.

Harkness: Absolutely. George Clooney could totally play him.

Pennington: Just as we're wrapping up, I'm going to launch the question John normally asks: what advice would you give to statisticians who want to, maybe not shock people in an audience, right, but communicate with a broad public? What advice would you give them as they're thinking about how to present their research or connect with audiences outside the statistical community?

Harkness: Basically, you've got to start where those people are, and I think this is true whether you're trying to do comedy or radio or write books or whatever you're trying to do. Start where those people are. Listen to them more than you talk to them. Think about, well, what are they concerned with? Have a look at their newspapers, to see what the stories are and what the adverts are for; those are the things those people are interested in. Start from there and go to where they are. Look for things that will arouse their emotions, and this takes us right back to where Tim Harford started us off: it's the feelings that will grab them and make them care. If you can't make them feel something about what you want to talk about, then why would they give you any attention at all?

Pennington: Oh, that's great. Thank you so much for being here today. That's all the time we have for this episode Timandra.

Harkness: It’s been an absolute pleasure.

Pennington: We’d also like to thank the Royal Statistical Society for allowing us to record two programs as part of their annual meeting. Stats and Stories is a partnership between Miami University’s Departments of Statistics and Media, Journalism and Film, and the American Statistical Association. You can follow us on Twitter, Apple Podcasts, or other places where you can find podcasts. If you’d like to share your thoughts on the program send your emails to statsandstories@miamioh.edu or check us out at statsandstories.net and be sure to listen for future editions of Stats and Stories, where we explore the statistics behind the stories and the stories behind the statistics.