Mona Chalabi (@MonaChalabi) is the Data Editor of The Guardian US and a columnist at New York Magazine. As well as co-producing a four-part documentary series about vaginas, Mona has written for TV shows on National Geographic, the BBC, and VICE. Mona draws. Her illustrations, which are designed to make numbers more relatable, can be viewed on her Instagram account and were recently commended by the Royal Statistical Society. Before getting into journalism, Mona worked first at the Bank of England, then in the nonprofit sector at Transparency International and the International Organization for Migration.
John Bailer: I'd like to welcome you to today's Stats and Short Stories episode. Stats and Short Stories is a partnership between Miami University and the American Statistical Association. Today's guest is Mona Chalabi, who is the Data Editor at The Guardian US. I'm John Bailer. I'm Chair of the Department of Statistics at Miami University, and I'm joined by my colleague Richard Campbell, Professor in the Department of Media, Journalism and Film. We're delighted to be speaking to Mona on our short episode today. Well, welcome, Mona.
Mona Chalabi: Hi, Thanks so much for having me.
Bailer: It's a delight…just a true delight. The question I have for you is this: we want to be able to spot fake news. We know you have a nose for this, so what kind of advice might you give us and our listeners for how to detect fake news?
Chalabi: Well, unfortunately, you have to be patient, because sometimes it can take a little while. It's definitely worth doing, but it can take some time. It's basically about pursuing lots of leads that are kind of buried in a story. Let's say, for example, that the news piece mentions that it took its data from the US Census Bureau. One of the first questions I ask is: what data did they use? Are they definitely using the most recent data available? So I try to go back to the original source and see if the numbers behind the information check out.

If it's not a government organization like the Census Bureau, let's say the data has been collected by a polling company, I have a few kind of basic questions that I ask of any polling data. I want to find out how many people were included in the survey. As you are well aware, a thousand is kind of used as a benchmark, and I would argue a somewhat arbitrary benchmark sometimes. So if the poll used 30 people and claims it's a national poll of all US citizens, you can immediately kind of discredit that, because 30 people is not enough to understand this population by any stretch of the imagination.

And then, for a polling company as well, it's really important to find out who the polling company is, who backs them, and whether they are affiliated with any political party. For example, there was a poll which I actually mentioned in my TED talk, but I didn't mention where exactly it had come from. It was a poll that was trying to measure attitudes of U.S. citizens. The poll found, well, to be specific, one question in the poll found that some of the respondents said that they backed the idea of jihad. What wasn't reported was that a later question in the poll also asked people how they defined jihad, and many of the respondents defined it as kind of peaceful---a peaceful, personal struggle. But also, that polling company, I actually forget the exact name of it, I want to say WomanTrend.
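[Editor's note: Chalabi's sample-size point can be made concrete with a quick back-of-the-envelope calculation. This is a minimal sketch using the textbook 95% margin-of-error formula for a simple random sample, not something discussed in the episode, and it ignores real-world complications like weighting and non-response.]

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a
    simple random sample of size n (worst case at p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 30-person "national poll" versus the conventional 1,000-person benchmark
for n in (30, 1000):
    print(f"n={n}: about +/- {margin_of_error(n) * 100:.1f} percentage points")
```

With 30 respondents the uncertainty is roughly +/- 18 percentage points, which is why such a poll tells you essentially nothing about a national population; at 1,000 it shrinks to roughly +/- 3 points.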
It was run by a woman called Kellyanne Conway, who is now an adviser to President Trump. She might have had some interest, you know, in the results being a certain way. I realize that I've been going on for quite a while. Should I keep going?
Bailer: No, please. So what other guidance would you give us?
Chalabi: Well, I mentioned the number of respondents in a survey, but that in itself isn't enough. And this is something that, again, we saw during the U.S. election, right? Part of the problem with polling increasingly now is that the U.S. population has become more and more diverse. And as the population has become more diverse, it's difficult to get an accurate representation of the opinion of everyone in this country. Polling companies use something called weighting to try to adjust for that. Speaking very, very crudely: let's say that ten percent of the U.S. population is black, but only five percent of the people in the poll were black; the poll will adjust the responses to make it look as if ten percent of the respondents were black. That's a very, very simplistic explanation.

Now, in some cases, you can kind of say, fair enough. But every single poll should publish its raw numbers of respondents, right? So how many people in the poll were male, female, in different age groups, of different races and ethnicities. And if you see that any of those numbers are incredibly low, that's kind of a warning sign, right? If you see in the top line of the poll that, I don't know, 60 percent of women think this, and when you dig into it there were only three female respondents, you've got to ask yourself: would you ask three of your female friends to understand what every woman around you thinks about something? I think people instinctively understand that is not going to get you anywhere near an accurate answer.

And often, honestly, the number is not going to be three; I'm kind of exaggerating some of this, which is difficult as well, because how do you figure out what is and is not a sufficient threshold? But again, I think it's about following your instincts. I think people have quite a healthy cynicism, right? Again, if it was just three out of 100 people, is that going to be enough to get you accurate numbers? I think it's safe to say no.
Richard Campbell: So, a lot of the time, what you're dealing with in a general public audience in these kinds of things is that you have politicians out there who will praise good poll numbers and downgrade bad ones. Often, it's like: if the numbers are going up, that must be a good poll, and if the numbers are going down, that must be a bad poll. And, in fact, members of the general public who, you know, politically agree with one candidate or the other come to hold that view too: this must be a bad poll because their candidate's numbers are going down. So how do you even get past that, where people have an emotional stake in this? You know, this is politics.
Chalabi: Well, honestly, I think polls just need to play a smaller role in our public discourse anyway. I really, really don't see them as particularly healthy for democracy, partly because I think public policy shouldn't necessarily be purely based on public opinion. I think it should be based on public need. And accurately measuring public need is the job of government---of federal statistical agencies. It's not the job of pollsters.

I would also say that not only have polls undermined democracy in that respect, but I also think that part of the reason why the public's relationship with statistics in general has shifted is because people have not been able to separate out things like, I don't know, "40 percent of Americans support Obamacare" and statistics on insurance rates. These are two completely different sets of knowledge; they're two completely different sets of numbers. And the way they are collected bears no resemblance whatsoever to one another. One set of numbers is collected by a polling agency, which, at best, might speak to a couple of thousand people. And the other set of numbers is collected by a federal agency and government statisticians who, for not a huge amount of money, work their butts off in a non-partisan way to collect accurate numbers on these things.
Bailer: Very good. It's been our distinct pleasure to have Mona Chalabi join us on Stats and Short Stories. Stats and Stories is a partnership between Miami University's Departments of Statistics and Media, Journalism and Film…and the American Statistical Association. Stay tuned and keep following us on Twitter or iTunes. If you'd like to share your thoughts on our program, send your emails to StatsandStories@miamioh.edu. Be sure to listen for future episodes.