Measuring the Data that Shapes Public Policy | Stats + Stories Episode 109 / by Stats Stories


Libby Pier is the Research Manager at Education Analytics, overseeing and executing EA's diverse educational research portfolio, encompassing social-emotional learning, predictive analytics, academic growth measures, human capital analytics, and program evaluation.

Nichole Webster is a research analyst at Education Analytics. She examines the item properties and performance of Social and Emotional Learning surveys and estimates teacher and school performance metrics in R. She’s part of ongoing research that examines how Item Response Theory models estimate error. She studied Mathematics and Applied Economics at the University of Wisconsin.

Full Transcript

John Bailer: How do you know you’re measuring what you claim to measure? How do you know the measurement is any good? Measuring data for public policy, particularly in education, is the focus of this episode of Stats & Short Stories, where we explore the statistics behind the stories and the stories behind the statistics. I’m John Bailer. Stats & Stories is a production of Miami University’s Departments of Statistics and Media, Journalism and Film, as well as the American Statistical Association. Joining me in the studio today are regular panelists Rosemary Pennington and Richard Campbell from the Department of Media, Journalism and Film. We have two guests joining us today, both from the education research non-profit Education Analytics. Nichole Webster is a Research Analyst with the organization and Libby Pier is Research Manager. Libby and Nichole, thanks so much for being here.

Libby Pier: Thanks so much for having us.

Bailer: You know, I’d like you to start with a question that should be really easy to respond to. That always scares you, right? To hear somebody start that way? Can you describe how you know that a variable you’re measuring is measured in a reliable and valid way? Maybe the best way to do that is to start with an example of a measurement and then illustrate.

Nichole Webster: The work that I’ve been doing with Education Analytics is around a social and emotional learning survey that’s given to a large number of students in California’s urban school districts. The variables that we’d like to measure in that survey are students’ social, emotional, non-cognitive capacities: growth mindset, self-efficacy, self-management, and social awareness. We believe that those variables are measured accurately by looking at how well the items perform on the survey: each item’s difficulty, its discrimination, its neutrality across age, demographic, and English learner status groups, how often the item is skipped, and whether the items hang together as a distinct construct.
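
For readers curious what checking item difficulty and discrimination can look like in practice, here is a minimal sketch in R (the language Webster mentions working in). It fits a two-parameter logistic IRT model to simulated, dichotomized responses using the mirt package; the data, item names, and parameter values are invented for illustration, and this is not Education Analytics’ actual pipeline.

```r
# Minimal sketch: fit a two-parameter logistic (2PL) IRT model to simulated
# survey responses and inspect item difficulty and discrimination.
# Illustrative only -- real SEL items are often Likert-scaled, not binary.
library(mirt)

set.seed(109)
n_students <- 500
n_items    <- 6

# Simulate responses from a 2PL model: P(endorse) = logistic(a * (theta - b))
theta <- rnorm(n_students)                 # latent trait (e.g., self-management)
a <- runif(n_items, 0.8, 2.0)              # discrimination parameters
b <- seq(-1.5, 1.5, length.out = n_items)  # difficulty parameters

p <- outer(theta, seq_len(n_items), function(th, j) plogis(a[j] * (th - b[j])))
responses <- matrix(rbinom(length(p), 1, p), n_students, n_items)
colnames(responses) <- paste0("item", seq_len(n_items))

# Fit a unidimensional 2PL and recover the item parameters
fit <- mirt(as.data.frame(responses), model = 1, itemtype = "2PL", verbose = FALSE)
coef(fit, IRTpars = TRUE, simplify = TRUE)$items  # columns a (discrimination), b (difficulty)
```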

Bailer: In reading some of the work that Education Analytics did I came across the phrase “potential omitted variable bias”. Can you talk about what that is and give us an example?

Libby Pier: Sure. Omitted variable bias arises when a model is not correctly specified: if you leave out a variable that is related to both the outcome and the predictors you do include, the estimates you get are biased. One example in the social-emotional learning space is if we were to omit English learner status in an SEL growth model; we might not see the outcomes that we’re hoping to see when we don’t control for that variable. For example, the growth mindset items are correlated with reading ability, and students with English learner status may not have the highest reading ability, so looking at how those items respond to, or are correlated with, variables like English learner status or a student’s reading ability is really important for making sure the model is properly specified.
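
To make the idea concrete, here is a small simulation in R showing how omitting a relevant variable biases an estimate. The setup and variable names (reading ability as a confounder of English learner status and an SEL growth score) are hypothetical stand-ins chosen to mirror Pier’s example, not the model EA actually fits.

```r
# Minimal sketch of omitted variable bias with simulated data.
# Hypothetical setup: reading ability affects both English learner (EL)
# status and the SEL growth score, so omitting it biases the EL estimate.
set.seed(109)
n <- 5000

reading    <- rnorm(n)                                    # confounder
el_status  <- rbinom(n, 1, plogis(-1 - 1.5 * reading))    # lower reading -> more likely EL
sel_growth <- 0.0 * el_status + 0.8 * reading + rnorm(n)  # true EL effect is zero

# Misspecified model: omits reading ability
coef(lm(sel_growth ~ el_status))["el_status"]             # biased away from zero

# Correctly specified model: controls for reading ability
coef(lm(sel_growth ~ el_status + reading))["el_status"]   # near the true value, zero
```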

Rosemary Pennington: You’re working with students across age groups, from, I think, third grade through twelfth. How do you make sure that a measure that works for a twelfth grader works for a third grader?

Bailer: And it means the same thing.

Webster: That’s a very fascinating question. We are in the midst of measuring how items perform at different age levels. For example, in social awareness, a fourth grader may think that playing on the playground with someone is being socially aware, whereas a junior in high school may think, “I wasn’t asked to prom, I must not be very socially aware.” So for those two students, the items in that construct may be performing differently over time, and we’ll look at average responses and differential item functioning, based on how other items are performing at that grade level, to see whether an item is measuring the intended construct equally well at different ages.
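
As an illustration of how a differential item functioning (DIF) check can work, here is a sketch in base R of the common logistic-regression DIF test (Swaminathan and Rogers’ approach) on simulated data. The grade groups, effect sizes, and use of a total score as the matching variable are assumptions for the example; this is a generic textbook method, not necessarily EA’s procedure.

```r
# Minimal sketch: logistic-regression test for differential item functioning
# (DIF) between two grade-level groups, on simulated data.
set.seed(109)
n <- 1000
grade <- rbinom(n, 1, 0.5)   # 0 = elementary, 1 = high school (hypothetical groups)
theta <- rnorm(n)            # latent social awareness

# Simulate one item that is harder for high schoolers at the same theta (uniform DIF)
item <- rbinom(n, 1, plogis(theta - 0.8 * grade))
total_score <- theta + rnorm(n, sd = 0.3)  # stand-in for rest-of-survey score

# Compare nested models: does grade predict the item beyond overall ability?
m0 <- glm(item ~ total_score, family = binomial)
m1 <- glm(item ~ total_score + grade, family = binomial)
anova(m0, m1, test = "LRT")  # a significant grade term flags DIF
```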

Bailer: That’s fascinating, great work. Well, unfortunately that’s all the time we have for this episode of Stats & Short Stories. Libby and Nichole, thanks so much for being here.

Pier: Thanks for having us.

Bailer: It was great to have you. Stats & Stories is a partnership between Miami University’s Departments of Statistics and Media, Journalism and Film, and the American Statistical Association. You can follow us on Twitter, Apple Podcasts, or other places where you find podcasts. If you’d like to share your thoughts on our program, send an email to statsandstories@miamioh.edu or check out statsandstories.net. Be sure to listen for future editions of Stats & Stories, where we discuss the statistics behind the stories and the stories behind the statistics.