Episode 46: Polling 101 with Ashley Amaya

Dr. Ashley Amaya is a senior survey methodologist at Pew Research Center. She has a PhD in Survey Methodology and is an expert when it comes to polling the country’s opinions. Our conversation highlights how the simple polling numbers you see on the news are the results of months—sometimes years—of work.

Dr. Amaya shares how Pew recruits and maintains high-quality samples of survey respondents, carefully designs the questions that get asked, and checks their surveys’ demographics against the broader population. We also talk about what consumers should look for when assessing a poll’s legitimacy and where else experts are looking for the public’s opinion.

Transcript

Download a PDF version of this episode’s transcript.

Andy Luttrell:

Hi, this is Andy calling from Opinion Science Insights. We’re polling the American public to gauge their opinions on new media. Do you have a moment to answer some questions? How did I get your phone number? We actually have a computer program that dials random phone numbers and you’re the lucky one today. Not so lucky. Sure. I get it. But do you have a few minutes to answer some questions? Great. Thank you. Okay, do you listen to podcasts? Got it. Okay. And how frequently would you say you listen to podcasts? Once a month, once a week, several times a week, or every day? Sorry. Sometimes is not an option. You can choose from once a month, once a week, several times per week, or every day. Once a month, once a week, several times per week…

We see the results of public opinion polls constantly in the media. Before elections, it’s a constant question of where the country is leaning. Throughout the COVID-19 pandemic, polling firms have been chasing questions about the public’s willingness to wear masks, get a vaccine, stay at home, and there’s plenty of survey work to see what people think about science and technology, the economy, media outlets, and all sorts of things. And it can seem simple. Just ask everyone you can find what their opinion is, but it’s really not so simple. For every percentage we see on the news, like 65% of U.S. adults think there’s intelligent life on other planets, a team of survey researchers had to develop just the right question, ask just the right people, and crunch the numbers just the right way so that we’re confident the number that comes out the other side is an accurate reflection of the whole population. 

I’m sure my silly example a few minutes ago broke about a dozen rules of good polling. And by the way, that statistic about two thirds of Americans thinking there’s probably intelligent life out in the universe, that’s real. It comes from Pew Research Center, a nonpartisan fact tank that informs the public about the issues, attitudes, and trends shaping the world. For decades, they’ve been gauging public opinion about all sorts of things. Sometimes aliens, but mostly things like politics, social issues, and other things like that. But how do they do it? Can we really get a sense of where the whole country stands without asking each and every person living here? Yeah. We can. I mean, that’s like the whole reason this episode exists. 

You’re listening to Opinion Science, the show about our opinions, where they come from, and how they change. I’m Andy Luttrell, and this week I talk to Ashley Amaya. She’s a Senior Survey Methodologist at Pew Research Center. I’ve been wanting to have someone from Pew on this podcast for a while, because I’ve always respected the work they do and thought they could give an overview of how this whole polling business really works, and Ashley is just the right person for the job. She got her Master’s in Survey Methodology from the University of Michigan and her PhD from the University of Maryland, and before Pew, she was a survey methodologist for a couple other big hitters in the polling world. I talked to Ashley about how she got involved in public opinion polling and she walked me through how Pew goes from a survey idea to the final numbers we see on the news. She also shares how we can be smart consumers of polling results and what the future of polling holds.

You know, this is a podcast about opinion science, right? That’s the name of it. So, my background is in social psychology and the work that I do is in attitudes and persuasion. And so, along the road, I’ve read plenty of the just survey methodology stuff, and in the podcast, we talk more conceptually about attitudes, and how they change, and how we talk about them, but we’ve never really played with the idea of like how do we actually even know what a person’s opinion is to begin with, right? Which is this methodological wonder that 100 years ago there was this paper that the whole contribution was, “Oh, we can measure these things.” And now we’ve made so many strides since then. 

So, I wanted to talk to someone who really sort of has their feet firmly planted in that world and introduce people to… I think, you know, you hear polls on TV and in newspapers, and I think it’s easy for people to kind of brush them off or not really… Just kind of let the numbers wash over them and not realize like, “No, it actually took a lot of work to get those numbers.” And so, we’ll sort of talk about where they come from. 

Ashley Amaya: 

Great. 

Andy Luttrell: 

Great. 

Ashley Amaya:

Happy to help.

Andy Luttrell: 

Good. So, as a sort of a foundation for that, I’m curious what brought you into the world of this whole thing. I mean, Pew is not your first polling rodeo, and so what is it that drew you into this world and what is it that brought you to where you are now? 

Ashley Amaya:

Sure. I actually started as an engineering student at University of Michigan, and I joined their undergrad research opportunity program and wanted to do something a little different, and so I got placed as a research assistant for the surveys of consumers. And it was great. It did… It was all the things I loved, which was doing a bunch of math, but it was a lot more social, both in the subject that we were studying and just the surroundings in general. And so, I ended up switching my major. I kept that job for the rest of my undergrad career and then I went on to get a master’s and PhD in survey methods. 

Andy Luttrell: 

That was that. 

Ashley Amaya: 

It’s been 20 years now. 

Andy Luttrell:

So, yeah, what is it… Are there things that keep you, like what are the new things that sort of keep that world interesting and exciting and worth pushing forward?

Ashley Amaya:

Several things. I think number one is that I’m never studying just one topic, right? So, I get to dabble in economics. I get to dabble in religion. I get to dabble in health. And so, it’s always learning something new from a substantive perspective, which I love. It’s also evolving challenges and evolving methods with emerging technology, right? So, as we move into the internet age, as we moved into cell phones, as new data become available, and now as we also move into alternative data sets or more computing power, which allows us to do more types of analyses than we were previously capable of. 

So, there’s always a new road to go down and question to answer. 

Andy Luttrell: 

So, your position at Pew is what, exactly? I read the title, but I didn’t quite know what that actually means your day to day looks like. 

Ashley Amaya:

Neither does my mother. So, I am a Senior Survey Methodologist at the Pew Research Center, and that means that I do kind of two jobs. Number one is that most of the domestic data that we collect and produce at the center comes from what’s called our American Trends Panel. And that’s a group of about 12,000 people that have agreed to answer surveys for us every few weeks. And so, my job is a lot of quality control there. Are those people becoming what we would typically deem professional panelists, right? So, they’re changing their behavior and how they answer questions just by participating in surveys all the time. Or are we getting the right mix of people on those surveys or recruited into the panel? Or are different kinds of people quitting the panel and does that affect our data quality? 

So, that’s kind of one part of my job, and the other part of my job is to help design some of the larger one-off projects. So, for example, a few years ago Pew did the religious landscape survey, which really tells us what kinds of people are in the United States and what their religious beliefs are. So, that’s a very large endeavor. It’s complicated. They make estimates in all 50 states. And so, I have to figure out, all right, for the next round, how many interviews do we need to do in each state? Do we need translations in alternative languages? What modes of data collection are we gonna use? How long can the survey be without harming our response rate? People don’t like long surveys. How do we weight those data? Et cetera. And so, I help consult on those bigger projects. 

Andy Luttrell: 

To the first part about… So, we’ll get to the evolution of a project like that in a second, but the first thing, so with the panel that you maintain, two questions come up for me about that. One is you mentioned that you’re monitoring, like are they becoming professional survey respondents, and what are you looking for to give you clues that that’s happening, right? Like what are the signs on the other end that this person is answering questions as a survey respondent, not as a person in the world? 

Ashley Amaya: 

So, we actually just published a report on this, which is why it’s top of mind, so you can go to the Pew Research Center website and look at the Panel Conditioning report, is what it’s called. But we looked at it in three different ways. We look at it to see if people that have been panelists for longer periods of time are really different than our new panelists, right? If they are, if they’ve kind of changed their behavior, then that tells us something. We also looked at their voting records over time, so we have the voter files. We can append that information to our panelists. So, we can actually see whether or not they were voting prior to joining the panel, and whether or not that has, on average, changed now. 

And then we also looked… We actually did an experiment where we ask some panelists questions about a topic more often than other panelists to see if the frequency with which we’re asking these questions actually changes their behavior. 
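
To make that first kind of check concrete: a minimal sketch, in Python, of comparing long-tenured panelists with newly recruited ones on some yes/no behavior. Every number below is invented for illustration; none of it is Pew data or Pew’s actual procedure.

```python
import math

# Hypothetical counts: long-tenured vs. newly recruited panelists reporting
# some yes/no behavior (invented for illustration, not Pew data).
yes_long, n_long = 540, 900
yes_new, n_new = 470, 850

p_long, p_new = yes_long / n_long, yes_new / n_new
p_pool = (yes_long + yes_new) / (n_long + n_new)  # pooled rate under "no difference"
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_long + 1 / n_new))
z = (p_long - p_new) / se

# A |z| beyond roughly 1.96 would flag a gap at the 5% level and prompt a
# closer look at possible panel conditioning (or at who is quitting the panel).
print(f"long-tenured: {p_long:.3f}, new: {p_new:.3f}, z = {z:.2f}")
```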

Andy Luttrell: 

And what kinds of behaviors could be changing? 

Ashley Amaya:

So, people can change their actual opinions or actual behaviors, right? So, for example, somebody might not know a lot about a topic, right? So, if I ask you about whether or not Nancy Pelosi is doing a good job as Speaker of the House, if you don’t know who that is or don’t know what her political affiliation is, now that I’ve asked you that question you might go Google it. You might form an opinion that you didn’t otherwise have. So, that’s one kind of true change. 

The other is kind of reporting change. So, I ask you how much media you consume, or how many hours a day or articles a day you read or listen to, et cetera. And you might not really have thought about that before, so the first time you answer that question you might be a little unsure. You’re just trying to give me an answer. And then over time, you’ve been more thoughtful, you’ve now actually considered how you’re watching or reading, and now you have a more concrete answer. So, your answers might actually become more accurate over time because you’re just more thoughtful and conscious. 

Andy Luttrell:

And you can, yeah, just see that people are settling in. And so, in the one sense, if those responses are becoming more thoughtful and conscious, people do that, right? That’s just kind of how… That’s the evolution of learning about a topic. Would that be grounds for saying this person is probably not a naïve respondent anymore and can’t participate further? Or is it just something that’s worth correcting for or acknowledging in looking at the output?

Ashley Amaya:

I think it’s really just that, as you actually suggested, your data’s improving over time, right?

Andy Luttrell:

Sure.

Ashley Amaya:

You’re getting more accurate answers. So, they might be a little different, but it’s actually in your best interest from a measurement perspective.

Andy Luttrell:

That makes sense, because that was my impression. I was like, “Well, this kind of doesn’t sound bad.” At first, I thought this was like this is how we’re catching people who are becoming too professional and just responding, but they’re just learning about the topic, right? They’re responding in that way. 

And so, that sort of raises the other question I have, which is I have to imagine when people hear this they go, “Oh, so all of this stuff we’re learning about the country is coming from 12,000 people who have agreed to take a survey?” Why, and I’m guessing you would say I still think it’s valuable, and the question is why, right? What are you doing in that to maintain this pool of people that makes us confident that it’s actually reflective of what the broader society is thinking? 

Ashley Amaya:

Sure. So, first and foremost with the Center’s research on the American Trends panel, we recruit pretty much every year. So, we are replacing people that did kind of opt to not participate any longer. We are growing the panel, so it doesn’t… It’s not just 12,000. It becomes 13, 14, 15,000 over time. We also look to make sure that we’re trying to recruit a representative population, so we look at age distributions, we look at race and ethnicity distributions, we look at gender distributions, education, et cetera, to see who is in our panel and is that a good average of the population. Are we representing all kinds of people? 

And then we also… Nobody’s perfect. And so, we have to weight the data. So, for example, women are more cooperative and more likely to participate in a poll or a survey than men, and so we actually have to kind of re-equalize that, or weight those data, so that we count the women in proportion to women’s share of the true population and the men in proportion to men’s share, to get those numbers to be more representative. 

And then we also compare our data on a variety of questions to government statistics, right? So, surveys that are larger or have higher response rates, or are otherwise considered the gold standard, and so we make those comparisons to see are we getting similar results. And if not, why not? And is that a good thing or a bad thing?
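
A minimal sketch of the re-weighting idea Dr. Amaya describes, with made-up counts and population shares (purely illustrative, not Pew’s figures): if women are over-represented among respondents, each woman’s answer counts a bit less than one and each man’s a bit more, so the weighted estimate reflects the population mix.

```python
# Invented respondent counts and assumed population shares, for illustration only.
sample_counts = {"women": 650, "men": 350}        # who actually responded
population_share = {"women": 0.51, "men": 0.49}   # assumed share of the adult population

n = sum(sample_counts.values())
weights = {g: population_share[g] * n / sample_counts[g] for g in sample_counts}
# women end up with a weight below 1 (over-represented), men above 1

# Weighted estimate of some yes/no question, with made-up group-level answers:
pct_yes = {"women": 0.60, "men": 0.40}
weighted_estimate = sum(population_share[g] * pct_yes[g] for g in pct_yes)
print(weights)                       # women ~0.78, men 1.4
print(round(weighted_estimate, 3))   # ~0.502, versus 0.53 unweighted
```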

Andy Luttrell: 

You used the words poll and survey sort of interchangeably. I forgot this is… I’ve been wanting to ask someone this question. It feels like polling is sort of the popular term for it, but the methodology literature is all about survey research. Are they interchangeable or to you is a poll different from a survey? This might seem pedantic, but I’m very curious. 

Ashley Amaya:

You know, I think you might get different answers from different people on this. I use them entirely interchangeably. I generally think about polls as being more about attitudes and behaviors and surveys being more a broader definition of kind of anything. It would include fact-based questions or knowledge-based questions as well. But for the most part, they’re the same. 

Andy Luttrell:

And so, why do we do it? Obviously, there’s a nerdy fascination with it, right? Like I read the Pew reports that come out and I go, “This is very cool.” Obviously, I’m interested in opinion. But is it just a curiosity? Like what is the value? We wouldn’t have all these high intensity survey operations in this country if they didn’t have some value. So, to you, what do you think the point of doing this work is? 

Ashley Amaya:

I think it depends on the person, right? So, for the nerds in all of us, or the people that like to watch politics, we love it when the election comes around because we want to know who’s gonna win before they announce a winner, before voting happens. So, there’s an election outcome or prediction aspect to it that I think is just of interest to people. I think for journalists, it gives people an objective source of sentiment. It helps provide some numbers to some anecdotal stories that they might be telling. Or it helps give a voice to groups that are otherwise often silenced or not heard as well. 

I think for other people, for political leaders, it gives them information about what does the public think, and therefore how should I act on that, or what do they want, or what do they feel is lacking, and they can make decisions and policies based on that information. And I also think it helps in a similar vein provide financial planning, right? If you need to know how many people really need different types of access to healthcare, for example, then this tells you. This helps put that into context and helps tell you how much money you need to allocate towards something. 

Andy Luttrell: 

Does Pew do any of the direct policy interfacing? Or is it purely just we’re just telling you what we see here? 

Ashley Amaya:

So, we are a non-partisan kind of fact tank, so our job is to get the information out there. We do not take any kind of policy position. 

Andy Luttrell:

Sure, sure. But do you… Is there any indication that it’s being used? I mean, I have to imagine yes, right? Is there anything you can see that it’s like, “Oh, this work that we did is having an impact.” Right? We didn’t endorse it, but it is having some impact. 

Ashley Amaya:

Sure. I think we frequently get cited in a variety of news sources. We frequently get cited by politicians. I’ve seen our work being used in school textbooks, right? So, I definitely think that it’s useful to people and I hope that it’s being used as intended. 

Andy Luttrell: 

Right. Yeah, so let’s unpack the process. We’ve hit the highlights, but just to sort of make this concrete, one of the ways… As I was thinking about talking to you, I was thinking that it would be a useful exercise to think about the process of dreaming up a survey, implementing it with all the bells and whistles that are needed, and then thinking about how we would interpret it. Because again, I think that would help folks realize it’s not just like, “Oh, someone went out on the street and asked like 30 people a question and just kind of wrote down what they said,” right? It takes some time. 

And actually, that is an interesting first question. Do you have any sense of the timeline, like from conception to the report comes out? I’m sure it varies wildly, but are we talking like couple days, couple weeks, or this is like these are many months of work to get it out there? 

Ashley Amaya: 

It varies significantly. So, for example, when an event happens that wasn’t planned for and we need some information about that, we can write the questions, hopefully test them before they’re fielded, and then administer them to our panel because we have a source of people that are ready and willing relatively quickly, and then we take about 10 to 14 days, typically, to field the survey, right? To collect responses. And then we can write a report and get it out there in a few days after that. 

That’s the exception to the rule, though, right? And on the opposite end of things, these really large, kind of one-off surveys that are tens of thousands of people across all 50 states and D.C., can take years to plan and implement. And if we are doing the survey by mail, for example, that’s several months of data collection. So, it really depends. 

I think on average it probably takes us a few months to actually conduct the survey in the sense of somebody actually starts writing the questions and thinking about the sample size. But we start planning for most of those things a year in advance or so. And the process is really somebody has a research question that they want to answer and the methods team, myself included, start talking to those substantive people on okay, well, how are we gonna answer that question? And so, there’s… Well, what data do you need for analysis? So, what questions do you need to actually ask survey respondents? How many of those people do you need to ask? And that’s a question about, well, what kind of analysis are you gonna do? 

So, are you gonna compare kind of groups of the population? So, we kind of want to have similar numbers of people in each of those groups to maximize our statistical power. Or do you want to say something just overall? And do you want to dig into the weeds here or there, et cetera? And so, it’s a question of how many people, and that also then dictates how we sample, right? So, we don’t always interview the entire Trends panel. Sometimes, we just select a subset of them. 

And then for the Trends panel, most of our protocols for actually collecting data are consistent. So, how we contact them in the first place, how many days there are in a field period, how long the questionnaire is, do they get paid for their responses or not, how we weight the data when the data come in, all of those are pretty standard processes for us on the panel. But those are decisions that if you’re working, every time you’re designing a brand-new study with a brand-new sample, you have to think about. 
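
On the “how many people do we need” question she raises above, here is a rough sketch of a standard back-of-the-envelope calculation for comparing two subgroups. The 50% versus 45% example is hypothetical; real designs weigh many more considerations.

```python
import math

def n_per_group(p1, p2, z_alpha=1.96, z_power=0.84):
    """Approximate respondents needed in each group to tell p1 and p2 apart
    (two-sided 5% test, roughly 80% power), a standard textbook approximation."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_power) ** 2 * variance / (p1 - p2) ** 2)

# e.g., distinguishing 50% vs. 45% support between two hypothetical subgroups
print(n_per_group(0.50, 0.45))   # on the order of 1,500 respondents per group
```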

Andy Luttrell: 

When you use the Trends panel, are you setting it mostly as we’re gonna field it for this number of days, or until we hit this number of people? Again, it probably depends; that’s frequently the answer. But if it’s like, “Oh, we’re gonna plan to have it open for seven days,” probably there’s a pretty reliable benchmark for how many respondents you can plan to get in those seven days. And so, the amount of time and the number of people are probably about the same metric in that sense. Is that right? 

Ashley Amaya:

Right. We normally set it for time. And that’s typically because we do have publication timelines that we’re also leaning towards, or that we have to move on to the next survey that we’re going to do, right? So, ours are pretty much set on time, not number of people. There’s also a difference between later respondents and earlier respondents sometimes, depending on the topic, and so we want to make sure that we’ve got enough time to get those kinds of late participants into the survey. 

Andy Luttrell:

So, yeah, I hadn’t thought… So, I will say in the psychology world when we do online surveys, often we use these methods that you can get lots of people very quickly. But I hadn’t thought of you might want to push that out further because you’re capitalizing on a very maybe narrow sample by only keeping it open for, you know, six hours or whatever it is. Are there… Can you speak at all to what those differences might be? Like what are you gaining by keeping it open just a little bit longer?

Ashley Amaya:

That’s very topic specific. Who are those later respondents and who are the early respondents? The early respondents typically, in general, are very civically engaged, right? They’re more likely to vote. They’re more likely to volunteer. They’re more likely to be the head of their homeowner’s association, right? Whereas with later respondents, you get folks that aren’t that eager to participate. That’s an example, but there are different correlations depending on the topic. 

Andy Luttrell:

You mentioned too especially when you have a quick turnaround, right, something very timely. Am I reading your CV right that… Did you join Pew like in the midst of the pandemic? Is that right?

Ashley Amaya:

I did. 

Andy Luttrell: 

Yeah. Because those early Pew data I think were very useful. I mean, I’ve cited them recently for a paper, right? In terms of differences in who’s sort of quick to uptake different recommendations and that sort of stuff. And you look and you go, “Oh my God, these data are from March and April 2020.” Which is like that’s… You had no time. You had to just hit the ground running. And of course, you did, because this was the most timely thing to look at. But you mentioned ideally having the opportunity to test questions, so what does that look like? Why would you want to test questions and how do you actually do that to know if they’re good? 

Ashley Amaya:

Well, we test questions because not everybody understands things the way that we intend them or have the same vocabulary that the people writing them have. And people don’t always interpret them the same way, so what testing includes varies dramatically, right? So, when we are short on time, every questionnaire that we field goes through our methods group and it is reviewed always before it’s actually fielded. So, hopefully you get kind of an expert review from that perspective. 

We also oftentimes start slowly, so we might only collect a few dozen responses first to see if anything looks weird. And what I mean by that is maybe we had a pretty good sense of what the distribution might look like, how many yeses, or noes, or strongly agrees, disagrees, et cetera, and if that’s just totally off, we might dig a little deeper. We also always ask a question at the end of our surveys about… that’s kind of open ended. Did the survey seem biased, did you have any feedback, et cetera, and we review all of those comments, as well. 

So, if we have more time, there are a couple things to do. Number one is cognitive interviewing. And that’s where you actually recruit people, you ask them the survey question, the question you want to field, and then you ask them follow-up questions. Well, how did you come up with your answer? What did this question mean to you? Did you understand this word? That’s very time consuming and it’s also very expensive, right, so it doesn’t always get done. You can also field questions on kind of these convenience panels or non-probability panels. 

So, for example, Survey Monkey has a panel where you can set up a question in five minutes and go ask a bunch of people to participate, and you’ll get some feedback that way, as well, to see, okay, well, what does the distribution look like? Are they having problems answering this question? Are they offended by this question? Et cetera. And so, those are kind of the quick hit items on testing. 
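
The “start slowly and see if anything looks weird” step can be as simple as comparing the first few dozen responses to the distribution the team expected. A minimal sketch with invented numbers (not an actual Pew check):

```python
# Invented soft-launch numbers: the first few dozen responses vs. the answer
# distribution that was expected going in.
observed = {"approve": 20, "disapprove": 12, "not sure": 16}
expected_share = {"approve": 0.50, "disapprove": 0.35, "not sure": 0.15}

n = sum(observed.values())
chi_sq = sum(
    (observed[c] - expected_share[c] * n) ** 2 / (expected_share[c] * n)
    for c in observed
)

# With 2 degrees of freedom, a statistic above ~5.99 (the 5% cutoff) suggests the
# early numbers are "off" relative to expectations and worth a closer look.
print(round(chi_sq, 2))   # ~12.8 here, so this questionnaire would get a second look
```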

Andy Luttrell: 

I realize I steamrolled right by the fact that you joined Pew in the beginning of the pandemic. What was that… I mean, it’s gotta be… Were you on the road to making that shift pre-lockdown? Or was that something that happened like post-lockdown? You thought, “Oh, let’s just try something new.” 

Ashley Amaya:

I had actually interviewed with them probably during one of the last weeks that everybody was in the office, so I was in the office physically, and then after that it kind of just all went from there. Not because of Pew, but just because, personally, I was terrified of making a job shift in the middle of a pandemic. But I am now… I’m happy I did. 

Andy Luttrell: 

Were you involved in any of that early work? Or you were probably just still getting your bearings at that point. 

Ashley Amaya:

Right, right. No, I was not involved in any of the COVID work that early. 

Andy Luttrell:

Okay, so a research question is identified, questions are designed and tested. Oftentimes, it sounds like they go to a predetermined panel that you use all the time, but sometimes it’s a whole new sampling strategy. Data come back, they’re weighted, you write up a report. Does that pretty much capture the whole process? 

Ashley Amaya:

Yes. 

Andy Luttrell:

Okay. Just making sure there’s no stone left unturned there. So, the kinds of big projects that you’re talking about that take so long, and it sounds like those are the ones that you’re sort of the most intimately involved with, can you give any example of a kind of project that would warrant that sort of extra level of investment? 

Ashley Amaya:

Sure. So, two things come to mind. Number one is any time we want to look at very rare populations, or very specific geographies, so the American Trends panel is really meant as a national general population panel. So, if we want to study just Asian Americans, or if we want to study every… We want to make estimates for every single state. That’s a place where our current panel is just not equipped to do that. It’s not designed to do that. 

And so, that’s why we have to do these one-off surveys. Any time we need to interview in languages other than English and Spanish, right, that’s when we are gonna have to do something that’s not our typical process. And then if we want a lot of people, that’s another place where we need additional resources and additional kind of design help. 

Andy Luttrell: 

So, what are the kind of things that have to happen to recruit unique populations? Is it all online? Are we random digit dialing still? I don’t know if we are still doing that. Or how are we getting out and capturing the voices that need to be captured uniquely? 

Ashley Amaya: 

So, we don’t do a lot of random digit dialing, also known as RDD, and that’s what telephone surveys are. We don’t do a lot of that anymore. It’s not entirely off the table, but it’s not a lot of what we do. Most of what we do these days is actually what’s called address-based sampling. So, the postal service has a list of arguably every residential address in the United States, and they make that commercially available through some vendors. And you can draw a sample of addresses and mail information to those addresses either with a link so they can go online, or actually mail them a paper survey and they can fill it out and send it back in. That’s typically how we go about our kind of specialized surveys. 

Other people still use random digit dial or telephone. And again, they sample a random telephone number of 10 digits, right? And they dial it, and they hope it connects, and they hope somebody picks up, and they hope it’s not a business. They hope it’s not a minor’s cell phone, right? Other people use the internet, and that is sometimes through a panel similar to our Trends panel, that is recruited via one of these other methods. So, actually people got recruited through the mail, or mail invitation to fill out something online, or phone, et cetera. 

The alternative to that is what’s known as non-probability sampling. And that’s when you get a popup on your screen that says, “Hey, do you want to answer a few questions?” That’s not random. It’s not statistically a probability sample. And it requires making some assumptions about people. And so, in general, the Center shies away from that kind of survey frame. There are multiple ways to do online surveys, but we look at ways to do online surveys where people were recruited through phone or through mail in the first place. 
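
A minimal sketch of the address-based sampling idea, using a toy frame. The real frame is licensed from commercial vendors who build on the USPS delivery file, and real designs are more elaborate than a single equal-probability draw.

```python
import random

# A toy frame with one entry per residential address (stand-in for the vendor file).
frame = [f"address_{i}" for i in range(1_000_000)]

n = 5_000
random.seed(1)
sampled_addresses = random.sample(frame, n)   # equal-probability draw of addresses

selection_prob = n / len(frame)               # every address had the same chance
base_weight = 1 / selection_prob              # each sampled address stands in for 200 others
print(selection_prob, base_weight)            # 0.005 200.0
```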

Andy Luttrell: 

Is the Trends panel, like the go-to one, is that a mail-recruited panel? 

Ashley Amaya:

Exactly. 

Andy Luttrell:

Okay, so this all sounds wonderful, obviously. These are the steps that need to be taken. My impression, though, is there are lots of polling numbers that float around our media, and I just got this feeling in the last election that there were… I mean, just so many polling numbers that were used for whatever position anyone wanted. And it’s hard as a consumer to know like, “Well, why…” This person is saying more than half the country loves that candidate. This person’s saying more than half the country loves that candidate. Just logically, those cannot both be true numbers. What can we do as consumers to cut through that and find the data that are actually reliable indicators of public opinion?

Ashley Amaya:

So, I think there are a few questions to ask yourself. The first is kind of who sponsored it and who conducted it. And the reason I say that is because in today’s day and age, where anybody with a few thousand dollars can do an internet survey, there’s a low barrier to entry. And so, you really want to look for those people that you know and kind of have been around for a while, and have been doing this for a while, to put a little more trust in. That’s not to say the new folks on the scene are always terrible, but it’s definitely something to ask yourself and think about. 

Another thing I would ask is how many interviews were conducted and in what timeframe, right? So, are we talking about 10 years ago or are we talking now? And are we talking 10 people or are we talking 1,000 people? And that really helps identify how stable that statistic is, or I know this gets into the weeds, but basically kind of what’s the plus-minus there? 

The third thing I would ask is what’s the source of the frame, right? So, how did they identify who they want to interview? And that’s where that probability versus non-probability concept comes in, right? Did they recruit via a list that should cover the entire population? Most people have a phone, a cell phone, or a landline. Most people live somewhere and have an address. And so, if we’re sampling from one of those frames, we’re in a pretty good place, whereas some of the non-probability stuff is kind of random and depends on a lot of things. 

I’d also say kind of how’s the survey weighted, right? Is it unweighted? And if it’s unweighted, I give it very little credibility, and that’s because, like I mentioned before, we know that women are more likely to respond than men. We know that minority groups are less likely to respond. People with less education and lower incomes are also less likely to respond. So, we really need to make sure that we correct some of that statistically in the weighting before we report the data. 

And the last and probably the most important thing I’ll say is it’s all about transparency. If you can’t find the answers to any of those questions, then something fishy is going on and you really should be able to find that information. It should be readily available, readily published, and that’ll help you sort through the details. 
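
On the “how many interviews” question, the plus-minus Dr. Amaya mentions is the margin of error, and it shrinks as the number of interviews grows. A minimal sketch for a simple random sample; real polls are usually a bit wider once weighting is factored in.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# The "plus-minus" shrinks as the number of interviews grows:
for n in (10, 100, 1_000):
    print(n, "interviews ->", round(100 * margin_of_error(0.5, n), 1), "points")
# 10 -> ~31 points, 100 -> ~9.8 points, 1,000 -> ~3.1 points
```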

Andy Luttrell: 

Nice. Very helpful. You know, it reminded me. I have a somewhat technical question that I have had, which is about the weighting part, and you kind of answered it for me, and I just kind of wanted to clarify it, which is that I had this impression that if you do like a true probability, say random sampling procedure, you don’t need to do any fancy weighting, right? Because all the legwork was in the recruitment side of things. But the response bias is, I’m guessing, why it’s still important to go like, “Sure, we tried to get as many men as women to answer these questions, but if more women than men did, we have to sort of make sure…” 

You know, also, okay, first I’m gonna leave that question out there. Is that about right? 

Ashley Amaya:

Yes. Yes. If you had everybody respond, then you’re right. The sampling would take care of it. But that’s not reality. 

Andy Luttrell: 

So, the other part of the weighting question is how do you decide what to weight by, right? There’s an enormous number of variables that you could weight by, right? And in some ways, you’d go, “Well, do I have any reason to think that men would answer this question differently than women? Why do I care about gender but I’m not weighting for age, or I’m not weighting for these other identities?” You know, in a given report that you would put out, how many variables do you end up weighting the responses by and how do you decide which are worth weighting?

Ashley Amaya:

So, for the American Trends panel, we have a pretty standard weighting procedure. It’s not… We do create one-off weights from time to time, but for the most part it’s set. It’s pretty standard. I couldn’t tell you off the top of my head how many variables there are, but you can read about it on the Pew Research Center’s website. 

Andy Luttrell: 

Transparency. 

Ashley Amaya:

We publish all of that. Exactly. How people make that choice, the default is you weight by sociodemographic information, all right? So, typically age, gender, income, education, et cetera. After that, you’re really weighting by the things that are both correlated to your response, so what makes you more or less likely to respond for whatever reason, and the things that are also correlated with your outcome, right? So, what makes you decide that you believe or don’t believe in climate change? Or that you are more likely to watch CNN than Fox News? 

The things that are related to both are really the items that you should be weighting by. In a lot of the post-election research, especially after 2016, on what the polls were doing right and wrong, one of the things that came out was that a lot of pollsters at the time, pre-2016, were not weighting on education. And that turned out to be really critical. And so, that’s what a lot of the pollsters started to do after that election. 
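
A common way to weight to several population margins at once, including education, is raking (iterative proportional fitting). Here is a minimal sketch with a toy sample and invented targets; it is not Pew’s actual procedure, which is documented on their website.

```python
from collections import defaultdict

# Toy respondents as (gender, education) pairs, with invented population targets
# for each margin; real targets come from sources like the American Community Survey.
respondents = [
    ("woman", "college"), ("woman", "college"), ("woman", "no college"),
    ("woman", "no college"), ("woman", "no college"), ("man", "college"),
    ("man", "no college"), ("man", "no college"),
]
targets = {
    0: {"woman": 0.51, "man": 0.49},            # slot 0 = gender
    1: {"college": 0.35, "no college": 0.65},   # slot 1 = education
}

weights = [1.0] * len(respondents)
for _ in range(25):                             # a handful of passes usually converges
    for slot, target in targets.items():
        current = defaultdict(float)
        for person, w in zip(respondents, weights):
            current[person[slot]] += w          # weighted total in each category
        total = sum(current.values())
        for i, person in enumerate(respondents):
            weights[i] *= target[person[slot]] * total / current[person[slot]]

print([round(w, 2) for w in weights])           # weighted margins now match both targets
```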

Andy Luttrell: 

Okay. As a way of wrapping up, I want to ask you about alternative sources of these data, because I know you’ve done some work on that. So, gold standard, this is from the earliest days, the old straw polls and all that, so we’ve been asking people questions directly for ages. But now we have all sorts of new ways to know what people are thinking, and so what… Could you just give us a take of like where else are people starting to look to gauge public opinion outside of mailing them an invitation to respond to a survey?

Ashley Amaya:

All over the place. I think that a lot of folks are doing research into social media, right? And analyzing Twitter feeds. I think that’s great in a lot of ways to help identify how people that publish information, or post or tweet about things on social media, are responding and how they feel. But that’s not the general population, right? And so, it really depends on what your goal is and what your research objective is. People are using Google search data to see what people are looking at, or looking up, or care about, et cetera. In general, we’re using natural language processing, which is basically a big computer that can analyze lots of text all at once, and we can use that to analyze media or leaders and what they say, like interviews with them. We can analyze sermons and church information. 

And then there’s a whole lot of data that’s not necessarily measuring attitudes, per se, but is definitely giving us more information that we want or need. So, big databases, like voter records: we know whether or not you voted, and we have access to that information through voter files. Folks that do work in public health, with consent, may have access to electronic health records or medical records, right, so they can see things like vaccination rates. We’re looking at imagery data. So, there’s been some cool satellite data that can look at night and take pictures of light pollution, and that helps identify where the wealthy live in developing countries, right? Because they have electricity. Or it might help us to identify farmland, et cetera, and we are using satellite imagery data to look at crop growth, to help with national agricultural statistics. 

So, there’s a whole… If you can think of a number, there’s probably a dataset for it. 

Andy Luttrell: 

I’ve been reading some of the recent work that’s using GPS data when they were tracking movement in early pandemic times, and able to sort of see like where are people just still roaming around and where are they actually staying in their homes. 

Ashley Amaya:

That’s a great example. 

Andy Luttrell: 

I don’t know if you’ve all messed with any of that other stuff, but the imagery one, that is very cool. And those have to involve partnerships outside of Pew, right? Or you’re more talking generally, or is Pew actually doing some of this work, as well? 

Ashley Amaya:

I’m talking generally. Yes. 

Andy Luttrell:

Okay. Just to be clear. Very cool. Well, I don’t want to take any more of your time. I think we’ve gotten our education in polling, and I appreciate you taking the time to walk us through those steps. 

Ashley Amaya:

Well, thanks for having me. 

Andy Luttrell:

All right, that’ll do it for another episode of Opinion Science. Thank you so much to Ashley Amaya for taking the time to talk about polling, and also thanks to Calvin Jordan and Rachel Weisel at Pew for helping set this up. To learn more about Pew and the work that they do, check out the show notes for links to their website and links to some of the topics that came up in our conversation. By the way, this episode was tricky, because there’s this thing in voiceover where P sounds can get distorted through a microphone. They’re called plosives. Anyhow, I kept having to say things like Pew public opinion polling, so I’m sorry if that hurt your eardrums along the way. 

For more about this show, you can pop on over to OpinionSciencePodcast.com for transcripts of this and other episodes, links to fun stuff, and even a picture of me, just in case you want to ruin whatever face you’ve imagined goes with this voice. Find a place on the web to rate and review the show. Apple Podcasts, Podchaser, Goodpods, Stitcher. Help people find the show. And subscribe to Opinion Science on your favorite app. Never miss an episode. And next one is a big one too, so get excited. Okay, that’ll do it for me. Thanks for listening and I’ll see you in a couple weeks for more Opinion Science. Bye-bye! 
