Alex Coppock is an assistant professor of Political Science at Yale University. His research considers what affects people’s political beliefs, especially the kinds of messages people regularly encounter: TV ads, lawn signs, op-eds, and so on. In this episode, he shares the findings of a big, new study that just came out as well as what it means for how persuasion works.
*Note: This episode was replayed on April 24, 2023, following the release of Alex’s book, Persuasion in Parallel: How Information Changes Minds about Politics.*
Things that came up in this episode:
- A new study testing the efficacy of dozens of political ads (Coppock, Hill, & Vavreck, 2020)
- The long-lasting effects of newspaper op-eds on public opinion (Coppock, Ekins, & Kirby, 2018)
- The effects of lawn signs on vote outcomes (Green, Krasno, Coppock, Farrer, Lenoir, & Zingher, 2016)
- Framing effects in persuasion (for an overview, see Chong & Druckman, 2007)
- The sleeper effect (see here for an overview)
Transcript
Download a PDF version of this episode’s transcript.
Andy Luttrell:
I’m sure you noticed that the United States has a big election coming up. Hard not to. By the way—make sure you vote!
But at the heart of any big election is a bunch of campaign messaging. Each candidate wants to tell you why you should vote for them and not the other candidates. Ads on TV. Viral social media posts. I’ve even seen giant billboards on the highway promoting Biden and Trump. I mean not on the same billboard…different billboards. But billboards!
These campaigns want to change your opinions. But can they do it? Do all of these messages have any effect or are people just going to vote how they were always going to vote? Can a late-night TV spot—the tenth one you’ve seen that day—change anyone’s mind? Even a little?
You’re listening to Opinion Science, the show about our opinions, where they come from, and how they change. I’m Andy Luttrell. And this week I talk to Alex Coppock. He’s an assistant professor of political science at Yale University. He studies how people incorporate new information into their political attitudes and beliefs. Recently he and his colleagues published a big study testing the persuasive effects of dozens of actual political ads used in the 2016 U.S. presidential election. And did they find that these ads persuaded people? Well how about we just leave this as a cliffhanger until Alex and I talk about his results.
One quick note about terminology before we get into it. We talk a lot about whether persuasion effects are heterogeneous or homogeneous. If persuasion has heterogeneous effects, it would mean that some persuasive messages are more effective than others or are more effective for some people than for others. If persuasion has homogeneous effects, it would mean that any message would get the job done about as well as any other message. Although previous theories of persuasion and probably your own intuition would say that persuasion is heterogeneous—some persuasion strategies are more effective than others—Alex takes the provocative position that this may not be the case. Or at least that heterogeneity is pretty small.
Anyhow, enough from me. Let’s jump into my conversation with Alex Coppock…
Andy Luttrell:
So, your background is what? I guess we can sort of officially start if you want.
Alex Coppock:
Sure.
Andy Luttrell:
Your background is what? Sort of what is it that got you into political attitudes and persuasion?
Alex Coppock:
Yeah, so this takes me back to when I finished undergrad, there were no jobs, so I went to work in public radio. Then I didn’t get the job in public radio, so I was like, “Oh, I’m gonna do public policy. I’m gonna do Radiolab but for public policy.” That was my pitch to grad school, so I went to policy school. It turns out they don’t teach you policy at policy school, and so I happened to wander in for no good reason into Don Green’s field experiments course, and so Don Green is like a giant of randomized experimentation in political science, and he sort of like got his feet wet on attitudes, right? So, I learned from him, I took his political psychology course. We studied that Lord, Ross & Lepper ‘79 piece. That’s the one that got me going on attitudes.
I was like, “This is wrong in an important way! You have to fix it!”
Andy Luttrell:
So, for people who don’t know, what was… What is that article about and what was it that captivated you about it?
Alex Coppock:
Sure. So, Lord, Ross & Lepper ’79, this is the piece that people cite when they’re saying it’s counterproductive to try and persuade somebody because people reject evidence they don’t like, and then they actually don’t move in the direction of that argument. They retrench further into their own. This is a false claim and the data that is brought to bear in that article, it suffers from two interlocking research design problems. There’s no random assignment and they ask subjects, “Hey, Andy, on a scale from negative eight to eight, how much did your opinion change?” And the people who are pro capital punishment use the higher end of the scale. The people who are anti capital punishment use the lower end of the scale. And so, we just like took these positive and negative numbers as evidence of moving one direction or another. People don’t know which direction they moved!
You can’t just… Figuring out the causes of things is hard. You can’t just inspect your soul to know which potential outcome you have. So, anyway, between-
Andy Luttrell:
Were these issues apparent when you read it? Was that what fired you up? Or was it the concept first and then later you realized, “Oh, wait. Maybe we don’t actually have this figured out.”
Alex Coppock:
So, I’d taken Don’s methods course, and then this was his substantive course, and he was putting this out for us to take apart. Like he didn’t say so, but he brought us to it, right? And then my co-author, Andy Guess, who’s a prof at Princeton now, we locked eyes across the table. We’re like, “We’re writing this paper.” So, that was a fun moment.
Andy Luttrell:
Nice. So yeah, so when I invited you to be on this show, you said that you’d love to talk about persuasion. It’s your favorite topic. And so, I’m curious, what is it about persuasion that gets you so excited as to say, “It’s my favorite topic?”
Alex Coppock:
So, I get really excited about causality. Causality is… This counterfactual thinking is exciting to me, and we normally have in political science, whether or not you voted is the main causal question that is being answered by experiments, right? Because it’s written down in the… So, attitudes are like the next DV that we can study experimentally. Social psychologists have known this forever, to bring students into the lab and randomize them… Well, except for some social psychologists who will remain nameless. Randomize them into conditions and then study their… So, the effect of arguments on your attitudes is a question that we can answer well, and if you don’t do it right, you get the wrong answer. And so, that’s sort of like the exciting mix for me, is that everyone has an experience of trying to persuade other people, and they have this experience of it not working, like you talk to your uncle at Thanksgiving dinner and it feels like you get further apart, not like you make progress.
And it’s just an inferential error. If you were to randomly assign whether or not you try to persuade your uncle and then measure his attitudes in those two counterfactual worlds, I promise, I promise that you make progress when you argue with him. So, that’s what’s exciting about, is that it’s counterintuitive the way that persuasion works.
Andy Luttrell:
And I’m hearing also that the method is just clean, because persuasion has been around for so long in psychology that sometimes people go, “Oh, this… We know, we know, we know.” But it’s like no, it’s so cool that we have these paradigms that are slick, and clean, and when you do it right, you get answers. I’m teaching this class for undergrads where they build a research project and I’m like, “We’re all doing persuasion studies, because I can show you how to manipulate aspects of a message, measure an attitude at the end, and test whether that made a difference.” And there is something just sort of nice, and pure, and clean, and like you say, that sort of connects to people’s experiences in talking to other people. And persuasion happens constantly.
Alex Coppock:
Right, like you try to persuade your family members to vote for Biden. They’re Trump supporters and you’re like, “When I try and persuade them, it feels like I don’t make any progress.” Well, imagine that you could write down the dependent variable of how much on a scale from 1 to 100 do you like Biden. I bet you move them just a little tiny bit when you try and persuade them. Like not a lot, but it’s in the right direction even though it feels like it’s… We have an aversive response to talking about politics with other people.
Andy Luttrell:
So, it’s interesting to hear you be this hopeful after just reading the new paper where you tested all of these experiments, because one of the interpretations that I feel like people come away with is, “Oh, you find…” And I’ll ask you to describe this, but basically the gist is you find a very tiny sort of average movement in people’s opinions, or I forget what the outcome is, when they see these political ads. And some could interpret that to be like, “Well, throw out persuasion, because why bother?” Whereas you’re saying, “No, no, no. That’s so important that you’re moving the needle even that much.” And so-
Alex Coppock:
You got it.
Andy Luttrell:
Could you just describe kind of what you did in this enormous persuasion study and what you think it means?
Alex Coppock:
Sure. So, I want to give credit to my co-authors, Seth Hill, and Lynn Vavreck, who deserve the kudos for this great design. We’d been sort of one at a timing them in political science, where we’ll do a sample of 500 subjects, we’ll do one ad versus the control, and then we try and generalize from this one thing. And Lynn and Seth were like, “This is crazy. We need to do all of the ads.” And so, they got the most important advertisements from the 2016 election in real time, as they were coming out each week. They brought in representative samples via this YouGov panel, randomly assigned them to either an insurance ad or seeing that week’s ad and measured who are you gonna vote for and how much do you like each of the candidates, so like a favorability scale.
This design is so refillable, right? It’s like an episode of The Simpsons. You can just put any ad into this design, over and over and over again, and we can just find out. As opposed to, just to contrast this with the focus group approach, like with a focus group, what are you doing? You’re taking 10 people and then you’re asking them for how much they like the ad. I have no interest in how much you like my ad. The only thing I care about is whether it moves favorability or not. So, it turns out that Republicans really don’t like Democrat ads and Democrats really don’t like Republican ads. But they nevertheless are persuaded a little bit in the direction of those things, so we have to separate out this affective response. I give a negative evaluation of the advertisement. That’s not the point!
The point is does it change your vote choice? And so, what we find is that averaging over all of them, they move favorability one twentieth of a scale point on a one to five scale in the right direction. Okay, so one twentieth of a scale point on a one to five scale is a very small average movement. We move people 0.7 percentage points, so not a full percentage point, but 0.7 percentage points on vote choice, so that makes you that much more likely to vote for the advertising candidate.
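(For listeners who like to see the arithmetic, the quantities Alex describes are just differences in means between randomly assigned groups. Here is a minimal Python sketch on simulated data whose built-in effects are set to roughly match the magnitudes he quotes, about 0.05 points on a 1-5 favorability scale and about 0.7 percentage points on vote choice. The data are invented for illustration; they are not the study’s data.)

```python
# Toy difference-in-means calculation; all data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
treated = rng.integers(0, 2, size=n)  # random assignment: 1 = saw the ad

# Favorability on a 1-5 scale, with a built-in effect of 0.05 scale points
favorability = rng.normal(2.8, 1.2, size=n) + 0.05 * treated
# Vote choice, with a built-in effect of 0.7 percentage points
vote_for_candidate = rng.random(n) < (0.45 + 0.007 * treated)

ate_fav = favorability[treated == 1].mean() - favorability[treated == 0].mean()
ate_vote = vote_for_candidate[treated == 1].mean() - vote_for_candidate[treated == 0].mean()

print(f"favorability effect: {ate_fav:.3f} scale points")              # ~0.05
print(f"vote choice effect:  {100 * ate_vote:.2f} percentage points")  # ~0.7, give or take sampling noise
```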
So, then the question is okay, so we have these very, very, very small effects. The next question out of everybody’s mouth is, “Well, is it different for different ads? Is it different for different people? Is it different over different points in the campaign?” And so, our article evolved to try and assess heterogeneity in the small average effect. So, this is where I feel like I lose people, like you’re telling me persuasion is small, great. And then you’re saying it’s not heterogeneous.
Andy Luttrell:
It’s small always.
Alex Coppock:
It’s just small. In fact, the editor of the journal emailed me the night before it was going to press and they said, “You know, the word small is in your title two times. Did you mean that?” And I was like, “Yes. I did in fact mean that.” It’s a bad joke, but it was a joke. I meant it.
Andy Luttrell:
So, why might you expect this to be a heterogeneous effect, or an effect that is different for different ads, or for different people, or for different locations? And what does it actually tell us? If we’re trying to build a theory of persuasion and how it works in the world, what does it tell us that you don’t find that kind of heterogeneity?
Alex Coppock:
So, in every social science field they have built theories of under what circumstances an ad, or any kind of persuasive attempt will be more effective. So, there are psychological models where you’ve got different personality types, or you’ve got different need for cognition, like how much do you care about thinking about things? There’s your moral foundations. There are… Do you need to have an ideological match between the message and the person? Does it matter who’s sending it? So, every one of these fields has just built up this huge edifice of theory for each of those possibilities, but it’s never studied in a systematic way, because it’s hard to assess across all of the theories at the same time. You’d have to have an enormous survey measuring so many psychological batteries, and you can see how this leads to overfit theories.
I can’t remember the paper, but there was some paper that counted up in psych how many branded theories there have been in the last 20 years, so like they have capital letters in the paper, and then counted how many of them are ever used again. The vast majority of these “in this article we develop a novel theoretical framework” theories, well, those novel theoretical frameworks are just dead on arrival. Because it’s not… We’re not aggregating to anything with that. And so, this article is trying to say, “Well, let’s take the biggest theories in political science. Do you need a partisan match? Are the ads from your party more effective for you than the ads from the other party? Does it matter when during the election?” People think that earlier ads are gonna have a bigger effect than later ads, because attitudes crystallize over the course of a campaign is the idea. Does it matter whether it’s a primary versus a general? Does it matter whether it’s an attack ad versus a promotional ad?
So, we just… This design allows us to investigate in a very straightforward way each of those claims. And they just all fall down. Every single one of them. The small effects are small, and they remain small regardless of all of these different conditions, and so it’s sort of a boring null effects paper.
Andy Luttrell:
Right. But what does that mean then for all of the data that came before you that shows that those things matter? How are we to interpret those in light of what you’re finding?
Alex Coppock:
So, it just always happens that when someone says, “No, Alex, look. I did a paper and there is heterogeneity.” And I look into it, and there are a couple of ways out that I have for this. Number one is that people often are measuring a dependent variable I don’t care about. They’re measuring how much I like the ad, or a rating of the… The classic studies where you’re like, “This study shows that global warming is happening. This study shows that global warming isn’t happening.” And you rate the two studies. Well, people who don’t like global warming say that the studies that find it are bad. But that’s not the question. I want to know what is the effect of that study on someone’s belief about global warming, so it’s the wrong DV a lot of the time, is that people are studying ratings of the persuasive treatments. How persuasive do you think it is? If you showed this to others, do you think it would work? Do you like it? Do you believe the creator of the ad?
Well, that’s just affective response. That’s… So, one way out is to dismiss the DV. Another way out is a lot of these studies ain’t randomized, and they do within changes, and say, “I measure you once. You’re exposed to an ad. I measure you again.” And those within changes are just really unreliable guides to causal effects. And then the third way that these studies always just come out is that you are doing a squinter on the P value on the interaction term, and you’re like, “Well, congratulations that it’s like different in a statistically significant sense, but substantively, these two effects are the same.” Like for Republicans, I moved you one point in the direction and for Democrats it’s 1.5. And I just need enough data before I can get a star on the difference between those two things.
And so, then people get misled in that third way, too.
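(A quick illustration of that third point: with a large enough sample, a subgroup difference of, say, 1.0 versus 1.5 will put a star on the interaction term even though both groups move in the same direction by roughly the same amount. The sketch below uses simulated data and the statsmodels library; every number in it is invented for illustration.)

```python
# "Statistically significant but substantively similar" subgroup effects.
# All data are simulated; the true effects are +1.0 for one group and +1.5 for the other.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 100_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),      # random assignment to the message
    "republican": rng.integers(0, 2, size=n),   # background covariate
})
effect = np.where(df["republican"] == 1, 1.5, 1.0)
df["y"] = rng.normal(50, 10, size=n) + df["treated"] * effect

model = smf.ols("y ~ treated * republican", data=df).fit()
print(model.summary().tables[1])
# With n this large, the treated:republican interaction earns its star,
# but the substantive story is the same for both groups: a positive
# effect of roughly the same size.
```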
Andy Luttrell:
There’s another version of it, too, where there might be something to be said about tailored messages. I’m wondering sort of in general what you think about that, but that any message can only tailor to so many things, and once you start to account for all the ways in which a person could be susceptible to your ad, they have other qualities of themselves that make them resist something about that ad. So, I wonder if you could say, “Well, there may be still credit to the idea of tailoring, it’s just that in practice, at scale, it’s really difficult to actually pull it off in a reliable way.”
Alex Coppock:
Yeah. I really hope that this article is a challenge to micro targeters everywhere, like so what does a micro targeter do? They build a model of support on the basis of background variables and then they go to town with the fanciest machine learning, and then they say, “Oh, I can segment your audience for you.” The thing that’s the problem is that the thing that they are adapting their model to is not what we care about. We don’t care about levels of support. We care about changes in support. We care about causal effects. And you can’t just see causal effects. That’s the problem. That’s the problem is that we have a counterfactual quantity that we’re after. It’s like what’s the difference in your attitude about Biden, whether you saw this ad or didn’t see it?
You can’t just inspect that, so you can’t train models for it, and so the models are just wrong. They’re just wrong. So, I guess the… Your point was… Well, suppose that that weren’t a problem. Suppose that you could actually do this and then adapt specifically to different kinds of people. Fine. There are so many dimensions to adapt to, then you’re gonna get into these finer and finer slices. It’s possible. I think that honestly, the only place that I see room for micro targeting is that affective first step also changes whether people ever see your ad. And so, if it’s true, suppose that my ad just moves everybody a little tiny bit in the direction of Biden. Great. But the people on the right don’t ever see it, because they avoid it when given the opportunity. That’s gonna be like heterogeneous treatment effects of the ad, but it’s in this two-stage process.
It’s like I have noncompliance in the sense that I fail to deliver the treatment to some subset. Even if the treatment effects would be the same if I successfully were to do it, so like maybe there’s something to say that it’s like you can bring a horse to water. So, if the horse doesn’t want to drink, they can’t get the… It would be equally refreshing to all of the horses, but some don’t drink, you know?
Andy Luttrell:
They say, “That pond is not for me.”
Alex Coppock:
That pond is not for me. That sounds like a liberal pond.
Andy Luttrell:
And there is some evidence from tailoring that what it’s really doing is getting people to pay attention to the message, and that if what’s inside is compelling, then people are willing to be driven by that. But if what you have is not very compelling, just because I’ve tailored it to you doesn’t mean I’ve persuaded you. I’ve gotten you to see just how phony I am by tailoring to you.
So, one thing is that it’s in a very specific context that you looked at these things, right? You looked at them in the case of the election, political campaign ads about major players in politics, and in some ways you draw some conclusions about persuasion with a capital P, and I’m wondering how willing you are to say that this is something… Really, we’re learning about persuasion. That these effects aren’t that heterogeneous. Or is this like, “Well, so far, really all we can say is that when it comes to these kinds of major political campaigns, the kinds of ads that we’re delivering may just kind of be resonating the same way for everybody.”
Alex Coppock:
Yeah. So, I wrote a book called Persuasion in Parallel, maybe somebody will publish it some day, that looks beyond that particular context. We’re still in survey experiment world, where I’m randomizing other kinds of treatments, like op eds, or tweet storms, or videos, or just every kind of persuasive communication that can arrive in your eyeballs or your ears through the world. Those are the kinds of treatments that I’m studying and it’s always a political attitude, as the dependent variable, and it just works this way every time, right? Like when it’s set up where there’s a treatment that has a target attitude, the treatment moves that target attitude in the intended direction by a small amount, and it’s about the same. Not literally, identically the same, but about the same for different kinds of people.
So, I’m really skeptical of claims about large heterogeneity, so it’s like the thing that I think that is true about persuasion with a capital P is that if you get that average effect and you’re saying, “Well, oh no! That average effect is an average. It might be masking big negative effects for some and big positive effects for others.” I don’t think that that’s true. I think that that’s like… I think that is essentially not true across fields.
Andy Luttrell:
And that any demonstration that there is a big effect is maybe some sort of sampling error. Is that sort of what you think?
Alex Coppock:
So, there’s so many research design failures that you could… They could be the culprit. One that we haven’t discussed yet is group cues, and I think that that’s a really important distinction. I’m talking about the frontal attack on your attitudes that persuasive treatments are doing. The information treatments. The political ads that are coming to say like, “Here’s why you should support me.” A group cue is something different. It’s like, “We Republicans don’t believe this,” and if you are a Republican and you’re exposed to that treatment, a cue about which side your group is on on a particular issue, you adopt that. Because you’re a Bayesian. You’re like, “Well, I don’t know what position I should take, so I’ll take this cue to learn which side I’m on.”
And those cues definitely have heterogeneous treatment effects. For Republicans, a Republican cue is a positive effect. But for Democrats, they also learn just the same way. They’re like, “Oh. Well, I learned that the outgroup thinks that the policy is good. I must not like it.” You know? So, some people call that motivated reasoning. I do not call that motivated reasoning. I call that Bayesian reasoning, like you just know that because you’re on opposite sides of most political issues, that you’re probably also on opposite sides on this political issue. I just wouldn’t call that a persuasive treatment. I think that that’s my out on the group cues thing, is that there are big differences in the way that people respond to group cues, and that’s because that’s not what we’re talking about when we’re talking about trying to persuade someone.
So, if your persuasive argument includes a lot of group cues, if Biden were to come out here and say, “We Democrats really believe X, Y and Z,” I think that that would not be a successful persuasive strategy. But, “We Americans believe in the following values.” That’s going to… It’s not gonna have that differential effect by group membership, because you’re not signaling group membership.
Andy Luttrell:
And I keep pushing just because I come from a perspective of, “There are all these variables that accentuate or turn off persuasion.” And even something as simple as how compelling that information is, right? I’m just wondering how far we’re pushing this. To say I give you any information, three sentences that say, “Believe this.” And people go, “Well, I don’t care what you’re telling me. I care that you’re telling me something.” That’s maybe pushing this homogeneity thing absurdly far, but would you say that that has merit to it?
Alex Coppock:
I would. And I’m gonna take your bait in two ways, right? One is we do these treatments where it’s like, “Economists say the following is true. Do you agree?” And then we see 20 percentage point treatment effects of the effect of learning that economists think this is the right policy is just enormous. No content to the argument whatsoever, right? Content free. It’s just, “These people think it’s smart.” And you’re like, “Probably.”
The other reason why I think that the quality of the messages is just like not as big a deal as we think is we had all these ads during 2016, and let me tell you, there is a range of quality in those ads. Some of Trump 2016 ads are indistinguishable from like a late night infomercial. They are very strange, like deals, deals, deals on American values or whatever, right? It’s very weird. Then you’ve got a Bernie Sanders ad with this just glorious, the America ad from Bernie Sanders. It’s just like a masterpiece. And you see this when we ask survey subjects to rate the ads, like in a focus group. There’s actually… We did dial testing. They say like, “How much do you like each ad?” Wide, wide, wide variation in how much people like the ads. There’s a difference in how good the arguments are in some sense. Nothing different in terms of persuasive effect.
So, the correlation of the average treatment effects with the ratings is just like zero, because there’s no variation in the treatment effect, so you can’t be correlated. So, that’s what I mean is that these campaign consultants that get paid a huge amount of money to say, “No, no, no. I have unlocked the keys.” Well, of course they have not unlocked the keys. No. We’re using randomized control trials on big, huge samples, with the exact content that people think… The most highly-paid campaign consultants think is the most persuasive, and just like no differences. Well, I don’t want to accept the null of no differences. The amount of variation is low in these treatment effects.
Andy Luttrell:
If we could talk also about the op eds paper, it paints a little bit more of an optimistic portrait of persuasion. The effects I think you get there are quite a bit bigger, and you have these cool long-term analyses, too, to show that these effects stick around. And so, we’ll talk about the results in a second, but what I really loved was the history lesson at the beginning about where op eds come from, and this is a history that I didn’t know, and it just sort of sucks you in, to be like, “Everyone thinks that these are important, but are they? And why would people write them if they aren’t?” So, just a cool way to start it, and I wondered whether your interest stemmed from that kind of historical or practical interest in what op eds do in media, versus you thought, “Oh, this is a place where we might see persuasion. Let’s start looking.” And then go, “Oh. This actually turns out to be an interesting domain all on its own.”
Alex Coppock:
You know, thank you so much for saying that about the opening. I had such a great afternoon finding out about that, like it’s called op eds because they were opposite the editorial. Whatever.
Andy Luttrell:
I had no idea.
Alex Coppock:
It’s just a cool thing, and when that paper came out, a bunch of op ed writers like got in touch because they’re like, “Finally, someone’s studying our medium,” so that was kind of fun, too. No, it did not stem from a pure interest. What happened was our collaborators had taken the field experiments course. David Kirby, he worked at the Cato Institute, and they were interested in figuring out how to measure the effectiveness of their strategies. Like this guy, he’s a practitioner, and he wandered into… He didn’t wander in, but he sought out this tool, which I was just… I really respected. You could go through the world thinking that whatever you’re doing is working, but he wanted to be sure, and so we were like, “David, we have the tool for you. It’s randomized control trials. We can just learn the answer to the question that you have.”
And so, the question for the Cato people was that they’re spending so much effort placing… It’s expensive to produce an op ed for a think tank. A whole person has to spend at least a day or two writing it, they have to research it, they have to reach out to their limited pool of contacts at newspapers to figure out how to get it to them, and then you have to ask yourself are you just preaching to the choir? Are you changing no one’s mind? Because first of all, newspaper readership is small. And second of all, the D.C. insiders really shouldn’t be moved at all, because they’ve already taken their positions. And so, okay, that’s a totally plausible counterhypothesis. All right. We did the experiment. Among the mass public, the effects of these op eds are large in the sense of being about a 0.2, 0.3 standard deviation effect.
For your listeners who don’t speak standard deviations, that’s like a… That’s a moderate treatment effect. It’s not this… Like a small is a 0.1 and a big is a 0.5, so we’re in the middle. The next question is, “Okay, so does this work for people who are not just like average Americans reading the newspaper? Does it work for the D.C. policy professional audience, which is in some sense the audience that these people have in mind when they’re writing their op eds?” You would think, “Oh no, these people are not persuadable at all. They spend all day in politics. They’ve already taken a position on every one of these issues.” Well, turns out that the effects are about the same on the elite sample as they are among the mass public sample, and it’s just another plank in that heterogeneity platform that goes down.
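(The conversion Alex is doing here is simple: divide the raw difference in means by the standard deviation of the outcome in the control group. A minimal sketch, with made-up numbers:)

```python
# Standardized effect size = raw effect / control-group standard deviation.
# Numbers are invented for illustration, not taken from the op-ed study.
raw_effect = 0.25   # e.g., points moved on some policy attitude scale
control_sd = 1.0    # spread of that attitude among untreated respondents

standardized = raw_effect / control_sd
print(f"standardized effect: {standardized:.2f} standard deviations")
# Rough benchmarks from the conversation: ~0.1 is small, ~0.5 is big,
# so 0.2-0.3 sits in the middle.
```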
Andy Luttrell:
And when you look long term, you could say, “Well, yes. I read an op ed and I start to think oh, maybe there’s something to this.” But follow up in a few days and I go, “Oh yeah. No, I don’t really know. I’m gonna go back to whatever I thought before.” Do these op eds have staying power or are they pretty fleeting?
Alex Coppock:
So, in this study, we measured again at 10 days after exposure and again at 30 days after exposure. And you know, the effects are about half as big. They are still homogeneous and in the right direction, but they’re about half as big. And they’re not… They’re about half as big at 10 days and then they stay half as big at 30 days, so I was really puzzled. I remain puzzled as to the shape of the decay curve for persuasive effects. It’s a very expensive and difficult thing to study, and so that kind of explains why there are fewer persuasion studies that do over time analysis, because recontact of survey subjects online is expensive and logistically challenging.
So, if I had it to do over again, there would be a million things I would check. Does it matter whether I ask the question in the first round or not? Because you might think that it’s the act of giving an answer that sort of like fixes the new attitude in a subject’s mind. It could be. That’s a really expensive and difficult thing, but if any of your listeners are out there like, “What study should I be doing?” It’s like please, do persuasion studies where you randomize whether in the first wave you ask the outcome question or not. We need to know the answer.
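(Here is a rough sketch of the design Alex is asking for: randomize, in the first wave, whether the outcome question gets asked at all, then compare wave-two treatment effects across those two arms. Everything below is hypothetical, including the assumption that answering early helps the effect persist; it is only meant to show the logic of the design.)

```python
# Sketch of a two-wave persuasion design that randomizes wave-1 measurement.
# All data and parameters are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 40_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, size=n),       # saw the persuasive message in wave 1
    "asked_wave1": rng.integers(0, 2, size=n),   # outcome question asked in wave 1
})

# Hypothetical data-generating process: answering the question in wave 1
# "fixes" the new attitude, so more of the effect survives to wave 2.
persistence = np.where(df["asked_wave1"] == 1, 0.8, 0.5)
df["y_wave2"] = rng.normal(0, 1, size=n) + df["treated"] * 0.3 * persistence

for asked, group in df.groupby("asked_wave1"):
    ate = (group.loc[group["treated"] == 1, "y_wave2"].mean()
           - group.loc[group["treated"] == 0, "y_wave2"].mean())
    print(f"asked in wave 1 = {asked}: wave-2 effect = {ate:.3f}")
# If the two wave-2 effects differ, early measurement itself is shaping
# how durable the persuasion looks.
```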
Andy Luttrell:
And when you do the follow ups, you don’t remind them at all of what they saw in the first survey, right? Because someone could say also that they’ll stick around. There’s some old dissonance studies too that do that, where it’s like you will forget your new attitude unless I remind you like, “Hey, remember a week ago we showed you this article?” And you go, “Oh, that’s right. No, I support this.” But this was clean. This was just here’s some issues, rate them.
Alex Coppock:
That’s right, so we didn’t re-treat the treatment group in the second wave, although that would be fascinating, too, like let’s randomize whether or not we re-treat people. So, I think there’s a nice paper by James Druckman and Dennis Chong that does that. But yeah, it’s really a fertile area of research, figuring out how long these things last. It’s not the life of a butterfly. They do last a little bit. How long is hard to know, because people are awash in persuasive messages, right? And so, that could be the way… The balance of your media diet is the thing that kind of gets you to equilibrium.
Andy Luttrell:
I was talking about… I was talking to Mahzarin Banaji for this show. She was talking about how there would be these interventions that would change implicit biases, and it would be like, “Wow! We’ve changed implicit biases.” And then a week later you retest and nothing much has changed. And her reaction was just like, “Yeah, of course not, like why would this tiny little intervention reshape the way you see the whole world forever?” And so, I thought that was sort of a compelling way to be like, “Oh yeah.” No, it would be unreasonable if I give you a short op ed and you forever reverse your stance on something that you’ve believed for a long time.
Alex Coppock:
Right.
Andy Luttrell:
But interesting that you are still finding that some of it is sticking around. So, you said that the pattern was interesting, and I was wondering if you could talk a little more about what you think is going on there, where you say after a few days, I’m not as enthusiastic about the position I came away from the article with. But after a month, it’s not gone completely. It’s still kind of the same decay as it was just right afterward.
Alex Coppock:
So, I have a hunch that I have no evidence for, okay? So, here’s the hunch, is that a persuasive treatment like an op ed comes with two kind of mechanisms that explain its treatment effect. One is like a valence or a framing effect. You’re thinking about the issue that way. You’re moved just sort of like temporarily by the fact that someone’s trying to persuade you. But there’s also this information that sticks, and so one model of the durability of persuasive effects that would accommodate this hockey stick pattern that we get is that immediately you have both effects. You’ve got the framing and the information. Then, over time, the framing effect falls away and all you have is the information effect.
So, like what’s a piece of evidence that’s in favor of this? When we divide up treatments into those that are just framing or just information, the framing ones decay more quickly. So, like that’s… But it’s an overfit theory. There are many things that are different across information and framing treatments beyond this thing, so ideally what you would do is you would try and modify treatments so that you delete one or the other mechanism and then do it at random. But I tried. It was just like it’s really hard to remove the framing part of an information treatment. These things are… Which makes the theory kind of a little bit useless. So, I don’t know. That’s my answer. That’s I think why, is that there’s two mechanisms and one falls away.
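(A toy version of that hunch: suppose the immediate effect is the sum of a framing component that decays quickly and an information component that persists. With invented parameters, that produces the hockey-stick pattern described above, a quick drop followed by a plateau.)

```python
# Two-mechanism decay model with invented parameters (not estimates from the study).
def total_effect(days, framing=0.15, information=0.15, framing_half_life=3.0):
    """Framing decays with a short half-life; information persists."""
    return framing * 0.5 ** (days / framing_half_life) + information

for d in [0, 10, 30]:
    print(f"day {d:2d}: effect = {total_effect(d):.3f}")
# Prints ~0.30 immediately, then ~0.15-0.17 at both 10 and 30 days:
# roughly half as big after a week or so, and then flat.
```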
Andy Luttrell:
It’s kind of like the older sleeper effect, like the way that they talk about that is that there’s sort of a discounting thing that falls away, so it’s the same… If we’re just to use the same language, it’s that falling away. This kind of ambiguous falling away process.
Alex Coppock:
Oh, that’s so funny, because of course I was reading about sleeper literature when I was coming up with that overfit theory. There’s no evidence for the sleeper effect, despite just like lots and lots and lots… So, for your listeners who don’t know the sleeper effect, this is the idea that initially a treatment doesn’t have an effect, but then once your resistance to the information falls away, then there’s a big effect. Well, sure. This is a perfectly reasonable hypothesis. When investigated, there’s just no evidence for it.
Andy Luttrell:
It’s sort of like a reverse hockey stick, right? Where you say there’s no treatment effect right now, but then later, all of a sudden you go, “Wait a minute! There is something to be said about this.”
Alex Coppock:
Totally.
Andy Luttrell:
Yeah.
Alex Coppock:
Totally.
Andy Luttrell:
So, just as a wrap up, one thing that I think is interesting about the stuff that you’ve done is there seems to be a few times where you’re working in collaboration with people in the field, which is super cool, so the stuff we were talking about just now with the op eds, the CATO Institute, and you have access to these political insider type people who can take the survey, and then I was reading the cool… That lawn sign study. I don’t know how much you directly had to do with that, but there you were working with local campaigns and randomizing whether neighborhoods had lawn signs, so I’m wondering what your approach to working with the field is. Whether to you it’s a useful tool and the only way you could answer the question, or whether there’s really something valuable. It makes the research more valuable to have that connection.
Alex Coppock:
So, it’s an all of the above thing. When you work with partners, you’re definitely studying something that matters to someone in particular, right? So, you’ve got the “it matters” threshold crossed. Also, honestly, social scientists don’t command enough budget to study the most important things in politics. So, there was a small amount of criticism of the study with Seth and Lynn, which was, “Oh. Well, congratulations. You guys can run 59 experiments. Well, we don’t have that kind of budget.” Usually the answer to that question is, “Well, you have to partner with someone who does have the budget.” Because we don’t have the capacity to do four lawn sign experiments with these campaigns. The campaigns are doing the lawn signs. We’re just randomizing which precincts they should do it in, right?
Thankfully, I didn’t have to plant any lawn signs, but I did have to pick where others did. Yeah, so that’s the thing, is that you… There’s these productive collaborations between academia and political campaigns, or NGOs, where they have a program. They would like to know whether it makes a difference. Academics have this tool of random assignment and then the magic happens when their question matches one of our theoretical questions. That’s when it’s a productive collaboration. Otherwise, it’s like you’re just walking around doing someone else’s program evaluation.
Andy Luttrell:
Well, Alex, thanks so much for taking the time to talk about all your work. This was super, super interesting, and gives me new things to think about.
Alex Coppock:
Thank you, Andy. This was great.
Andy Luttrell:
Alright that’ll do it for another episode of Opinion Science. Thanks to Alex Coppock for talking political persuasion. As always, check out the show notes for a link to his website and links to the research we talked about.
And of course, if you’re enjoying the show, fire up Apple Podcasts and leave a nice review, give us a pleasant star rating—any help spreading the word about the show is much appreciated.
Okay—that’s it. OpinionSciencePodcast.com. Check it out.
See you in a couple weeks. Buh bye…