Episode 109: The Realities of Political Persuasion with David Broockman

David Broockman is a political scientist at UC Berkeley who digs into one of democracy’s core questions: can political messages really change minds? He’s spent his career running careful studies of persuasion, from door-to-door conversations to the effects of cable news, and testing whether the confident claims of political consultants actually hold up.

In our conversation, David shares the path that brought him into political science and the “credibility revolution” that reshaped how researchers study politics. We talk about what persuasion looks like in practice, why it’s so hard to predict which messages will work, and what his research reveals about the gap between political insiders’ instincts and what actually moves the needle.

Source for intro to government shutdowns:


Transcript

Please note that this transcript was automatically generated with an AI-assisted transcription software and has not yet been checked for accuracy.

Andy Luttrell (Intro): Hey, everyone. Opinion Science is back. Hi! And lucky us, the show’s back in the midst of a government shutdown. At least, that’s the case as I record this. I guess, who knows what will happen by Monday?

If you’re not familiar with how this sort of thing works, basically Congress has to pass bills that authorize government spending, and they have to do it by a certain deadline. But if Congress and the president can’t come to an agreement on the funding priorities in time, then the new fiscal year arrives with no budget, a whole bunch of federal workers can’t get paid, and they have to stop doing their jobs until Congress and the president can agree on a funding plan.

It’s become this annoying opportunity for gridlock, a kind of recurring drama in American politics.

But here’s the thing—this wasn’t always part of the script. For a couple hundred years, federal agencies would just carry on as usual even if Congress couldn’t quite wrap up their funding bills in time. We’re talking the Civil War, the Great Depression, World Wars…all without a government shutdown.

And then came 1980 when Attorney General Benjamin Civiletti accidentally invented the government shutdown. At the time, Jimmy Carter was president and there was a fight in Congress over whether federal funds should be used for certain abortion services.

And this kind of fight wasn’t new. Throughout the 70s, federal funding decisions were getting increasingly wrapped up in controversial issues like abortion and school integration, which resulted in a handful of funding lapses where spending bills weren’t getting done in time. It struck some lawmakers that these occasional lapses might conflict with an old 1884 law, the Antideficiency Act, which says you can’t spend federal money before it’s legally appropriated. The Attorney General agreed, and in 1980, Civiletti issued an opinion to that effect. No budget, no spending.

Just five days after Civiletti’s opinion came the first ever shutdown of a government agency because a spending bill wasn’t passed in time. It only lasted a day, but that brief episode set an important precedent. From then on, budget fights weren’t just abstract disputes. They came with real consequences—furloughed workers, shuttered services, and the spectacle of dysfunction on display for the nation. And each side quickly realized: the shutdown itself was a message. A chance to persuade the public, to cast themselves as the responsible party and the other side as reckless.

Poor Civiletti. Well, I don’t know him, so I’m not sure if I actually feel bad, but clearly he didn’t see all this coming. A few years ago, he told the Washington Post: “I couldn’t have ever imagined these shutdowns would last this long of a time and would be used as a political gambit.” But here we are.

By the 90s, shutdowns evolved into full-blown political theater. The most famous were in 1995 and ’96, when House Speaker Newt Gingrich clashed with President Bill Clinton over spending cuts. The government shut down twice, adding up to nearly a month. Fast forward to the 2010s, and shutdowns became even more common. In 2013, the government shut down for 16 days in a fight over the Affordable Care Act. Then came the big one: 2018 to 2019. That was the longest shutdown in U.S. history—35 days—driven by a standoff between President Trump and Congress over funding a border wall.

So from a one-day hiccup in 1980 to an epic 35-day standoff in 2019, shutdowns have become all too familiar. And here’s the thing—they’re not just these logistical roadblocks. They’re opportunities for parties to send a message and share narratives that make them heroes and their opponents, villains. Regarding the shutdown we’re in now, Brendan Buck, who was a top aide to House Speakers John Boehner and Paul Ryan, said: “It’s a political messaging exercise framed as a negotiating tactic, but there’s very little evidence that it really serves a policymaking purpose. It is more just a platform to talk about what’s important to you.”

You’re listening to Opinion Science, the show about our opinions, where they come from, and how we talk about them. I’m Andy Luttrell. And we’re back from the summer with a great conversation with David Broockman, who’s a political scientist at UC Berkeley. David takes a very careful analytic approach to studying political persuasion. He’s looked at everything from door-to-door canvassing to the effects of partisan media, and he’s tested whether the messages political insiders are so confident about actually work on voters. Spoiler alert: the story isn’t nearly as straightforward as consultants might want you to believe.

So, in the shadow of yet another government shutdown, we’ll talk about what persuasion really looks like in practice, why so many of our assumptions about it miss the mark, and what that means for the future of democracy.

Andy Luttrell: Tracing sort of the grand arc of the work that you do, a lot of it comes back to political persuasion and taking a very careful look at whether these messages can move the needle. And so I’m a little curious, just to start the conversation, to know: why is that your thing? What is it about political persuasion that keeps you coming back to it?

David Broockman: Well, yeah. You know, from my point of view, I think political persuasion is so interesting because, on some level, it’s kind of the ultimate thing that drives change in a democracy. Right? Now, it’s definitely true that a lot of other things matter too, such as lobbyists, interest groups, campaign finance.

But I do fundamentally believe that in a democracy, the voters actually matter a lot. The elites also have views on things. If you look at Elon Musk’s change over time, it seems like ideology matters there. Here in Silicon Valley, a lot of political persuasion seems to have happened among some of the Silicon Valley elite.

And then among everyday people, there are lots of issues where I think public opinion is a real constraint on what government does. And on some of those issues, public opinion has changed a lot. Barack Obama in his first term, you know, couldn’t quite admit that he actually supported gay marriage, and then in his second term finally did. Marijuana attitudes have changed tremendously, and there are other issues where attitudes really haven’t changed at all. So for me, the interest comes from a belief that public opinion actually is a pretty meaningful constraint on government action in a democracy.

And so if you want to understand political change and where it comes from, one important source of that is changes in what the public thinks.

Andy Luttrell: Hmm. Is that what brought you to political science in the first place? Or is this something that you appreciated only after you sort of see how things move, and you go, oh yeah, persuasion, turns out that’s important?

David Broockman: Yeah. So it’s funny how I got interested in political science. When I was in high school, I got involved in political campaigns in Austin, Texas. And, not to date myself, but I was in high school during the start of the Iraq war, when climate change was just starting to become a really salient topic. I guess I was politically persuaded on those issues during my formative years, when climate change was just coming to prominence. And when I looked at those things happening out in the world and thought about, well, how can I make a change? It was clear to me that trying to understand how to persuade the public politically was an important part of making change. So since the beginning, I was interested in this space related to campaigns, persuasion, political psychology.

I didn’t always know that I was interested in studying that as an academic. I got more interested in that when I was in college. In high school, I had been pretty exposed to the world of political consultants and campaigns, and one of the things that really characterizes that world is a lot of people who express just unabashed confidence about what works: here’s what persuades voters, here’s how to win a race. And I get why the political world selects for that. Because if you’re a candidate running for office and someone says, oh, don’t worry, I know how to win your race, I’ve won ten before, that gets you to write the check a little better than, well, we don’t really know, it’s kind of a coin flip, we can try our best. That doesn’t command the same consulting fees.

So it was really a shock for me when I came to college and was exposed to the world of academic science thinking about these same issues, and realized, wait a minute, we don’t actually know all this stuff that those political consultants I got to know said we know, in part because there’s just no way that they could know those things. But that was also just right around the time of the revolution in the use of experiments in political science.

So I took this class from Don Green and Alan Gerber, who were then both professors at Yale. Don has since moved to Columbia. And one of the real messages of that class was, A, as I said, we don’t really know all the things we thought we knew, because how could we? But B, we actually have this technology where we can learn things.

And so that was very empowering. I took that class as a freshman, and it really reoriented me toward thinking, wow, okay, if I want to make change in these areas, this use of experiments and data might be a way to do it. I still thought for a while I might do that more in the private sector. But as I got more involved, that was also the early days, as chronicled in Sasha Issenberg’s book The Victory Lab, and I got in at the ground floor right at the beginning of these academic-practitioner collaborations, the increasing interest among practitioners in what was happening in academia, and vice versa.

And so I tried a little bit of the practitioner work. I worked at a group called the Analyst Institute for a few years in college, as well as doing some more political science research, and ultimately decided, hey, the political science route is a way to keep studying and working on these issues in a way that can contribute to knowledge as well as maybe contribute to what’s happening out there in the real world.

So that was kind of my story, in a little more than a nutshell.

Andy Luttrell: It seems like remarkably late for experiments to come along in that world. I mean, there is a long history in the social sciences of doing this kind of work, but it just hadn’t really cracked into political work. Is that sort of your read on the trajectory?

David Broockman: Well, I think there are a few things. So, you know, the first field experiments out in the real world in political science were done something like a hundred years ago, but before we really had all the statistics and understanding of how to analyze them. And then social psychology definitely did a lot of lab experiments for a long time. But part of what’s happened in the social sciences generally, especially in economics, with political science not too far behind, is what we call the credibility revolution.

Essentially, if you open up pretty much any social science academic journal from, say, the eighties, there would be a lot of practices that now we’d realize really were not sound; they had no idea what they were doing. In psychology, the sample sizes were super small. In political science and economics, almost all of the work way back then was what we call observational, and didn’t involve what we call causal inference. So you would say, hey, we’re going to look out there in the real world for naturally occurring variation. An example of that in political persuasion might be: where do campaigns run TV ads, and where do they not? And then say, all right, let’s compare the places with TV ads to the places without and see how they’re different, and that’s going to tell us the effect of TV ads. But of course, we know those places are different in a lot of other ways; there are other things going on. And so there’s been this move over the last couple of decades to say, okay, we’re actually going to take seriously all of these threats to making causal inferences about the causal effect of, say, a message or a TV ad or whatever else.

Part of that is thinking about how to study this stuff more in the real world, and I think that’s been part of what psychology has been working through. And then part of it, for political science and economics, has been thinking about how to take that causal inference seriously when we’re studying the real world.

As we’ve appreciated how difficult it is to estimate causal effects, it’s also led, in political science, to a rise of more survey-based experiments, which I have mixed feelings about. Because we can say, well, we can do an experiment in a survey. We’re not really sure how much it generalizes to the real world, but at least we can get some causal effects. I think that trend has had some positive impacts and also many negative impacts. So I’m a participant in that trend, but also ambivalent about it in certain ways.

Andy Luttrell: I noticed, too, that a lot of the political science experiments, field experiments particularly, take this causal inference question so seriously that there are all these designs and ways of doing things where I go, oh God, this has now gotten so complicated. I’m so used to, like, flip a coin and run your study. But it’s a really interesting time to see this proliferation of taking seriously the threats to being able to make real claims about whether these things move the needle at all.

David Broockman: Yes. Yeah, absolutely. And I don’t want to make it seem like we’ve figured everything out now, but I think we’re in a much better place than we were 30 years ago. So in the long run, science, I think, has a way of self-correcting. I think in particular of the rise of methods for natural experiments: basically finding ways out in the real world to approximate an experiment when you can’t create your own. People specialize. I tend to do field experiments, where we’re randomizing stuff out in the real world. But a more common approach is to use natural experiments, where we find something in the real world that approximates an experiment, and that’s a great approach too. There are a lot of people who are really great at finding those. Ultimately, we learn a lot about the phenomena we study from the triangulation of results from a bunch of different methods.

So, for example, take some of the work I’ve done on Fox News. I’ll give you an example of three studies on that. There’s a really nice study from Greg Martin and Ali Yurukoglu, my former colleagues from the Stanford Business School. They have this really neat study trying to understand the effect of Fox News by using its position in the channel order. Basically, there are some places where Fox News landed really low in the channel order, and so people watch it a lot more because they get to it, and in the areas where Fox News was put really high in the cable channel order, people watch it less.

What’s nice about that study is they can look at change over time. And the thought is, unless you believe that for some reason the places where Fox was put lower in the channel order were going to be getting more Republican anyway, you can attribute that effect to Fox. So that design isn’t a hundred percent airtight, but I think it’s pretty strong, and a lot of other people have since extended it. But it’s not airtight. For example, even if you buy that it was essentially random where Fox was put lower in the channel order, the effects of putting Fox low in the channel order are going to be driven by this small and maybe weird group of people who will watch Fox if it’s low in the channel order and won’t watch Fox if it’s high. So that might be an unusually persuadable group, because it’s not the people who would always be tuning in; it’s driven by these people who only tune in if it’s low. Those are the people who watch in the areas where it’s low and not where it’s high. So some people have asked, well, does this really tell us a lot about Fox viewing among the people who would watch anyway? You know, questions about the natural experiment. I think that study really moved a lot of people’s views, including mine, a lot.

And so then Josh Kalla at Yale, my collaborator, and I did a couple of studies that I see as complementary to the work that Greg Martin and Ali Yurukoglu did. One where we recruited Fox News viewers and paid them to watch CNN instead. So these are the kind of people, presumably, most of whom would not be influenced by the channel order, but people who watch a lot of Fox. And we showed that changing them to watch CNN actually really did change their minds, and it did move them more to the center.

We also did a descriptive study. A lot of people in political science have doubted that Martin and Yurukoglu study, because they’ve said, well, wait a minute, the kinds of people who watch Fox, and stick with it and watch enough of it, aren’t they going to already be really strong Republicans to begin with? There are also other questions like, well, aren’t people going to be watching multiple news sources, so they can get both sides of the story? So people are asking, hey, are these effects really plausible? And so we did this purely descriptive study, where we got some unique data from a few different sources so we could understand who watches partisan media, how much of it they watch, and what else they watch. And we show in that descriptive data that there are actually a lot of people who are not that extreme to begin with, who watch one side’s partisan media, and that’s pretty much the only news they watch. And it’s a relatively meaningfully sized group.

So I tell that story as an example (and there are a lot of other people working on Fox News and partisan media too; it’s not just us) of how we navigate understanding these big questions in a world where there’s no one study that is the silver bullet. If I tell you any one of those three studies, our experiment, or Ali and Greg’s natural experiment, or our descriptive study, no one on its own says, aha, now we know for sure. You put all those together and you say, hey, this evidence is pointing more and more in one direction, right? So I think that’s part of the reality of navigating 21st-century social science: recognizing that no one paper is the smoking gun, but maybe we can add things up and start to get toward what we feel like is knowledge.

Andy Luttrell: And as all of these kinds of things pile up, we’ve got more studies that contribute to a bigger picture of whether persuasion is something viable. At this moment in time, with your understanding of the literature, how optimistic are you about the potential just in principle? We’ll get to in practice in a minute. And I would hope so, if you’re still studying it, but do you have some sense that messages can change minds in at least some way?

David Broockman: Yeah, I mean, I think the answer is clearly yes, with caveats. I think all you need to do is look over time at, for example, what Republican voters think about trade or Russia, or what Democratic voters think about immigration, which went way to the left over the last decade, to see that people do change their minds about things. At the same time, there are plenty of examples where people’s minds don’t change even though people assumed they would. Abortion is a great example of this. We’ve seen this ebb and flow over time of abortion becoming a really salient issue in national politics, but when you look at some of the data from Gallup or others, public opinion on most aspects of that issue is just famously stable over time. And many issues are like that. So we clearly see this tremendous variation where, on some issues and in some contexts, political persuasion is immense, and in others the public seems to resist elite messages. And so part of what’s interesting for me is trying to understand what drives that variation. What are the steps necessary?

There’s this nice framework from John Zaller in his classic book, The Nature and Origins of Mass Opinion, where he says, basically, there are sort of two things that need to happen in order to persuade someone. The first is they need to actually be exposed to the persuasive message. And the second is, conditional on exposure, it has to change their mind in some way. And I do think one message of some of my research, including that Fox News research, is that we’ve been thinking a lot about that second step recently: all right, we’re going to force someone to be exposed to something, is it going to change their mind? And maybe not enough about that first step: when are people actually exposed to things? How much persuasive material do people actually see?

One example of where I think that kind of thinking really could come into play: there have been a lot of these postmortems about the 2024 election and how Democrats lost it. And one interesting dissonance, I think, is that a lot of the popular discourse is about how, oh, if only Democrats had had the X, Y, Z message. And I feel like this is the perfect election to show you that the message probably was not, quote unquote, the problem for Democrats, because if you look at who changed their minds and who switched their votes between 2020 and 2024, it was predominantly the people who pay the least attention to politics.

I have friends who are very involved in politics who say, oh, the Republicans said this, and then Democrats didn’t answer that charge. It’s like, whoa, wait a minute. The people who changed their votes in this election are the people who never even heard the first thing in the first place. They don’t know who controls the House. They’re not sure, in many cases, which party is more conservative. These are low-information voters. Those tend to be the people who float between parties in elections, but in this election it was really driven by that group.

So I think that’s a great example of how we have to put ourselves in the mind of the kind of person who decides elections. It’s someone who doesn’t really care that much about politics and, crucially, usually chooses to opt out of paying attention to and consuming political messages. So I think that first step, what Zaller calls the reception step, whether you receive persuasive messages, is sometimes neglected.

Andy Luttrell: I don’t know how large the effect is, but I keep citing to people the Google Trends data showing that on election day, searches for “did Joe Biden drop out” were surging. I don’t know how huge an impact that specific search had, but it illustrates the point that you’re making, which is that we spend so much time stewing on strategy and message and all the other variables when there’s an enormous component of the electorate that’s just not engaged, right? And there’s work in communication that I had always kind of dismissed that is mostly about, like, reach. A message’s reach, and I’d go, who cares about that? Persuasion is about the strategy and the message itself. And I think more and more I’ve come to appreciate exactly what you’re saying, which is that, yeah, but if the message doesn’t get to anyone, what does it matter what it says?

David Broockman: This was, so I mentioned, when I got started being interested in this world, I was the classic DC summer intern in college, showing up to DC. Though I guess 19-year-olds have more power now than they used to back in 2008, in, you know, Elon Musk’s America.

Andy Luttrell: mm-hmm.

David Broockman: Back then I didn’t have DOGE-level power, but I did show up to DC, and one of the things that has always stuck with me, just from my impressions of living there for a few summers, is how the DC world really has this sense that people feel like they are engaged in some kind of drama to which the country is paying rapt attention.

And they’re just not. It’s something that, for people who live outside the Beltway, is just hard to wrap your mind around: the degree to which a typical person simply does not care.

Andy Luttrell: And there’s just such an egocentric bias to it all: the people who are the most interested in understanding the outcomes of these elections and rallying for a cause are the ones who are consuming the information and paying rapt attention. And it’s so easy to go, so obviously everybody is, right? It’s an odd thing to realize as a scholar in that world, to be like, oh, you are particularly out of touch with the people who you are most interested in understanding.

David Broockman: Yes, yes. Absolutely. And that’s a great bridge to this recent paper I did on political practitioners and their predictions.

Andy Luttrell: Yeah, so this is like: in principle, persuasion can work, and there’s evidence that when you do everything just right, you can find that messages move the needle. The stepping stone to the paper we’re going to talk about is some work that you were involved in, just looking at, in the wild, when there are messages, how hopeful should we be that they’re making any difference? So, setting kind of ground truth, out in the world, how effective are these messages?

David Broockman: Yeah. So I think the answer is very different depending on how you ask the question. For example, what is the effect of seeing one news segment or one TV ad, measured, say, a month later? The answer is going to be very, very small. But the effect of seeing many news segments or many TV ads that build up over time, the effect of all those things is typically small, but still maybe big enough, in a world where we decide elections by one or two percentage points, to change the outcome of an election.

So I do think the more real-world we get, in terms of people’s ability to not pay attention to a message or select out of it, in terms of the amount of time that elapses between when people see a message and make some decision, like how they’re going to vote in an election, and in terms of how much competition there is, how many other messages they’re receiving from the same side as the message and from the other side, all those things lead the effects of messages to decline. Often we, and also political practitioners, have this paradigm where we’ll study the effects of messages in environments where people aren’t receiving competing messages, they’re forced to view the message, and the outcome measurement happens right afterwards: we ask them in a survey, for example, how they want to vote or what they think about an issue. I think it’s very clear that those effects are much larger than effects that are more real-world, and I think everyone kind of knows that and goes in with eyes wide open on that.

So we can see effects in a survey environment, depending on the issue: if you’re talking about vote choice, it’s going to be low single digits; if you’re talking about issues, it could be mid to high single digits if you have a really great message. So some of that persuasion, in the short term, in a survey environment where people are forced to view it and there aren’t competing messages, can be modestly there. And then as you relax those constraints, the effects start to get smaller.

Andy Luttrell: And so some of that is just all of these variables you're talking about in a real information environment. But the other part of it is that it's not a random sample of messages that we have efficacy data on, right? They're the messages that were chosen by people who thought they were going to be effective. And so it seems like another way of thinking about why messages in the wild are less effective than their potential is that the people with power are just picking the wrong messages.

Right. So that's my sense of where this new project came from. But maybe this is just the cover that you put on some actual origin story. So, yeah.

David Broockman: So let me tell you a little bit about this project and also why we got interested in it. My coauthor Josh Kalla and I have done a few experiments on the effects of door-to-door canvassing interventions. We'll work with these kind of amazing nonprofit groups that send people out door-to-door to have conversations with people. We've worked on a few different issues; most of our experiments have been on views toward transgender people or unauthorized immigrants.

So nonprofits will go out and have these conversations about these groups, and then we will come back and do follow-up surveys of the people who had these conversations, in the weeks or months afterward. The thought is, this is a little more naturalistic: you get someone coming to your door, you don't realize it's part of a study. A month later you get a survey that asks you a bunch of questions, a couple of them about this issue, and then we can measure the effect by comparing people who got that conversation to a randomized control group who answered the door but were talked to about something else. So it's an apples-to-apples comparison of people.

That's typically how we do these studies. Over the course of those collaborations, one of the things I really like about these field experiments is that you as a researcher have to go out there into the field, sometimes quite literally: when we were preparing for these studies, my collaborator and I would go do the canvassing as the scripts were being prepared, to see how it's landing and whether the protocols make sense. But we're also part of all these conversations with the funders and the groups that are developing these interventions, and that has exposed us to a lot of conversations where, to put it politely, I have been puzzled by the intuitions of some political practitioners around persuasion.

Not that I necessarily think I have great intuitions either, but I've just heard some things over the years where I'm like, okay, there's just no way that would work. And so Josh Kalla and I have been interested over the years in what the intuitions of these political practitioners actually are, in part because of strange things we've heard, but also because of, getting back to the high school experiences I mentioned earlier, just the amazing level of confidence that we hear.

You know, I'll hear a lot of ideas and think, yeah, that could be true. But then the confidence I would hear, like, oh yeah, what works is this, what doesn't work is that, led to this dissonance of, man, I don't feel like I'm that good at predicting the effects of messages.

Can you really be that good? The last thing that got me interested in this paper and this project is that I worked on it with a bunch of co-authors, Alex Coppock, Ben Tappin, and Luke Hewitt, as well as some folks at the company Swayable.

Swayable is one of these online pretesting platforms that a lot of political organizations now use. What they'll do is have a political campaign upload a bunch of different ads it might run, and they'll run survey-based experiments to figure out which ads are most persuasive.

And one of the findings of this paper that we published last year, using data that Swayable shared with us, where we looked at the real ad tests that a bunch of campaigns had actually run on their ads, is that attributes of the ad, like is it positive or negative, does it feature the candidate, all these things we tagged about the ads, don't do a great job of predicting what's effective. But some ads are much more effective than others. So it is the case that some messages really are better than others and some ads are better than others.

But just based on the statistics of the things you could code, it's hard to do a good job of predicting which. And so we thought, okay, that's hard to do, but maybe the political practitioners can do it, or maybe not. And so what we did in this project is basically source a bunch of messages from the real world.

So we looked to see, okay, on each issue, what's out there. One example we put in the paper, our Table 1, is the set of messages we used on marijuana. We study a bunch of different issues, and one of them is marijuana legalization. We had a message from JD Vance back when he was a Senate candidate. We knew him when, I guess. And from the American Legislative Exchange Council, the Heritage Foundation, and the AMA. So we had a message from each of them against marijuana legalization. And then we had example messages for marijuana legalization: one from Tim Ryan, who was running for Senate against JD Vance, two from the Marijuana Policy Project, and one from NORML.

So we pulled these from real-world political rhetoric, and we ran this huge experiment where we measured the effects of all these messages: people are randomized to see one of those messages on one of the issues.

And so for every message we have an estimate of how effective that message is at actually moving people on the issue. We also then did a parallel survey of political practitioners, folks who actually help run campaigns or fund them. We have a lot of variation here: everything from people who have been senior management on presidential campaigns all the way down to people who are just getting their start in DC, working at organizations that fund or work on this stuff but are not directly responsible for messages themselves.

And we ask, first of all, how well do these elites do? And then, are there any predictors? For example, if you're more experienced, are you better? If you're more confident in your predictions, are you more accurate, et cetera. So the thought would be: if, based on our experiments, message A is the worst on some issue, message B is the best, and message C is in the middle, can the practitioners accurately assess that? Like, oh yeah, you should go with message B; message A is the worst, et cetera. So that's the question.

And we also actually compare this to the mass public. So we go out and give a random sample of Americans the exact same task: how effective do you think these messages would be? And what we find is that neither the practitioners nor the public are very good at doing this. They can do a little better than chance. When we look at, for example, how likely they are to pick the best-performing message, they're a little more likely than chance to pick it. So it's not as if they have no intuition about this. But overall, the practitioners and the mass public, the more experienced practitioners and the less experienced ones, everybody, it seems, is just not that good at figuring out the effects of these messages.
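For readers who want to make "better than chance" concrete: if a forecaster guessed at random among k messages, they'd pick the best one 1/k of the time, so beating chance means beating that baseline. Here's a minimal simulation sketch; the message count and the "skill" parameter are invented for illustration, not numbers from the paper:

```python
import random

def pick_best_rate(num_messages, num_trials, skill=0.0, seed=0):
    """Simulate forecasters choosing the 'best' of num_messages.

    With probability `skill` the forecaster knowingly picks the true
    best message; otherwise they guess uniformly at random.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(num_trials):
        if rng.random() < skill:
            hits += 1                          # informed pick
        elif rng.randrange(num_messages) == 0: # index 0 = true best
            hits += 1                          # lucky guess
    return hits / num_trials

chance = 1 / 7                    # e.g., seven marijuana messages
print(round(chance, 3))           # 0.143
print(pick_best_rate(7, 100_000, skill=0.1))  # slightly above the baseline
```

With skill set to zero the hit rate converges to the 1/k baseline; even a modest amount of signal lifts it only a little above that, which is roughly the pattern being described here.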

Andy Luttrell: And so the other thing to highlight is the confidence effect, right? The thing you noticed, confidence, seems to be a non-diagnostic indicator of ability.

David Broockman: Yeah. So part of what we were hoping is that we'd write this paper and say, hey, people aren't very good at this overall, but certain people are better. Like if you have more experience, if you've worked in politics, maybe if you've spent time talking directly to voters or not, or maybe if you live outside the DC metro, et cetera. And it turns out pretty much nothing does a great job of predicting who does better or worse at predicting things.

And one of the really interesting null effects there is that we ask people whether they're confident in their predictions, and that has no predictive power whatsoever. So those who say, oh yeah, I'm very confident in my predictions, are, you know, not right. And I think that's important in part because we all go through life knowing that sometimes we don't really have confident predictions.

And the question is, when we think we know what we're talking about, do we? And the answer in this domain seems to be no, we still don't.

Andy Luttrell: Hmm. That confidence, was that measured for each prediction, or was that a global judgment, just like, "I'm confident in my forecasts"?

David Broockman: Yeah. So what we did is, first of all, we asked people on what issue they're, quote unquote, an expert. So if you're working at a marijuana policy organization, you might say, yeah, I know about marijuana. And then we would ask you about that issue, and then about a second issue where you are not an expert. And that doesn't make a difference.

And then we also explicitly ask. The survey question we used is, "How confident are you in the estimates of the effects of the messages you just saw?" So we're not doing it at a message-by-message level. We're asking: okay, you just did this forecasting exercise; overall, how well do you think you did? And the people who say they're very confident are not any more accurate than the people who recognize they're not able to do this.

Andy Luttrell: Mm-hmm. You had answered this, I think, when you were spelling it out for me. I had a little trouble originally following the way you analyzed the data, and there's a distinction that's potentially relevant. Is what you're ultimately getting at people's ability to tell which of a selection of messages is more effective? Or is it message by message: will this one be effective, will that one be effective?

David Broockman: Yeah. So we look at it both ways. The main analysis we do is message by message. So, to get really into the weeds here, we do things a bunch of different ways, but for what I think of as the main analysis, the first thing we do is rescale people's predictions. We're judging people against the effects in this survey-based environment, and we don't want to penalize someone just for thinking that effects in an artificial survey environment are twice as large as they really are. That's not really what we're interested in. We're interested in the relative distances between the messages: do you know that message A is twice as good as message B?

So in our main analysis, we first take everyone's predictions and rescale them to have essentially the same average and spread as the measured message effects themselves. The thought is that if you just think there's more spread than there is, or you think the messages are overall five times more effective in surveys than they really are, we're not going to punish you for that. What we're going to punish you for is the relative ranking of the messages. So if message A is twice as good as message B, do your predictions capture that? That's what we're judging you against.

And then we do a more continuous scoring. The technical term is root mean squared error. That's one of the most typical ways predictions are judged in research on forecasting.

And it is what it sounds like. Working backwards: we take the error, so how different your prediction is from the truth, for every single message you looked at. Then you square that; if you have five messages, we square the error for each of the five. Then we take the mean across all of your squared errors, and then we take the square root of that. That sounds kind of complicated, but for various reasons it has some nice statistical properties. But yes, we're looking at a continuous score; it's not just based on ranking.

We also do a few different robustness checks where we ask, for example, how good are people at identifying the best message, stuff like that. And there again, there's some evidence that people can do a little bit of this, but not a great job.
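For the quantitatively inclined, the two steps David describes, rescaling forecasts to match the mean and spread of the measured effects and then scoring them with root mean squared error, can be sketched like this. The numbers are invented for illustration and are not values from the paper:

```python
import math

def rescale(preds, targets):
    """Linearly rescale predictions to match the mean and spread
    (standard deviation) of the measured message effects."""
    def mean(xs):
        return sum(xs) / len(xs)
    def sd(xs):
        m = mean(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    mp, mt = mean(preds), mean(targets)
    sp, st = sd(preds), sd(targets)
    return [(p - mp) / sp * st + mt for p in preds]

def rmse(preds, targets):
    """Root mean squared error: square each error, average, take the root."""
    errs = [(p - t) ** 2 for p, t in zip(preds, targets)]
    return math.sqrt(sum(errs) / len(errs))

# Invented example: a forecaster whose guesses are five times too
# large but perfectly ordered.
true_effects = [0.01, 0.03, 0.05]   # measured in the survey
forecasts    = [0.05, 0.15, 0.25]   # right order, wrong scale
print(rmse(rescale(forecasts, true_effects), true_effects))  # ~0.0
```

The point of the rescaling step is visible in the example: a forecaster who is systematically off in scale but gets the relative pattern right ends up with an error near zero, because only the relative rankings and spacings are scored.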

Andy Luttrell: Hmm. Yeah, I was trying to think of the minimum bar for success that people could maybe hit. I was thinking something like: you don't even have to know that one message is twice as good or half as good, but do you at least think this one is better than that one? And it sounds like what you're saying is that that was embedded in all of those tests you did, and it doesn't really change the conclusion. People can kind of eke out, on average, a sense that some messages are better than others, but not particularly reliably.

The important thing, too, seems to be that there actually is variation across these messages in how effective they are, right? There's a version of the story where you just go, well, none of these do anything, and so nobody would be any good at this if you'd convinced them that some of them were going to perform well. But some of the messages actually seem like they work, and people just aren't good at finding them.

David Broockman: Yeah, that's right. On some of the issues here, the messages seem to not be that effective, or even to backlash, but on average, yeah, the messages do have effects. It's just that people aren't great at figuring out which ones.

One other thing: when this came out, we showed it to some practitioners. Our intention is not to mock practitioners and say, oh, you're so stupid or something. We don't think we'd be any better at this. I think it's just a fundamental limitation of human forecasting ability. You see in a bunch of domains that most people are just not very good at forecasting things.

One of the things some practitioners raised to us when we were getting feedback on this was: well, maybe the people doing the survey were thinking about something else, even though we were clear with them about how we tested the messages and what we were judging them against, so there's nothing mysterious about what we're asking them to forecast. There could be other considerations, like which messages are most likely to spread, et cetera. Or maybe there's something flawed with our measurement, right? We're doing these survey-based measures, so maybe the practitioners understand a truth that we really don't.

Well, one answer to that category of critique is to note that the practitioners don't actually agree with each other about which messages are best. So it's not as if there's some practitioner conventional wisdom out there. And to the objection, "Why believe these political scientists about the effects of the messages? The practitioners might know better," well, that can't be the case either, because even if we think the average practitioner must be correct, most of the practitioners don't agree with each other, so most of them would still be wrong. There is no different reality that they agree on that could be more correct.

We also don't really find evidence that there are, quote unquote, superforecasters here. Maybe the predictors we have of who's most accurate aren't all that informative; maybe there's something else we didn't measure, and some people are just better at this. So one of the ways we get at that is we ask people to predict on two different issues, as well as on both sides of each issue. We test whether those who forecast more accurately on one side of one issue also did so on the other side of that issue, or on other issues. And we didn't find much evidence for that. So it doesn't seem to be the case that some people are just better at this. It seems like all of us are just not really that good at doing this.

Andy Luttrell: So one last bit of hope could be, and this comes from a lot of my work on tailored and targeted messaging: one of the things that strikes me is that maybe it's just very hard to forecast the generic efficacy of a message across a wildly diverse population. If instead you could carve out identifiable audiences or segments with clear characteristics, would you have any bet left that maybe people would be more accurate?

David Broockman: Yeah, great question. I think this gets to the question of what the practical implication of all this is, right? One thing, getting back to that Swayable paper and the implication of it that I think this really reinforces, is the importance of doing this kind of testing.

I'd argue across these papers that we're in a world where different political messages are actually meaningfully different in their effectiveness, but we as humans are not good at predicting that, at least among the set of messages that are plausible (we didn't run the "kill puppies" message). And it doesn't seem like attributes of the message, positive, negative, et cetera, give us all that much information about what's effective either. So if we can't predict it, what should we do?

I think the answer, and you can see this in media reports and in the existence of firms like Swayable, is what campaigns are starting to do more of, which is to take a more data-driven creative approach, where, to put it informally, they're throwing a bunch of stuff at the wall and seeing what sticks. If we're really bad at predicting which messages are best, rather than just sitting there mentalizing about it, why don't we write twenty messages, test them all, and see which is best? And maybe if these three messages are best, we think about what made them best and iterate on those. So I think that's the approach all of this militates toward.

To your question of, okay, how much of this could be driven by different reactions in different subgroups?

My sense is that the practitioner world, and academics frankly, have put way too much weight on the idea that one message is going to be better for one group and a different message better for a different group, rather than just focusing on which messages are best overall. One way to think about it: if you're trying to understand what persuades people, how much of that is driven by individual differences, some people being more persuadable than others? How much by differences in the messages, some messages being better than others? And the last bucket would be the interaction of those first two: how much of it is some messages being better for some people than for others?

And I think ten years ago there was really a sense in the academic world and the practitioner world that the real exciting advances for persuasion were all in the first and third buckets, that is to say, finding persuadable people and then tailoring messages to them.

If you remember that whole Cambridge Analytica scandal, which in my view was kind of nonsense: even as a center-left person, I think a lot of what happened after Trump won in 2016 was liberals deciding that Trump must have won because the Trump campaign had illegally discovered some brainwashing technique. I'm not sure about that. But in any event, there was this notion that Cambridge Analytica had done an amazing job at that third thing, tailoring messages to people in a way that was so much more persuasive. I'm pretty skeptical of that.

Not that there's no value in tailoring messages, maybe, but I just think we've really neglected the second thing I mentioned, which is the messages themselves being better. Alex Coppock has this nice book called Persuasion in Parallel, which I think is very consistent with this, and there's other work consistent with it too: the biggest variation, what drives why some persuasion works sometimes and not others, is just that some messages are better, not that some messages are better for some people. Again, is there a little bit of that out there? I'm sure there is. But I think it's overstated in both the academic and practitioner worlds relative to the importance of differences in the messages themselves.

Andy Luttrell: So if it's not finding the sweet spot of a subgroup, and it's not throwing messages at the wall, I'm curious what the road forward is. Are you hopeful that we can recalibrate people and improve their intuitions, or is there some other way we could run these campaigns?

David Broockman: No, I mean, to be clear, I think right now the state of the art probably is throwing the messages at the wall. At least based on my sense of these papers we've done: if you think some messages are better than others, and that's really where the gains are, but we're not good at predicting which, then part of the question is how we throw those messages at the wall in a way that's smart.

And so one thing I'd like to work on next is exactly that. I think there are some frankly statistical questions, boring to the normal person, maybe not so boring to me, hopefully not boring to the people I want to fund my grants, in-the-weeds questions about how you do that process. There are a lot of optimizations you could do. For example, are there leading indicators you could use to figure out which messages are most effective, et cetera? So for that throw-the-messages-at-the-wall approach, there's actually a lot of room to innovate and do it in a way that's smart, that looks less like throwing things at the wall and more like a deliberate process.

So I think that's one exciting area. And then there's another question behind that approach. A lot of that quote-unquote throw-the-messages-at-the-wall approach that's being used in industry now, and that we're studying here, is based on the premise, getting back to what we were talking about at the beginning of our conversation, that these within-survey ways of measuring the effects of messages generalize to the real world. And there are a lot of reasons they might not.

I'll give you one more example, again inspired by Alex Coppock's book. Professor Coppock has this nice chapter where he shows that messages that seem to work through a more informational mechanism, like teaching you something new, seem to have effects that last longer than messages that work through more of a salience mechanism, where they take something you already believe, some value you have, and make it really salient to you. And I think that makes a lot of sense, right? A week later, the in-the-moment salience has gone away, but if a message taught you something new, that might endure.

Well, right there is a reason that some messages that work through a salience mechanism, which would seem to work right away in a survey-based environment, might not work as well if you measure their long-term effects, and maybe other messages that didn't look as good initially would be better if you're running ads more than five hours before someone votes. So I do think that problem space, trying to understand how to do this throw-at-the-wall approach in a more rigorous way and asking whether it generalizes, what really works in the real world, is a big area where we need to do more thinking and research.

Andy Luttrell: Well, I'll be hopeful for the day when you fully optimize persuasion. But until then, I'll just say thanks for taking the time to talk about this work. I've been following it for a long time, and it was great to get some of the backstory.

David Broockman: Awesome. Yeah, absolutely. Great to talk with you. Thanks so much for reaching out.

Andy Luttrell (Outro): Alrighty, that’ll do it for another episode of Opinion Science. Thanks so much to David Broockman for taking the time to talk about his work. As always, you can find links to David’s website in the episode notes where you can read the details of the studies he shared. I’ll also include sources for my intro to government shutdowns, which actually helped me wrap my head around them myself. Funny enough, during the first government shutdown I was ever aware of—in 2013—I was in Berkeley for a conference. I have a weirdly vivid memory of being there and not knowing what the heck it meant for the government to have shuttered. And now, 12 years later, I’m sharing an interview with a UC Berkeley professor during another shutdown. What a world.

Anyhow, for more on Opinion Science, check out www.OpinionSciencePodcast.com where you’ll find details about each episode and ways you can support the show. Please rate and review wherever you can do such a thing, like Apple Podcasts. I think you can do it on Spotify, too. Subscribe and follow to get new episodes. I’m back to a monthly release schedule, so stay tuned!

Okay, that’s all for now. Thanks so much for listening, and I’ll see you in November for more Opinion Science. Buh bye…
