Episode 47: Moral Foundations & Political Opinion with Jesse Graham

Jesse Graham studies human morality and what it means for our political opinions. He’s an Associate Professor of Management at the Eccles School of Business at the University of Utah. As a graduate student with Jonathan Haidt, he helped develop Moral Foundations Theory (MFT), which has gone on to be a massively influential theory of morality and how it develops. One of Jesse’s key insights was that these moral foundations help explain the divides between liberal and conservative people, which has implications for all kinds of political opinions and pressing topics like political polarization.

In our conversation, Jesse fills us in on the early days of his research and the development of MFT over time, walks through the implications of MFT for political ideology, and reflects on where the theory is now.


Things that come up in this episode:

  • Divisions between liberals and conservatives: antipathy (Iyengar et al., 2019), geographic segregation (Motyl et al., 2014), avoiding each other’s opinions (Frimer, Skitka, & Motyl, 2017), and even shorter Thanksgiving dinners (Chen & Rohla, 2018; Frimer & Skitka, 2020)
  • Jonathan Haidt’s “Social Intuitionist Model” of morality (Haidt, 2001)
  • Moral Foundations Theory (Graham et al., 2013; for a useful overview, check out MoralFoundations.org)
  • Values beyond the moral (Schwartz, 1992)
  • How adults’ political leanings can be predicted from observations of them as kids (Block & Block, 2006)
  • Ideology and geographic preferences (Motyl et al., 2020)
  • Moral foundations and the basis of vaccine attitudes (Amin et al., 2017; Karimi-Malekabadi et al., 2021), needle exchange attitudes (Christie et al., 2019), and a variety of political attitudes including abortion (Koleva et al., 2012)

Transcript

Download a PDF version of this episode’s transcript.

Andy Luttrell:

It’s become an all too familiar idea: the United States has become so divided! Even though public opinion data show that the opinions of Democrats and Republicans haven’t become all that much more extreme over time, the tensions we feel about political disagreements have gotten deeper.

Over time, conservative people have felt more negatively about liberal people and liberal people have come to feel more negatively toward conservative people…it’s a mess. People are choosing to live in places where politically like-minded people live, and they go to lengths to avoid hearing opinions from across this political divide. There’s even some evidence that Thanksgiving dinners don’t last as long when they’re composed of more politically diverse family members.

So what’s the deal? Why are we at such an impasse?

One account that’s developed over the years is that liberals and conservatives so often talk past each other…because they have different moral priorities.

You’re listening to Opinion Science, the show about our opinions, where they come from, and how they change. I’m Andy Luttrell. And today I’m excited to share my conversation with Dr. Jesse Graham. Jesse’s an Associate Professor of Management at the Eccles School of Business at the University of Utah. As a graduate student with Jonathan Haidt, he helped develop Moral Foundations Theory (MFT), which has gone on to be a massively influential theory of morality and how it develops. One of Jesse’s key insights was that these moral foundations help explain the divides between liberal and conservative people, which has implications for all kinds of political opinions.

In our conversation, Jesse shares how he got involved in this research and what it tells us about the intersection of politics and moral psychology. So let’s get to it!

Andy Luttrell:

Yeah, so moral foundations is… So, where were you when that was sort of bubbling up? I attach your name to moral foundations theory pretty closely. Was that something that was happening while you were in grad school? Or was that something where you sort of came in and it was already pretty established?

Jesse Graham: 

No, it was really good timing, because it was really starting up my first year of grad school, and when I started, I had applied to grad school for both developmental and social psych programs. My background, I worked in a baby lab when I was an undergrad, and then took this kind of veer to the humanities going to Harvard Divinity School, and then wanted to come back to psychology. But I was really up in the air between developmental and social and when I contacted Jon Haidt at the University of Virginia, we really hit it off, and so I thought, “Okay, for Virginia, I’ll apply to social,” and was lucky enough to get in. But then I still had a lot of developmental interests, so my first year in terms of research I was really focused on trying to develop a kind of moral education curriculum that would be sort of focused more on emotions and intuitions than on the sort of rational kind of Kohlbergian model that was used a lot in moral education. 

But at the same time, Jon Haidt was starting to think about this moral foundations idea as kind of a sequel to his social intuitionist model. So, he had a paper that came out actually 20 years ago this month that I think spurred a lot of the interest in moral psychology, where he was arguing that morality is rooted in intuitions and emotions more than reasoning, and so the idea of moral foundations theory was, “Okay, let’s try to get specific about what are the moral intuitions people have?” And so, it’s a theory from social psychology, but it’s drawing from evolutionary theory, anthropology, things like that, and so he was really starting that my first year, and then I got more interested in that. 

And this was 2004, so there’s also an election going on, so just starting from casual conversations about Bush vs. Kerry, we started thinking about applying moral foundations to this political divide between liberals and conservatives in the U.S., and so then I got a lot more interested in that, kind of put the moral education stuff to the side, and for about five years at grad school, I was like, “I’m gonna get back to that really quick, but let me just continue these interests in politics.” And then that took off more. 

And now, I think I’m still interested in moral education, but I think I would work with people in education schools at this point. 

Andy Luttrell: 

So, it sounds like though that you were interested in morality from the beginning, right? If you sort of were ready to do a project like that?

Jesse Graham: 

Oh yeah. 

Andy Luttrell: 

So, why? Why come in to study morality?

Jesse Graham: 

I found morality really interesting, and again, I was kind of coming from the humanities, where I had done a lot of work in philosophy, religious studies, and literature, and morality seemed like this really cool topic where you could study it scientifically, but it would still attach to all these other interesting areas about what it means to be human, what are our deepest core convictions, and so that was the stuff that I was really interested in. That’s why I was interested in studying religion. And the fact that my advisor, Jon Haidt, was this morality guy, sort of was like, “Well, I’m in this lab. Of course, I’m gonna be studying morality.” 

And again, I was assuming that it was gonna be a kind of moral education sort of approach before… I think by the end of my first semester, we were talking so much about politics and getting so interested in that, I was starting to put together kind of rudimentary versions of a moral foundations questionnaire to ask people about these things. And so, yeah, so that’s kind of how I started there. 

Andy Luttrell: 

So, before we get into the political stuff, could you just sort of give an overview of what moral foundations theory is? I have to imagine you have a ready-to-go explanation for that by now. 

Jesse Graham: 

Yeah. No, you’d be surprised at how awkward I still am at talking about it. And by the way, I’m a big fan of your podcast, and I’m intimidated to talk to you, because you have the most amazing podcast voice I’ve ever heard before, so that’s-

Andy Luttrell:

Well, I appreciate that. 

Jesse Graham: 

I think you could make anything sound good. But yeah, so moral foundations theory, I think as a theory it rests on four kind of pillars, and we’ve said that if any of these are knocked down, that kind of knocks down the theory. And the first pillar is nativism. So, that’s kind of the evolutionary component, the idea that there is a first draft to the moral mind and that we evolved our moral intuitions, our moral judgments, our moral nature in part because they were advantageous for us. They helped us to cooperate and survive in groups. 

And then the second part is cultural learning, and so that’s the more cultural part that it’s not that you were born with a set of moral judgments that are never changing. So, different cultures and different individuals will build on these different moral foundations in different ways, and so as a theory we’re trying to get at both the universality of moral judgment, but also the variety of moral judgment, and things like moral disagreement. 

And let’s see, the third pillar I think was intuitionism. Again, this is building on the social intuitionist model, and so it’s the idea that our moral judgments are intuitive in nature and not always the product of reasoning or rational, logical thought process. 

And then the fourth is pluralism. So, we think that there are multiple moral foundations, multiple moral concerns and values that people have that don’t just reduce to one particular value.

Andy Luttrell: 

Yeah, so on the pluralism side, I know this is a long, ongoing debate within moral psych, but my approach to it is often to wonder at what point are we calling it morality versus something else, right? So, the approach that I’ve had is there are these foundations. I often kind of swap in the language of like these are the core moral values that sort of are the blueprint for things. But it hasn’t been super clear to me why these five or six or whatever ones that have risen to the level of being a moral foundation are the moral ones, whereas other values, we go, “Well, those are something else.” So, what is it that makes it a moral value as opposed to just any other value? 

Jesse Graham: 

Yeah, that’s a great question. So, when I started getting interested in this, one of the theories that was really influential for me was Schwartz’s theory of values, and if you look at the Schwartz values circumplex, he’s got a big circle with a lot of values put on there. I feel like what I would call the moral values is covering about half of that circle. And there’s a ton of overlap between moral foundations and the kinds of values he’s looking at, like benevolence, say, that’s very related to care. And the distinction for me is that a lot of the values that Schwartz would look at are things like power, or hedonism, or they’re basically sort of different aspects of self-interest, and from an evolutionary standpoint they’re pretty easy to explain, because of course individuals should be motivated to further their own self-interest. 

And the moral values, I feel like, are the ones where you can’t so easily explain it as just self-interest. So, for me, one of the tests of is this a moral value or not is do people feel passionately about it when it’s affecting somebody else rather than just themselves. And so, this is something we’ve gone back and forth with about liberty, which is a kind of candidate, maybe sixth foundation, and I do think liberty is really interesting, important, and I’ve used it in terms of predicting things like vaccine hesitancy, so I think it’s an important construct to look at. I’m still not convinced that it’s a specifically moral value, and that’s in part because my former grad student, Pete Meindl, did some studies looking at fairness and freedom or liberty, and looking at whether the violation was affecting you or a third party. And his basic finding was that for things like fairness, people seem to be similarly morally outraged when something unfair was done to a third party, but with liberty, it seemed to be more about whether it was affecting you.

It’s kind of like a, “Get off my lawn,” sort of impulse that I feel like… I think it’s a powerful motivator. I think it certainly can be people’s deep conviction. I’m just not totally convinced that it’s not just about self-interest. 

Andy Luttrell: 

So, is that to say maybe that things like liberty are potentially moral foundations, depending on those characteristics? Like is there a formalized way of saying, “If it meets these criteria, in this situation, for this person, it operates as a moral foundation,” but it needn’t… That’s me taking it into like an individual differences area, where I don’t think that’s where the theory was really built. But does that resonate?

Jesse Graham: 

Yeah. Definitely. And this is where I think it’s good to have a lot of critics, a lot of people criticizing your theory or your work. When I was in grad school, I would always feel kind of nervous if there was a critique or something, and now I feel like anytime I see a moral foundations takedown paper that’s trying to destroy us, I think, “All right, I’m still in the game. I still got it.” But so, one of the things that I think people did criticize is, well, what exactly… You know, you’re calling this a moral foundation. Why is fairness a moral foundation and maybe liberty isn’t, or maybe it is? What are the criteria? 

So, in a 2013 paper, this is in Advances in Experimental Social Psych, it’s kind of the big moral foundations theory overview article, we devoted a lot of space to talking about criticisms of the theory, because an important contribution of the theory to the literature is just kind of sparking some of these debates. And in response to some of the criticisms, we tried to spell out here are the criteria for what makes something a foundation.

And so, it would be things like we didn’t want something that just reduced to one of the foundations that we’ve already looked at, and that would be something that I think you would approach in a kind of factor analytic, individual differences sort of way. And you know, some of the other ones: just, is it omnipresent in people’s social judgments? Is it something that you see coming up in multiple cultures and multiple contexts, not just something that’s really historically specific to one particular area? And that you could build a sort of evolutionary model for why this would be something that was advantageous for people to have.

And so, we’ve always tried to be clear that we don’t think this is a canonical five. We don’t think this is covering everything. We felt like this was a pretty good place to start and then we have interests in things like liberty. Honesty I think is something that’s really important that we’re not capturing all that well in our model. If I do studies just asking people what’s morally important to you or what are some of the ways that you feel like you’ve not lived up to your values recently, honesty comes up so often, and I feel like I can’t just put that into fairness, or care, or something. I do feel like it’s kind of its own thing. 

But I’m not totally sure what the evolutionary model is there for it, though I think there’s plenty of stuff with promise keeping and reputation. And there’s other ones that people have brought up. John Jost, I think, said he thought oppression was one, sort of like our reactions to oppression, which I think overlaps with liberty quite a bit too, that people should be free. Some people have suggested waste, you know, like… I don’t know. My impulse to not leave any food on the plate. Maybe that’s something moral. That feels like that’s a personality characteristic. There’s certainly an evolutionary model for it. Again, I’m not sure how much that’s a moral value versus just something that you might value.

Andy Luttrell: 

Yeah. The self-interest does clear up some of that, and the annoying question that always comes to my head as someone who’s from the attitudes tradition and is sort of like wading in the moral psych area, is what the difference is between a moral judgment and an attitude. It sort of strikes me like why… Do we need to call it moral? Or is it just this is a negative reaction? So, what is the special advantage of being able to say there are judgments and there are moral judgments? What is that distinction and why would it matter? 

Jesse Graham: 

Yeah. I’ll give you a cynical answer first that I’ve heard from multiple sort of senior people in the field, which is that for whatever reason, morality became this sort of hot topic in the early 2000s, and so if you wanted to publish a paper on attitudes, or even moods, or judgments, you would have an easier time getting it published if you called it moral, because then it seemed new and exciting. Because morality was something that wasn’t really touched by social psychologists for a long time. It was kind of seen as an aspect of moral development, again, in the kind of Kohlbergian model. And so, when these two papers came out, Jon Haidt’s paper on moral emotions and intuitions, and in the same month, September 2001, Josh Greene had a paper in Science where he was putting people in the fMRI scanner and giving them these kind of philosophical trolley problems, I think those just sparked a lot of interest that, “Oh, this is something that’s empirically tractable, that we can actually look at.”

Yeah, and I think there’s plenty of overlap. You know, I used to teach a class on attitudes when I was at USC, and at first, I thought, “Oh, I should have a week on moral attitudes.” And then I realized, “Oh, no. This whole thing is moral.” We’re talking about moral things. If we’re talking about racial attitudes, of course you’re bringing up people’s sort of core moral convictions there. And so, there wasn’t a lot in the attitudes textbooks that I was using, or the primary readings, that I thought, “Oh, this has nothing to do with morality.” I do think morality is one of those topics that touches on just about anything.

If you’re talking about people’s judgments, feelings, or attitudes that are important to them, it’s going to involve morality. And again, the main distinction I would think of between moral and non-moral is just: is this something that’s just in the realm of self-interest or not? But you know, I’m also really interested in things like aesthetic attitudes, or I like Coke more than Pepsi. There’s certain movies that I like. It’s not entirely clear to me that those conversations about which is better, Pulp Fiction or The Big Lebowski, are non-moral conversations. I think people get very morally… have very strong moral convictions about things like that.

So, again, anything that’s important to us I think we tend to moralize.

Andy Luttrell:

Yeah, so it sort of transitions us back into the politics angle, which is that these moral foundations become sort of the seed for the attitudes and opinions that we hold. And so, could you talk a little bit about like you said, those early days, where it seemed like these pillars of morality, foundations for morality let’s say, were mapping onto a political divide? And in what way did they map on? 

Jesse Graham: 

Yeah, so when we were starting to look at this, the foundations that we were looking at and are still looking at, we started with care versus harm. It’s just this very fundamental, we tend to morally care about somebody being hurt, and we care about who’s doing the hurting, and I think the kind of virtues associated with this would be things like compassion, and nurturance, and peace, and we tend to really moralize things like that. 

And then fairness versus cheating, and fairness I think is a very kind of variegated construct, but in general, there’s not much debate about fairness and justice being really important to morality. And so, those first two foundations we’re looking at seem like were very well covered in the literature when we were starting out. Again, around 2004. And I would see the patron saints of them being Carol Gilligan for care and Lawrence Kohlberg for fairness. And you know, there’s all those debates between Gilligan and Kohlberg. I think the field basically settled on, “Okay, these are both morally important. These are both important aspects in morality.” 

But we were interested more in group-level concerns that we thought weren’t being covered as much in scientific approaches to morality, and so the first one of those we looked at was loyalty versus betrayal, and so the idea that it’s morally important to be loyal to your group, whether that group be the family, the nation, sports team, something like that. 

And then also really important for group living is authority versus subversion, and so this is part of the idea of showing proper respect for both authorities and traditions, and so there’s a lot of moralization of the traditions of a particular group that we saw as not necessarily just the same as loyalty, although I think it’s closely tied to it, and if you’re going against that, if you’re bucking tradition, then there’s something morally wrong with you. And these, I think, are sort of to a liberal audience are less obviously part of morality. You know, I certainly have a lot of convictions that it’s important to question traditions. It’s important to question authority, right?

And then the last one we looked at was purity versus degradation, and so this is where a lot of kind of sexual moralization goes on. The idea of treating your body as a temple and not a playground, resisting our lower base desires for this kind of higher, more divine nature. And so, this is very tied up not just in physical purity, like disgust, but also with spiritual purity. And so, those are the ones that we were looking at and we thought, “Boy, there seems to be a lot of attention paid on the right to things like loyalty, authority, and purity, but not so much on the left. On the left, what you’re really seeing is concerns about individuals treating each other fairly and not hurting anybody else.” 

Andy Luttrell: 

And when you say that, you just mean sort of out in the world, just sort of intuiting these things. Or you’re saying in your data you saw?

Jesse Graham: 

Yeah. No, just to start with, just sort of our kind of sense of, “You know, all these debates between liberals and conservatives seem to be…” And maybe there is something that is really important to conservatives that liberals aren’t talking about or are explicitly rejecting, right? So, if you’re saying question authority, some people might totally morally agree, and some people might be morally offended by that, and so we thought maybe it’s just these basic kind of core values that there are differences in, or that those kind of political subcultures of liberals and conservatives are building on these foundations in different ways. And that might be one reason why people seem to be talking past each other. 

Because one of the things that we really noticed was I’m very liberal, and if I have conservative relatives, we could argue about politics for hours, and we have, and there seems to be no convincing each other, right? There seems to be no… And you would kind of think in most sort of higher-level rational arguments that eventually there should be some coming together, some sort of consensus that is never met, and so we thought, “Boy, politics just seems like this sort of magical invisible wall is coming up between people.” And we thought maybe this moral foundations approach would be one way of trying to just describe that. 

And we weren’t even thinking about trying to fix it necessarily, but we were thinking maybe this would be one way to kind of describe what the problem is. And then I think there was a sort of higher long-term goal of, “Well, maybe this will be one way that liberals and conservatives could understand each other better or something in the future.” And that, I think, is a really difficult thing to actually do, but there has been some work on things like moral framing and moving people around, at least in the short term, on different topics. 

Andy Luttrell:

It might be my bias, but with the harm and the fairness, I more easily see how those map onto the not-self-interested piece that you were talking about, right? That you go, “Even if it’s not me that’s being harmed or unfairly discriminated against, I can still look at it and say it’s not right that this is happening, even to other people.” But when you’re describing loyalty, I had a little bit of a harder time doing that, and I wonder. Is it really the case that it’s like if you’re disloyal to my enemy, that’s wrong, right? So, that would be a case where it’s actually in my interest for you to be disloyal if you’re defecting against the person I’m fighting against. And yet, would you say that someone who holds that, who has that foundation, is saying they’re wrong to have done that?

Jesse Graham: 

Interesting. Yeah, so like if you’re at war with an enemy and one of their soldiers defects or something, are you gonna see that as, “Oh, that’s a morally great thing because that’s good for our side,” right? That would be… I think the purely self-interest view would say, “Oh yeah, we should think loyalty is good when it’s in our interests. If somebody betrays me, that should be really bad. If somebody betrays somebody else, I don’t care. What does that have to do with me? If they betray my enemy, then yeah, I should see that as morally good.” 

And I don’t think people have those intuitions, right? I do think there’s a sense that even if it’s an enemy, the idea of being a fair-weather fan and switching out your Yankees cap for a Red Sox cap depending on who wins the game I think would seem kind of wrong to both sides, even when it’s momentarily in their self-interest. 

Yeah, so I do think it’s very much group interest, right? For the interest of your group. And there’s a lot of ways that an individual might sacrifice themselves for the group or put their own self-interest beneath the group interest, and that is seen as a morally good thing. Yeah.

Andy Luttrell: 

So, I’m realizing that this conversation is mostly an opportunity for me to ask these questions I’ve had about moral foundations over the years, where I’m like, “Oh, wait. This is perfect. I haven’t known where to look.” So, it’s pretty clear, and there was a recent meta-analysis on the relationship between moral foundations and political ideology that shows it’s a strong association. So, it seems pretty clear that liberals prioritize harm and fairness, and that conservatives relatively privilege loyalty, authority, and purity, but then also harm and fairness to an equal degree?

Jesse Graham: 

Yeah, that’s right. And yeah, so I wouldn’t say that conservatives privilege loyalty, authority, and purity over harm and fairness. I would just say relative to liberals they do, but I think… and it depends on how you ask the question, so different measures will have different kind of intercepts or starting points. If we’re asking something like how relevant to your moral thinking are the following considerations, and we have things like whether or not someone hurt a defenseless animal, or whether or not someone did something disgusting, things like that, so we see these pretty clear positive slopes if you’re plotting political ideology on the X axis. The steepest slope is for purity, but it’s also fairly steep for loyalty and authority that conservatives clearly care about these more than liberals do. 

For care and fairness, we see a very slight negative slope, and that seems reliable, but the effect size is very small there. And so, really if you’re looking at things, if you’re just wondering who cares about care and fairness, like who moralizes care and fairness, I think the answer is everybody. I mean, everybody does. Liberals do slightly more than conservatives, but I don’t think there’s a big difference there, and it’s really only when you look at the self-reported extreme conservatives that you see basic sort of equivalence between all five kinds of moral concerns. 

So, for an extreme conservative, loyalty, authority, and purity are just as important in their self-reported relevance as harm and fairness. But for everybody else, care and fairness come first, and that gap gets wider as you move left on the political spectrum, so for extreme liberals, there’s this huge difference where they’re rating care and fairness at the top of the scale and loyalty, authority, and purity down at the bottom of the scale.

Andy Luttrell: 

So, my curiosity then is how these moral foundations and ideology got wrapped up in one another. Are we saying that this is what ideology is, is just like these values? Or are we saying ideology is one thing and it just so happens to have gotten synced up with these moral values? So, the question is are they two different things that happen to co-occur, or are they one and the same thing? 

Jesse Graham: 

Yeah. And one of the other questions we had starting out was is this really just something about U.S. ideology, right? Like in the United States, it does seem like if you say, “I’m conservative,” part of what you’re saying there or part of what people could assume you’re saying is, “I care a lot about tradition,” or, “I think children should be respectful to authorities,” right? That does seem to be part of our at least colloquial notion of what it means to be a conservative. And if you say you’re really liberal, then the assumption is, “Oh, you don’t care as much about authority or sexual purity or something.” 

I think how these two get wrapped up in each other is a really complicated developmental question. It’s a little bit like a chicken and egg. I mean, it’s the question that I get almost every time I give a talk, is well, am I liberal because I care about fairness, or do I care about fairness because I’m liberal? What’s the causal pathway? And I think in some ways it doesn’t make sense to think about a causal pathway, because I think both of these are probably rooted in very basic kind of temperamental differences, so something like disgust sensitivity is something that you can kind of see individual differences in early on. Even some of the responses to authority. 

So, like Block and Block had this classic study from, I think, the early 2000s, where they looked at the behavior of kids who were like kindergarten age on the playground, and they were able to predict their political leanings 20, 25 years later. And it was the whinier kids who were more likely to grow up to be conservatives. The kids who were more rebellious, who didn’t listen to the teacher, were more likely to grow up to be liberal. And so, that’s not too surprising, but I do feel like there are basic temperamental differences that would make you more likely to be drawn to certain moral opinions and political ideologies at the same time.

And so, yeah, so I do think the two are wrapped up in each other, like you said. 

Andy Luttrell: 

The question occurred to me because I was trying to think once of what the… like how could we identify the consequences of ideology that are driven by this moral connection, right? And are there non-moral things that ideology predicts, right? Like is there stuff that liberal ideology predicts that is irrelevant to these moral foundations? And so, that got me back to the question of are these… Is it just the same thing that it’s just like they’re so tied together that you can’t actually separate them? Or are there components of ideology that are amoral? 

Jesse Graham: 

Yeah. I had a plan a while ago with Matt Motyl to do a paper on the ideologicalization of non-political things, because Matt had done a lot of work on geographical preferences, like where people want to live, and it would be things like how walkable the streets are and things like that, that are more attractive to liberals. But one of the things that got me started, when I was at Virginia, I was also in Brian Nosek’s lab, and so he was running the Project Implicit website, and they just had tons and tons of IATs, some on political topics. A lot of what they started with was race IATs, but they also had silly stuff on there, like… I don’t know. Britney Spears versus… I don’t even remember whoever the other pop star was in the early 2000s when Brian was starting to do this.

You know, Tom Cruise versus Denzel Washington, things like that, and so there were a lot of interesting political differences that came out of those things. I remember books versus TV. One thing, just across the board, everybody says that they prefer books to TV. When you’re looking at their implicit attitudes, they seem more on the fence, right? So, it’s like we all want to think that we’re book people, but we really, really like TV.

Andy Luttrell: 

That’s a great example of implicit-explicit differences. 

Jesse Graham: 

Yeah. Yeah, exactly. Yeah. The person that I would like to be versus how I actually spend my time. But there were pretty reliable political differences there too, that liberals had more of this kind of pro-book preference than conservatives did. And there are a few things like that, that just seemed like, “Oh yeah, this is sort of politicized, even though this has nothing to do with politics.” And I do think it kind of speaks to our sort of tribal nature that when we get a hint that, “Oh, our group likes this kind of thing,” I always think of this with mask wearing. I think if I’m outside, sometimes I’m not wearing a mask, and I’ll get around some of my liberal academic colleagues and I’ll feel like, “Oh, crap. I better put the mask on.” 

Because you know, my group really, really values the mask wearing, even when we’re outside and we know we’re all vaccinated. Let’s all still put the masks on just to kind of show that, you know… Whereas I was in Missouri this summer visiting family and not only was Missouri a hot spot for Delta, but just nobody was wearing a mask anywhere. And I knew that I was like, “Are my kids gonna get beat up when everybody sees them wearing a mask and they’re the only kids wearing a mask?” And I just… I knew that that was politicized. It shouldn’t be. It shouldn’t be the kind of thing that your politics has anything to do with, these sort of health behaviors, but that’s a pretty obvious example of something that’s not political but has become very politicized. 

Andy Luttrell: 

But is it tied to the moral foundations at all? I know you had done some work on that, right? You were part of a team that did work on the foundations that were the basis of vaccine attitudes? And so, I don’t remember the details of what you found, but for mask wearing it strikes me that it’s possible that, like you say, it’s politicized.

Jesse Graham: 

Yeah. 

Andy Luttrell: 

But when you connect the moral foundations, it would just be sort of by virtue of their intercorrelation with ideology that they’d be predictive of mask wearing. Whereas it’s not moral in those ways to people. Does that seem right?

Jesse Graham: 

Yeah. And so, we have one paper in Nature Human Behaviour from a couple years ago where we were looking at the HPV vaccine, and so we had expectant mothers in the waiting room, and we were looking at the association between moral foundations and their vaccine attitudes, and what we found was that when you control for things like political ideology, their previous vaccination experiences, things like that, socioeconomic status, the best predictors were purity and liberty.

And so, I think that kind of explains a lot of the more extreme anti-vaccine attitudes you would see on both the extreme right and on the extreme left. I think on the extreme right, it’s this very libertarian impulse of how dare the government tell me to put something in my kid’s body. And the purity part is really focused not so much on government telling me what to do, but it’s the in my kid’s body thing. So, like I have some very liberal relatives who were anti-vaccine for a long time, and their anti-vax posts would always have pictures of the needle going into the kid’s arm, and I think there is some sense of, you know, we don’t really know what’s in the vaccine. We know there’s a bunch of complicated chemicals. But it’s something that’s artificial. It’s created in a lab. And they want to put it in your kid’s body. It just feels kind of unnatural, like why? This thing that I don’t understand, why am I letting somebody put that in my kid’s body, break the body membrane and put it in there?

So, I do think those are two kinds of moral concerns that do lead to vaccine hesitancy, and we’re finding the same things with the COVID vaccine now. So, we’re writing up a paper now where we’re just looking at moral foundations at the county level, so we’ve gotten hundreds of thousands of responses to the moral foundations questionnaire. There’s different ways that we can kind of stratify it because we have a non-random sample, but then we can look at how those are associated with actual vaccination rates, and we find again that purity is the biggest predictor. In the current studies, we don’t actually have measures of liberty, but we’re finding the same thing with the COVID vaccine that we found with HPV.

Andy Luttrell: 

So, in other words, the places where purity concerns are highest are also the places where vaccine hesitancy on average is higher? 

Jesse Graham: 

Exactly. Yeah, that’s right. Yeah. 

Andy Luttrell: 

Yeah, so you’ve done that kind of work with vaccines. I was noticing some other stuff of tying different moral foundations to different kinds of attitudes, like why would someone oppose this? Why would someone support this? And they’re tied to these different foundations. And is there a good sense, there’s like an intellectual exercise of just like, “Yeah, I wonder why people think that.” But is there a strategic reason why it is useful to know that? Like why, at the end of the day, why would we care that like, “Oh, on average, people who oppose this oppose it on these grounds?” 

Jesse Graham: 

Yeah. And I do think there can be sometimes sort of actionable steps you could take, so like we had a paper a few years ago looking at different particular political issues, like gun control, or abortion, and then we were looking at if you’re controlling for things like political ideology, how well do these different kinds of moral concerns predict those attitudes. And I think abortion is an interesting example because so much of the rhetoric around abortion is about harm, right? You’re killing babies. You’re murderers. It’s a lot of harm context. We found that harm attitudes weren’t that predictive. It was actually the purity attitudes that were doing most of the heavy lifting there. And so, I think that is something that it’s not in the explicit rhetoric, but you can see it in less explicit ways. 

So, sometimes pro-life activists are holding up very disgust-inducing pictures of aborted fetuses, right? I mean, they’re clearly trying to push those kinds of purity or disgust buttons. But they’re not making an explicit argument. You know, nobody’s going forth and saying this is impure, it’s unholy, it’s unnatural, and that’s why it’s bad. They’re saying this is harmful and we’re killing babies. And so, that’s a case where you think, if you were interested in changing people’s attitudes about abortion and all the rhetoric has been about harm, and maybe you’re making all these harm-based arguments, maybe you should be making more purity-based arguments if that’s the kind of subterranean concern that’s actually leading to this.

And that’s what we’re thinking with that vaccine hesitancy, as well, is most of the rhetoric around vaccines is, “Oh, they cause some kind of harm.” Sometimes the rhetoric is more just explicitly about liberty and, “Oh, the government’s trying to tell me to do this.” But a lot of times it’s like, “We don’t know what’s in this stuff. There’s side effects. It causes autism. It causes whatever.” There’s like particular harms that people will point to. So, there we think, “Okay, well, if we have this finding that it’s really purity concerns that are the most predictive of vaccine hesitancy, can we construct messages that are trying to take that on in an explicit way?” 

If this is the kind of under the surface concern that people have that’s just leading them to be a little bit less likely to get a vaccine, or to vaccinate their kids, can we come up with messaging… The first thing I always think of is whenever I see ads for vaccines, I think, “Stop showing the needle going in the arm.” That’s creeping people out. And maybe in an unconscious way, but that’s not helping the case, right? If you’re trying to convince people who are on the fence. 

But I think you could also construct, and we’ve started working with this, but we don’t have any real findings yet, of can we try to frame the disease as the disgusting, bad sort of interloper, and the vaccine as this kind of purity shield against the disease or something. So, you know, people can start thinking of the vaccine as a way to purify, to get rid of this outside contaminant. And you know, it can be tricky, because you’re still gonna have to put this shot in your arm. There’s no kind of getting around that, right? 

Andy Luttrell: 

You mentioned that the notion of the social intuitionist model is 20 years old now, or at least in the published record, but moral foundations theory is not all that far behind in terms of just like the work that you guys were doing. So, just looking across those 20-ish, not quite 20, but 20-ish years, how would you say the theory has fared? Does it seem alive and well? Generative? Or would you say, “Well, other…” I have an idea about what you’re gonna say. I don’t think you’re gonna say, “I think it’s dead in the water.” 

Jesse Graham: 

It’s dead. My career is in shambles, Andy. 

Andy Luttrell: 

Yeah. I’m sorry. Sorry so much. But yeah, what is its status and also what are still the pieces of the puzzle that remain for us to think about?

Jesse Graham: 

Well, it’s funny, so I mentioned that we had this paper in 2013 where we were like, “Okay, let’s summarize moral foundations theory,” and at the time we were thinking it’s been almost 10 years, so it was sort of like what has moral foundations theory provided in its first 10 years. And my colleague at the time, Wendy Wood, made a joke that, “Oh, that’s in Advances in Experimental Social Psychology? That’s great. Congrats. That’s where theories go to die.” Because once you’re asked to write a chapter for that venerable volume, it’s sort of like, “Oh yeah, your theory is probably about to be crushed.” 

So, that was definitely on my mind, and with my colleagues Morteza Dehghani and Mohammad Atari, we’ve been thinking about, “Well, it’s about time for another update,” because it’s actually been eight years since that paper. I think there has been a lot of work. I think it has been generative. Again, I used to think that criticisms were a bad thing, that they were harmful to a theory. Now, I see it as totally essential and necessary for a theory to keep going. I do think most theories that are birthed, one or two researchers do some work on them and then they’re kind of ignored. But if you can get people really pissed off at your theory, then that will generate a lot of work, and I think it is generative.

I mean, I think almost anybody who is critical of moral foundations theory probably has some good points and has things that can add to the theory. I think there are aspects of morality that are really important, that aren’t particularly covered by moral foundations theory, that I think are important to look at. And you know, one of the things that I feel good about, and I don’t know if this is kind of where theories go when they’re starting to die out in their home field, but you know, I’ll get these Google alerts of, “Here’s this paper in experimental agriculture that’s using moral foundations theory for whatever reason.” 

And so, I do like seeing it being used in fields that we hadn’t ever thought of. And again, that might not be people that believe all the tenets of the theory. They might not care about nativism and cultural learning. They might just, “Oh, here’s a handy taxonomy of moral stuff that we can use.” Or you know, there’s a lot of work now using machine learning, and Twitter data, and things like that. People who are interested in things like moral outrage who might not care that much about moral foundations theory, but it’s like, “Oh, well, here’s this list of words that was provided in a moral foundations kind of LIWC-style dictionary.” And so, I think it’s useful for that reason, too, and so as long as the theory is useful for people, I feel like it’s been a contribution. 

I think theoretical debates can become sort of pissing contests. My theory is bigger than your theory, or my theory completely subsumes your theory, those I think are maybe good for career building but not necessarily that generative in terms of an empirical science. Because again, I do think all these theories have something to contribute, and I wouldn’t say, “Oh yeah, no other morality theory should exist because moral foundations theory has explained everything.” I think there’s a lot of aspects that we barely cover that you might want another theoretical approach to. 

Andy Luttrell: 

Is there something you often see people get wrong about moral foundations? So, when you said people often see it as just a taxonomy, and I sort of think, “Well, that’s part of it, but it’s not like the theory.” So, maybe that is one thing that you’d say that people sort of misunderstand the crux of the theory, or are there other things that you see people go, “Man, this moral foundations idea is off into the wild and we don’t have control over it,” because people keep calling it X when it’s really Y? 

Jesse Graham: 

Yeah. I think some of it is we’ve… and I think some of this is our fault in some of our early writings, where I think we kind of blurred the line between descriptive and normative. Part of what we were trying to do in some of our earlier papers was say, “Hey, liberal audience. Listen up. There’s aspects of morality that you’re missing and you’re not understanding conservatives right.” And especially because Jon was involved in looking at things like political bias in the academy, I think there’s a sort of assumption of, “Oh yeah, moral foundations theory. That’s that theory that says conservatives are better people than liberals,” or you know, like liberals have two moral-

Andy Luttrell: 

They’re morally deficient. 

Jesse Graham: 

Yeah. Yeah. That’s right.  Liberals only have two moral things and conservatives have five, so therefore conservatives are three more moral than liberals. And I think in most of our writings, we’ve tried to dispel that. I know when I give talks, I start out by saying the word moral is used in a descriptive way and in a normative way, and so if I’m saying that these loyalty concerns are moral concerns, I’m using that in the descriptive way. And again, that might rely on is it self-interest or not, but normatively it sounds like, “You guys are saying that loyalty and authority are good things? Well, what about goose-stepping Nazis? I think they’re morally bad. How come you guys think that that’s a morally good thing?” 

And then it’s like, “Whoa, whoa, whoa, whoa.” 

Andy Luttrell: 

Yeah. Hang on. 

Jesse Graham: 

That’s a normative argument that I don’t want to be on that side of. But even that kind of debate that we’ve had with people like John Jost, I think, has been really helpful to clarify, and I do look back at some of our first papers and I think, “Boy, it’s really unclear exactly what we’re saying here and if it’s descriptive or normative.” And you know, maybe we were trying to be provocative or something, but I think it’s helped me to really clarify, okay, what are you saying in this kind of descriptive way? If you’re saying let’s look at these moral values like purity and how they can predict vaccine attitudes, and if you’re somebody with a normative goal of getting people vaccinated, that’s hopefully something that’s helpful for you. But that’s not really making any normative stance of, “Oh, see, these purity concerns are really a morally good thing that we should all value.” You know, I don’t think there’s any kind of normative claim that needs to be made there. It’s sort of like, look at this descriptive construct that we can look at that predicts these things.

Andy Luttrell: 

Well, I will keep my eye out for all the new stuff that… because it’s not dead in the water yet. Keep an eye out for all the new moral foundations stuff. 

Jesse Graham: 

It’s got like two years left. Yeah. 

Andy Luttrell: 

Okay. Good. All right. I’ll start my stopwatch now. But thanks so much for taking the time. This was super fun.

Jesse Graham: 

Thanks, Andy. This was great. Thanks for the attention. 

Andy Luttrell: 

Thanks. 

Andy Luttrell:

Alright, that’ll do it for another episode of Opinion Science. Thanks again to Jesse Graham for sharing his work. You can find out more about him and his research by clicking all the links in the show notes — all the links. 

You can learn more about this show at OpinionSciencePodcast.com and follow the show @OpinionSciPod on Twitter and Facebook. And if you like what you’re hearing, help spread the word. When people ask for podcast recommendations, say, “Well there’s this show called Opinion Science.” And if you’re on social media, write a quick message to encourage your network to check it out. And of course, I’ll never say no to a nice review of the podcast online.

Ok, that’s all for this time. See you in a couple weeks for more Opinion Science. Buh-bye!
