Episode 61: Moral Conviction with Linda Skitka

Linda Skitka is a distinguished professor of psychology at the University of Illinois at Chicago. She’s been studying people’s moral convictions–the opinions that we connect to our core sense of moral right and wrong. Two people might agree about universal healthcare, for example, but they might disagree about how much their positions on this issue are drawn from their personal moral compass. Over the years, Linda and her colleagues have found that our opinions take on a different character if we’ve attached a feeling of moral significance to them.

A few things that come up in this episode:

  • Bernie Sanders’ 2016 speech urging people to treat inequality as a moral issue.
  • In the opening, I discuss some research I did on how the mere perception of moral relevance makes opinions harder to change (Luttrell et al., 2016).
  • For a summary of the research on moral conviction, Linda and her colleagues recently published a great overview in Annual Review of Psychology (Skitka et al., 2021).
  • The early days of distinguishing moral conviction from other characteristics (Skitka et al., 2005).
  • People resist conformity when they hold a morally convicted attitude (Aramovich et al., 2012).
  • The question of how emotion plays a role in moralized opinions (Brandt et al., 2015; Skitka & Wisneski, 2011; Skitka et al., 2018; Wisneski & Skitka, 2016).

Transcript

Download a PDF version of this episode’s transcript.

Bernie Sanders:

You have 80 – eight-zero – people in this world who now own more wealth than the bottom 50% of the world’s population, 3.5 billion people.

Andy Luttrell:

That’s Bernie Sanders speaking in a Brooklyn church back in April 2016. At the time, he was actively campaigning to be the Democratic nominee in that year’s election for president of the United States. And this point—that just a few people in the world are so extraordinarily wealthy that they hold as much wealth as the least wealthy half of the world’s population—it was one of his key issues. But he says something in this speech that struck me as especially interesting…

Bernie Sanders:

So in other words, I don’t want to make this just political. I want you to go deeper into that, go into the guts of our society and try to look at things from a moral perspective.

Andy Luttrell:

A moral perspective. Which is a little odd because economic issues don’t always seem like moral issues. Sure, things like civil rights or climate action or stem cell research…they seem like bread and butter moral issues. But economic policy usually feels like…I don’t know, but certainly not a pressing moral dilemma. So what does it matter that Bernie’s encouraging people to see this issue as a moral one? Does it make sense strategically?

When I was in grad school, I ran a few experiments kind of like this. I took an issue, like recycling, that a bunch of people already agree is good. And then I presented people with arguments that might persuade them to abandon their original opinions and start to question whether recycling is as good as they thought it was. But before I gave them those arguments, I encouraged some people to see their pro-recycling attitudes as a product of their moral beliefs. In other words, I nudged them to moralize their opinion of recycling, just like Bernie Sanders was nudging people to moralize their opinions of wealth inequality.

Okay, so when people got those arguments against recycling programs, we checked in on people’s opinions again, and we could see whose opinions had started to change. In our control group, the arguments were pretty convincing, actually. People were no longer as pro-recycling as they once were. But the people who were nudged to connect their opinion of recycling to their sense of moral right and wrong? They were not so convinced by our message. They got the same anti-recycling arguments as the other group, but they were less likely to budge.

When I saw these results, it seemed to make two important points. First, morality is malleable. There aren’t moral issues and non-moral issues—we could push people to moralize something that they might not have already thought about in that way. Like Bernie Sanders did with wealth inequality.

And second, morality seems to make us hold on more tightly to our views. If it feels like an opinion is tied to our moral compass, good luck trying to get us to change our mind, especially if you ignore the fact that we’re grounding this opinion in ethical concerns. Sure, maybe recycling is expensive and inefficient, but it’s the right thing to do, gosh darn it! Maybe Bernie supporters, by tying his platform to their sense of moral right and wrong, ended up even more committed to the cause.

Now I’d be lying if I said the idea for these experiments occurred to me out of the blue—in part, they were extending psychological research on moral convictions, which owes a lot to our guest today.

You’re listening to Opinion Science, the show about our opinions, where they come from, and how they change—or how they don’t! I’m Andy Luttrell. And this week I get to talk to Dr. Linda Skitka. She’s a distinguished professor of psychology at the University of Illinois at Chicago. And one of the key contributions she’s made to social psychology is the notion of moral convictions. It’s the idea that there’s no such thing as a moral issue—but that any opinion could be one that people see as rooted in their moral beliefs. Over the years, she and her colleagues have shown over and over again that moral convictions act differently than other kinds of opinions.

And as you may have figured from my intro, this is a topic I’m super interested in. In fact, this episode of Opinion Science is unique in that the research I do in my academic life overlaps a lot with Linda’s. I’ve been super interested in just how strong our moralized opinions really are and whether there might even be times when moral opinions can change. So despite a few differences in how we approach studying these questions, at the end of the day, we’re interested in the same sorts of things. All of this means that I was excited to talk shop with Linda, so sorry if we get into the weeds more than usual for this show, but I think you’ll find our conversation interesting.

Oh, one other thing—for a few reasons, we decided to switch Linda’s mic input partway through the interview, so don’t be alarmed when her audio suddenly sounds different. I know you were going to be alarmed…so just don’t be. Okay, let’s get into my chat with Linda Skitka.

Andy Luttrell:

So, we can sort of do back and forth and naturally there are things we might have different views on based on the way that we’ve approached the same questions, or just expanding on how we’ve done things, but I will otherwise treat this… I mean, you brought this notion into the world, and so I’m interested in getting your take as someone who’s seen the evolution of this over time. It occurred to me that the first paper really… I mean, the first major one was 2005, even though there were inklings of it in the years prior to that.

Linda Skitka:

Yeah. Probably the very first publication… Well, I don’t know when you want to start the conversation, but when this work started for me was in the procedural and distributive justice debates. And when procedures versus outcomes are gonna matter more to people’s judgments of fairness, in an argument with Tom Tyler, he argued that if people perceive the Supreme Court to be sufficiently legitimate, they would have accepted the Roe v. Wade decision, but the legitimacy of the Supreme Court at that time was enormously high and by then some 50 years of debate about the legal status of abortion… Well, actually by the time I started doing this work it was only about 30 years.

And so, I was thinking about that as moral convictions as a boundary condition on when procedures are sufficient for people to judge whether something’s fair or not. But then, of course, reviewers wanted to say, “Well, what the hell is this moral conviction thing?”

Andy Luttrell:

Yeah. We might as well start here. So, the idea was that as you’re saying, there was sort of perceived legitimacy of some body, but also this unwillingness to accept what that agency had done, or decided, or whatever, and so based on the perspective that you brought to it, what were you thinking was at the root of why people were kind of denying the legitimacy of a decision made by an institution that they otherwise saw as legitimate?

Linda Skitka:

Right. At that point in time, there’d been about 20 years of research that had actually established that people will generally accept unfavorable outcomes if they perceive the legitimacy of the procedures that decide them as fair and legitimate. Even when those outcomes are negative to what their preferences are. But what was always being operationalized there was people’s judgments of favorability or unfavorability of an outcome, largely to their self-interest, and that’s only one layer, right? Sometimes we have moral connections to some of our attitudes that we think that those particular attitudes are not just favorable or unfavorable, or do or do not serve our self-interests, that they serve some higher order moral cause, and my assumption was that at least a good number of people saw abortion in that light. And the degree to which they had strong moral convictions about whether abortion should or should not be legal would be more likely to determine their judgments of whether even otherwise legitimate institutions and procedures were getting the issue right.

And then we did do about a dozen studies that actually showed that the degree to which people had moral convictions about outcomes was an important boundary condition on what was otherwise known as the fair process effect.

Andy Luttrell:

So, I know I’m asking you to introspect about something that occurred to you 20 years ago, but like what… Do you remember where that insight came from, that this was about morality somehow, that people had attached a sense of moral right and wrong and that was the thing, apart from all the other things, that would matter to the way people engage with their opinions and institutions? That it’s this sense of morality that’s at the root of it?

Linda Skitka:

It really was the Roe v. Wade decision. Why were people still arguing about it after decades? What made it special compared to other Supreme Court decisions? And the thing that really seems to distinguish that particular decision relative to other Supreme Court decisions was the degree to which people were using words like murder, and evil, and morally loaded terms around that particular issue, and also their unwavering commitment to their positions on that issue irrespective of what legal authorities had to say about the matter.

Andy Luttrell:

Yeah. That seems to be a hallmark, that when morality gets wrapped up in opinions, those opinions get pretty rigid and unwilling to change. And so, just to even jump into it, why? What is so special about morality, right? Why should this be the kind of thing that carries so much weight when people bring it into these kinds of debates?

Linda Skitka:

Well, once we found that it was an important boundary condition on whether people thought legal policies or institutional decisions were fair, we decided to take a very bottom-up approach to figure out exactly what made these kinds of attitudes special. So, unlike other kind of theories on morality, such as moral foundations theory, that really basically define the moral sphere and then go out and collect data to see if it’s consistent with that definition, we instead decided to measure moral convictions and find out what predicts them, or what consequences they had that were relatively unique to them that couldn’t be explained by other variables. And so, let the data inform what made them special as opposed to starting from moral philosophy, or psychological theory, or anthropology, which most other researchers had done.

And so, over the years we found a whole bunch of distinguishing characteristics that seemed to make attitudes that are held with moral conviction different from otherwise strong but not moral attitudes. Probably one of the most important of which is authority independence, which is going back to people rejecting the legitimacy, for example, of the Supreme Court, or not relying on the legitimacy of the Supreme Court to decide whether something is fair or unfair. So, we’ve actually done studies in the context of Supreme Court decisions, such as the legal status of physician-assisted suicide, and when people had strong moral convictions about whether physician-assisted suicide should be made legal or illegal, whether the court decided consistently with what their moral convictions were about that issue predicted not only whether they thought the decision was ultimately fair and binding, but also affected their subsequent perceptions of the legitimacy of the Supreme Court.

They saw the Supreme Court as more or less legitimate as a function of whether the court ruled consistently with their moral beliefs or inconsistently with them. So, when do we turn to the courts? We turn to the courts often when we don’t know the right answer to something, like we don’t know whether the defendant is guilty or innocent. We don’t know whose claim should have precedence in some kind of lawsuit or injury case. And we just have to trust that, okay, if the procedures are sufficiently fair, participants are given voice, there’s an unbiased review of the evidence and so on, that the procedure will generally yield an answer that’s correct.

Okay, but if we already know the right answer, I know this defendant is guilty, and I’m morally very sure of that, or I know what the right decision is to be made on abortion, then I can use that to judge whether the authorities are in fact legitimate. And so, authority independence really is one of the key defining characteristics of moral convictions that we keep finding over and over again.

Andy Luttrell:

Meaning it doesn’t matter what they say, but I know what is right.

Linda Skitka:

Yeah. I already know. I know the right answer to this one and in fact, I’m deeply suspect of any procedure that doesn’t come up with the right answer.

Andy Luttrell:

So, one of the things that I think about when I think about morally convicted anything, like one of my great frustrations as someone who came to study morality from outside of the moral psychology literature is how unclear it is to me what morality is supposed to be. And so, in this case, sometimes I wonder like can we just… Should we just throw out the word morality and be like if what we think it is is substituting this sense that like this is objectively true, and I know it, couldn’t we just ask that question? Like, “Do you just think this is objectively true?” Do we gain anything from saying, “No, it’s a moral thing that I think is true and no one else could convince me otherwise.”

Linda Skitka:

You’re suggesting some of the other features that we have found that empirically differentiate moral convictions from non-moral attitudes, and that is a sense of perceived objectivity or factuality to the attitudes. The problem is, though, that a whole bunch of other attitudes have that characteristic, as well, without the moral component, right? I know that two plus two equals four. That’s objectively true. Or I know something objective about photosynthesis, but I don’t have any particular moral attachment to that particular issue, so I don’t think just asking about objectivity… I don’t think it’s gonna get at the main construct of moral conviction.

Andy Luttrell:

The thing I was thinking is in the context of these court decisions, right? If all of a sudden the Supreme Court came out and said, “We believe that gravity doesn’t hold true.” I would be like, “Well, that erodes my trust in your ability to make the correct call later in the same way that if you made the call that whatever my moral belief is is not true,” and so that’s a case where I’d just go like… You’d make exactly the same predictions if this is just about I believe this is objectively true as opposed to this is something special about moral stuff. So, even looking at the consequences, are there like… What are the other things that moralized attitudes do that non-moralized attitudes don’t do? Apart from just this trust in institution stuff.

Linda Skitka:

Well, a variety of things that you also wouldn’t necessarily attach to just attitudes that are rooted in beliefs about factuality or objective truth. In particular, the emotional signatures of moral convictions are something that you wouldn’t predict if it was just based on something about… Well, basically I think you’re describing attitude certainty in some ways. I wouldn’t expect attitude certainty to necessarily have a highly motivational component to it that would predict intentions and behavior necessarily in the same way, like two plus two equals four, I can be very certain about that and believe it to be objectively true, but it has no motivational component. I don’t have to do anything about it. Whereas another signature of moral convictions is the degree to which they’re tied to our perceived obligations to act, that people see them as very obligatory, and in the context of politics it’s a very reliable, strong predictor of all kinds of political engagement including intentions to vote, actual voting behavior, activism, charitable donations, and so forth, which I wouldn’t expect an attitude that’s just based on black and white right and wrong to necessarily have this kind of action component come along with it.

Although, I agree that many of the other consequences of moral convictions might, for example that people are really resistant to group influence when they have strong moral convictions about an outcome. Although, [inaudible] Asch paradigm in terms of asking people to objectively identify the length of lines, that was a very objective decision and people were willing to conform to the group even when they knew they were objectively wrong. People in a very similar kind of Asch paradigm will not modify their moral convictions. They persist under very similar circumstances.

Andy Luttrell:

Yeah. That is a great comparison because that’s a case where you’d go, “We know that people will still succumb to these pressures even when they think they’re objectively right, but they go yeah, but fine. I guess something’s off. I’ll still say that that line is the longer one,” or whatever. But not so if this is a thing that I think is morally true, right?

Linda Skitka:

Exactly.

Andy Luttrell:

And go, “I don’t care what you guys are saying. I still am gonna stick to my guns and say what I think is right.”

Linda Skitka:

Exactly. And probably because there’s no cost really to deny the truth value of the length of the line, whereas I think that there would be some perceived personal cost about compromising on one’s moralized positions, that one would be feeling like they’re morally inauthentic, or perhaps not being their true selves if they don’t defend something they genuinely morally believe in.

Andy Luttrell:

Yeah, like morality just is sort of like all these features are kind of glued together in this kind of… this system that can’t really be broken so easily, where there’s this self-definingness, there’s this identity piece, there’s this sense of obligation, there’s this sense of factuality. It’s kind of this… Yeah, I was gonna say a constellation, yeah, of all these different pieces that get fused together into a sense of what is right and wrong.

So, you know, I was thinking about the terminology that we use for this. I have definitely slipped into moralized attitudes. That has become kind of my preferred moniker for this stuff. But over the years, in reading the stuff that you’ve done, these terms of moral convictions, moral mandates, moral imperatives, moralized attitudes, ultimately as far as I understand the construct has stayed relatively stable over time despite those labels, but is there any… What is your preference these days?

Linda Skitka:

I really like moral mandates because it allowed me to trim out a few words when writing about it. Attitudes held with moral conviction is a lot of words compared to a moral mandate, but some political scientists yelled at me about it in terms of they only think that that term should be restricted for when a democratic supermajority endorses a particular candidate or idea, that that gives them a moral mandate. So, I have tended to use moral conviction, but do I see any real distinction between that, and the term moralized attitudes? I don’t.

Andy Luttrell:

Yeah, because when you think about like how do we know empirically, when you do the research on this, what is it that you do, or say, or ask, to figure out what are the opinions people have that are moral convictions and what are the ones that are less morally convicted?

Linda Skitka:

Yeah. One small – this might be splitting hairs – distinction. Moralized attitude sounds like something about an attitude, whereas moral conviction is something that the perceiver holds about it. Until the moral conviction construct really came up, that’s often how many researchers actually studied moralized attitudes: some attitudes were apparently obviously moral. They’re basically attitudes about most social issues, whereas other attitudes were of course not moral, such as people’s positions on the economy, or in one study it was the Iraq war. And I’m just going, “Really? The Iraq war doesn’t have any moral component to it?”

And so, yeah, I think there’s enormous tendency, even when people are reading the moral conviction literature, to think that what we’re talking about is a property of the attitude object. Abortion’s a moral issue, right? Gun control is a moral issue when in fact what we find is there’s enormous individual variability in the degree to which people see abortion, for example, as a personal moral conviction or not. Some people see their position on abortion as a matter of preference. They would just simply prefer that abortion, for example, be legal. Other people’s position on abortion is rooted in beliefs about conventionality or what authorities have to say about the matter. Either this is what people in my community tend to think about it, for example, my faith community, or this is what my religious leaders tell me is the right answer on this one, or this is what I believe the bible has to say about it.

That’s an attitude that’s also not rooted in moral conviction in the same way that moral convictions as we have been studying them are, which again, are these kind of personal beliefs that I absolutely know the right answer in these particular circumstances, and I have strong attachments about right and wrong to that attitude. So, you know, it’s probably slicing thin, but I’ve become very careful about trying to communicate to people that we’re talking about something that is in the eyes of the perceiver. It is not an essential character of some attitude domains.

Andy Luttrell:

And it brings up another point which you’ve emphasized recently, which is that morality is a matter of degree, not a matter of kind, right? So, maybe some people would say, “Oh, you pick.” Is this opinion gonna be a moral one for me or not a moral one for me? Whereas, would you argue that that’s sort of misguided in some way?

Linda Skitka:

The me, not me part really is misguided, that we have seen a variety of other researchers use basically that kind of dichotomous identification as moral/not moral. For one thing, just asking people whether this issue is moral or not moral, you’re gonna get two kinds of responses to that. It is for me versus it is normatively perceived as moral in society, so that’s a risk, whereas I think when we’re studying moral convictions, we’re interested about for you. Not do most people but not you think of this as a moral issue. So, that’s one risk.

And then there’s some other work done by Jennifer Wright that found that measuring both classification and strength of moral convictions added unique variance to theoretically interesting outcomes. And so, things that you would miss if you didn’t have both.

Andy Luttrell:

So, even two people who might both say, “This connects to my morality,” can still be different in how strongly they do that, and that’s consequential.

Linda Skitka:

Right, right.

Andy Luttrell:

Yeah. Which is also useful to say that it’s not that there’s a class of attitude topics that are moral and those that aren’t. Take your favorite topic of debate, you’re gonna see a whole bunch of people across the spectrum of moralizing this thing. Yeah.

Linda Skitka:

Which I think is also interesting in terms of explaining why we have so many disagreements about policy, is because some people are really looking at it through a very different lens than other people, which makes it I think difficult to talk about our attitudes sometimes, like one person here who is taking a very strong moral stance on it may totally not get someone who’s thinking about this from a very practical view, or a view of efficiency, or some other kind of view, and they’re gonna completely talk past each other.

Andy Luttrell:

And given what we know about people, it feels like, “Well, this is the only way to look at this thing,” right? If I see this as moral, how could you see it as anything other than that?

Linda Skitka:

Pretty much. Because again, to the extent that you do perceive it as very moral, it is as true to you as two plus two equals four, and I think most people think that that means that they can very easily persuade other people that disagree with them, and that they’re… I think they’ll be reasonably open-minded until they actually do sit down and have an attempt to try to persuade somebody else to share their point of view, because I think we think, “Surely if I explain the facts as I know them you will come to see it as exactly the way that I do.” It’s only when that becomes frustrated and you’re not successful that you start having to really distance yourself from the other person and say, “Wow, I don’t know what kind of person you are.”

Andy Luttrell:

We are too different in too many ways, like you can’t see what’s obvious in front of me, so I don’t know what else you’re not getting.

Linda Skitka:

Exactly.

Andy Luttrell:

You know, one of the things too, when we were talking about… We’re sort of dwelling on what is this morality thing, anyway? And one of the different ways you could look at moral convictions is whether these are opinions that truly are actually born from moral reasoning versus we got to this moral place through some other route. So, what is your sense of when people express on these questions, on surveys, yes, to me, my opinion on abortion is a result of my beliefs about right and wrong, what does that mean? Does that mean that really that was the process they used to get to their opinion? Or are they getting to the sense of right and wrong through some other direction?

Linda Skitka:

I think it’s really complicated. And that’s actually been the focus of our research in recent years. After trying to establish that moral convictions were a thing that was relatively unique and distinct from strong attitudes, then studying some of the consequences of having moral convictions, I’ve been in recent years turning my attention to how do attitudes become moralized in the first place. And from moral theory, there’s really two competing viewpoints on this. According to Jon Haidt’s social intuitionist model, moral convictions should just be sudden intuitions. Gut feelings. I know in my gut that this is just wrong. And according to classic moral developmental theory, it should be solely the consequence of reasoning and thinking very carefully through the issue to arrive at a conclusion that harm has been done or something to arrive at a decision that an attitude is a moral conviction.

I’m of a mind that it’s likely to be both. Both or either. And vegetarianism is I think a really interesting example to use to ponder this, which is something you might be interested in. I think some people actually do have this visceral disgust at the idea of consuming the flesh of animals. That’s gross. They just couldn’t possibly do that. It’s so obviously wrong to me, I’ll just never consume meat. And it’s interesting. A lot of children actually have that reaction. But I think there’s also a completely different pathway to vegetarianism as a moral conviction, and that is being exposed to, for example, evidence about slaughter processes that one may find morally objectionable, or animal treatment and so on. You can think about some of the documentaries that have been around about meat processing plants and animal husbandry and that people look into that, they start doing a lot of research on it, and just really come to the conclusion that no, this is morally wrong, and therefore I’m not going to consume meat.

Those are completely different pathways but are arriving at very similar outcomes. What I wonder, though, is if it’s necessary to… If you go through the reason pathway, in order for it to stick, I wonder if you have to really recruit that emotional reaction, as well, that you’re going to have to bring some disgust or anger to it in order to cross that finish line into a moral conviction.

Andy Luttrell: 

Yeah. It’s interesting. Yeah. On a purely speculative level, as someone who does sort of approach this notion of vegetarianism from a moral angle, it’s interesting because I feel like for me, what brought me there initially was an emotional reaction, right? Suddenly, you see these images in these videos of factory farms and you go, “Oh, man. This is terrible.” But I actually don’t revisit those feelings all that often. It’s sort of resolved itself into what I think was probably the result of a reasoning process where I go like, “Yeah, this seems wrong, this process that exists in the world,” and that became like a guiding principle. More so than… I always, I would hear other vegetarian people be like, “Oh, I just find it so disgusting,” and I go, “Man, I wish I did. Because I don’t like actually have that response,” and that might make it especially easy to maintain. And maybe that’s part of it, right? Different routes to morality might result in sort of differences in your ability to stick to the plan or the longevity, right?

Linda Skitka:

Right.

Andy Luttrell:

You might arrive at that moralization through either of those routes, but it may be that emotional route, and we actually have some data that could suggest this, that sort of predict that that becomes a more lasting moral conviction. Which is interesting, because it suggests that sometimes moral convictions can peter out if they’re not sort of bolstered by these other processes. I don’t know, does that sound kind of like what you’re saying? Or have I turned it around?

Linda Skitka:

It does. Because it is interesting that research indicates that people who do have a really strong commitment to vegetarianism tend to slip a lot. And I think it’s because it is hard to sustain. Well, it’s hard to sustain for a multitude of reasons, but you’re bringing up kind of I suspect the hedonic value of consuming meat. It tastes good. You put on top of that a normative setting and culture where everybody else is consuming meat, so you don’t necessarily have the same kind of group norm support. Nobody’s probably gonna shun you if you slip and eat meat.

Andy Luttrell:

Nobody who eats meat will.

Linda Skitka:

Yeah, but it seems really feasible to me that it probably requires, eventually, a combination of reasoning and affect. But to some degree, it feels like the affect is the really necessary component somewhere in that process. And the data are bearing that out: we’re finding in a number of studies that emotional changes definitely predict changes in moral conviction. Most of the time, changes in perceptions of harms and benefits, for example, don’t. There are some exceptions. But across longitudinal studies and a variety of studies in the lab, there are very consistent findings for emotional reactions being precursors. There’s some, but not very consistent, evidence on perceptions of harm, for example, being a precursor. And considerable evidence that perceptions of harm follow rather than precede moral convictions.

Andy Luttrell:

And so, all of these are predicting the degree to which people are saying, “This is moral for me,” and it’s sort of crazy that this information about harm and costs doesn’t do that, right? Or doesn’t consistently do that. Because I know the notion that emotion is wrapped up in this has been around for a long time, like since the inception of, “I think these moral convictions might matter.” There’s an assumption-

Linda Skitka:

[inaudible] really debated the competing roles of reason versus emotion in morality as well, so yes, these are really, really old arguments.

Andy Luttrell:

But even for you, introducing this notion of moral conviction before the formal tests of the emotion component were there, it seemed like plausibly one of the things making these moral convictions differ from other kinds of opinions is the emotion that people bring to them. And so, looking at the full scope of things over time, to me this is one of the more back-and-forth aspects of the moral conviction model: exactly what do we know about what emotion is doing here?

So, if you could summarize, what is emotion? What is it doing when it comes to morals?

Linda Skitka:

It’s gonna turn out to be a really big puzzle. You know, we had theoretical reasons to really expect that disgust would be a precursor to moral conviction, and so my lab really started there, because there was strong theory and evidence that how people make moral judgments, that is, how severely wrong they think a given act is, was really importantly predicted by disgust. Including disgust that was unrelated to the actual harm of the behavior. So, we did dozens of studies in our lab where we were making people feel disgusted.

Andy Luttrell:

You’re so nice.

Linda Skitka:

You know, we had people up to their elbows in Elmer’s Glue and gummy worms, and then asked them questions under those conditions compared to feeling feathers and beads. And oh my gosh, we used fart spray. Never, never use fart spray in your lab. You will not get it out of your upholstery. We actually repainted because of it.

Andy Luttrell:

Oh my God.

Linda Skitka:

We discovered that you can buy pellets that only activate a smell when air is moving over them, and one of the pellet smells you can get is dead rat, versus Hawaiian breeze. We did incidental arousal studies, where we had people jump rope for a while, pause, and then asked them their attitudes. We never, ever got changes in moral convictions about anything from any of those studies, which again is at odds with the moral judgment literature.

Andy Luttrell:

And again, just to be clear, these are all ways to get people into a certain emotional state that have nothing to do with the issue, but just sort of incidentally: I feel really gross right now, or I feel really jacked up right now. None of those incidental feelings seemed to be spilling over into “this issue is moral” or “these issues are moral.”

Linda Skitka:

Yeah. Which made us kind of have an “Aha” and “Duh” moment, which is that attitudes about which people have the potential to have moral convictions probably have an awful lot of stuff that goes along with them: associations in memory, including probably affective associations in memory, that were probably trumping whatever incidental arousal we were subjecting people to in the lab. So, that’s when we decided to study and compare, okay, to what degree will incidental disgust versus integral disgust lead to moral conviction? What this means is we did studies where people were exposed to some disgust-evoking stimuli, some that were and were not related to the topic of abortion. So they were exposed to toilets overflowing with feces, really gross stuff, with no harm implicated in it. They were exposed to vivid images of mutilated animals that were bloody, but still not human-related harm; those had harm and were also disgusting.

Then in other conditions, they were exposed to photos of aborted fetuses, and all this was done in a task where the images were presented either super-fast, outside of conscious awareness, or slow enough that they would have some conscious awareness of what they saw. And we had a control condition where they just saw pictures of office objects. People, even when they were presented these images super-fast, like outside of conscious awareness, were very accurate in terms of whether they saw a word or a picture. So, they were recognizing something, but in that condition we got no changes in moral conviction as a function of what they were exposed to, whether the images were related to the attitude object or independent of it.

However, when the images were presented superliminally, that is, where people would have the ability to detect what they saw even though it was still going by really fast, under those conditions it was only the images of aborted fetuses that moved moral convictions around. In a subsequent follow-up test, we actually compared harm, anger, and disgust, and those results revealed that the changes in moral convictions were still unique to being exposed to something related to the attitude object, that is, the aborted fetuses, and what mediated the effect was not harm. It was not anger. It was only disgust.

So, this is leading us to believe, again, that emotions are really important and that harm seems to be much more hit and miss. But lest we think that disgust is the only thing that can moralize, other research has found that other emotions can moralize as well. For example, we did a longitudinal study in the context of a presidential election. We already knew from previous research that people’s attitudes about presidential candidates tend to become stronger moral convictions over the election cycle.

We decided to do a longitudinal investigation to find out what predicted those changes, and we had several possible contenders: fear, threat, anger, and enthusiasm. We also measured perceived harms and benefits of electing the different candidates. Changes in enthusiasm toward one’s preferred candidate predicted increased moralization over repeated time points in the election cycle. Changes in hostility toward one’s non-preferred candidate predicted increased moral conviction over repeated points in the election cycle. Changes in perceived harms or benefits, fear, and threat did not.

However, changes in moral conviction did predict changes in perceived harms and benefits of electing a candidate, as well as emotional changes. So, the emotional reactions are reciprocal: stronger emotional reactions lead to stronger moral convictions, which lead to stronger emotional reactions, and it keeps going. Whereas stronger moral convictions lead to greater perceptions of harms and benefits, but harms and benefits are not predicting moral conviction.

Andy Luttrell:

And we know this based on the temporal order in which stuff happens, right? So, I know that your emotions today help me anticipate whether this is moral for you tomorrow.

Linda Skitka:

Exactly.

Andy Luttrell:

But your harm ideas today don’t really give me that prediction. But once you’ve started to see these candidates in a more moral lens, that then anticipates your reactions to… your perceptions of harm and whatnot, right? And also those emotions. So we have some loose sense of causality where we can say, “In the timeframe these are happening in, your feelings are coming before the changes in moralization.” So we sort of get a sense that it’s those things that are kickstarting this process.

Linda Skitka:

Exactly. The idea that emotion is very strongly connected to moral convictions is absolutely solid. Which emotions is a really interesting question and complicated one. Lots of emotions are involved.

Andy Luttrell:

So, it sounds like the story of what emotions are doing in this process has changed over time. My other question I had for you was in the span of time you’ve been studying moral conviction, are there aspects of that process that you’ve changed your mind about? Do you think about moral conviction any differently today than you did when these questions about the legitimacy of the Supreme Court were first flowing through your head?

Linda Skitka:

Yeah. I would say yes. I didn’t expect that the theoretical predictions, the things you would predict would be related to moral conviction, were gonna line up quite so neatly. I was very open minded. Well, no I wasn’t. I really thought harm would have been way more important. I thought harm was gonna be a precursor, and I have been consistently surprised that it very inconsistently is. I’m not surprised that emotions are intimately involved, but exactly how emotions are involved is very interesting. And as social scientists, we know that many times things don’t work out in our research. I’ve been very surprised, with the moral conviction research, about how consistent the findings are.

These findings are very easy to replicate and it’s pretty rare that we don’t get some effects.

Andy Luttrell:

Yeah. I have certainly found for as simple as this thing is to measure-

Linda Skitka:

I know.

Andy Luttrell:

Like it doesn’t take much at all, but it does. I can independently attest that just asking people whether this is something they’ve connected to their sense of moral right and wrong is a pretty potent probe to get at something that matters to people. And that predicts the ways in which they engage with this issue: other people, changing their own minds, reflecting over time on the issues that matter to them. I have to imagine that when you first sat down to write out a couple questions to throw on a survey, you might not have anticipated that they would be so generative.

Linda Skitka:

You know, I did think that maybe asking about the moral relevance of an object would be different than its favorability. That much, I was willing to bet on. I still wasn’t willing to bet, though, that it was gonna be distinguishable from the more usual dimensions of attitude strength that researchers have studied, such as centrality, certainty, importance, and so on. So, the finding that it very consistently does seem to be capturing something else, that it’s certainly related to those constructs but cannot be perfectly explained by them, I think is really fascinating. And to be honest with you, that was a surprise.

Andy Luttrell:

Very cool. Well, I just want to say thanks for taking the time to walk us through the world of moral conviction and it sounds like this program of work isn’t over. There’s more still to come.

Linda Skitka:

Plenty.

Andy Luttrell:

Thanks again.

Andy Luttrell:

Alright, that’ll do it for another episode of Opinion Science. Thanks so much to Linda Skitka for taking the time to talk about moral convictions. Check out the notes for the episode in your podcast app or at OpinionSciencePodcast.com for links to some of the things we were talking about, including Linda’s website.

Thanks to everyone who has rated and reviewed the podcast online. If you haven’t done that yet, it just takes a minute. If you’re an Apple Podcast user, you can do it like right now. Right in the thing you’re using to listen to this. It helps people find the show and trust it enough to give it a listen.

Alright, let’s wrap this up. It’s the last week of the semester where I work, and I’m about to get buried in things to grade. So I’m gonna go power nap and then run head first into that mess. See you in a couple weeks for more Opinion Science. Buh bye…