Kurt Gray studies our moral minds and how we grapple with everyday ethics. In his new book, Outraged, he explores the deep psychology of human nature and what it means for how we navigate politically divisive times. In our conversation, we do a deep dive into his perspective that morality is fundamentally about our ideas of harm, which conflicts with how other theories talk about morality. We also get into what it means for concepts to shift with time or circumstance.
If you like this conversation, check out other episodes with moral psychologists whose views differ from Kurt’s:
- Episode 47: Moral Foundations & Political Opinion with Jesse Graham
- Episode 81: Moral Language with Morteza Dehghani
Transcript
Andy Luttrell (intro): On this podcast, I usually have some interesting preamble at the top of an episode where I dig into something related to the topic of the featured conversation. But this month’s episode is on moral and political divisions, and I feel like all I have to do is just point to the world around me in the United States. Just like…all this stuff. And if I had a specific example, it would be out of date by the time this episode gets to your ears.
It seems like we’re in as tense a place as you can get, where a lot of us are seeing longstanding American institutions crumble with darker days looming. But then I guess there are plenty of people who are cheering on the erosion of these institutions and the values they reflect. And I try to remember that there seemed to be people who saw the Biden era the same way folks like me are seeing the current climate, but it can be really difficult not to say, sure, but this feels different.
And listen, this is a giant problem and one that a few psychologists aren’t going to solve all on their own. But you know what, this is a social science podcast, so let’s look at the social science. If you’ve been listening to this show for any length of time, you’ll know that this political fracturing has been going on for a while. We’ve looked at public opinion data showing that these groups increasingly dislike one another, and we’ve considered ways we might mend these divisions, whether it’s through more productive conversations (Episode 82 with Taylor Carlson or Episode 85 with Monica Guzman), better social media algorithms (Episode 92 with Andy Guess), or any number of kooky strategies (Episode 64 with Robb Willer).
But there’s another part of the conundrum, which is why people are so ready to fall into outrage. What’s actually happening when our sense of right and wrong is threatened? Well, wouldn’t you know, that’s the subject of a new book, Outraged: Why We Fight About Morality and Politics and How to Find Common Ground, written by Kurt Gray, who you’re about to hear from!
Kurt is a professor of psychology at the University of North Carolina at Chapel Hill, although he’s about to head my direction, starting this fall at Ohio State. Kurt runs the Deepest Beliefs Lab and the Center for the Science of Moral Understanding. He’s done a lot of work unpacking what morality even is and how we navigate moral disagreements.
As a quick set up, Kurt and I will talk about how his view of morality differs from Moral Foundations Theory, which you may have heard about…maybe even on this show! Episode 47 with Jesse Graham and Episode 81 with Morteza Dehghani are both good listens for Moral Foundations talk. Oof, I’m doing a lot of references to previous episodes. It’s almost like there’s a theme on this show or something.
Anyhow, very, very briefly and skipping over plenty of nuance, Moral Foundations Theory is notable for its quality of moral pluralism, which means our sense of moral right and wrong comes down to several qualitatively unique values that can’t be boiled down any more simply. Most often, those values are: care, fairness, loyalty, authority, and purity. Five values that are not interchangeable, each of which is just as plausible a foundation for a person’s moral judgments.
Kurt and his team have argued instead for moral monism—that there’s just one thing that ultimately matters for people’s moral judgments: whether or not some agent, like a person, intentionally causes harm. That’s it. Harm or bust. Those other things, like disloyalty or impurity, only pinch a moral nerve if it feels like there’s harm.
I go into all of that because it’s a lot of what we’ll end up talking about, so now you’re armed with the basics! But enough of me. Let’s get into my conversation with Kurt Gray.
Andy Luttrell: My curiosity was like why write this? And particularly why write it now? You’ve been doing this work for a while, it seems like you could have reasonably written this book years ago. What was it that finally made you go, this is something I’d like to do? It seems like now is the time.
Kurt Gray: Well, I wrote the book now I think for a couple of reasons. One, obviously the political moment is especially contentious. People are especially outraged. Social media and so forth. But also I think scientifically, I wrote the book because my research was at a special point. I’ve been arguing for the past decade that all our moral judgments revolve around harm. Got a lot of evidence on that. Got a lot of evidence that the idea of Moral Foundations Theory is pretty shaky. But it’s not enough just to say, look, we’re all the same, because clearly we’re filled with disagreement and there’s clearly moral diversity and pluralism. And so how can you reconcile this harm-based mind that I argue for with moral differences that are so obvious? And so I started doing some research on how different assumptions of vulnerability, assumptions about who or what is especially vulnerable to harm, give rise to differences in moral judgments. And once I discovered that, then it made sense to write the book, because not only do we have this common moral mind, but it can also give rise to differences in a fairly simple and powerful way. And I think that explains a lot of hot button issues. So happy to go into that. But that’s why now: because the research covered both the similarities and the differences.
Andy Luttrell: And if you were to paint a picture… the book covers a lot of ground, as we mentioned just a minute ago. If you were to say this is the main thing that you would get from this book, what’s your sell to the world about what your point ultimately is?
Kurt Gray: Yeah. Really it’s about understanding why we disagree when it comes to morality. As a psychologist, and maybe someone who’s a little more introspective as well, I just think self-knowledge is really useful. How I work and how people work dictates how we can act in the world. You can’t do something that we’re not capable of, or it’s really hard. And so if you understand who we are as a species and a society, then you can both understand why we’re disagreeing and meet people where they’re at when you try to find common ground. And so the sell is really knowledge first and foremost, but also super useful practical knowledge. And not just about politics. You’re disagreeing with your partner or your parents or your co-workers. Why are you so morally convicted in your position and why are they so convicted? What’s the argument really about? It’s about harm. It’s about perceptions of who’s the victim, especially if I feel I’m the victim. And so it’s giving us a common currency, a master key to understand all these moral disagreements. And I think that’s super useful. So I think that’s the sell.
Andy Luttrell: Yeah. And your point about this is who we are, it gets really fundamental. I was really amazed at the amount of work that you would’ve had to do to paint this evolutionary portrait of humanity and our historic struggle on earth. There was a good bit from that perspective, which I don’t suspect is your primary orientation. I would not have pegged you as the expert on those things. And so I’m curious, what work did you have to do to paint that portrait at the outset of the book?
Kurt Gray: Right. So a lot of reading. And of course the argument is that we are more prey than we think, and more prey than predator, when it comes to early humanity. And we’re not talking like 10,000 years ago; we’re talking four million years ago, as human nature evolved and as we refined our psychology. We were constantly in fear of getting eaten or getting attacked. And so all the ways that we think of ourselves as natural-born predators, I think, are not true. Our ability to throw, it’s actually not very good. Our ability to run, it’s fine. But all these cases of like, oh, this helped us evolve for predation. It’s so narrow and so circumstantial.
And so a lot of the anthropological evidence that I was not familiar with, I really had to read. And it was super fascinating. And you don’t have to dig that much to stumble upon how we were much more prey than predator. There’s a book called Man the Hunted. This thick edited volume, all these chapters. And people ask us all the time, could you write a chapter for this edited volume? And the answer almost always was like … No one’s going to read this. But this book was so amazing because it was all these different chapters from all these different academics revolving around the same perspective. And so it was great to learn that. I had a research assistant who helped a lot, who did some scouring that took some of the burden off. But it was fascinating to dig into the anthropology for sure.
Andy Luttrell: We don’t have to get into the weeds on it, but rather than me bring up this stuff and then move right on, what’s the point? Why does it matter that our notion of humanity as a predator turns out to be shaky?
Kurt Gray: Yeah. Because the argument is, when people disagree about morality and the threats we face politically, when we think about the other side, we think of them as predators in a sense. They’re cold, calculating, they want to burn it all down. They’re doing things to harm us. That’s our narrative. But you can understand that all of us, including people on the other side, are just trying to do our best to protect ourselves, and they’re filled with fear as well. And maybe we can argue that their fears are not as valid as your fears because they’re listening to the wrong news sources or believe the wrong facts. Fine. We can be more or less deceived. But at the end of the day, everyone’s fearing the threats that they fear and they vote accordingly. And so I think that really matters. And if you understand that we’re all prey, more prey than predator, then you understand if you’re lashing out, if you’re being aggressive, even violent, it’s because you’re worried about threats. Like January 6th. I think very bad, no doubt. I think many of those people deserve to go to prison, and killing police is never a cool thing. But I think they were motivated by stop the threats, stop the steal, democracy’s failing. And so if you understand this psychology, then it makes sense of a lot of people’s behaviors, and in a more charitable way.
Andy Luttrell: Yes. It’s easier to have empathy for fellow prey than for fellow predators.
Kurt Gray: Yeah. Exactly. Exactly. If you’re on YouTube and you watch predation videos, you see the cat and it’s so confident. Or there’s a lion who catches a baby gazelle and it’s just batting it or playing with it just for sport. And I think that’s what we think about if we think the other side’s predators. Just harming us for sport. But we can certainly empathize more with prey, as you’re fleeing the predators and being terrified, or waiting in the darkness for the animal to … There’s this Planet Earth clip where there’s some pigs in the dark, the mama pig and the baby pig, and the wild cat just comes up and snatches the baby pig, and the mom’s like, “What’s going on?” I feel like this is more of our psychology. You’re just waiting in the darkness for something bad to happen.
Andy Luttrell: I was not aware of the predation corner of YouTube, so now I’m happy to know that that’s out there for us. Okay. So I want to talk about the … I’m sure you’ve talked about harm-based morality forever, and I don’t know that we’re going to cover a bunch of new ground. But I do think that it’s probably a compelling perspective, particularly for people who have heard other forms of how psychologists think about morality on this show. And we’ve talked about Moral Foundations Theory before. And so maybe just to set that stage, could you give just the briefest overview of the dominant viewpoints in the psychology of what morality is and how people come to it, and what your alternative take is?
Kurt Gray: So we can do a brief history lesson, which is that morality used to be grounded in understandings of harm, but objective harms. Questions of rights and responsibilities. And so I think for decades, morality was about, well, the right to avoid harm and the right to have certain freedoms, obviously. And then there was this … Well, in developmental psychology, if you ask kids, “Is it okay to punch your sister?” They’ll be like, “No.” “Is it okay to wear pajamas?” And they’re like, “Well, if the teacher says it’s so.” They distinguish between harms and more conventional wrongs.
And then there was some cross-cultural research that found, look, if you go to India, you’ll find some people who think that there are things that are wrong beyond obvious physical violence, like eating chicken after a father’s funeral. And so this anthropological work made psychology wonder, oh, I wonder if there are more bases for our moral judgments than just direct physical or emotional harm. And so that finding, and some differences between liberals and conservatives, grew into Jon Haidt’s theory called Moral Foundations Theory. Which is that our moral minds are divided into little foundations or mechanisms. The word foundation implies this deep psychological thing that’s separate. That word does a lot of lifting in the theory. It’s not just moral values. Many people have said we have different moral values, but the foundation term, I think, does a lot of heavy lifting, and maybe in a sneaky way. And so it argues that there’s these five … Used to be four, used to be three, now it’s six. But mostly it’s five of these foundations. One is physical harm, one is fairness, one is loyalty, one is authority, and one is purity, growing from the roots of the anthropological work in India.
And it’s argued that these are separate in a deep, fundamental way, and that conservatives have all five of them but liberals are stunted and disadvantaged and only have two. So there’s a chapter in The Righteous Mind, Jon Haidt’s book, that says there’s a conservative advantage when it comes to morality.
And so I think, is it true? No. But it’s also dangerous to argue that one group is more moral than the other. You can look at the political moment and the rhetoric of our divide, and I think part of it is because of this rhetoric that, well, I’m more moral than this side. It’s not healthy for our democracy. But I think more importantly as a scientist, this theory, it’s not true. And it’s not true in two ways. One, it’s not the case that only conservatives care about things like loyalty and authority and purity. It’s just the examples that Jon used to try to prove the theory. So if I ask you about a hot yoga juice cleanse, you’d be like, “That’s probably a liberal purity thing.” Cleansing the chakras. If I ask you about authority, like respecting President Biden or civil rights leaders or union leaders, you’re like, “Oh, that’s a liberal thing.” Or loyalty to labor solidarity or unions, again, that’s a liberal thing.
So I think the examples of these values are pretty cherry-picked, and it turns out that everyone cares about all these values, and there’s scientific work on this as well. But I think the bigger problem is that scientifically these values are not separate from harm. The reason people care about any of five or six or really a hundred different values, from punctuality to industriousness to self-reliance, is that all those things boil down to concerns about harm. The reason we need to respect the authority of our parents is because if we don’t, it’s going to cause harm. The reason we need to have fair elections, which falls under fairness, is about the harm that would be suffered. So all the work that I’ve done shows that it really revolves around questions of harm, creating this common currency to understand all sorts of moral differences.
Andy Luttrell: Could you give an example of what work shows that? So if you’re trying to convince someone particularly that no, these all really stem from the same thing, how can we know that that’s the case?
Kurt Gray: Yeah. The easiest thing to do … If you put a lot of stock in the anthropological work that launched moral psychology, you can just ask people. It’s actually incredibly easy to show that this works. If you go to rural India and you ask these Brahmin Indians, why is it immoral to eat chicken after your father’s funeral? They’ll say, “Well, because it harms my father.” It’s the eldest son’s job to process the father’s death pollution, and eating chicken stops that. That’s the spiritual understanding of the world. And that causes my father to basically suffer an eternal purgatory forever. And so I think if you accept as a psychologist that morality varies and that people will just tell you their moral judgments, they’ll also just tell you their judgments of harm. If you ask evangelical Christians why they’re against gay marriage, they’ll tell you that they think God’s against it, but they’ll also tell you that it’s going to cause widespread damage to society and to the family.
So you can just ask people, and moral psychologists have been doing this forever. For some reason they think it’s okay to ask people about morality but not harm. I don’t know why. But you can also do lots of social cognition studies. So you can put people under time pressure and ask them whether there’s a victim. We’ve done some studies where we present people with harmless wrongs and then they look at the faces of children, and people perceive those children as suffering more if they see those acts as wrong. Think of the children. So you indelibly link wrongness to victimhood. And I’ve got other work that just shows that victimhood is such … The feeling is so visceral. It’s a lot like morality.
So Haidt argued for a long time that, look, we should think of morality as this gut feeling. I think that’s true. I think victimhood is a gut feeling too. Feelings of harm. If you’re afraid of flying and you’re on a plane and you’re sweating and looking out the window, and I say, “Why are you afraid of flying?” And you’re like, “Well, I just feel like it’s dangerous.” I’m like, “Well, you know it’s not dangerous, because here’s some statistics.” And you’re like, “Fine. I just don’t believe them on a visceral level.” And it’s like that with harm. I tell you, these folks, they’re brother and sister, and they have consensual sex together even though it’s totally harmless, and you’re just like, “Yeah. I don’t believe you.” In part because we evolved to protect ourselves from harm and not to believe when someone says something’s harmless.
Andy Luttrell: Yeah. I think the evidence you have too, that when you put people under time pressure, they just say it’s harmful. Even if I could explain to myself with every capacity that it’s actually probably fine in that moment, my gut reaction is about harm. It just feels dangerous. You’re playing with fire there.
I have to say too, one of the things I appreciate about this perspective is it helps define morality in a way that is otherwise hard to do. I have come to moral psychology from the opinion and communication world, and I have always been … I was just thinking about this last night. If someone were to psychoanalyze me through this podcast, one of my deals would be, I don’t know how to define morality. This comes up constantly on this podcast. And you mentioned edited books doing a real service. The Atlas of Moral Psychology, which you helped edit. I was like, in this book will be the answer to what morality is. And I definitely learned a lot about moral psychology, but it still felt like no one’s defining it. Everyone’s just calling this morality. And I had always wondered about these five moral foundations: why are those the values that are moral, while the other values that Schwartz and people have been studying forever are not the moral ones? And so what’s helpful about this is I think your perspective is that if it says something about harm, that is the defining feature of a moral value. Is that how you put it?
Kurt Gray: Yeah. You intuitively see it as about harm. So the point I want to make is that it’s not me saying it’s harmful. People are like, “Well, you could say anything’s harmful.” But it’s not me. It’s people’s intuitive, visceral sense that something’s harmful. When something breaks the rules and that is causing harm to another vulnerable person, that is what morality is.
Andy Luttrell: So my provocative proposal is can we just ditch the word morality? Does it serve us to do that? If it’s all harm, could we just say, “Forget it, guys. We’re doing the psychology of intuitive and reasoned harm.” Does that sit well? Would you say, “Let’s do it. Fine.”?
Kurt Gray: Well, I think maybe. I think not quite. I think the book is a little different from the academic work. So in the academic work, I think I have a little more of a specific definition. It’s a little more complicated. So it’s not just harm, because that is such a broad word. Really it’s questions of victimization, especially intentional victimization. So we all agree that there’s a set of norms, there’s a set of rules that we all follow. Some of those norms are arbitrary. Like we wear green on St. Patrick’s Day, the fork goes on the left. And if you did something different, well, it probably wouldn’t be immoral. But if you broke a rule and it not only caused harm, but it caused harm to someone vulnerable, and that harm was intentional, like a corporation dumping toxic waste into the water supply that feeds a daycare, you’d be like, “Well, that’s immoral.” So I think if we’re going to define it, it’s not just harm, because stubbing your toe, whatever, natural disasters, but really this sense that there’s an intentional agent causing suffering.
And if you’re a philosopher, you’re like, “Well, I can think of lots of things that don’t meet that claim.” But if you’re a philosopher, also, that’s not how the mind works. Philosophy is not psychology. And so there’s a continuum of how well things match that definition. And so I’m just saying that the closer something in the world matches that prototype in our mind of someone intentionally harming a vulnerable victim, the more we think that’s immoral. The funny thing about this entire theory is that it’s in a sense so obvious. If you know how cognition works, categorization, concepts … And we’ve known this from the ’70s and ’80s. You’re like, “Yeah, of course that’s what morality is.” It’s just an instance of categorization, and the template or prototype is this victimization. But for some reason, when you come to morality … This is funny, because you’re like, what is morality? It’s like, well, what is a cat? What is freedom? What is government? What is love? All these things are just prototypes in the mind, given to us by culture and something innate, and we just match along this prototype. So it’s just like any other concept.
Andy Luttrell: And so one way to boil some of this down is that I think your point is that all morality is harm related, but not all harms are morally relevant. You agree with that?
Kurt Gray: That’s right. So if you stub your toe, that’s a harm, but it’s maybe not moral. Here’s the thing, if you stub your toe and you live with a bunch of other people, you’re probably going to be like, “Oh, my pain. Someone intentionally did this. Someone moved my dresser or left Lego out. I’m pretty pissed off about it.” But in general, I think all morality revolves around this understanding of victimization. And I think victimization, too, is seen as fundamentally moral. So you draw an equivalence between these ideas. But again, it has to be victimization of someone vulnerable. So if you’re like, “Well, those people over there, they’re not vulnerable. They don’t feel like I do.” Or they deserved it. We’re punishing them because they did something wrong. Then it’s normative, and we see them as less vulnerable because we see them as an evil perpetrator, and that strips them of their vulnerability. So I think it’s complicated, but in general, I would define morality as this perception of a moral dyad, a victimization unit.
Andy Luttrell: So I am curious a little bit about your story here, in terms of taking on what had become such a normative way of talking about morality. And so as you describe it now in hindsight, it’s like, “Well, yeah. Of course. What else would it have been?” But I am just curious about that first moment of insight where you go like, “Hey, wait a minute. Maybe this is all actually the same thing.” Do you remember where the seed of that got planted?
Kurt Gray: Yeah. These things add up over time, but I wrote a paper, published in 2010, on understandings of how we think about God after suffering. And so after a natural disaster, for instance, what do people do? Well, they don’t just say, “Well, sometimes the world’s random.” They’re like, “Well, that was God. He’s angry at us for sinning.” And I ran this analysis that found that if you look at how much suffering there is in each state, so disease burden, and just basically how terrible it is to live in each state, that’s highly correlated with religiosity, with how much people believe in God. And if you control for education, it’s still super high. One of the highest correlations I’ve found out in the world. And so I argued that what’s happening here is that, well, if people are suffering, then they perceive an agent, and God is that kind of agent, still intentional. There’s this sense that if you see suffering, then you perceive someone to blame for it. If I slip on a sidewalk, it’s not just, “Oh, it snowed. They didn’t really get a chance to shovel.” It’s, “Well, they intentionally didn’t shovel, and it’s negligent, and I’m going to sue them.”
And so the sense that, well, there’s a victim and a perpetrator in our mind, and we apply that template even when it doesn’t really fit, makes it seem like this is a template that we use to make sense of the world. And if that’s a template that we apply in all sorts of things, well then maybe it applies to harmless wrongs too. Maybe that’s how we see all of morality. And so, Jon, I knew the Moral Foundations Theory. I wasn’t against it. But I’m like, oh, I wonder if people will continue to do this, not just from the victim to the perpetrator or intentional agent, but the other way too. If you do an act and it seems wrong, maybe people might perceive some suffering there too. The other way. I call that dyadic completion. You’re completing the dyad from act to victim. And so I ran some studies on that and it came out. People just see some harm in, I don’t know, eating roadkill or consensual incest or all these wild and wonderful things that moral psychologists study.
And so I had this data and I was at the University of Maryland at the time, and Jon Haidt, who was one of my advisor’s best friends, came down to the University of Maryland to give a colloquium. And I was going out to dinner with him. We were in the car. And he says, “Oh, what are you working on?” I’m like, “Oh, I have this cool thing because I study mind perception and people perceive minds and suffering even when it’s maybe not objectively there, and people still see some harm and victimization for these harmless wrongs.” And he says, “Oh, that’s super interesting. If that’s true, I might have to amend Moral Foundations Theory a little bit at the edges of maybe purity.” I’m like, “Okay. Cool.” And he’s like, “Oh, that’s so interesting. You should suggest me as a reviewer. I’d be excited to see it.” I said, “Okay.”
I’m a little naive as a new professor, and this is one of my advisor’s best friends. And so I submit it and I suggest him as a reviewer, and he gets the paper and he writes a 12-page screed about how I’m wasting my time and his time and moral psychology’s time, and why would I challenge him, he’s already proven beyond a shadow of a doubt that it’s not about harm. I remember he attached his dissertation to that review, which I didn’t know you could do. There’s a special attachment. He’s like, “Here’s my dissertation.”
Andy Luttrell: I guess that’s what that’s for. I didn’t know that’s what that box was for.
Kurt Gray: Yeah. Exactly. It was really aggressive. I think Jon is always saying, “Look, I’m so magnanimous in public.” I think in private we sometimes act differently. And I was just floored at the reaction to this paper. And so it made me think that on one hand, maybe I should give up, but on the other hand, we all know about radicalization. And he was like, “Well, it’d only be true if this and this and this.” So it was like a roadmap of what I needed to do to show it. And so I think from that study, it was rejected four times, at every journal in social psychology. And I had a file drawer of 25 studies, but they all worked. I had an entire file drawer of significant studies, but I just had to do harder and harder tests of this hypothesis. And eventually that’s the paper that got published on, look, we still intuitively perceive harm in all these really deep, visceral ways from things that are objectively harmful or harmless, sorry. Objectively harmless. And so that’s the genesis. And then I kept on reading. And then if you read the original Moral Foundations Theory papers, you’re just like, “This is the state of the evidence? Are we for real?” Authority and loyalty are correlated .9 and they’re argued to be separate foundations. In what world would any of this methodological framework make sense?
And so I just think once you start seeing the weaknesses, the scales fall from your eyes. In a sense, it’s an entire edifice built on stories, which we know are powerful, but the data are just very weak and don’t match the strength of the claims. So I didn’t start out against it, but the data against it just piled up into this massive castle of truth.
Andy Luttrell: And it’s consistent with the frustration I had, being like, well, what makes these five special? Why these five? And I couldn’t quite dig up an answer. And it seems like maybe there wasn’t one to dig up necessarily. I think one question that people will have, knowing that so much of the moral foundations work informed political psychology, is that there seems to be this really robust finding, at least if you use the methods we have available, that there are these political differences in the moral values that people prioritize. And I was thinking of this in terms of … I do some work on moral reframing, on how we frame a communication in a way that appeals more to politically liberal or politically conservative folks. And in some ways, it’s a little surprising that this literature finds that framing a message in terms of harm makes it particularly appealing to liberals and not so much to conservatives, which doesn’t quite fit with everybody cares about harm. Similarly with these correlations. It’s true that the notion is that conservatives equally prioritize across these values, but still there’s this sense that there’s a correlation between harm values and ideology. I just wonder how you think about these political divides, which is relevant obviously to the point of the book. There just seems to be some tension based on the data that people have presented before.
Kurt Gray: Yeah. I think it’s a great point. I teach methods and teach philosophy of science, and obviously any theory has to account for all the data that came before it as well. So you can’t just have a new theory without explaining the theory or the data from the past. It’s essential. And so I appreciate that question. I think there are many answers. So one, I think that these differences are obviously true in a sense. But they reflect language and rhetoric more than they reflect deep cognition. And I think this makes sense if you think about where this entire theory came from. So Richard Shweder was Haidt’s advisor, and Haidt really just adapted Shweder’s theory, applied it, and made it foundational, made cognitive mechanisms out of it. But Shweder’s theory was really about, let’s look at conversations and discussions and rhetoric about morality, and then we can just group it into broad categories.
So you could talk about autonomy, you could talk about community, you could talk about divinity. That’s the original triad. And these themes weren’t separate. You could talk about the harm that comes from disobeying divinity. You could talk about how God wants a strong community. So there are just various themes. And so what Jon did was he took Shweder’s ideas and then applied them to the political divide. And you said, “Well, where do these come from?” So what Jon says in his Daedalus article is, “I just looked around and I just thought, what are the issues that conservatives care more about than liberals?” There’s no scientific genesis beyond his intuition. And he says, “Look, liberals, they care more about social justice. Conservatives, they seem to care more about premarital sex and obeying your pastor.” And those are the items that are used to capture Moral Foundations Theory. So it’s not that liberals necessarily care less about these foundations. We’re psychologists. We know that everything is all about how you operationalize a construct.
So what we know from the science, from lots of science, is that yes, conservatives do care more about premarital sex and religion, and liberals care more about social justice values. So what I like to say is: what’s true about Moral Foundations Theory is not new, and what’s new is not true. What’s new is that there are these foundations in the mind that are separate and eternal and innate. That’s not true. They’re not separate. They’re highly correlated. What’s true but not new, what we’ve known for a long time, is, look, conservatives care more about obeying pastors and your immortal soul and not having sex until you’re married. So those are the differences. Things we already knew.
And what about the reframing literature, or what about the social media analyses? Again, it’s just language. It’s just language. If you use the terms that I use, I’m just going to think that you’re part of my group and you’re speaking to me. So if I tell you, look, I’m really concerned about liberty, I’m all about liberty and freedom and individual freedoms, and you’re a libertarian, you’ll be like, “I like those arguments.” But you know what liberals also care about? Individual liberties when it comes to reproductive rights. If I think a woman should be able to have the personal freedom and liberty to make her decisions, progressives are like, “Yes.” So it’s not like one side doesn’t care about liberty. Or like, you know what’s bad? Slavery. You know what liberty is? Freedom from slavery.
So everyone cares about these ideas, but because we’re divided into these communities with separate rhetoric, that’s what these reframing effects tap into. And I talked to Robb Willer and he’s like, “Yeah, we never control for that. We never control for rhetoric or communities.” And so everything in Moral Foundations Theory that is true is just this superficial language and rhetoric, which is interesting, but it doesn’t tell us about underlying cognition. Jon Haidt has never run an IAT. He’s never run an AMP. He’s never done these implicit cognition studies. And if you look at implicit cognition in Moral Foundations Theory, there’s nothing there. It’s just rhetoric. And so it’s funny that Jon’s like, “Well, harm is just rhetoric.” Well, it is rhetoric, but it also reflects our deeper cognition. All the other foundations, it’s just rhetoric.
Andy Luttrell: All right. Before this turns into the moral foundations happy hour, there was one other part … And I appreciate all of this. This has been a really useful discussion. Even if this were just a conversation between you and me, I’m finding this super helpful. There’s another thing in the book that I found interesting that I just wanted to get your take on, which was the notion of concept creep. I just found this a very compelling way of thinking about why these moments of outrage are still so prevalent despite objective metrics saying we’re doing pretty well, guys, but we still just want to get so mad. And so I wonder if you could just summarize a little bit the perspective that you present, in terms of whether the real world around us should be setting us up to be so alarmed and why we are so alarmed despite it.
Kurt Gray: Yeah. So it’s a great question. And I think it’s important to say concept creep, because morality and harm is a concept. Again, going back to, well, what is morality? It’s a concept in the mind. It’s an important one to be sure, one of the most important in society today. And so there’s lots of work on how concepts evolve over time. And what you can find, what’s true, is that if you’re motivated to detect a concept and that concept becomes less prevalent, then your notion of what counts as that concept expands. It creeps. And that’s hard to understand when it’s stated abstractly, so let’s just go to a specific example that I use in the book. Food. We’re very motivated to find food. There’s a physiological drive. Food. But right now, food is everywhere. It’s abundant. You go to a grocery store, it’s like a marvelous expanse of food.
And so if I ask you, standing in the middle of a grocery store, is cat food human food? You say, “No. That’s stupid. It’s called cat food. I’m not going to eat that. I’ll just eat a bag of chips. I’ll eat a cucumber or I’ll eat some steak, whatever.” Great. The concept of human food is actually pretty narrow, around things you might eat in a grocery store. Now, let’s say food is more scarce. You’ve been living in the forest for a month, your plane crashed, you’re starving, you’re walking around and you come across, surprisingly, a tin of cat food. Now I say, “Is this food?” And you’d be like, “100%. This is human food. I’m going to delight in this delicious melange of fish heads. Yum, yum, yum, yum, yum.” And so the concept of food has crept because food has become less prevalent.
And so harm is the same way. We’re motivated to find harm because we have these prey-based minds. We’re afraid; we’re vigilant to threats. If you hear a thumping in your basement at night, you’re like, “Is that a harm? Is that a threat?” You’re walking through a dark alley, you’re like, “Is that a threat?” So we’re motivated to find threats. But as threats of actual violence and death become less prevalent, our idea of what counts as a harm or a threat expands. And I talk about this in the book. I was talking to my mom some time ago. She grew up in northern Canada, northern Ontario, and she was just casually recounting all the ways that kids would die back then. Almost like smoking a cigarette: oh, kids used to die so much. Kids these days, they don’t die.
A kid was riding in the back of a car on the highway. There were no child locks or car seats, and this kid was having a temper tantrum and just opened the door and rolled out and died. Or people put candles on their Christmas trees, lit a tree on fire, lit the house on fire, two kids died. Kids died in the lake behind the house. Kids died trapped in old fridges. It was just a lot of kids dying, so sad. Today we have seat belts, we have car seats, we have the Heimlich maneuver. We didn’t have the Heimlich maneuver before the ’70s. Crazy. We have all these things that help make us safe.
So we’re safer, but we’re still terrified of our kids being harmed, and now we project all sorts of things onto the world that seem harmful. A van drives slowly by while your kids are playing, and it’s not a lost locksmith, it’s a kidnapper. Or your kids are walking a mile to school and you’re like, “Oh, they’re going to get abducted. It’s terrible.” So we’ve expanded our notion of harm. That means we’re seeing so many more things as harmful and as immoral. So things like microaggressions, things like speech as violence. There are legitimate arguments around all those concerns, but they are probably not concerns that people had back in the Industrial Revolution, when kids were losing their hands to machinery. And so I just think it’s useful to think on the scale of human cognition to make sense of the modern moment.
Andy Luttrell: So there are definitely still people who roll their eyes at some of these moments of moral outrage. Does it seem like those are people who have more harm in their environment or call it … I’m trying to think of how to put this. I don’t know if you know what I’m getting at. If someone goes like, “Oh, you’re mad about this thing that isn’t even real when other people have bigger problems,” does this notion of concept creep not only explain a historical arc, but also an interpersonal dynamic?
Kurt Gray: Yes. I think so. I think if you’ve faced real … There’s variance. I want to say that. And if you’ve suffered substantial physical or emotional harms, then you might be primed to view things … You might have PTSD. But the evidence actually suggests that if you look at individual harms, there’s not a very strong correlation between suffering harms and seeing these things as harmful more broadly. And I do think that if you’ve suffered through war or through real trauma and you come across someone being like, “Look, this person gave me a weird look in the hallway” … And liberals and conservatives both do this. Conservative professors tell me, “Look, people are giving me dirty looks in the hallway and they’re not inviting me to parties, and I need a place to talk about this.” And I’m like, “Yo, conservative professor, it seems like you need a safe space from the microaggressions you’re facing.”
So I think people are people. It’s like my feeling in life and in the book too. Everyone has the same moral mind. But I do think that if you have suffered more, you’re maybe less focused on these more minor harms. I think with racism as well, to get at the point of popular debates, things will happen and there’ll be white folks from more affluent backgrounds who are like, “Oh, this is …” Even some tears. This terrible thing happened, or these people said this. And I think the Black folks I know in the department, who have suffered more overt, more aggressive racism, are like, “Yeah, that sucks. We keep on working, but we need to focus on not eroding rights and on the more obvious harms.” It fits your intuition here too, that if you are actively being targeted for these harms, you’re maybe less likely to focus on the minor harms because you’re experiencing the more major harms. Not to say that it’s not important to highlight the more minor harms that might lead into bigger harms, as some have argued. But I think it’s interesting how these things might come apart.
Andy Luttrell: On the other side too. So when you’re in the midst of harms, you have a particular view of what that means. And on the other side, from the safety of your observation deck, you have the luxury to find smaller and smaller harms because you need to. That’s the point. It’s just that I can’t feel safe. My whole body is telling me I’m not safe, even though I am. So I need to find a reason to justify my intuition that I’m not safe despite evidence to the contrary. And so I’m curious, are you at all optimistic about a future, a utopian future, where everyone is perfectly safe and people can just accept it and live it and love it? Or does it seem like we’re doomed to outrage?
Kurt Gray: Yeah. I think the latter. I think even as society’s been getting safer for so long, it’s not that people no longer feel outraged. In fact, perhaps you could argue even more so now. So I don’t think there’s a place where we’re all just … We’re not like bonobos. Bonobos will hang out, they’ll just have sex all day. It’s a matrilineal society. Females rule. There’s fruit everywhere. They just hang out, chill. It’s the utopia maybe that we’re talking about. But we’re half chimpanzee too. And chimpanzees live on the other side of the Congo River with gorillas. Food is scarce, they’re fighting. There’s war. They’re constantly on guard for predators and for other chimps hurting them. And so I think maybe we’re better than chimps, but we’re certainly never going to be bonobos. There’s always this threat of fear and this desire for even preventative violence, maybe to safeguard against threats that we’re worried about. And so I think we have gotten better, though. Violence is lower than it ever was. That’s great. But is there going to be a time where we’re all just like, life’s great and we’re taking our whatever, universal … What’s it? Universal payments.
Andy Luttrell: Oh, Yeah. Universal basic income.
Kurt Gray: Exactly. We have UBI and we just sit and just sing Kumbaya. Probably not.
Andy Luttrell: You said at the beginning though, to end on maybe something more … I walked us into a pessimistic way to wrap up. So I’m going to turn this around. Which is at the beginning you said that knowledge is powerful and the point of writing a book like this is to say, listen, if we can understand that this is the nature of these disputes, we can maybe grow from it. And so what’s your sense? And maybe even just to make it personal, are there ways in which you have taken these sorts of lessons to heart and have changed the way that you think or interact with other folks?
Kurt Gray: Yeah. It’s a great question. There’s a little bit of … Research is me-search. The idea that people study the things that they don’t understand or that they lack; the deficit theory. So if you’re socially anxious, you might study how to be social, things like this. And so maybe I’m generally low in outrage. I certainly have moral convictions. I donate to political causes. I vote. But it’s hard for me to get outraged. I feel like the world’s pretty safe, and I’m an optimist, so maybe I’m a little broken. But I do think learning about these things has also helped me, when I know that, look, it’s much more dangerous in other places, to think about a broader sense of moral progress on harms. The political situation in America today is not great in many ways. And so if you zoom out and you think, well, we had a civil war, and before that we had monarchies, and then we had warring city states, maybe it’s just bumps in the road.
It doesn’t mean I necessarily sleep great at night, but it does help me to maybe refocus on work or what matters with my kids. And it does help me when I interact with everyday people who I disagree with. That’s helped me a ton. This understanding, knowing how to have conversations with neighbors, coworkers, Uber drivers who might think differently politically, even in small ways, even in big ways. It helps me have those conversations, to get less angry, and to get the sense that most of us are in it together and we’re trying to make the world a better place. And so I’m not surrounded by threats. I can go to Nebraska and not be like, “Everyone here wants to kill me.” Because they don’t. Everyone there just wants to get along with their family and their job and just get by. And so I think it helps me not feel threatened by other everyday folks. And that’s really useful.
Andy Luttrell: That’s the ending I’m looking for. Thank you so much for taking the time to talk about this. The book was a fun read and I appreciate you taking the time to share it with folks listening to this.
Kurt Gray: Great. Thanks so much.
Andy Luttrell (outro): Alrighty, that’ll do it for this episode of Opinion Science. Thanks again to Kurt Gray for taking the time to talk. His new book is Outraged: Why We Fight About Morality and Politics and How to Find Common Ground, which is out now. You’ll find a link to the book and Kurt’s website on the episode webpage.
And if you think of it, it would be great to rate and review the show on your podcast purveyor of preference. You can also help the show by sending the episode to someone you think would like it or you could post it on Bluesky. Help people find the show. I love when we find new ears. Go to OpinionSciencePodcast.com for past episodes and ways to support the show.
And uh, yeah, that’s all for me. Thanks for listening, and I’ll see you next month for more Opinion Science. Buh bye.