Larisa Heiphetz studies how kids think about religion and morality. She’s an assistant professor of psychology at Columbia University where she runs the Columbia Social and Moral Cognition Lab. As a new dad, I’ve been thinking about how young kids form opinions—do they even form opinions at all? So I was curious to talk with Larisa about her work on how kids make different kinds of judgments and think about their new social worlds.
If you're interested in participating (or enrolling your young child!) in Dr. Heiphetz's research, you can sign up for studies here: https://columbiasamclab.weebly.com/childstudysign-up.html
Things we mention in this episode:
- Developmental psychology as a research tool to understand big questions (see Heiphetz, 2014)
- How we think of morals as different from facts and preferences (e.g., Heiphetz et al., 2013, 2014, 2017)
- Research on how kids evaluate “helpers” and “hinderers” (e.g., Hamlin & Van de Vondervoort, 2018)
- Psychological “essentialism” and why kids tend to think that way (Heiphetz, 2020)
Download a PDF version of this episode’s transcript.
Back in February, my wife and I got into our car on a Friday afternoon and drove away. That following Monday we arrived back at home, and by most accounts, everything was exactly the same. But in reality, it was all different and it’s going to be different forever. Say hi to Maya, my daughter. Still feels weird to say that. Yeah. Little baby girl that we share our house with. Podcasting is an audio medium, so you’re just gonna have to trust me when I say she’s really very cute.
Now, I could go on and on about my new life as a parent, but I’ll spare you the minutiae about sleep schedules and diapers that we’ve been subjecting our friends to. But I bring all of this up because there in the delivery room, Maya’s been in the world for what, like a few minutes? And she’s screaming because of course she’s screaming, and the nurse says, “She’s got a lot of opinions!” Now, listen. I know this was a tongue-in-cheek, fun thing to say. Of course, this very fresh human is screaming as a reflex, not to express a well-formed opinion about what’s going on. But it got me thinking, like I’m a guy who studies opinions. I’ve got a podcast about opinions. Can a baby have an opinion? At what age do we start sorting the good and the bad, our likes and our dislikes? Does Maya, who’s six months old already, does she make judgments about her surroundings the same way I do?
As it turns out, this really isn’t something we know much about. Opinion scientists know how to survey adults’ opinions, but they mostly haven’t considered how these things work for kids. But there is one place where developmental psychologists have looked into something like this. It’s about kids’ judgments of right and wrong. How does morality develop? How does a kid think about what it means to be a good person and what it means to be a bad person? That, we know a little bit about.
You’re listening to Opinion Science, the show about our opinions, where they come from, and how they change. I’m Andy Luttrell, and this week I talk to Larisa Heiphetz. She’s an Assistant Professor of Psychology at Columbia University, but she started as a graduate student working with Mahzarin Banaji. You can hear my conversation with Mahzarin on episode 16. And even though this should have set her down the path to traditional social psychology, as you’ll hear Larisa explain, things took a left turn when she started studying kids. Nowadays, her lab uses creative methods to understand how kids think about right and wrong, how they think about religion, and how they think about what makes a person who they are. So, to get the scoop, let’s jump right into our conversation.
You’re the first person I’ve talked to for this who I would call a developmental psychologist. And I wonder if you would call yourself that and at what point you started to. So, just in looking at your background, I don’t see like the clear moment where I go, “Ah, that’s where the developmental angle comes in.” So, I’m curious, was that always really in the back of your head, like your whole interest in psychology had this perspective baked into it? Or this was a part of it that you discovered as you went along?
More the second thing and I would say that I’m a social psychologist and a developmental psychologist, both, and in the work that our lab does, those perspectives are so integrated it’s hard for me to even distinguish them to say, “This project is social, and this project is developmental.” I think it’s all kind of going together into one big pot of science, which is part of the fun for me, is that we get to draw from these differing areas and integrate and create bridges across them.
When I came to graduate school, I was not thinking about development at all. I was not intending to do any developmental work. I was coming from a background where I had worked as a research assistant in several social psychology labs. I had applied only to social psychology programs. I was very interested in studying social psychology and the questions that I brought with me into my graduate program were about religion, and intergroup bias in particular, so how do we think about people who have different religious identities than we do? And I was proposing these questions to my fabulous graduate advisor, Mahzarin Banaji, and she said, “Oh, you know, these questions could really benefit from a developmental perspective, because it would be really cool to know how kids start to think about religion and where that comes from.”
And I said, “Well, Mahzarin, I’m not sure if you know this, but we are social psychologists, and so we study adults. That’s what we do.” And she very patiently said, “Okay, that’s fine. We have a great developmental group here, so why don’t you go to some of their brown bags, and you see what they’re doing?” And I was like, “Okay, cool. Graduate school is a place to learn new things, so I will go to these brown bags, and I will see what happens.” And then when I showed up to the brown bag, the very first one of the semester, we go around the table, and we introduce ourselves. We say what we’re interested in. And so, I said that I was very interested in studying religion, and one of the faculty members there got very excited about that and came up to me afterwards and said, “Oh, I would love to do a project together on how kids think about religion.” And I was like, “Oh, how can I resist this level of excitement and enthusiasm?”
And so, it turned out that very soon after I came to graduate school, I got involved with doing developmental research and then I figured out as I was doing it that I really loved it. It allowed me to answer really meaty theoretical questions that otherwise I wouldn’t have been able to answer, and also it was just a fun thing for me to learn and to do. I loved working with kids, it turned out. But that was never the plan. That was something that happened very fortuitously once I got into my graduate program.
Mahzarin strikes me as very good at being able to just swing in whatever direction the question pushes her. Just like in the early days with the neuroscience angle, it was like, “Oh yeah. Let’s just become neuroscientists.” And, “Oh, let’s just become developmental psychologists.” This is how we answer the cool questions. So, the kid angle was hers? Is that what I’m hearing? She kind of just lobbed it out there as a…
She was suggesting it. So, one of the things that I really appreciate about Mahzarin is that she kind of let me do my own thing. So, I came in with this interest in religion, which was not something that her lab had really been working on, and she let me go and do that stuff because I was interested in it. And that was fabulous. And she suggested that it might be cool to do with kids, and so we ended up doing a lot of kind of social cognitive developmental perspective on that, and so I think it was a collaboration between us and also the developmentalists that we worked with.
So, you mentioned that it helps to kind of think of it as a tool in the toolbox, like working with kids and comparing with adults to answer bigger questions. And I saw you have a chapter, I think early on, about using these different perspectives within the same research program. So, even before we get into the nuts and bolts of what you’ve found, what do you think the value is of having both of those perspectives at the same time when you’re approaching some question?
I think thinking about developmental psychology as a toolkit, as an addition to a methodological repertoire that someone might have, is an interesting way to think about things. And I think it offers a lot of benefits. So, one benefit is that there are things that we cannot learn about adults just by studying adults. So, as one example, some of our recent work is really interested in the role that theory of mind might play in religious cognition. So, is there a link between the way that people think about ordinary human minds and the way that they might think about supernatural minds? And that question is really difficult to answer just with adults, because adults are already pretty good at theory of mind. They already are basically at ceiling at passing a lot of tasks that ask them to think about what somebody else might be thinking, or how another person’s knowledge might differ from their own.
And so, to answer those kinds of questions, it can be really helpful to test kids as their theory of mind capacities are developing so that we can see whether that aspect of cognitive development has anything to do with how people end up thinking about God’s mind. So, that’s one thing that it can give us, is looking at kind of the role that some of these developmental factors play in whatever is happening among adults.
But of course, kids are not just small adults. Kids are also interesting and fabulous to learn about on their own, right? So, for instance, the part of my work that has to do with religious preferences and biases based on religious group membership, I think that’s important to study with kids also because we might think that some of these biases might be more malleable in childhood, and so if we’re thinking about how to increase egalitarianism, or reduce bias, it might be helpful to find an early point at which those biases start to emerge, and then intervene at that point as opposed to waiting until adulthood when some of these preferences might be more firmly established and more difficult to change. So, if we intervene early, it can have a translational benefit in terms of reducing some of these social preferences or biases that we might observe.
So, I hadn’t put it in quite this way, because we’ve done some work where we’re thinking of preferences and attitudes in other domains, and being like, “Well, to understand it, we have to catch them when they form.” Right? Because those are where the pieces of the puzzle are coming together. Once they’re well formed, it’s hard to really pull apart like what is actually driving what. But I hadn’t thought about going all the way back to like early life experiences as where those things form, right? And for those kinds of social preferences, those… That’s where you have to go, right? That’s where you would have to go to see sort of the seeds of those start to emerge.
For sure. And we were surprised, or at least I was surprised, in terms of the religious cognition aspect of my work, by which aspects of that we already saw in relatively young children. So, when I first started doing these studies, I wasn’t sure what ages to interview in the studies because I didn’t know when kids would start to be thinking about religion or have that be meaningful for them. But it turned out that even the youngest kids that we were working with already had an understanding of what religion was. They knew what words like God meant. The way they think about God is probably very different from how an adult thinks about God, but they knew what we were talking about if we asked them questions about God, and they could reply basically right away with whatever they thought about religion, or what they thought about people who shared their religious beliefs or didn’t share their religious beliefs.
And so, we’ve done these studies with kids in preschool and early elementary school and already they were having a really easy time with these questions, which initially I wasn’t necessarily expecting. I was starting there because I was like, “Okay, let me find the age at which kids wouldn’t necessarily be able to answer these questions and then go up until I find the age at which they are able to answer those questions,” but it turned out that those youngest ages that we started working with, kids were already quite eager to tell us what they thought about some of these really complex topics.
It reminds me a little bit of the minimal group paradigm work, where Tajfel was like, “Let’s start at where obviously prejudice won’t exist. We’ll just make these arbitrary groups and obviously we’re not gonna see prejudice.” And then you go, “Oh, God. It’s there, also? It’s there already?” We go back to what we think is gonna be a totally blank nothing spot and we’re already seeing seeds of it. So, I could sort of see two biases that you might have in your assumptions. One, which I think is what you described: I would have assumed it was emptier earlier, but we’re surprised at actually how much is there. Versus another bias, where you might assume that people are bursting forth into the world already ready to go with all these things, and you go, “Oh, God. It actually takes a while to put these pieces together.”
I think as the father of a two-and-a-half-month-old right now that’s where I’m at, where I’m like, “You can’t even do this? I thought you just knew how to sleep. I didn’t know you had to learn how to do that.” Would you say as a general bias you have assumptions of the former, where you’re expecting there to be an emptier palette and you’re surprised to see how early these kinds of things that you’re interested in are emerging? Religion being one and other things being other dimensions?
I think that was my own trajectory, in part because of the training that I got was really focused on when is the earliest stage at which something might emerge. Oh, the field has been thinking it’s kind of later in development, but it turns out that even infants can do some of these things. This wasn’t with religion, but this was like work on when infants can understand how the physical world works, for instance. Some of Liz Spelke’s earlier projects. And so, I was very steeped in this tradition of, “Oh, the field thought that this emerged quite late in development,” but actually it looks like babies can also do it. But I think in developmental psychology in general, both of those threads are present. So, there’s some work in which researchers are saying, “We thought this would emerge later, but look, it’s actually happening now.” And there’s other work where the opposite pattern is happening.
So, I think your view is also represented in the field.
Got it. Okay, good. So, I’m not alone then. That’s good to hear. So, if we sort of bridge beyond religious cognition into moral cognition more generally, and the development of that, that’s sort of where I was thinking we could spend most of the time. And I see a handful of threads in the work that you’ve done, but one starting place is to ask the question: in what ways is moral cognition different from other kinds of cognition? And is that apparent in kids in the same way that it’s apparent for adults?
So, that’s a question that developmental psychologists, cognitive and social psychologists have spent a long time on, so it’s a really meaty question. And I think there’s evidence that people distinguish moral beliefs from other kinds of beliefs in a few different ways. So, one strand of this tradition is work on social domain theory, which talks about the difference that kids might see between moral beliefs and something like a conventional belief. So, a moral belief would be about something that’s kind of right or wrong for everybody regardless of what other people think about it, regardless of where you are in the world. Kids might say that particular thing is wrong to do.
So, some prototypical examples might be hitting another person or stealing from another person, whereas social conventions might also make claims about what we are and are not supposed to do, but they’re more malleable in a sense. So, an example of a convention would be something like we raise our hands when we talk in school, or we don’t wear our pajamas to go to work, things like this. So, there kids might say, “You shouldn’t talk in class without raising your hand,” but if your teacher says it’s okay, then you should do that. It’s totally fine. Or if you lived in a country where it was totally fine to talk in class without raising your hand, then it would be an acceptable behavior.
So, social domain theory has done a lot of really good work showing that kids as young as preschool are already making this distinction, so they don’t think just like anything that I’m not supposed to do is the same. They distinguish between morals and conventions.
Another way that people make this distinction, and a place where my lab has spent some time, is looking at moral beliefs and factual beliefs. So, we got interested in this question because it seemed to us that in one sense, kids might think that morals are very similar to facts. If I think that something is wrong, potentially I think that’s an objectively correct way to think about that topic, everybody else should share that view with me, and if they don’t, they must be wrong. But there are also some types of moral beliefs where we see disagreement across people in a society. So, for instance, if I ask a question like, “Is it okay to tell someone a small lie to help them feel better? Or is it okay to hurt one person in order to save five people?” Those are clearly moral issues. They’re about ways that we can harm other people, potentially touching on issues of justice and other things that are very important to people’s concepts of morality, but nevertheless they elicit disagreement, right?
If you and I went out and asked 100 people whether it’s okay to tell someone a small lie to help them feel better, we would probably get some differences in what people are saying to us. And so, some of our work showed that preschoolers and adults are saying that only one person can be right when we asked them about factual disagreements and also about widely shared moral disagreements, like whether it’s okay to hurt people for no reason, but they’re more likely to say that both people could be right when we ask about controversial moral beliefs, like whether it’s okay to hurt one person in order to save five people, and they’re also likely to say that both people could be right when we ask about preferences, like people are disagreeing about which color is the prettiest one or something like that.
And so, that suggests to us that people might be distinguishing morals from other kinds of mental states, but also potentially distinguishing different kinds of moral beliefs from each other.
So, I get a little hung up on the what makes something moral versus not, and the developmental area seems like where a lot of this discussion has happened. But I’m curious just to get your take. So, there is a tendency sometimes to say that what makes moral beliefs and views so impactful is that they are treated like facts, and there’s sort of this tendency to claim like, “This is what morality means,” right? It’s these sorts of things that are truly subjective but feel objective. But as you point out, they can go either way, and I have a project that we’re working on now where we show that individuals differ in whether they see morality as having these sorts of objective components versus not.
And so, does that help resolve what makes something moral versus not? Do we know a priori the kinds of moral stuff, I guess it’s the controversial versus shared beliefs, but what is it that makes something controversial versus shared? And why does one feel more objective than the other?
Yeah. I think that’s a great question and the controversial versus shared is definitely a distinction that my lab has focused on. I’m not sure that I think it’s the only distinction that matters. We focused on it in part because there have been claims in the literature that moral beliefs are viewed as fact like in general, and it seemed to us that that was definitely true of some moral beliefs, but potentially there was this other class of moral beliefs that might be less well described in that way. I think one of the things that matters for controversial moral beliefs is that there are moral claims that could be made on either side of that point.
So, for instance, in terms of telling someone a small lie to help them feel better, we think that telling the truth is a good thing to do from a moral standpoint usually. But we also think that being kind to others is a morally righteous thing to do. And so, when the truth would hurt someone’s feelings, you’re kind of pitting two moral claims against each other. And same with hurting one person in order to save five people. We think that hurting somebody is in general a bad thing to do, but we think that saving people is in general a good thing to do, and so we’re creating a dilemma for people and there’s a rich moral psychology literature on dilemmas, so we basically just translated some of those for things that children could understand and engage with us about. And we were curious what would happen when we presented even very young participants with disagreements about which of those claims should be more powerful.
So, it’s in the case of these sorts of controversial moral beliefs that people can go, “Oh, I see that there are two sides to this.” I may still think that my side is the right one, right? Push me and I might say no, the facts are on my side. However, I sort of understand that someone else could view this as being moral on the other side. And so, we’re still in the domain of morality. I might still think I’m right, but I at least can acknowledge that someone else would approach this as thinking they’re right. Whereas when it’s so counter-normative to have the other view, you just go, “Something is off. Something’s off here, right?” This is the right answer, right? How could anyone… I just can’t see how anyone could even approach this reasonably from the other direction.
What you’re saying makes sense to me. It’s also similar to a project that we did on religion, which I know you’re trying to move past to the morality part of the work, but we did a similar method where we presented kids and adults with religious disagreements, and we asked if both people could be right or only one person could be right, as compared with facts and preferences. And we found that religion was falling in between, which was also surprising to us. My journey of studying religion has just been full of surprises.
So, a priori, we were thinking maybe religion is exactly like a fact, because some religions at least seem like they’re making factual claims. They’ll say like, “This is what our religion teaches, and other religions are wrong,” which is typically how we talk about facts, as well. But there’s also a reason to think that religion might be more like a preference in people’s minds because at least in the United States, we live in a religiously pluralistic society, where people disagree about religious views the same way potentially that they’re disagreeing about preferences and not so much in the way that they disagree about facts, right?
We don’t have a lot of disagreement about whether or not George Washington was the first president of the United States, for instance. And so, we were expecting that religion would look either like a fact or like a preference, and we wanted to know which way it would be, but it actually was hanging out in an intermediate position between facts and preferences, which we weren’t expecting.
And so, we did some follow-up work and the reason that I was bringing up the study is because I think that follow-up study might speak to your question about morality as well, because our thought was maybe people are saying religion has some things in common with a factual belief and some things in common with a preference. And so, what we hypothesized and found is that participants would think religion was somewhat like a factual belief because both of those are ostensibly claims about how the universe functions, right? So, if I say germs are very small, I’m telling you something about germs. I’m trying to communicate about the external world. And similarly, if I say that God can do miracles, I’m trying to communicate with you about an agent that exists in the world outside of my own mind.
But similarly, religion might have some things in common with a preference because when I tell you a preference that I have, I’m also communicating something about myself. Not so much about the object of the preference, but something that makes me different from other people. So, if I say blue is the prettiest color, I haven’t really told you anything objectively true about colors, but I have told you something about me that distinguishes me from other people. And similarly, if I say God can do miracles, again, because we live in a religiously diverse society, I have also told you something about myself that distinguishes me from other people.
And so, we haven’t done exactly the same method with controversial moral beliefs, but it’s my intuition that perhaps they’re working in a similar way, where when somebody says a controversial moral belief that they have, they’re telling you something about themselves that distinguishes them at least from some other people in the community. And so, that might be drawing them a little bit more towards the preference side than the widely shared moral beliefs where you’re not really distinguishing yourself from most other people.
Yeah. That makes sense. And it sort of suggests that seeing something as a fact is less about saying it’s capital-T True and more about saying, “Oh, this is a kind of information that’s different from other kinds of information,” maybe. Again, I’m just throwing that out there, but when you say both are sort of statements about the nature of the world, you go, “Well, yeah, we’re sort of taught that there’s truth to those kinds of statements, but really maybe at the seed of it is just that that’s the kind of statement you’re making, and that’s why those morals feel like facts.” Because they’re supposed to be about the same kind of thing, maybe.
And so, is there a sense developmentally when this kind of stuff comes online? Is there a trajectory we can trace where kids either distinguish morality from preferences or distinguish morality from other kinds of things, or even just have the ability to reason about morality?
It’s a good question. All the work that I was telling you about with the widely shared and controversial moral disagreements, we found that effect with preschoolers, where preschoolers were saying only one person could be right more often for the widely shared than for the controversial moral beliefs. There’s some work suggesting that it’s a gradual process, which I think is true for almost anything that we could talk about in development, right? I think it’s very tricky to say that at age five years, two months, 13 days, this thing is going to happen. Be on the lookout. But, so for instance, Deanna Kuhn has done some really excellent work about the extent to which people accept divergent viewpoints versus the extent to which they think there’s only one right way to think about things, and she shows that there are different milestones at which people increase their acceptance of divergent viewpoints depending on whether you’re asking them about a fact, or a moral belief, or a preference. So, in her work it comes online first for preferences and then later for moral views.
In terms of when morality emerges in general, I think that’s an even bigger question with potentially some debate or controversy about that in developmental psychology, so there’s some infant researchers that have found evidence that even infants can distinguish between prosocial and antisocial actions, for instance, right? So, Kiley Hamlin’s work where infants prefer helpers to hinderers, those are very young babies and they’re already showing the capacity to distinguish between helping someone’s goal and hindering someone’s goal even that early in development, which some have interpreted as people having an innate moral core that allows them to distinguish at least some forms of prosocial versus antisocial behaviors, though I think it’s also the case that we build on that core with social experiences, right?
So, over the course of development, we learn a lot about morality both from explicit instructions, like our parents correcting us when we don’t share, or we hit our siblings, or whatever it is that we’re going to do when we are toddlers and just learning the moral norms around us, and also through our interactions with friends, and teachers, and the people beyond our families. I think it’s a gradual process in that way.
That’s not a bad opportunity to shift to talk about essentialism and how kids think about those helpers and hinderers and people who are good versus bad. So, there’s a lot of directions that you could go in thinking about essentialism, but just to start, could you sort of… I feel like that word, essentialism, is not super widely used, so if you could start by just sort of giving us a sense of what it means to think about someone or some group in an essentialist way, and then we can sort of push that in the direction of what that means when we apply it to moral reasoning.
Absolutely. So, the idea of psychological essentialism is that there might be some kind of internal hidden essence that’s responsible for external observable properties. So, for instance, we can think about this in terms of species, which I think is a really intuitive example for a lot of people. So, if I think about what makes a tiger a tiger, there are some things that tigers might have in common, like they have stripes, they’re big, they live in particular places in the world, et cetera. So, those are some external features that I can observe. But most people have the intuitive theory that there’s something inside of a tiger that makes them a tiger, so if I meet an albino tiger that looks different from other tigers that I’ve encountered, I will probably still think it’s a tiger. Or if I meet a tiger that lives in a zoo, so doesn’t live in the place in the world where I think tigers are from, I would still think that that’s a tiger. So, I have some kind of idea in my mind that there’s something inside of the tiger that’s responsible for its tigerness.
And so, that might be the tiger essence, right? So, it’s sometimes been called a placeholder notion because people don’t always know what exactly they think it is, especially if they’re kids, right? Kids are not going to be using words like genes, or chromosomes, or having a whole bunch of biological information about exactly what makes a tiger a tiger, but they do have an intuition that there’s something inside of that tiger that causes it to have the external features that we associate with tigers.
So, psychological essentialism is kind of the application of that way of thinking to features that human beings have. The idea is not that essentialism is always an accurate way of thinking about humans. In fact, it’s often an inaccurate way of thinking about humans. But people, especially kids, often apply this way of thinking to humans anyway. So, if we think about psychological characteristics, for instance, kids might say if someone is shy, there’s something inside of them that gives rise to that shyness. And essences are often perceived to be internal, like I was saying, but also biological and unchanging, and so kids might say if someone is shy now, they’ll always be shy because there’s something inside of them that made them be shy when they were born, and they just stay that way.
That’s a case where essentialism isn’t necessarily the best way to think about shyness, because as social psychologists we know people can vary in terms of how shy they are across different contexts. People might become more or less shy across development. There’s not necessarily anything genetic or biological that makes people shy. And yet people and kids in particular seem to have this intuition that some human characteristics come from an internal essence.
So, I was sort of thinking that there might be two approaches to thinking about that, one of which, which I’m now doubting, is that it’s super categorical, that you go like, “Oh, there are clear boundaries, right?” And that’s what it means to be essentialist. Like, your tiger example is making me think of when I teach social categories in class. I’m always thinking of the ribbons across the upper edge of classrooms in schools all over the place that are just like, “This is a dog. This is a cat. This is a tree.” We’re just teaching kids like there’s a clear little box. For all the stuff you see in the world that has feathers, those are birds. Put them in that box. The stuff that has fur, those are whatever.
And so, part of it is those lines are nice and straight, and not blurry, but… So, I’m wondering how much of that is the case for essentialism, whereas really at the heart of it is more what you said at the end, which is that stability, right? That’s just who you are. You are this. You’ll always be this. Sort of a very fixed mindset about the qualities that make you you. Do you think both of those are pieces of essentialism or is it really just the stability part?
Yes, and I appreciate you bringing up the categories, because you’re right. I wasn’t talking about that in my answer, but I think that’s also an incredibly important piece of essentialism. So, the way that I think about essentialism is a broad umbrella category. Maybe I shouldn’t have used the word category there because we were just using the word category to refer to something else, but if you picture like a big umbrella with some components underneath it. So, the idea that the characteristic is biological is one component. The idea that it’s unchanging is another component. The idea that there’s a really clear distinction in terms of categories of people who belong in that category and people who don’t is another component. Recently, some scholars working at the intersection of psychology and philosophy have suggested that another component might be the perception that that characteristic is central to identity, and so if it changes, the person would become an entirely different person. I think that’s another component that we can put underneath that umbrella.
And so, I would say that essentialism has these different aspects and it’s possible for them to all be integrated. So, when we think about a tiger, for instance, maybe we think there is a clear category distinction, and a tiger can’t become a bird just because the tiger wants to. And there’s something biological inside the tiger that makes it that way. In other cases, I think those different components don’t necessarily cohere so tightly together. We can be high on one and potentially lower on another one. In that case, I would say the person is high on some components of essentialism and not other components.
I don’t know that everybody would agree with that. Some people might say essentialism is one thing, so you’re either high on essentialism or low on essentialism, like what is this that you’re talking about with all these different components? But at least my view is that there are these different facets to it.
And so, developmentally, in reading your work I get the sense that the sort of gist message is that essentialism tends to be stronger among young kids and the debate is does it fade away, does it take different forms as kids grow into adults. So, is that the case? And what are some ways that we could know that?
Yeah. So, that’s been my reading of the literature, is that in general, in a large chunk of the studies in this area, essentialism is higher in childhood and then there’s a decrease in explicit reports of essentialism into adulthood. That’s not always the case in every study, but that’s the general pattern that I’ve seen across most of the papers that I’ve read in this area.
One way that people have explained that is to posit that essentialism is a predisposition that people bring with them when they’re looking at the world around them. And as we get older, we learn to override that predisposition with information that we’ve learned about how social factors might influence people, for instance. And so, as an adult, I might say, “Oh, well, I have learned that people can act shy in some situations and not in other situations, and so I’m using that knowledge to override the initial essentialist intuitions that I might have had from childhood.”
There’s some evidence for that hypothesis from speeded reaction-time tasks with adults. So, for instance, people have done this with I think gender essentialism, where adults explicitly report less gender essentialism than kids, but if you have them do a speeded reaction-time task, their levels of essentialism look more like kids’ than they do on the slow, reflective task. And the idea there is that if I’m answering very quickly, what the experimenter has done is take away my ability to override, right? They’ve made it impossible for me to think really carefully about what it is that I want to say and that makes my early intuitions come out a little bit more.
And so, that’s one kind of line of thinking that people have had about that developmental change.
So, as an adult, my knee-jerk reaction might be once a shy kid, always a shy kid, but I could go, “Well, now come on, I know that… People change, and they grow, and they evolve in different ways.” But that first reaction is still there, right? The reactions that kids will just come out and say are maybe the reactions that adults still have but can override. And so, I think maybe you started to say this, but is there a good explanation for why kids have it so strongly to begin with? There’s an explanation of why it might fade away, but why is it such a potent thing from the beginning?
One way that people have thought about that is this predisposition might be helpful in some ways. So, going back to the tiger example again, if my understanding of what makes a tiger a tiger is that they look a particular way, they’re a particular color, and then I’m out in the wild and I meet an albino tiger, and I say to myself, “Oh, that is a white animal. Not at all scary. Let me go up and pet it and make friends with it,” I am going to die. And so, there’s something kind of evolutionarily beneficial in terms of thinking there’s something inside of that tiger that makes it dangerous to me, so even though it’s an albino tiger, let me get far away from here.
And so, if you want to take an evolutionary perspective, one way to think about these predispositions is that in some cases, they could save your life, though in other cases we might be overapplying them, right? So, I was making the point earlier that kids often show high levels of essentialism about human characteristics, which is not necessarily beneficial in these same kinds of ways but might be a way that that predisposition is applying kind of broadly, more broadly than potentially might be helpful.
So, if we push this into the moral domain. I wouldn’t call “tiger” a moral category, but the same kind of thinking applies, and maybe even more so, in the moral domain. So, what would it mean to think in a moral essentialist way?
So, this is a place where we’ve done some work showing again kids have somewhat stronger moral essentialism than do adults. When we ask about essentialism, we ask them to think about somebody who’s a good person or a bad person and why they are that way. Is there something inside of them that gives rise to that moral characteristic? Is that thing biological? Is that thing unchanging over time? If somebody changed their moral characteristics, would that change them as a person overall? Would they become a completely different person? Those are some of the ways that we found to get at people’s potential essentialism about moral characteristics and there, in addition to the developmental difference that I’ve been highlighting, we also see some optimism where people and kids in particular are reporting somewhat more essentialism about good characteristics than bad characteristics.
So, they’ll say if someone is a good person now, they’ve always been good, they’ll always be a good person in the future. You can’t really change that. But if someone is a bad person now, they can improve over time. And I think that’s consistent with some other developmental work on children’s optimism in general. Not just about moral characteristics, but about something like having the ability to draw really well, or other characteristics that kids find pretty valuable. They also think that those characteristics improve over time, and so I think some of our moral findings are consistent with just children’s optimism.
It’s a very nice finding in general. That feels good. And is that optimism bias particular to kids? Because it strikes me as unusual given how adults think about moral character, where there’s often this valence asymmetry, where it… Once I find out one reason why you’ve done something immoral in the past, that’s a quick route to me changing my impression of you. Whereas if I find out that you helped someone cross the street, I go, “Yeah, that was nice, but that doesn’t really do anything for my impression of you,” in the same way that this immorality stuff does.
Yeah. It does seem that optimism decreases to some extent across development. So, for instance, in a recent line of work that my graduate student, James Dunlea, led, we were interested in moral judgments that people might be making of people who have committed pretty severe transgressions. So, we’ve looked at the example of incarceration, because that’s a place where people think the transgression must have been pretty severe. We’ve also done some prior work suggesting that people, and kids in particular, think that you end up incarcerated because you’re a bad person, and so we were wondering how, if at all, these views might change with additional information.
And so, in this particular project that I’m thinking of, we told kids about somebody who did something that ended up with them going to prison, and then we asked kids what is that person going to be like after they get back from prison, versus in a control condition, they went on a business trip. What are they going to be like after they get back from the business trip? And so, kids were telling us that going to prison would make someone a better person, right? So, even though in this early work they were telling us bad people go to prison, they also seem to have the idea that prison is redemptive in some kind of way. It makes you a better person once you leave. And adults definitely did not agree with that. They did not view prison as making people better.
And for kids, this view was pretty generalized. So, we also did some follow-up work where we asked them about someone who went to time out, so think about kind of the opposite end of severity of punishment, because we wanted to see whether severity was driving the effect that we got, and we also found that kids think that time out also makes you a better person. And so, it seems like this optimism is coming out in some part in terms of thinking that even if you’ve done something wrong, or even if kids think that you are a bad person, they also have this idea that punishment and potentially other stuff, as well, can make you a better person.
Yeah. Again. Very nice. And is that adaptive, too? I can sort of see how it provides a sense of resiliency to kids, in the same way that those growth mindsets are posited to have these implications for kids, who aren’t gonna get super down on themselves for not doing something perfectly, knowing that they could get better. But as adults it does help to be like, “Yeah, but let’s stay away from the tiger. Let’s stay away from the bad actors,” in that protective role. But in a sort of developmental role, you kind of want that striving for betterment and the belief that that’s possible.
Yes. I think that’s totally plausible and an interesting way to think about those data. I think also there is probably a role for social experience. So, my sense is that as adults, we’ve potentially seen people do harmful or wrong things over and over again. And maybe that optimism got beaten out of us because we have been disappointed by what we’ve seen in the world around us, for instance, so that’s not something… That’s not a mechanism that we’ve tested. It’s just wild speculation on my part. But it is potentially a complementary idea to what you just said about why that particular developmental difference may be happening.
And that sort of striving for betterment raises a question about the consequences, right? Does it matter? Why would it matter that kids think in a moral essentialist way? And just to clarify, is it that they think… Is it essential only for positives or just more so for positives?
The second thing. So, kids show more moral essentialism than do adults for both good and bad qualities if I’m remembering our data correctly, but kids also show more essentialism about goodness than about badness.
And so, why would we care, right? Another way to cast that is like, “Okay, that’s fine.” What is it that we could learn from that, sort of in terms of its implications?
Yes, so there are two projects that I want to tell you about in response to that question. So, I mentioned my graduate student. James has a particular interest in looking at the connection between morality and the legal system, and so one of the other projects that we’ve been working on together is thinking about if we provide essentialist or non-essentialist explanations for contact that someone might have with the legal system, does that matter? And it turns out that it does. So, if we give somebody an essentialist explanation, like this person is in prison because he’s a bad person, we see very negative attitudes towards that individual and we can make those attitudes a little bit better by providing a behavioral explanation, like the person went to prison because they did something wrong, and we can make those attitudes better still by providing a societal explanation, like the person went to prison because he didn’t have very much money when he was growing up.
But the irony of that is that kids and adults in our studies haven’t provided those kinds of societal explanations on their own. When we just ask them what’s prison, they might say prison is a place where bad people go. They might give some behavioral answers, like prison is a place where you go when you’ve done something wrong, but they don’t really spontaneously say prison is a place where you go when you can’t afford to pay a good lawyer to get you out of prison or when you live in a racist society that overpolices your neighborhood or anything like that.
So, when we give people those kinds of explanations, that reduces their negativity towards people who have had contact with the legal system, but they don’t seem to be kind of generating or agreeing with those explanations on their own, and so that suggests that kind of teaching people about the kinds of explanations that actually are prevalent in social science research on mass incarceration might be shifting around their attitudes.
The other project that immediately came to my mind when you asked this question is one not on the legal system specifically but about morality generally, right? Because we’re using the legal system as an example context in which some of these things are very powerful and a context where I think our work can have some translational implications, but we’re also a moral psychology lab, not a criminal justice lab in particular, and so we’re interested in how these processes play out with moral processes broadly construed. And so, in another line of work in our lab, we told participants about two people who had the same moral characteristic but one we explained in essentialist terms and the other in non-essentialist terms.
So, for instance, here are two people who are both equally bad, but this person is bad because something inside of him makes him that way. Something in his brain makes him bad, so that’s the essentialist explanation. And this other person is bad because he learned to be that way from other people, which is a very non-essentialist explanation. And then we have our participants distribute resources between those two characters. And kids actually distributed equally, so their sharing didn’t seem to be so much influenced by how we described the moral characteristic that somebody had, I think because kids… So, we were working with elementary schoolers in this project. So, they have a very strong tendency to very much like fair sharing and equal distributions.
And so, we also saw that in our work, but adults’ sharing really differed depending on how we explained that negative moral characteristic, where they were giving more resources to somebody whose badness was attributed to social learning and consequently fewer resources than we would expect by chance to somebody who was equally bad but because of something about his essence, something deep inside of him.
And that’s an interesting finding to me because you know, we’ve been talking about how adults are less likely to endorse essentialism, and yet their behaviors are even more strongly influenced by essentialism than the behaviors of kids, and so I think that potentially shows that as adults, our behaviors can be shaped by messages even if we wouldn’t generate those messages ourselves. Even if we don’t necessarily agree with what the messages are saying. They might still be impacting how we treat other people.
And even as a sort of broader conclusion from that, some of the message I get is that people have an openness to give another shot, to give resources, when they see this kind of thing as changeable, right? When you sort of break out of that essentialist way of thinking, that can be a precursor to sort of an openness to giving opportunity. An openness to trying to push things further. Believing that it’s possible. In both of those projects.
That’s interesting. It makes me think of some work that Oriel FeldmanHall has done with adults, and some of my developmental colleagues, I think Felix Warneken’s lab, for instance, has done with kids, showing that people will punish if their options are punish or do nothing, but also people, at least in some contexts, prefer to compensate the victim and not necessarily inflict severe punishment on the person who transgressed, which to my mind is a form of restorative justice, right? Like trying to make the world right for the person who suffered might be an even bigger priority in some cases than punishing the person who made the world be wrong in the first place.
And I wonder, what you just said makes me wonder whether that propensity towards restorative justice can be moved around by shifting how people are thinking about essences, which I think would be a really interesting thing to discover.
As a wrap up, just thinking about doing research with kids, as someone who’s only ever done research with adults, it strikes me as just very challenging to one, translate a sophisticated question into a methodology that you could actually learn anything from when fielding it with kids, and then two, just the logistics of doing research where kids are the participants. And so, as a way of thinking about the challenges, I wonder if you could think back to the first project you did, where all of a sudden you were now doing a project with developmental psychologists. Was there any part of that that you were particularly surprised about in terms of like what it takes to test a question well when you’re using kids as your participants?
You’re right that it is tricky and challenging. I think one of the things that I’ve had to work on since the beginning of trying to work with kids is thinking about a complex topic, like religion, or morality, or the legal system, and then asking about that topic in a way that would make sense to kids. And a lot of that for me is trying to intuit, like put myself in the mind of a five-year-old. What might that be like for me? And then also looking at prior research and seeing what kinds of tasks have worked there.
So, for instance, when we want to study prosocial behaviors or generosity in kids, a lot of the times we give them some stickers and we ask them to share stickers, because it turns out that kids love stickers, which is something that I learned just by working with kids and seeing what they responded to positively in the lab, and then also looking at prior literature, which also uses sticker tasks a lot of the time, which is not a task that we would use with adults, because adults don’t care about stickers. So, that’s an example of like adapting a methodology to something that kids would care about.
Another thing that I’ve had to think a lot about is recruitment. So, with adult studies, we have a participant pool. We have MTurk and Prolific. We have all kinds of ways of recruiting that make it relatively straightforward. To work with kids, it’s more of a challenge. And so, I’ve had the opportunity to form some really excellent partnerships with schools, and museums, and places that have been a lot of fun for me to work with, and I think have also made the work better because then I’m hearing from teachers, and principals, and museum staff members, and families who just happen to be in the museum, about their input on the project. And I feel like my developmental work has allowed me to do that in a way that I might not have been able to do if I had just stuck with working with adults.
And then I think the last thing that I have to say about that is that I thought that working with kids was very hard in some of these ways, and also very rewarding and fun, and then when I came to Columbia and started my faculty position, one of the things that my lab started doing is working with kids of incarcerated parents, and when we started doing that, then all of the other things that we’ve been talking about started to seem quite straightforward because working with kids of incarcerated parents was a whole new thing to learn for me, right? So, we developed new partnerships with organizations that we hadn’t worked with before. We read new literatures. We talked with other kinds of organizations and people that we hadn’t done outreach to previously.
And that has also been so informative to me in terms of perspectives of people who are not particularly included in psychological research, and so I felt really honored and grateful that they felt like talking with me and felt like working with me. But that has been a unique experience even compared with the other developmental studies that I had done previously.
After a year of not being able to collect data from people individually, folks like me are able to write a survey and still do a bunch of research with online survey responses. My guess is just mailing a survey to a four-year-old doesn’t pan out all that well. Have you been able to continue some of these lines of research over the last year through some innovative ways of getting into kids’ heads?
Yeah, so we’ve been doing a lot of our research online on Zoom, so we’ve translated some of the things that we do in person into like a 10 or 15-minute game that kids can play with us online. And so, even with some of these sticker tasks, we have kids move around stickers to different envelopes on their screen, and as they’re doing that, we hold up actual envelopes and put in physical stickers to match what kids are doing on the screen to make it more realistic and communicate like, “This is an actual consequential decision that you’re making.”
And my lab has been great. Like the students and the postdocs that I’ve been working with did a really amazing job of pivoting to online data collection, which was very new for us with kids. We had been doing online data collection with adults for a long time, but none of us had really used Zoom for developmental data collection before, and so I really appreciated the energy and curiosity about how to do this that my lab members brought, which has really allowed us to keep doing science. So, it’s been harder during the pandemic for everybody in a whole bunch of ways, and difficulty collecting data doesn’t rank among the top 100 difficulties that people have experienced during this pandemic, but we have been able to do it.
And I think one value that we’ve observed is that because we’re doing things on Zoom, we’re not just limited to kids who live in a particular geographical area. So, somebody who lives in Montana, or Kansas, or wherever in the United States, where they’re not going to come to Columbia to do a study for 20 minutes and then go back home, they can now sign up for our studies and just do them from their own home online, and so I’ve really appreciated the opportunity to talk with families who live in more geographically diverse places than we were able to cover before.
Nice. Well, very cool. Well, I’ve taken enough of your time. I just wanted to say thanks for sharing all this stuff and as always, I’ll keep an eye out for what’s coming out next. Thanks so much.
All right, that’ll do it for another episode of Opinion Science. Thank you so much to Larisa Heiphetz for taking the time to talk about this stuff. You can check out the show notes for a link to her lab’s website and links to some of the research that we talked about. Also, if you are a child or you know one, Larisa shared a link that you can use to sign up for studies that her lab is running. You can find that in the show notes along with a full transcript of this episode.
As for me, I gotta get back to this dad business, but in the meantime, make sure you’re subscribed to Opinion Science on your favorite podcast app, and please rate and review the show. Tell the world about it. Share it on social media. Seeing this podcast grow is almost as cool as seeing my kid grow. Almost. We’ll wrap up this week’s show with an excerpt from my daughter Maya’s new treatise on morality. She’s only six months old, but I think she makes some good points.
Okay, see you in a couple weeks for more Opinion Science. Bye-bye!