Greg Maio studies human values. He’s a professor of psychology at the University of Bath in England.* He also co-wrote the popular textbook, The Psychology of Attitudes and Attitude Change, and in 2016, his own book came out called The Psychology of Human Values. In our conversation, he shares his work on what values are and why they’re so important. We talk about when values guide our choices (and when they don’t), how people have a hard time articulating their values, and how we can design interventions around the values that people can come together on.
*In my intro for this episode, I mistakenly said the University of Bath was in Wales. It is actually in England. Greg was at Cardiff University for years before recently moving to Bath, and Cardiff is in Wales. Sorry for the mix-up!
Some things we mention in this episode:
- What are values? (See this useful online article by Dr. Maio.)
- How do values work and how has the science on this evolved? (see Maio, 2010)
- How values can act as “truisms,” which makes them hard to defend (Maio & Olson, 1998; Bernard, Maio, & Olson, 2003, 2005)
- How values can contribute to unity on otherwise divisive issues (e.g., Wolf, Haddock, Manstead, & Maio, 2020)
When the going gets tough, it’s worth asking yourself: What are your core values? In the grand scheme, are you living in line with your personal values or not? And although one person might prioritize some values more than another person, it’s also the case that a whole group of people in a culture might be guided by somewhat different values than people in another culture. But what are values, exactly? Like, I like pizza, but that’s not one of my core values, right? I mean, maybe, but probably not. And okay, I value authenticity, and that does feel like a value, but what does it actually mean that I value authenticity, and why do I value authenticity? Sure, it means that I’ll come to admire someone more when they present themselves authentically, but why do I care, and why should you?
You’re listening to Opinion Science, the show about our opinions, where they come from, and how they change. I’m Andy Luttrell, and this week I get to talk to Dr. Greg Maio. He’s a Psychology Professor at the University of Bath in Wales, and although he’s been in the U.K. for many years, his accent quickly reveals his Canadian origins. He cowrote the popular textbook, The Psychology of Attitudes and Attitude Change, and in 2016, his own book came out, called The Psychology of Human Values. I’ve always liked Greg’s work, and on the few occasions I’ve been able to talk to him in person, I’ve always enjoyed it, so this was a nice chance to catch up with him and get his take on the study of human values.
I thought one of the things you mentioned, it’s a topic you love, and I… For some reason, I had this impression that you had come to looking at values later in your research career, like you started out looking at just kind of attitudes and persuasion types of background stuff, and then years later you sort of stumbled into values, but I was looking at your CV and I was like, “Oh, values as a word appears pretty early in the list of things that you’ve had out there.” So, I’m curious, what is the thing, just to get us rolling, what is it about values that you find useful to study?
In the very early days, I’d say I was interested in them as a kind of attitude. So, that’s why there’s this overlap between my interest in attitudes and values. So, I saw them as being a more abstract attitude, and something where we’re not just gonna look at, for example, attitudes toward brushing your teeth every day, or shaving with this Gillette razor, or anything like that. But you know, let’s look at something really broad and important, like what’s your attitude towards helping others? What’s your attitude towards freedom? And these abstract concepts were intriguing to me because people mention them a lot in their arguments for why they favor particular policies, or why they take certain stands on different issues, and I was therefore fascinated in why people feel the way they do about these abstract concepts. But then as I got to know the field more, I realized that researchers in this area weren’t calling these abstract ideals attitudes. They were calling them values.
And I started to discover reasons why they were being distinguished that way, and part of it has to do with the kind of subjective standpoint we take when we talk about values as opposed to attitudes, because when we talk about values, we think about ideals that are important to us. Whereas when we talk about attitudes, they’re generally things that we like or dislike. They’re things that we feel like we might approach or avoid, and they’re generally a little bit more concrete, which might dispose them to that kind of orientation.
But anyway, yes, these two did coincide for me from an early stage, and I guess to add one last thing to why I got interested in them is because I think I was fascinated by two things. One is the difficulties people have in talking about why they feel the way they do about these values, just that they were the backstops for arguments, but rarely could people go further. I was fascinated by that. But at the same time, I was fascinated by their potential to unite people, to provide bases for agreement and consistency, so that if you said, “You know, I think that we shouldn’t discriminate because equality is important,” you would get agreement with that from most individuals, and you would at least get agreement that equality is important. You might get disagreement about how to achieve equality, or where exactly it’s important, or how, but you would get agreement on that abstract principle in many instances.
So, the combination of having this low cognitive arsenal to explain our values, this difficulty in reasoning about them, but at the same time having high consensus about them, to me was really interesting. Because they could be a really useful way for bridging divides in fractious disputes.
Would you be able to give… So, you kind of gave a definition of values in the context of that, but if we’re to kind of zero in, what do we exactly mean when we say that this is a value? And maybe give an example or two of what values are and how those, like you said, are different from opinions or other kinds of beliefs.
So, one example of a value might be helpfulness. Another example might be equality, like I just mentioned, or freedom. National security. Forgiveness. Wealth. These are some of the ideals in a common measure of values that was developed several decades ago by Shalom Schwartz, which in turn builds on prior measures like one developed by Milton Rokeach. The common element of all these values is that they’re abstract ideals, and they emerged from research projects establishing that people generally feel these ideals are important guiding principles in their lives.
So, what I call values here are abstract ideals that people think could be important guiding principles to them. So, they influence their attitudes, their behaviors, their perceptions, and generally when we measure them, we ask people to consider the abstract ideal. There are a few different methods, but the most common method is to say, for example: equality. Equal opportunity for all. How important is it as a guiding principle in your life? And people respond on a scale from opposed to my values to extremely important. Now, the one point I should make about that is that most values researchers aren’t interested solely in where people place themselves on that scale, because we know people tend to agree. What’s more interesting is the relative differences in agreement, and also how these differences occur across many different values, values that reflect different motivations and different orientations.
So, for instance, maybe I should get to this later, but there’s a model that describes how values differ in the motivations that they reflect and express, and that can lead you to predict different patterns in how people endorse them.
So, historically how have we considered values? Like what… If we look at the evolution, because values you’d think are like… In some ways, it’s hard for me to think of because they are so abstract, and that is probably what has made them slippery as a construct in the science over time, so I’m just curious kind of if you look historically, how have people tried to assess values and study them, and how from that would you say you’ve added to that? To say that, “Actually, we should be studying values in this way.”
So, one of the earliest measures by Gordon Allport measured values in a way that was more oriented towards looking at their relevance to career goals or life goals. And actually, I think that’s where some of the earlier interest stemmed, was to be thinking about aspirations or life aspirations and how would you decide what career to pursue? And people would advise young people and students to say, “Well, okay. Maybe look to your values to see what’s really gonna matter to you.”
But that method, like the one I just mentioned that people like Shalom Schwartz used, still relies on this abstract rating and saying, “Okay, here’s an ideal, like helpfulness. How important is it to you?” And then we do some math to calculate an average across several values that are similar and then we give people a score. Now, where I think my interest started is to say, “Okay, that’s really interesting. It’s interesting people agree with them. It’s also interesting that people tend to have a hard time explaining why they agree with them. And it’s also interesting, these relative differences in values do predict things.” So, people who favor protecting the environment more on average will do more environmentally friendly things. On average is an important qualifier there, because there will always be specific exceptions.
So, these values are interesting in how they relate to attitudes and behavior, but to me the next step is to look more closely at that how, because the abstraction in values, as you pointed out, means that there’s a swath of different behaviors and attitudes that might be relevant to them. So, with this protecting the environment example, many people care about it, think it’s important to protect the environment, but when you look at associations between protecting the environment and any specific behavior, like whether or not people have bought low energy light bulbs, generally their correlation will be weak to null in many cases. You won’t necessarily see a strong association between that value and that very specific behavior.
Where the patterns emerge is when you aggregate across a lot of different attitudes and behaviors that are relevant to the value. So, how much do people travel less by car, perhaps use less energy, or turn off light bulbs, et cetera. Generally, in past research we’ve seen you have to do some aggregation in order to have a chance for these values to show any kind of predictive validity. But to me, that begs another question. It begs a question about how do all these specific exemplars, these specific instances, combine to create, if you will, the abstract ideal that we have in our heads that we’re so passionately attached to? That people will defend vigorously, and it appears go to war over, you know?
What is that aggregate? Because for instance, in one study we did a couple years ago, we asked people about examples they associate with protecting the environment in several different countries. And when we asked people in Brazil, conserving water came up quite frequently. When you ask people in England, that doesn’t come up at all. And actually, we have plenty of water in England and Wales. So, I think perhaps that’s part of it. But the flipside was among our sample in Brazil, recycling did not come up, but in the U.K. it was number one. But of course, that’s partly due to the culture, and the context, and the availability and promotion of recycling may differ between the two places.
So, that’s important to us because if we looked at associations between the importance in protecting the environment and recycling behavior in both places, we should see different associations merely because people are not making the same links. So, I think this is a long way of answering your question to say that where we’re trying to go in the next steps is to get from the abstract to the concrete. Because it’s interesting that people agree on the abstract, but the next step is to start looking at where the cracks appear in the more concrete elements.
Yeah. It’s interesting that values are so important. Like you’re saying, they’re kind of… They’re what you prioritize. They’re the things you say that should guide the way that I conduct my life. And yet, there are plenty of times where you’d see someone make a choice and you’d go, “That doesn’t look like it’s necessarily in line with those values that you’re so interested in.” And so, like you said, some of it could just be like, “Oh, I just don’t even see this as value relevant.” And I often think of this in consumer behavior. You’d be like, “I have certain values about what I think is important, but when I’m standing there in the aisle looking at this product, it just doesn’t register the value to me.”
If I care about sustainability and environmentalism, and for whatever reason I’m sitting in the store and I go, “Oh, I need a toothbrush? That one looks great.” And it just doesn’t appear to me like that would be at all related to environmentalism. Do we have any sense about when our big, abstract values come to bear on these specific, concrete choices that people make?
I think we’re getting there. But I think learning history is part of the answer, so to use a concrete example, you mentioned I think toothbrush. You might have seen recently there are more wooden toothbrushes available, and I’d say for a long time, many people wouldn’t have thought of sustainability in buying toothbrushes so much, unless it occurred to them that there’s a lot of plastic in these things, and perhaps they might. But as soon as the new product comes out, a wooden one, and it starts to appear in that context, then I think the potential there is for people to start making the association and to start thinking about that product in a way that can now vary in sustainability, just like happened with coffee and other products that you might see on the shelves. And I think it’s those types of exposures that then give the chance for people to make those connections.
Now, whether or not they actually do the more sustainable purchase or not is also influenced by other values, potentially, and that’s the other thing that a lot of values researchers like to focus on, which is that it’s rarely just one value that has relevance in a domain. It’s there are other values that will compete. Values like wealth, or saving money, for instance. If the wooden toothbrush costs you five times as much, you might start to think twice about it, so there are interesting questions about the tradeoffs we make between different values and there’s been great research over decades on tradeoffs, and especially in areas like political contexts, where people sometimes will reason about the tradeoffs you have to make between values, but other times will ignore them.
And I think in our individual judgments and behavior, it’s interesting how on an everyday basis we sometimes will think about the tradeoffs, but maybe other times not.
And presumably it has something to do with, like you said, it’s these are values that lots of people agree with, but the real critical moment, or real critical point, is do you prioritize this one over others? And so, presumably that could help resolve some of those tradeoffs. They go, “Well, all these values are important, but this is the one that I’m all about.” And I wonder about both that and do we have a good sense of where those come from? How malleable are those? Are we sort of… Do we burst into the world with our value priorities all set in order? Or in what ways are we adaptive to different environments, or cultures, or messages from people telling us what we should prioritize?
Well, I think the evidence that’s looked at differences, for example twin studies on attitudes, and also some related evidence on values, aligns with the broad view that most of the differences we have are socially and culturally determined. But it’s not to say there isn’t some physical or biological element, something that we are born with that does influence these. In attitudes research, it’s been suggested that some of this happens through sheer, simple physical differences between people, which by themselves carry significance in the cultures that we’re in.
So, for instance, if you’re taller and more athletic, you might get treated differently than if you’re not so tall and athletic, and that in turn may lead to attitudinal differences. But it’s not that there was a gene that coded you for a particular type of conservatism. And I think we could probably expect some similarities with regard to values, in that there may be a component like that. Certainly, there are gender differences in values, so that partly, of course, aligns with that, but also there are big social and cultural differences in how different genders are treated.
So, that again aligns with the idea that there’s probably a little bit of something that we’re born with, but there’s a hell of a lot that is part of our environment.
In terms of messages, I mean, can we deliberately try to manipulate values? Do we have evidence of this? Because sometimes when we think about values, we talk about like framing your message to appeal to a value that someone already has, right? So, the best you can do is hope that you can sort of take something that’s just set in stone in a person’s head and make your message resonate with it. But can we? Can we get people to be like, “No, we should prioritize this.” And I think of it because you say it’s people have a hard time explaining their values. It also makes sense to me that it would be difficult to sort of present an argument for a value.
Yeah. I mean, that’s a really interesting question, and I was fascinated by this issue earlier on in my career too. There was some work by William McGuire in the 1960s that I was really inspired by, which looked at the idea of truisms: that there are cultural truisms out there, like penicillin has been a boon to humankind, or, back to tooth brushing, it’s a good idea to brush your teeth every day. Now, he found that when you look at beliefs like that, people often couldn’t explain or justify why they believed them, yet they’d agree very strongly. But if you gave them a brief paragraph or two of arguments against these beliefs, they could reverse position very fast. As soon as they found something that said the American Dental Association believes that your tooth enamel is damaged by brushing twice daily, or something like that, beliefs changed.
Now, that was many years ago. I doubt the same truisms exist today. But I was curious whether that’s the case with values because people are so passionately attached to them that yeah, maybe they can’t think of reasons why a particular value is important, like really, really good arguments, let’s say. But they might stay attached to them nonetheless, really emotionally attached to these ideas, so even if you can’t think of exactly why it’s important, you’re not gonna give it up. Not gonna change views on it simply because you get some even two-page message that gives eloquent attacks against the value.
Anyway, we tried that at one point a while back, early 2000s, and what I did was a very kind of skeptical study where I just thought, “Okay, I’m gonna try to write some paragraphs attacking the value of equality.” I thought people are gonna be really attached to that, so let’s just see what we can come up with, and took some examples about issues that were playing out at the time about hiring and quotas, et cetera, and started writing a couple pages on that. Gave them to participants and had measures of the value of equality after this message in a group that would get it, or there was a control group that didn’t.
And I was really surprised that this message changed ratings of the value of equality a great deal. I mean, I can’t remember the exact numbers. It was a while ago, but the scale movement was something like 30 to 50%, so it wasn’t just that they nudged over a bit from the message. They moved quite a bit after this message. You know, I thought if they were just nudged over slightly it would have been interesting, but we’d really need to look at it a lot more, but the fact that there was such movement was reminiscent to some degree of what McGuire had found with these other truisms.
And so, we then looked at whether you could give people arguments like this, rebut the arguments, or encourage them to rebut the arguments, and then see similar effects of countervalue attacks later, and what we found was in line with what McGuire had found, too. If you gave people an opportunity to just think and contemplate about the values beforehand, to think about counterattacks or think about arguments in support of them, then later on when they see these new messages attacking the value, they’re less convinced. It’s like they convince themselves they can do it, they can defend the value, and there’s no movement then at that point.
But other research, not in our own lab, but other research later I think took things a significant step further. For example, finding that if you did change people’s values on a particular topic, you would change a variety of attitudes related to that value. So, in fact, finding that you could change the attitudes more easily by attacking the value than by trying to address the attitude itself directly. And you know, I think that’s a remarkable finding, as well.
Then there’s other research that’s looked at values longitudinally and found that people when they enter new life contexts tend to change their values over time. So, for instance, entering a new field of study, or new job role, and that’s interesting because unlike the other studies I’ve just mentioned, which were looking at short-term value change, this was looking at longer-term value change, and in theory you need repeated exposure to different experiences that challenge your values to really get that persistent change over time. Whereas, in the other studies, these were more possibly short-term effects of the messages that were given. We don’t know, because there weren’t follow ups from those messages.
In these kinds of settings, is it… It’s probably hard to know, but how much of it seems to be that I pull away from a value that I had, versus I’ve just found a new value that in this moment seems to take priority, right? And so, I find it really compelling the way that we look at values in this relative sense. Because when we look at opinions, we just go, “Do you like this or do you not like this?” And we less often are like, “Okay. Well, of all these wonderful things, which do you like the most?” Because we go, for those specific kinds of opinions, that perspective doesn’t make as much sense. But for values it does, where you go, “All of these are… They’re on my shelf of values that I think are important, but I have one of those values that sits on my desk,” let’s say. And I go, “This is the one that is really the one that’s gonna guide my choices.”
And so, it’s not that I’m getting rid of values on my shelf, it’s just that I’m swapping them out for what’s on my desk. So, I’m curious if we have any sense of which of those is driving these changes.
I think you summarized it perfectly. I think that what is happening is we are shifting our priorities. It’s not that we abandon other values, it’s that we lean more towards some and then we deprioritize others. And part of the reason why I say that is we had one experiment where we did an intervention that changed values, and we found that when you increase the importance of a particular set of values, then values that were motivationally opposed to those values went down in importance. So, it was as though people are making tradeoffs in their mind. They’re shifting the priorities and the balance between the values.
But the other, the motivationally opposing values didn’t become unimportant. They just became a little bit less important. So, to give you an example, if we gave people an intervention that made it more important to them to be helpful and forgiving of others, so in other words, to transcend their own needs to think about others’ well being, then values that involved putting the self first, like achievement and wealth tended to decrease in importance following that intervention, suggesting there is a bit of a seesaw that people are maintaining.
But you raised the question of which is happening more, you know, to what extent is it really about sort of shifting one set up and simultaneously, equally to another degree decreasing others. I don’t know the answer to that question. I don’t know that we do. And in fact, I’d like to say it’s probably more whichever value is in focus, but I vaguely remember that trends we saw suggested that it might even happen more on the opposing side. So, it might be, but it was very weak, so I don’t know I think is the answer. I think it would be an interesting question to pursue further.
Earlier on, you mentioned these motivations. So, what does that mean? My impression is a lot of the psychology of values has been super focused on how these values clump together and how they’re organized in some sort of way. And motivation is one of the tools that has been used to understand that, so could you just sort of walk through what we mean by motivations that are relevant to values and what that information gives us?
Yeah, so I’d say the most influential perspective on this was described by Shalom Schwartz in studies that have been carried out in over 80 different nations around the world, and his model describes four broad categories of motivation that are expressed by values. One of them is like I just mentioned, about transcending your own personal interest to consider the welfare of others. Another set of motivations is to pursue your own interests in whatever directions you deem appropriate. Then there’s another set of values, which involve following your own intellectual and emotional interests in uncertain directions. These are what he calls openness values, and then there are values that have to do with protecting the status quo, what he labels conservation values. And these would involve concepts like national security, or politeness, or anything that kind of subordinates one’s own needs to the social order.
Now, he arranges all of these values in a circle to show their competing motivational orientations, and those are examples of the motivations described in that model, which has been well supported in many nations around the world. There are other perspectives. I think there are other motives that you could potentially see as being aligned with other values that don’t necessarily fall in that model. For instance, there’s debate from time to time about, well, what would you call a value? Is this model comprehensive? Does it include every value that’s under the sun? So, his model has a core, initial list of 56 values, which has since been reduced to a smaller number, but there’s always debate about whether or not there could be more.
And I think the abstraction within values makes it quite plausible that yeah, we haven’t uncovered all the values that exist, and I’d say that actually we could use other models within personality to tap other motivations, as well.
So, the idea of motivations, as you describe them, they kind of sound like super values. Because it strikes me as very similar to the difference between values and the attitudes that they inspire. So, what’s to stop us from saying, “Well, really it’s self-transcendence… That’s the value.” And sure, it’s associated with these other things that are in pursuit of that, but you’d kind of give exactly the same analysis to, “Oh, these are the range of preferences that are in pursuit of this value.”
I think for theoretical precision, you probably need to use terms like you just said. Super values, or higher order values, because yeah, I think on one level, you’re right. It’s kind of like any cognitive concept we have. There are different levels of abstraction and we can decide what’s the natural level, what’s the point where we want to call this a value versus something else? And I think the idea of [inaudible 0:29:56.7] values is more abstract even than helpfulness, which is abstract by itself, or than forgiveness, and these concepts like helpfulness and forgiveness are more abstract than other abstract terms that could be subsumed within them, like family forgiveness, for instance, versus forgiveness of other nations that might have been aggressing against one’s own.
So, these issues with abstraction, that’s part of why I feel this is so important, that we understand these underlying linkages, because it’s really your interest as a researcher or a practitioner that determines what level you’re looking at, but the history of the research on values has kind of settled on this particular level I was just describing, at least as a tool for trying to understand more about how they influence us and our opinions.
You’ve noted that values tend to be pretty tied to emotion. Is that right?
And what kind of evidence do we have that that is the case?
Well, in some data we collected way back, when we asked people to list their reasons for their values, and we asked them to rate feelings that they associated with values, we could see really strong relations between the feelings that people gave and the value importance. But very little connection between their reasons and the value importance when we were recording them. There were connections, but they weren’t nearly as strong. That’s some data, anyway, but to be frank, I don’t think a lot of researchers have really examined that particular research question closely, because I think we’ve been more or less taking it for granted, or just assuming it, because you look around you at the world outside and you see people argue over these abstract concepts pretty passionately, and you start to feel like there is an emotional connection. The only trick is how much does that outweigh the rational one? Or not outweigh, but how much does it dominate?
And that’s a tricky question to answer, so our… The little kind of correlational design I just described is one way of maybe answering that question, but in some ways it’s like comparing apples and oranges, and it’s difficult to come up with a metric to say, “Oh, yeah, yeah. Now we can definitely say it’s more emotional rather than more cognitive.” To some extent, there are designs, like we could look at whether or not people are more influenced by emotional messages than fairly rationally directed messages at their values, and that might help to answer the question, but yeah, I think that still needs to be done.
I get the same impression from the work on moralized opinions, which is obviously what I do a lot of work in, and there’s often this like, “Oh, of course they’re super emotional.” But you go, “But we’ve never really… Why do they have to be?” It just kind of feels like it’s this way of talking about moral values where you go, “And of course, they’re very emotional.” But like you say, you go, “Well, we don’t have a great true…” That could be totally right, but we don’t have a perfect sense of how right it is, and I think part of it is driven by the like, “Well, if they’re not rational, then they must be emotional.”
Because like you said, when you ask people to explain themselves, they have a hard time. And I’m curious why, right? So, from a moral psychology standpoint, some of that explanation is, well, if people can’t explain themselves, therefore they’re using just gut intuitive emotions to think about their values. But is there any other way that we could think about, like how could we get these things that are so important that we agree on that drive important choices that we try to make, that when push comes to shove we really can’t explain why they’re important?
Yeah. I mean, I am reminded… You mentioned moral decisions and judgment, and there's work on moral dumbfounding by Jonathan Haidt, which was very similar in some respects to the work we'd done on values functioning as truisms, where people lack arguments for them. Except that a big difference is that he cleverly designed these scenarios which removed, one by one, the logical bases for people to object to various moral wrongs. And he was still left with evidence that people were emotionally objecting to these issues. There were various moral behaviors that he looked at which would make people feel disgust, and people couldn't necessarily explain why they would feel disgust at this behavior. I think one example is having sex with a dead chicken. You know, you could sort of cleverly take out all the wrongs, the utilitarian negative consequences of this, and still show that feeling of disgust.
What's different in the values context is that we're looking at these positively-judged ideals, and we're not asking people to justify those positive emotions when you suddenly give them all these anti-reasons, which we could potentially do. Maybe that is something we need, but we haven't done anything analogous to the dumbfounding. The method so far has been asking people to just justify these values, and what we find them saying is things like, "Well, it's interesting. I never thought about it before. I think helpfulness is important because we should help others. God says we should help others and that's really important to me, and I think we should help others because that's what I learned as I was growing up, and that's my family." All these things are psychologically very meaningful to people. I'm not taking away from the psychological substance of that. It's just that when you look at those reasons, you still want to know more. You still want to say, "Okay, so why? Why did your family say that helpfulness is important?"
And eventually, people do give reasons, and they come up with things like, “Oh, because if I help, maybe others will help me.” But then sometimes they go a step further and say, “Oh, I wonder then why… Where’s the chicken or the egg? Which came first? Do they help me, or do I help them?” And people can tie themselves up in knots a bit because it seems evident that it’s not been given much prior thought.
It also seems like a reason why it's tricky to have conversations with people who disagree, because they're prioritizing different values, right? And you go, "Explain to me why this is more about equality than it is about this other thing." Right? And you go, "Yeah, I can't. I just know that it's important that we pursue equality, and this is the way to do it." And someone else goes, "Yeah, but tell me why we should pursue equality." And in some ways, I wonder if you run into problems where, if you ask for an explanation of a value, the person goes, "Everybody has this value, right?" That you look like a terrible person for wanting a rational reason to value something that seems like everyone should value.
Yeah. I mean, in our experiments, people are just giving these responses as individuals, anonymously, so we're trying to do it in a way where there's no judgment of them for this. But of course they could bring that judgment internally, so they internalize the feelings of their society and think, "Well, I'd be a bad person if I felt otherwise." And actually, we've done some data collection where we look at that. We ask people how much they ideally would hold a value versus how much they think they ought to hold that value, and there is a high correspondence between those two judgments. But in the West, the ideal tends to dominate. So, we find people really overwhelmingly feel like what they're reporting is their personal ideal, and a little bit less of what's the obligation or standard that their society suggests they should hold.
But that said, I think it’s difficult for us to disentangle the two. We’re brought up in particular environments where we think of different values as having a priority, but then I think the example you’re raising is, well, if a particular issue becomes salient, one value might really jump to the fore. And we think, “Well, of course this is about equality. Or now this is about freedom.” And to me, that’s where a lot of these debates come in, and that’s where I’m really interested in this potential for values to not… to bridge across the divide.
Now, that may sound odd, because we’re just talking about how people come up with different values for different sides of an issue, but what would be nice is if people could take a further step back, ignore the issue, and just say, “Now, in the abstract, do the opposing parties on this topic agree about this ideal?” So, do they both agree that equality is important in an abstract way? Do they both agree freedom is important or whatever values are presumably being contested? And if they do, as we found most often they do, like when we looked at various groups around the world in recent data, we found huge amounts of value similarities between groups.
There are differences, but they pale in comparison to the amount of similarity. So, it makes you wonder: if you point out that similarity before the issue comes up, before people make assumptions about the huge ways in which they differ in their values, then you can put the values in a context. You can say, "Okay, so you agree about these values in general, but when it comes to this particular issue, what is it now that changes? Is it that this value is now more or less important to you in the abstract, or is it that you just don't think it's relevant here in this situation?" And you know, I think that's so true today. But it raises the question of what happens if you can point out the overlap, the fact that, as we found, in the abstract people do share values more than they think they do.
So, I’m curious whether or not once you point that out, can people start to bridge the divides a little bit by looking a little bit more at the specifics?
Yeah. By way of maybe wrapping up, I was thinking that this transitions into a recent paper… I mean, a very recent one, on thinking about values in the context of the COVID-19 pandemic. It sort of concludes with this notion that values are well placed to be a tool of intervention for a health crisis like this. So, I'm curious if you could explain a little bit what the goals of that article were, and then, more generally, what are the possibilities of using values as a tool in interventions, in ways that will hopefully, fingers crossed, accomplish some good in the world?
So, I mean, that article was really about trying to extend that idea about value similarities into this context. To say that in this context, even though people may have very different views about lockdowns, about responses to the pandemic, there will often still be a lot of similarity in the basic values that people hold, and the most important values to most people are very self-transcending in nature. So, they're very helpful and forgiving in their orientation. And that's true even for people you might expect to feel otherwise. So, people who are imagined as being on the very right end of the spectrum are usually seen as only being concerned about themselves, whereas actually, when you look at values measures, those very self-transcending values will be very high on their list, roughly equal to the other values perhaps, but not necessarily lower in most cases.
We have to work very hard to find people who don't put those values near the top of the list. So, with that kind of finding in mind, it makes us believe that when it comes to trying to get collective action in favor of some common interest, like stemming a pandemic, reminding people of these shared values can have value. Sorry to reuse the word. But reminding them of this can actually give a basis for agreement, and a starting point for people to say, "Yeah, okay, we all agree with these ideals. Okay, now if we can all agree on those ideals, the next step is: what do you agree is the mechanism to do this?" You know, what is the most helpful thing? What is the thing that gives the greatest broad benefit?
Personally, I feel like that's where a lot of the issues start to arise and the divisions start to come about, because we don't tackle that point. We start to make assumptions that people's values are very different, and we grow the division between them rather than saying, "Okay, maybe they're the same, but we are disagreeing, perhaps legitimately, on different means, or we have different facts in our understanding and that's what's causing the confusion." But I think sometimes, in our emotional response to the other side, we think, "Oh, geez. I so much disagree with them," and then we project these opposing values when it's maybe not to our advantage to do that. What we should be doing is coming to grips with the alternative beliefs that people have and how they're playing a role.
Well, great. Well, fingers crossed that that’ll turn into something. I just want to say thanks for taking the time to talk about your work and yeah, always fun to talk to you.
Boy, such a pleasure to be able to talk about this work, so thanks very much.
All right, that’ll do it for another episode of Opinion Science. Thank you so much for listening in. As always, check out the show notes for links to the things we talked about along with a full transcript. Subscribe to Opinion Science anywhere you get podcasts and follow the show on social media @OpinionSciPod. Check out OpinionSciencePodcast.com for everything else you could possibly want in this world. And hey, if you’re enjoying the show, learning new things about public opinion and communication, and you’re willing to spend a few seconds to help the show, leaving a nice review on your favorite podcast platform is not only nice to see for me, but also helps other people find us. Okie doke, that’s it for now. I’ll see you in a couple weeks for more Opinion Science. Bye-bye!