Allison Earl studies the challenges of getting health information to people who need it. Her research looks at how people react defensively to information about their health and how to deliver that information more effectively. In this episode, she shares her research on people’s tendency to avoid threatening health information and how simple meditation exercises can make people more open to these kinds of messages.
Some things that come up in this episode:
- Targeting health information to specific groups makes people feel judged (Derricks & Earl, 2019)
- Rejecting information about stigmatized health issues (Earl, Nisson, & Albarracín, 2015)
- Race disparities in attention to HIV-prevention information (Earl et al., 2016)
- Trigger warnings as a way to get people ready for emotional information (Gainsburg & Earl, 2018)
- Meditation makes people more open to threatening health information (Takahashi & Earl, 2020)
Transcript
Download a PDF version of this episode’s transcript.
Andy Luttrell:
I hate going to the dentist and it’s not for the typical reasons. I’m not afraid of drilling, and scraping, and clamping. Okay, I’m a little afraid of those things, but I mostly hate it because it kills my self-esteem. It’s never good news. It’s never, “Wow! Great job! Your mouth is perfect! Go home, celebrate!” Instead, it’s, “Well, I found this little problem that’s gonna be annoying to fix, and in the meantime, just don’t smile at all. And don’t forget to floss.” They’re never impressed with my flossing. But you could argue that it’s all in my best interest. I should want tiptop oral hygiene. But I can’t shake the feeling that my dentist thinks I’m an idiot and for that reason, I delay my checkups, which isn’t good for anyone. All of this is a central problem for health communicators. How do you convey health information without appearing to pass judgment? Do we really have to hold people’s hands to get them to brush their teeth? Yeah, we kind of do.
You’re listening to Opinion Science, the show about our opinions, where they come from, and how they change. I’m Andy Luttrell, and this week I talked to Allison Earl. She’s an associate professor of psychology at the University of Michigan, and her research is all about when people close their eyes to health information, and what we can do to deliver that information in a less threatening way.
You know, in looking at the kind of work that you’ve done, my takeaway is that you’re… So, I’m gonna tell you what you’re interested in and you’re gonna tell me if I’m right.
Allison Earl:
I love it.
Andy Luttrell:
Which is like we often think about communication as conveying information, but in a health context, that tends to be kind of fraught for all sorts of reasons. And so, the general theme of your work is like all the ways that can go wrong is one way that you can interpret that. Is that fair?
Allison Earl:
In some sense that is actually a fair assessment of the kind of work that I do. I would say that my research takes the approach of focusing on how receivers, or how audiences, take in messages. So, a lot of the persuasion literature kind of focuses on what the source would do, right? So, like it’s our expectations as researchers, as scientists, as interventionists, about what we think will work, so it’s from the perspective of the source side, and there hasn’t been as much work on the kind of like thinking about the recipient side. So, thinking about how people actually engage with these messages once they land. So, all of the ways that it can go wrong, all of the ways that people engage in defensive processing of messages that they don’t agree with, because almost always that’s what health messages are, right? You’re getting the message because you’re doing something that you shouldn’t be doing, that you need to change, that you should be doing more of, and so those messages almost by default end up in the kind of threatening message category.
It’s extremely rare that you get the message of, “A-plus! Good job, you! You’re doing everything exactly as well as you possibly could for your health.” Usually, there’s something in that message that indicates you need to do something differently.
Andy Luttrell:
So, are a lot of these messages, like you said, I’m getting it because someone decided I need this information? Versus another kind of messaging would just be like the news did a story about health and wellness and everybody’s getting this message. Are both of those equivalent, do you think? Or one more common than the other that you’ve looked at?
Allison Earl:
So, people tend to make the attribution that they are receiving health messages because of something about themselves. So, whether or not that is the source’s intention, whether or not it’s a mass media campaign where everyone is expected to learn about caffeine consumption, the attribution that people make when they get those messages is that it is something about them. It does feel personal. It does feel about the self. So, I’m receiving this message because this source thinks it’s relevant for me, not necessarily because I think it’s relevant for me.
Andy Luttrell:
Is that something that you think distinguishes health information from other kinds of messages? Because there are all sorts of messages that don’t feel personal in quite that way. Is there something special about health communication that makes it feel this way?
Allison Earl:
Andy, I don’t have any data for this, this is a complete hunch, but I feel like the answer to that is yes. And so, part of the reason I have moved my work into this kind of intersection between attitude and identity is because I think that overlap is more pronounced for health messages, because health messages are about the self. So, the kinds of threats that come up in response to health messages, they have overlap with other kinds of messages, like political messages, or other kinds of persuasive messages that are trying to get you to change your behavior, but there is something… My intuition is that there is something that is more about the self for the kind of health messages, because the consequences of not engaging with the recommendations are about the self. It’s your health, it’s your well-being, so it elicits a lot more fear and a lot more shame than other kinds of messages.
Andy Luttrell:
So, ultimately, what’s the problem? You go, “Well, deal with it.” What are the implications of the fact that health messaging evokes those kinds of reactions in people?
Allison Earl:
I think that, as you identified earlier, there are just a lot more landmines. So, it’s just a lot more complicated to try to thread the needle of how to get the message to recipients if all of these kind of defensive processes are engaged. So, it’s just more hurdles that we have to overcome to get the message out to audiences who could benefit from receiving it.
Andy Luttrell:
Is there an example from your own work that you could give to sort of illustrate what that means? What those dangers are and what those threats might be?
Allison Earl:
Oh my gosh. There’s so many examples. Let me think. Let me think if… So, a lot of my work focuses on health and health communication, and I’ve done a lot of work in the context of HIV prevention, and so one of the kind of early errors that HIV prevention messaging made was to conflate identity and behavior. So, health communicators in the HIV prevention world treated people who engaged in high-risk behavior as high-risk individuals instead of focusing on the behavior. I’m gonna give you a specific example, because I think it’s hard to understand in the abstract but makes a lot of sense in the concrete.
So, for instance, these messages would target gay men as a high-risk category, so they would ask people in the context of an HIV prevention screening, do you have sex with men? That’s a behavior. And then they would label them with the identity. “Oh, you’re a gay man.” And recipients wouldn’t necessarily claim a gay identity, even though they were engaging in the high-risk behavior, so the identity label as being high risk, being a gay man, people reacted against that. They didn’t want to engage with messages that were targeted to gay men, because they didn’t view that as their identity, even though they were engaging in the high-risk behavior that the message was trying to get out to them. It was the behavior that is the high-risk thing and not necessarily the identity.
But people reacted against being labeled with the identity even while they were engaging in the high-risk behavior, so then you see the development of the label “men who have sex with men” as the category label, rather than “gay men.”
Andy Luttrell:
And that makes it more threatening? Is that what you’re saying?
Allison Earl:
So, labeling the behavior, men who have sex with men, is less threatening than labeling the identity, gay men. Because people don’t necessarily believe that they are part of that identity group, even while they’re engaging in the high-risk behavior associated with the identity group.
Andy Luttrell:
Yeah, so these messages kind of get internalized as like a, “This is about me.” Not, “This is about something that I’ve done.”
Allison Earl:
Yeah, so as an HIV prevention scientist, imagine you come into the clinic and I say, “Do you have sex with men?” And you say, “Yeah, sometimes.” And then I say, “Well, here’s a brochure about gay men and HIV.” And you’re like, “Whoa, whoa, whoa, whoa, whoa. Whoa, whoa, whoa, whoa, whoa. I’m not gay.” And I’m like, “But you just told me that you have sex with men.” “No, no, no, no, no. I’m not gay. I just have sex with men, but I’m not gay.” So, it’s like that dissociation between the identity and the behavior becomes very important, because people didn’t want to claim the identity, and there’s a whole host of sociocultural reasons why people didn’t want to take up that label, particularly in the ’80s and ’90s when the AIDS crisis was kind of at its peak, particularly in marginalized communities, among Black communities or Latino communities where that label is even more stigmatized. Particularly in the early days of the HIV epidemic.
Andy Luttrell:
So, these feelings of threat, do they result in people not believing the information, rejecting the information, not looking at the information? Ultimately, what’s going awry?
Allison Earl:
Kind of all of the above. So, there’s a lot of ways that people can engage defensively with persuasive messages, as I’m sure you know and have talked with others on this podcast about. Just to summarize some of the big categories, the first one is they just ignore the information. So, they just go like, “Nope, that’s not for me.” So, denial is kind of the most primitive of the defense mechanisms, right? So, if I can get out of it by ignoring it, that’s probably what I’ll do. If that fails, then you get things like counterarguing the message, where you say, “Well, I only have sex with people I know, so that’s not really that risky, because they’re clean. And so, if you’re trying to tell me that I need to use a condom, then you are implying that they are not clean,” and so that becomes the belief that’s harder to move around, because I think that I am okay. I’m doing it safely.
You get things like source derogation, where who are you to tell me how to live my life? You’re not an expert. What are you doing? So, you get this kind of like denial/ignoring, counterarguing the message, source derogation, and then you get things like selective attention, all down the line.
Andy Luttrell:
Yeah, so these are like defensive reactions of all varieties, all flavors.
Allison Earl:
Exactly.
Andy Luttrell:
So, some of this speaks to selective exposure, which you’ve done plenty of work on. Would you mind giving an overview of what selective exposure is and how do we know that it’s happening? As a scientist, how do you know that it’s happening?
Allison Earl:
So, selective exposure is the idea that people are more likely to select or choose information that is consistent with what they already think, feel, and do. And they are more likely to ignore information that challenges what they think, feel, and do. So, in a health context, what you see is that people who are already engaging in health recommendations are more likely to participate in interventions targeting an increase or decrease in a behavior. So, it’s preaching to the choir, and one of the major problems with these interventions is self-selection bias, these selective exposure biases. Who’s gonna show up for an intervention? People who are already on board with what you’re trying to tell them.
So, it’s really hard to get the people for whom the interventions are designed, the people who could most benefit from the message because they need to change.
Andy Luttrell:
And so, if you’re presenting people with information, right? Because I could just say, “Look at this. Look at this information. You need to read this.” You can still… I mean, people, like you said, they can have these defensive reactions. And so, when you’re doing studies and you can say some people show this bias toward rejecting or looking away from stuff that conflicts with their current world view, and other people maybe don’t show that bias as much, what do we actually see? Concretely, what are you looking for?
Allison Earl:
So, in a selective exposure paradigm, what you would look for are things like: does a participant pick up a brochure in a waiting room? Does a participant watch a video that’s playing in a waiting room while they’re waiting to see their healthcare provider? Does a participant click on a link that’s been shared with them with information about a health issue? So, it’s do they choose to read, view, or listen to information about a given topic.
Andy Luttrell:
So, you as the researcher will create opportunities for people to get the information that they would need, and then you look at ways in which they’re either taking the bait or leaving it on the table.
Allison Earl:
Exactly right. So, in a laboratory setting, what you would do is bring participants into a lab room, and you might have a table full of brochures, and you don’t let them have their cell phones, you don’t let them have their backpack, you don’t let them have any other distracting devices, and you say something like, “Okay, I’ve gotta get the questionnaire for you. It’s in the other room. We were running late. I’ve gotta go get it. I’ll be back in a few minutes with your task for today. Please just wait here.” And so, the participant is sat in the room with the information, and then you can unobtrusively observe, which means you watch them without them knowing that they’re being watched, and see whether or not they pick up any of the information. So, that would be the exposure task, is do they choose the information or not.
And you can also measure how much attention they pay to the information. So, for instance, how long they read the brochure, how long they watch the video, how long they spend on the webpage. You can ask them to self-report how much attention they paid to the information, and it turns out that self-report actually is quite predictive of those other measures. People are usually pretty willing to tell you, like, “No, I didn’t pay attention to that. I don’t really care for your brochure.” My participants have never had a problem telling me they don’t like the information I’m giving them.
Andy Luttrell:
It seems like there’s a general tendency for people to have a bias to sort of turn away from the stuff that is threatening to them, either because… And this is one of the things that I was thinking about, because oftentimes selective exposure is about this information disagrees with what I already believe.
Allison Earl:
Yes.
Andy Luttrell:
Which is a little different from just saying, “This information makes me feel bad.” And that I just don’t want to look at stuff that makes me feel bad. I might secretly believe it, but I just… I hate thinking about it, and that’s why I won’t touch it. Are those distinct?
Allison Earl:
I think that’s a great observation, and in a lot of my work, what we’ve seen is that these processes are driven by this affective component, that people just don’t want to feel bad. And it’s not that they feel bad in the moment. It really is this anticipatory affective response, like, “If I pick that up, I know that I’m not gonna feel good. I’m gonna feel bad if I read that.” So, it’s an anticipated negative response, and so that kind of anticipatory piece is what shapes whether or not they pick up the information. So, again, I think there is… There is an absence of work looking at how these processes play out across domains, right? So, like you’ve said in the kind of political sphere, or more of the belief relevant sphere, it is always the negative affect in relation to the specific belief.
And so, that’s more consistent with Festinger’s original conceptualization of what dissonance is, right? It’s that negative affective experience in response to two beliefs, or a belief and a behavior conflicting. You know, what I’ve seen in the health world is I think more consistent with Aronson’s understanding of what dissonance is, which is the overlap with the self. So, it’s like I would be a bad person if I did this, and it’s that self-reflective piece that’s coming up when people are making the decisions about whether or not to engage with health information. So, I’m a bad person, like what we hear in the focus groups all the time is that people will say, “I’m not the kind of person who needs to read about HIV. I’m not the kind of person who needs to think about condom use, and if I pick up that brochure in the waiting room, people are gonna look at me like I’m the kind of person who needs that.” So, that’s an identity. That’s an identity threat, right? Like people are gonna judge me if I pick this up.
Andy Luttrell:
So, I guess mainly what I’m wondering is whether this link between the behavior and the identity is malleable. Because you could say either you’re a person who says, “Oh, that, me being a good or bad person is not about these behaviors.” And so, they would go, “I’m not threatened by that information.” Or maybe there are certain health behaviors that just aren’t as identity relevant, where if it’s like… Say it’s some rare genetic disease and you go, “It has no bearing on who I am as a good, moral person, and so it’s not threatening to me that you’d give me that information.” Is that connection malleable in the way that I’m thinking?
Allison Earl:
Yeah, it’s hard to know because HIV is so stigmatizing in and of itself that even the content, just the HIV itself, may be enough to pull identity threats for people. So, we’ve done some work with targeting messages, and this is the idea that you… So, this is a healthcare strategy that doctors utilize, where you try to disseminate health information to people who are at high risk. So, for instance, if a patient comes into the office, you might look at their chart and you say, “Oh, this is a 65-year-old patient with calcium deficiency and perhaps low bone growth. I might give them this brochure about osteoporosis.” Or I might be more likely to give a female patient versus a male patient a brochure about breast cancer, because the base rates for breast cancer are higher for women than for men.
So, targeting is a strategy that disseminates information to audiences based on some characteristic of the audience that puts them at risk for a condition. So, it turns out that for racial identities, this happens a lot, that doctors are very willing to give information to audiences based on their race, with race being one of the relevant risk factors for disease. So, in the case of sickle cell anemia, for instance, you might be more likely to give an African American patient the sickle cell brochure than a white patient, just because sickle cell anemia is much more common among African Americans.
The problem is that there are a lot of other conditions that are more prevalent among African American populations that have nothing to do with genetics. So for instance, there’s a huge racial disparity in HIV, so for instance African Americans account for about 13% of the U.S. population and almost half of all new HIV infections, and so by the same logic that I just outlined for how doctors might make decisions about targeting information, doctors might also make the decision to give HIV prevention information more to African American patients than to European American patients. And what we see there is that it turns out that African American patients really, really, really do not like being targeted based on their racial identity, and we thought that would emerge for things like HIV, which has been historically linked with the African American community, but might be less likely to emerge for other kinds of health conditions like the flu. So, like, why does it matter if you’re being targeted to receive the flu information? That’s not stigmatizing. That’s not something about you. Right? You can get the flu from touching a door handle. There’s nothing stigmatizing about that.
And yet when African Americans perceive that they are being targeted to receive the flu information based on their race, they disengage from the message. They report feeling unfairly judged. They derogate the source. There’s less recall of information from the brochure. And they have lower intentions and lower behavior in response to the recommendations being advocated by the message. So, it’s not always about having stigmatizing information. My work suggests it’s how people make sense of why they are receiving the information that’s important. More so than the information itself.
So, it’s the attributional piece. It’s that why am I receiving this health information that is the really important decision point at which people decide whether or not to engage or disengage from information.
Andy Luttrell:
So, in that example with the flu, it’s because there’s the perception that it’s now linked to identity. But if instead it was just like, “It’s flu season, we’re giving this to everybody.” Would you expect the same sort of disconnect? That people would read into it like, “Oh, this must be about I’m a bad person.” Or is there a way to convey that information that’s just like, “We’re giving this to everybody because it’s flu season.”
Allison Earl:
Yeah, that was the control condition. So, the control condition was like, “Everybody’s getting this information. You received this because of a randomly generated computer algorithm.” And we’ve played around with different control conditions, and it turns out that if people actually believe you that they are getting it randomly, then the effects go away. They don’t always believe you that it’s random, and if they don’t believe that it’s random, then you get those same kind of disengagement effects. But that’s a great question and yes, you’re right. If you can make people feel like it’s not about them, then they’re more likely to listen to the message.
Andy Luttrell:
So, as a communicator, that sounds like a good strategy, to be mindful of not making it seem as though this is a value judgment on you, right? To say this is information that is relevant to a human that thrives in the world. But I don’t know, maybe sometimes you can’t get around it. You go… I mean, this is, you’re in a risk group. This is why I’m giving this to you.
Allison Earl:
Yeah. Jeff Stone actually has some work also that suggests that if you fill in the attributional gap for people, so for instance if doctors say, “I’m giving you this information because,” and you give them a reason, they’re much less likely to react defensively to that message. So, I think a really important take-home point is that if you don’t give people an attribution for why you’re giving them the information that you’re giving, why you’re asking them to do the thing that you’re asking them to do, then they’re just gonna fill in the gap, and they’re gonna fill in the gap with whatever explanation is the most salient in their lives. So, for health communicators or burgeoning health communicators out there, just please be mindful that your audiences might have different attributions for why you’re doing what you’re doing than you have attributions for why you’re doing what you’re doing.
Andy Luttrell:
And it seems for health stuff because it’s so scary, that we know uncertainty drives people to look for those answers, right? Whereas other kinds of context you go, “Oh, that’s weird that they did that, and I don’t care. La la la la la. My day is going on.” Whereas with health it’s like, “Oh, my doctor gave me this pamphlet. What do they think? Are they saying that I’m doing something wrong?” And so, it sounds like this is a scenario that if it’s unclear, people will make up whatever story seems to fit, which can often be the scariest or most threatening story that unfortunately causes them to disengage from it. Right?
Allison Earl:
I think that’s exactly right, and I don’t know why. I mean, it is I think an open question. I think it’s certainly an interesting question, because so often the fundamental attribution error is called the fundamental attribution error for a reason. So, we are more likely to make dispositional attributions about why other people do the things that they’re doing. Why don’t we do that in a health context? Why don’t we say, “Wow, there’s something strange about that doctor that they’re giving me this health information.” It seems to be like, “Why am I getting this information? What is it about me,” that is the kind of default attributional style. I’m not sure if it’s something about the doctor-patient interaction, or something about the kind of power differential or expertise differential that would be driving this. I think there’s a lot. It’s an open question as far as I can tell. But that’s a really interesting point, like why is that the default attribution and how could we change it.
Andy Luttrell:
And of course, it’s too bad because… All of these defensive processing styles are unfortunate in this context, because they’re turning people away from information that is useful for them, and it’s sort of like, “Why can’t we just go like, even if it seems threatening, I should buckle up and pay attention.” So, when I was looking at some of your stuff earlier, it reminded me that I have occasionally used calorie counter apps, which always seem good in principle, but I hate doing them, but I find myself in these situations where I go, like… I had a sandwich, and I’m entering in the amount of bread, and then I go, “But you know, the bread on my sandwich wasn’t as big as normal, so I’m gonna say 1.8 slices of bread.” And I go, “I’m just lying to myself. What good is this?” All it’s doing is it’s like this defensive reaction that all it’s doing is acting against me, right? Which to me sounds like a lot of these health biases, where people are actively shielding themselves from the very information that they might need to be healthy and productive in the world.
Allison Earl:
I love that example, Andy. I think that is such a common example and in a sense such a low-stakes example, like who are you hiding from? Why are you lying about the size of the bread, right? But I totally resonate with that experience also, right? Like who am I lying to? It’s just me. But at the same time, it’s like but it was the end slice, right? So, it was smaller.
Andy Luttrell:
Well, and that’s the trick too that I think with these sorts of things, you’re reading into ambiguity, right? We know lots of these biases in psychology are reading into ambiguity and sort of massaging what’s in front of you that’s able to be massaged. It’s not making stuff up out of nothing, right? It’s not like, “Oh, I didn’t have any bread at all.” You’d be like, “Well, that’s just a full lie.” Whereas it’s like, “Eh, I don’t know. It was small.” In the same ways that people, if they’re threatened by this health information can just go, “Well, maybe it’s not… I’m just not gonna worry about it right now.” Right? Rather than invent some falsity.
Allison Earl:
I think that is a really excellent point, and I think that phenomenon that you’re describing kind of transcends just the health domain. I totally see that in the way that people are responding in the world broadly. I mean, also specifically in the kind of COVID era and how people are responding to things like the face mask regulations that are being rolled out. There is a kernel of truth in all of these perspectives, and those kernels grow into seeds that end up seeding different belief sets, and so initially there might be more overlap in the kind of information or beliefs that people are using, and then they get watered and fertilized by the motivated reasoning that ends up pulling them in different directions.
Andy Luttrell:
Maybe we could sort of end with something more hopeful. We’ve been pretty dour up to this point about how people will never pay attention to anything. But I saw a recent paper with a student of yours looking at how different practices might counteract those anticipated negative emotions you talked about, so like you said, I just know that this is gonna make me feel bad, it’s gonna make me feel upset, or shameful, or whatever you said. But you guys actually found a way to make people a little more open to that in the face of what might otherwise be that distressing situation.
Allison Earl:
Yeah, so we talked a bit earlier about how that kind of expecting to feel bad is what drives a lot of avoidance of health information. And so, if you look at the learning literature, if you look at the selective exposure literature, what you find is that when people are in these kinds of high-arousal negative emotions, like you feel anxious, you feel nervous, or worried, you’re not gonna take in information. That’s true in a classroom and it’s true in a health clinic. And so, how can we help people get to the place where they are willing to engage with this information that could be potentially upsetting, or could be difficult, or could be hard? And one of the ways that we found is just to try and calm yourself down.
So, we used either a 10-minute mindfulness intervention or a 10-minute relaxation training. It’s a systematic relaxation, so you start by relaxing your toes, and then you relax your ankles, and then you relax your calves, so it’s like this progressive relaxation technique. And with both of those techniques, when people reported feeling calmer, so lower arousal, more positive emotions, they were more willing to engage with health information. Regardless of whether it was high threat or low threat. So, across conditions there was more engagement with the health information.
Andy Luttrell:
And so, what did that do? What was that relaxation? Did it make the thing less threatening? Did it make it less scary? Or did it just make people sort of numb to the emotion that that might make them feel?
Allison Earl:
So, we expected that what it would do is change people’s kind of assessment of the information, their appraisal of the information as threatening or not, and that’s not what happened. People were still like, “Oh, that information that you’re gonna give me, that is gonna be scary, and I was scared by it.” But it’s something about getting yourself in this state of receptivity. So, the kind of low arousal, positive affect state, that if you’ve ever taken a yoga class, or done a kind of meditation, at the end of it if you feel that sense of calm, that sense of low arousal positive affect, you can put yourself in a place where you feel like no matter what comes, I can handle it. And that is more likely to come out of this calm affective state than if you feel really ramped up and anxious, or nervous, or worried about what it is that you’re gonna do. I mean, the kind of alternative hypothesis would be to warn you that the scary thing is coming. And when we warn you that the scary thing is coming, you’re like, “Oh, okay. The scary thing is coming. Now I’m gonna be afraid.”
And so, those kinds of trigger warnings actually produce the opposite kind of response, where you get more avoidance, less engagement, when people feel like what they’re about to read is scary.
So, even though the trigger warnings paper wasn’t in a health context, we see it as being kind of two sides of the same coin.
Andy Luttrell:
It reminds me too of self-affirmation type interventions, which I think… I have to imagine those have been used in the health context. Is that right?
Allison Earl:
Yes. So, Jenny Howell and James Shepperd have a paper in Psych Science looking at self-affirmation and medical avoidance, so avoidance of medical testing, and they have found some pretty robust effects with the self-affirmation interventions in a health context.
Andy Luttrell:
And so, the idea is self-affirmation’s all about sort of making you feel good about who you are, that I am a good person, my character is sound, which I’m guessing the reason that they would look at that is because like you said, sometimes health messages seem to make you feel like you’re a bad person. Whereas if before I get that information, I can just go, “Oh, I’m good. I’m solid in who I am. Bring on this information.” Because I’ve sort of preemptively proven to myself that I’m okay. I’m guessing that’s sort of the gist.
Allison Earl:
Yeah, that’s exactly right, and we actually conceptualized the paper, the mindfulness meditation paper as being related to the self-affirmation manipulations. The self-affirmation manipulation is designed to short circuit the threat by making you feel secure in yourself. So, there is literature on self-affirmation effects in medical avoidance, but the self-affirmation literature broadly is quite messy, and so figuring out exactly what to affirm and how to affirm it to get the effects that you are looking for is tricky. And so, we were thinking of this kind of affective induction as a kind of less specific, more diffuse approach to getting at a similar place. And so, we focused on that affective piece of having people feel calm without the kind of specific self-affirmation piece.
Andy Luttrell:
To draw some lines, it reminds me a little, your PhD advisor has this idea of defensive confidence.
Allison Earl:
Yes.
Andy Luttrell:
It sort of seems like that, where at the time, that… Well, it was about selective exposure, right? At the time.
Allison Earl:
It was.
Andy Luttrell:
People would say, “I feel like I can defend my views, so bring on any information you have, and I’m willing to read it, because I know that I can tackle it.” And so, it sort of seems like what these interventions are doing is giving people that defensive confidence in a slightly different way, but with the same kind of idea, that you’re saying, “Put yourself in a calm state of mind and assure yourself that even though yes, you acknowledge the thing that’s coming is scary, you’re gonna be able to come out the other side all the better, because you’ll now have powerful information and it won’t have disrupted your mood for the day.”
Allison Earl:
Absolutely. So, that kind of efficacy approach, ironically, so many times health interventionists think that the strategy is to make audiences feel vulnerable. So, scare people into changing their behavior, shame people into changing their behavior, make people feel vulnerable so that they are receptive to changing what they’re doing, and what we’re finding is that it’s the complete opposite approaches that are actually the most effective. So, make people feel confident that no matter what they’re gonna come across, they can enact the changes they need to make to protect themselves and protect their families.
Andy Luttrell:
That’s a very inspirational point to end on. Allie, thanks so much for sharing the stuff that you’ve done and giving hopefully to health communicators some notion of what they’re up against.
Allison Earl:
Thanks so much, Andy. It was really great to be on the show.
Andy Luttrell:
That’ll do it for this episode of Opinion Science. Thank you to Allie Earl for coming on the show, and to learn more about her work, check out the show notes for a link to her lab’s website. And I wanted to quickly follow up on one thing that came up in our conversation. Sometimes when I get to talking to fellow social scientists, I worry that we can slip into jargon, but I want this show to be enjoyed by anyone, so one of the jargony bits that came up today was something called self-affirmation. I’ll talk about it more some other time on this show, but the gist is that when people take a second to remind themselves of their important values, reflect on what’s truly important in life, it can make overwhelming circumstances a little more manageable. Basically, after you think about your core values, after you affirm the things that are important to you, any information that challenges the idea that you’re a good, capable, ethical person is less threatening. So, in persuasion, when someone tries to convince you to change your mind, it can often feel like that person is saying, “You’re stupid and wrong for believing what you believe.” But sometimes we see that after people practice self-affirmation, they become more open to a new way of thinking about some issue, because they no longer interpret the persuasive message as a threat to who they are. Same idea with health communication, as we talked about.
Anyhow, as always, head over to OpinionSciencePodcast.com to learn more about the show and to get a transcript of this week’s episode. You can follow us on Facebook and Twitter @OpinionSciPod. You can also rate and review the show on Apple Podcasts to encourage people to check it out, so give us your shiniest five stars if you can. All right, that’s all for this week. Time to brush my teeth and floss, but they’re never impressed with my flossing. See you next week for more Opinion Science. Bye-bye.