Could ‘inoculation’ Limit Election Misinformation?

A popular new strategy for combating misinformation doesn’t by itself help people distinguish truth from falsehood, but improves when paired with reminders to focus on accuracy, finds new Cornell-led research supported by Google.

Psychological inoculation, a form of “prebunking” intended to help people identify and refute false or misleading information, uses short videos in place of ads to highlight manipulative techniques common to misinformation, such as emotional language, false dichotomies and scapegoating. The strategy has already been deployed to millions of users of YouTube, Facebook and other platforms, and could be utilized before and after the U.S. presidential election.

In a series of studies involving nearly 7,300 online participants, an inoculation video about emotional language improved recognition of that technique – but did not improve people’s ability to discern true headlines from false ones, the researchers found. Participants’ ability to identify true information improved when the video was bookended with video clips prompting them to think about whether content was accurate, suggesting a combined approach could be more effective, the researchers said.

“If you just tell people to watch out for things like emotional language, they’ll disbelieve true things that have emotional language as much as false things that have emotional language,” said Gordon Pennycook, associate professor and Himan Brown Faculty Fellow in the Department of Psychology, in the College of Arts and Sciences. “Encouragingly, we found some synergy between these two approaches, and that means we may be able to develop more effective interventions.”

Pennycook is the first author of “Inoculation and Accuracy Prompting Increase Accuracy Discernment in Combination but Not Alone,” published Nov. 4 in Nature Human Behaviour by an international team of co-authors.

Prior studies involving members of the research team showed that inoculation videos helped people identify manipulative techniques in sample tweets. That raised hopes that a relatively simple intervention could be implemented on a large scale to “immunize” populations against potentially viral misinformation.

The new study investigated whether inoculation’s benefits carried over to more real-world conditions by helping people assess whether information was true or not.

In three initial studies, participants watched the same emotional language video used in the earlier study, which warns viewers to be wary, for example, of headlines referencing a “horrific” accident rather than a “serious” one, or a “disgusting” (versus “disagreeable”) ruling. They then reviewed real headlines – some true, some false – presented in one of two versions the researchers designed: either emotionally neutral or using charged language that could evoke fear or anger. For example, a true, low-emotion headline read, “NYC wants to ‘end the COVID era,’ declares vaccine as a requirement for its workers.” The evocative version read, “Thousands being forced to take the jab: NYC mandates vaccines for its workers.”

Replicating the earlier work, the inoculation video, which runs less than two minutes, helped study participants flag manipulative content, particularly in high-emotion headlines. But it didn’t make them better at judging which information was accurate – even in the context most favorable for inoculation, when all false headlines contained highly emotional language and all true headlines were neutral.

“When the task is made more difficult by intermixing actual true or false claims,” the authors wrote, “the video appears to lose its effectiveness as an ‘inoculation against misinformation.’”

A final pair of studies explored the potential benefits of so-called accuracy prompts – simple reminders about the importance of considering accuracy and the threat of misinformation, which have been shown to reduce sharing of misinformation on social media. Like inoculation, accuracy prompts alone proved ineffective at helping people identify true versus false claims, even though in past work they successfully improved the quality of news people chose to share. But when the accuracy prompts were sandwiched around the inoculation video, participants’ identification of true headlines (but not false ones) improved significantly, by up to 10%.

“This shows that combining two techniques that can be readily deployed at scale can boost people’s skills to avoid being misled,” said Stephan Lewandowsky, professor at the University of Bristol, England, and a co-author of the research.

The results have significant implications for the growing field of designing misinformation interventions, the researchers said, highlighting for industry actors and policymakers the importance of testing and deploying multiple interventions in tandem.

“If you’re going to run these interventions, you should probably begin them with a base reminder about accuracy,” Pennycook said. “Just getting people to think more about whether things are true will carry over – at least in the short term – to what they’re seeing and choices about what they would share online.”

In addition to Pennycook and Lewandowsky, co-authors are Adam Berinsky and David Rand ’04, professors at the Massachusetts Institute of Technology; Puneet Bhargava, a graduate student at the University of Pennsylvania; and Hause Lin, a postdoctoral researcher at MIT.
