Our volunteers read an article about climate change laid out on a template copied from the National Academy of Sciences. It appeared scientific and authoritative, but as they scrolled through it, they could see it was nonsense: the narrative contained logical flaws, and the signatories at the end included names like Charles Darwin, Mickey Mouse, and Professor Geri Halliwell of the Spice Girls. Once they had finished reading, we asked them to browse the internet for climate change content. We repeatedly found that our volunteers were not duped by misinformation on the web or social media – no matter how plausible or convincing it might appear.
The climate change piece signed by Mickey Mouse was light-hearted, but we found the same effect with more serious subject matter. Consistently, even when we asked people to read weightier, more nuanced content, we found that exposure to clearly manipulated information made them more immune to misinformation when they later encountered it on the internet.
Taking these findings out of the lab, my Cambridge colleagues and I have worked on free-to-access games that people can use to inoculate themselves against misinformation. One is called Bad News (forgive the pun). Here, we put players into the shoes of an online manipulator – a misinformation grifter who fear-mongers and peddles conspiracy theories. Using humor and inactivated, fictional doses of misinformation, the game lets players launch an attack as this nefarious fake news tycoon and experience how it works. The game went viral, with millions of people going through the intervention and building immunity to misinformation. A second game, Go Viral, made with the World Health Organization, helped pre-bunk misinformation around COVID-19. Again, we used humor, challenging players to deploy falsehoods about virus-busting medicine and flood WhatsApp with silliness about gargling lemons and gorging on kiwi fruit. Another game, made with US Homeland Security, encouraged players to use bots to create outlandish conspiracy theories about foreign interference in American elections. Here, we saw people wage huge bot wars over topics such as pineapple on pizza and the ramifications of covert Italian cultural interference on US pizza-topping restrictions.
From gaming to Google
Using levity and humor in games to spread weakened or inactivated doses of misinformation has proved highly successful in building psychological defenses against falsehoods. However, the effect is limited to those who actively sign up to play.
To scale the impact, we worked with Google to build short misinformation-debunking ads that appeared on YouTube. Again, we used the same concept: humorous, not overtly political content to raise awareness of manipulation and help users build their immunity. One of our videos tackles the way bad actors foment extremism by presenting complex dilemmas as black or white, using misinformation to strip out nuance and cast opposing ideas or concepts as inherently right or wrong. The video leverages the Star Wars franchise, drawing on Revenge of the Sith to present an inactivated strain of misinformation in the form of a debate between the characters Anakin Skywalker and Obi-Wan Kenobi. In it, Anakin says: “If you’re not with me, then you’re my enemy,” to which Obi-Wan replies, “Only a Sith deals in absolutes.” Our narrator then explains that this is a false dilemma: misinformation deals in absolutes to manipulate and polarize opinion to its own ends. Testing the video’s impact within 24 hours of its launch, we found that millions of viewers were better equipped to identify and neutralize misinformation as a result.
Safeguarding a fact-based future
Our work on inoculating people against misinformation is still at the trial stage. It’s hard to scale social media games globally or to mandate adverts on YouTube, especially because Google and other platforms rely on advertising revenue models. We hope to see pre-bunking integrated into national education curricula so that new generations can be better inoculated against misinformation on social and other media from an early age.
We continue to work in the UK, the US, and Europe, where our findings have been adopted into university teaching – so far, on an ad hoc basis. My goal is to see this instituted more systematically and at a population level. That, and to help build top-down pressure on social media companies to do more, ideally in response to legislation. It is incumbent on authorities and decision-makers to prioritize this.
In this age of AI, social media, and polarization, “seeing is believing” is no longer a useful heuristic. It is becoming more difficult for human beings to discern fact from fiction, and we perhaps have less incentive or inclination to do so. This troubles me. Look through history and the pattern is clear: whenever major societal or cultural crises flare up, whenever conflict breaks out in our world, they are usually preceded by a massive influx of propaganda and misinformation. We are at a peak moment in both the supply of and demand for misinformation, powered by technologies that did not exist in the past. We urgently need to build resilience to manipulation in the post-truth era to avert all-out information warfare and whatever that might entail. I’m cautiously optimistic that we have the wherewithal to do this. However, the solution will need to be as multi-layered as it is systemic.