

Fakery and falsehood: why prevention is better than cure 

Published 10 April 2025 in Strategy • 13 min read

It’s possible to immunize people against false or manipulated information through simple exercises that policymakers and platforms could easily roll out.

In September 2024, the city of Springfield, Ohio, found itself in the spotlight. During a presidential election debate, Donald Trump repeated an outlandish claim: Haitian immigrants in Springfield were abducting and eating the community’s domestic cats and dogs.

“They’re eating the pets of the people that live there, and this is what’s happening in our country,” Trump announced on live television. The story soon went viral on social media, making its way around the globe and garnering tens of millions of comments, memes, and AI-generated images of panicked pets from users on both sides of America’s political divide. Meanwhile, city officials in Springfield scrambled to debunk the claim, confirming that they had not received any such report. US National Security Council spokesman John Kirby went further, decrying right-wing media for “disinformation” and warning the claims were “dangerous.”

Falsehoods, fallacies, and “fake news” are not new. Since humans first invented language and began organizing into communities, societies, and empires, we have been prone to creating, spreading, and consuming misinformation and its more nefarious, wilful sibling, disinformation. Whether it’s to deliberately mislead and manipulate other people or a simple product of human error, the fact is that misinformation has been well-documented since Roman times.

Gaius Julius Caesar Augustus was an arch manipulator, promulgating falsehoods and propaganda to besmirch the reputation of his enemy, Mark Antony, in the last century BC. Throughout history, abundant examples exist of countries, governments, and industries manipulating public opinion. During the First World War, the British authorities passed laws prohibiting the press from publishing anything negative about Allied troops. The Aspidistra Transmitter broadcast programs to German households in the 1940s, informing them that the war was going badly for Hitler. And, in the 1950s, big tobacco firms went out of their way to sideline research linking smoking to cancer. The Frank Statement advert put out by Philip Morris et al (mis)informed smokers everywhere: “At one time or another during these years, critics have held (tobacco) responsible for practically every disease of the human body. One by one these charges have been abandoned for lack of evidence.”

From the great wars of the 20th century to the conflicts in Israel and Ukraine, from non-existent weapons of mass destruction to allegations of election rigging in the US, from actual warfare to cyber warfare to cognitive warfare, misinformation is nothing new. The only difference now is that human and technology interactions are reshaping our relationship with information. We have access to technology-powered content creation tools and communication channels that can send (mis)information around the globe in a fraction of a second. In that sense, we may be at an inflection point.

This is not a pipe, and not everything is as it appears, as René Magritte observed in his 1929 painting ‘La Trahison des Images’ (The Treachery of Images)

Are we at a turning point?

In our always-on, hyper-connected, internet-powered world, misinformation can travel faster and further than at any other time in our history; the speed of propagation has accelerated exponentially. The way that we consume information has also changed. From Facebook to Instagram and the news apps on our mobile phones, our feeds are managed by algorithms that curate what we see based on what we’ve already consumed. In a world where social media has converted everyone into a content creator and consumer, journalist and reader, misinformation can circulate quickly and freely. And as our appetite for content grows and the kind of content we consume becomes more homogenized by machines, misinformation has the potential to proliferate at scale.

Legacy media remains subject to checks and balances, but anyone can pretty much say anything they want on X, Facebook, YouTube, or TikTok. Libertarians might argue that this is no bad thing: that when information is centralized, it is akin to handing a microphone to a dictator. However, without guardrails in place, misinformation can propagate on social media like knotweed. It can feel like we are living in a fictional reality where blatant falsehoods have become the norm.

Then there’s the AI factor. In 2024, US Democratic political consultant Steven Kramer was indicted and fined for sending out deepfake robocalls mimicking Joe Biden’s voice and urging New Hampshire residents not to vote in their Democratic primary. In Slovakia, an AI-generated audio conversation about election fraud between the chairman of Progressive Slovakia and a highly respected journalist went viral on social media two days before the 2023 general election.

As AI evolves, it will only become harder to distinguish between fact and fiction. Spotting the lie before it’s too late is about to become even more difficult. Given AI can generate content at mind-boggling speed and scale, it will also become easier for bad actors to produce endless variations and amalgamations of fact and fiction, information and misinformation as they figure out what resonates best with which users.

In an environment where partisan and even extremist views are increasingly the norm, there is a risk that any common narrative or sense of truth will disappear, leaving different camps more disposed to simply accept or reject information depending on what they want to believe. We are seeing a gradual and systemic degradation of shared beliefs. Whether in politics, education, the environment, or health and healthcare, we are coalescing into opposing blocs, making us more vulnerable to misinformation. During the pandemic, people burned down 5G masts in the UK because of conspiracy theories linking cell towers to COVID-19. In Iran, people read that consuming methanol could cure the virus; more than 700 died as a result. The American Journal of Tropical Medicine and Hygiene estimates that as many as 5,800 people were hospitalized because of misinformation about COVID-19 on social media.

Heightened socio-political tension, deepening polarization, the accelerating and unchecked flow of content on the internet, AI, and deepfakes – all of it can breed paranoia and distrust. It can make us doubt each other and diminish our faith in the legitimacy of our authorities, media, electoral processes, and democracies. It can create feedback loops where we actively anticipate misinformation and become more distrustful, making us vulnerable to consuming more misinformation if it chimes with our worldview. The result? More polarization, more confusion, more chaos.

New York is more populous than London. Or is it the other way round?

You can’t unknow what you know

The traditional response to misinformation has been debunking – using a substantiated truth to expose the falsehood of a claim and discredit it. However, this approach has severe limitations rooted in human psychology.

When you debunk something, you inadvertently feed what psychologists call the continued influence effect: without wanting to, you give the falsehood more substance. Why? Because to discredit something, you first have to repeat it, and that is problematic because of the way our brains work.

When we absorb a falsehood and integrate it into our memory, it becomes entangled there – making friends with the other facts we store and hold to be true in our minds. Repeating that falsehood, even to fact-check or debunk it, means that we end up strengthening it: the memory network associated with that piece of misinformation becomes more robust.

Let’s say you read that New York is more populous than London. The idea takes root in your memory. The next day, you talk to a geographer who tells you New York is not as populous as London. In doing so, the falsehood has effectively been repeated and its place in your memory reinforced. However, now you have a kind of retrieval error when you go to access that information: you will be accessing false and true information concurrently. This means you will have to purposefully suppress the misinformation: I remember hearing about New York being more crowded than London, but was this true or false?

These kinds of cognitive gymnastics are hard for the brain. In my research experiments, I’ve seen many instances of this. Tell a group of volunteers that a fire was caused by oil and paint cans poorly stored in a warehouse, and then later correct yourself and tell them that the cause of the fire is unknown. Again and again, the same volunteers will tell you that paint and oil caused the blaze, even though they have been told this was not the case. It’s a persistent behavior: our brains are hardwired to store information in a way that makes debunking problematic. The old courtroom instruction to strike something from the record is based on a fallacy: just as you can’t unring a bell, it is near impossible to unhear something we have been told to disregard. Once a falsehood has bedded in, it is tough to get it out. A far better approach to dealing with misinformation is to pre-bunk it: to inoculate against it in advance. What do I mean by this?

Prevention is better than cure

When you give someone a vaccine, you give them a weakened dose of the virus or pathogen you are trying to protect against, which triggers antibodies and confers resistance to future infection. The same thing is possible with our minds.

When you pre-emptively expose people to a microdose of a falsehood, you can deconstruct it and refute it in advance so they build up the psychological and cognitive antibodies to become more resistant to a full dose in the future.

We put this to the test in the lab by asking volunteers to read something that pre-bunks a specific misconception – say, the idea that shark cartilage capsules cure cancer, a claim that went viral in alternative medicine circles worldwide in the noughties but was thoroughly debunked by the scientific community in 2007.

“It appeared scientific and authoritative, but as they scrolled through it, they could see it was nonsense. Signatories at the end of the article included Charles Darwin and Mickey Mouse.”

Misinformation is any information that is false or misleading in some way. Disinformation is a subset of misinformation created with an intent to deceive or harm others. It can be difficult to disentangle the two, as malicious intent is hard to prove. Misinformation often leverages a grain of truth or a real societal event but distorts the context or the implications, which makes it difficult to identify. While flat-earth theories and reptile politicians feel obviously false, they make up a fairly small percentage of the news and content people consume daily. Biased or partisan content can masquerade as fact and can originate even from the most credible sources unless they assiduously and routinely prioritize transparent fact-checking and correction where necessary.

In contrast to debunking, prebunking is gaining prominence in the academic world as a way to pre-emptively build resilience against anticipated exposure to misinformation. This approach is usually grounded in inoculation theory, a medical immunization analogy which holds that it is possible to build psychological resistance against unwanted persuasion attempts, much as medical inoculations build physiological resistance against pathogens. Psychological inoculation treatments contain two core components: a forewarning that induces a perceived threat of an impending attack on one’s attitudes, and exposure to a weakened (micro)dose of misinformation that contains a pre-emptive refutation (or prebunk) of the anticipated misleading arguments or persuasion techniques.

Our volunteers read an article about climate change on a template copied from the National Academy of Sciences. It appeared scientific and authoritative, but as they scrolled through it, they could see it was nonsense: the narrative contained logical flaws, and the signatories at the end of the article included names like Charles Darwin, Mickey Mouse, and Professor Geri Halliwell of the Spice Girls. Once they had finished reading the article, we asked them to browse the internet for climate change content. We repeatedly found that our volunteers were not duped by misinformation on the web or social media – no matter how plausible or convincing it might appear.

The climate change piece signed by Mickey Mouse was light-hearted, but we found the same effect with more serious subject matter. Consistently, even when we asked people to read more weighty, nuanced content, we found that exposing them to clearly manipulated information made them more immune to misinformation when exposed to it on the internet.

Taking these findings out of the lab, my Cambridge colleagues and I have worked on free-to-access games that people can use to inoculate themselves against misinformation. One is called Bad News (forgive the pun). Here, we put players into the shoes of an online manipulator – a misinformation grifter who fear-mongers and peddles conspiracy theories. Using humor as an inactivated, fictional strain of misinformation, the game lets players launch and experience a misinformation attack as this nefarious fake news tycoon. The game went viral, with millions of people going through the intervention and building immunity to misinformation.

A second game, built with the World Health Organization and called Go Viral, helped pre-bunk misinformation around COVID-19. Again, we used humor, challenging players to deploy falsehoods about virus-busting medicine and flood WhatsApp with silliness about gargling lemons and gorging on kiwi fruit. Another game, developed with US Homeland Security, encouraged players to use bots to create outlandish conspiracy theories about foreign interference in American elections. Here, we saw people wage huge bot wars over topics such as pineapple on pizza and the ramifications of covert Italian cultural interference on US pizza-topping restrictions.

From gaming to Google

Using levity and humor in games to viralize weakened or inactive doses of misinformation has proved highly successful in building psychological defenses against falsehoods. However, the effect is limited to those who actively sign up to play.

To scale the impact, we worked with Google to build short misinformation pre-bunking ads that appeared on YouTube. Again, we used the same concept: humorous, non-overtly political content to raise awareness of manipulation and help users build their immunity. One of our videos tackles the way bad actors foment extremism by painting complex dilemmas as black or white, using misinformation to strip out nuance and cast opposing ideas or concepts as inherently right or wrong. It leverages the Star Wars franchise, presenting an inactivated strain of misinformation in the form of a debate between Anakin Skywalker and Obi-Wan Kenobi from Revenge of the Sith. Anakin says: “Either you’re with me, or you’re my enemy,” to which Obi-Wan replies, “Only a Sith deals in absolutes.” Our narrator then explains that this is a false dilemma: misinformation deals in absolutes to manipulate and polarize opinion to meet its own ends. Testing the impact of the video within 24 hours of its launch, we found that millions of viewers were better equipped to identify and neutralize misinformation as a result.

Safeguarding a fact-based future

Our work in inoculating people against misinformation is still at the trial stage. It’s hard to scale social media games globally or to mandate adverts on YouTube, especially because Google and other platforms operate advertising revenue models. We hope to see pre-bunking integrated into national educational curricula so that new generations can be better inoculated against misinformation on social and other media from an early age.

We continue to work in the UK, the US, and Europe, where our findings have been adopted into university teaching – so far, on an ad hoc basis. My goal is to see this instituted more systematically and at a population level. That, and to help build top-down pressure on social media companies to do more, ideally in response to legislation. It is incumbent on authorities and decision-makers to prioritize this.

In this age of AI, social media, and polarization, “seeing is believing” is no longer a useful heuristic. It is becoming more difficult for human beings to discern fact from fiction, and we have perhaps less incentive or inclination to do so. This troubles me. Look through history and you can clearly see the trend: whenever major societal or cultural crises spike, whenever conflict breaks out in our world, they are usually preceded by a massive influx of propaganda and misinformation. We are at a peak moment in supply and demand for misinformation, powered by technologies that did not exist in the past. We urgently need to build resilience to manipulation in the post-truth era to avert all-out information warfare and whatever that might entail. I’m cautiously optimistic that we have the wherewithal to do this. However, the solution will need to be as multi-layered as it is systemic.

Authors

Sander van der Linden

Professor of Social Psychology at the University of Cambridge

Sander van der Linden is Professor of Social Psychology in Society and Director of the Cambridge Social Decision-Making Lab in the Department of Psychology at the University of Cambridge. He is the author of the award-winning book Foolproof: Why We Fall for Misinformation and How to Build Immunity.
