Meta’s retreat from fact-checking: a turning point for content moderation
By handing content moderation to its users, Meta is sidestepping cost and controversy – and raising hard questions about digital ethics and corporate accountability.
by Öykü Işık, José Parra Moyano, and Michael D. Watkins • Published 9 January 2025 in Technology • 7 min read
Meta’s recent decision to scrap fact-checking in the US signals a turning point in how misinformation is handled online. By shifting to a “community notes” model, like Elon Musk’s approach on X, the tech giant is effectively placing the responsibility of moderating content into the hands of users. While this may sound empowering on paper, the move raises a host of questions about digital ethics and corporate accountability.
Meta says there’s “no immediate plan” to ditch third-party fact-checking or roll out community notes beyond the US.
Content moderation is one of the trickiest, most expensive challenges facing tech companies today. Meta’s pivot towards community-driven moderation feels less like innovation and more like a strategic decision to sidestep the financial and political headaches that come with policing content at scale.
Meta is cutting costs now that the White House heat is off. In doing so, Zuckerberg is falling in line with the incoming administration – led by a president who once warned that if Meta messed with the 2024 election, Zuckerberg could plan on spending the rest of his life behind bars.
By abdicating its responsibility to fact-check, Meta risks creating an online environment reminiscent of the internet’s “Wild West” days of the late 1990s to mid-2000s – a place where disinformation and hate can roam free, but at a scale and scope never seen before.
Online safety advocates are appalled. They’re already sounding alarms, warning that the move opens the floodgates for manipulation. “These moves could have dire consequences for many children and young adults,” Ian Russell, whose 14-year-old daughter Molly died by suicide after being exposed to harmful content on platforms like Instagram, told reporters.
Misinformation and hate speech aren’t just online noise – they can spill into the real world with serious consequences.
The shift comes at a particularly sensitive time. Donald Trump is returning to the White House, and online platforms are under increased scrutiny for their role in shaping political narratives. With more than three billion users across its platforms – including Facebook, Instagram, and WhatsApp – Meta is central to how information spreads globally and holds immense power to shape public discourse.
The timing of the $1.6tn corporation’s decision – just as Joel Kaplan, a high-profile Republican, takes over as Meta’s new president of global affairs – can’t be ignored. The departure of Nick Clegg, the former UK deputy prime minister who previously held the role, signals a shift that many view as aligning the platform more closely with US political currents.
Kaplan wasted no time making headlines. On Fox News this week, he called Meta’s previous fact-checkers “too biased” and hinted at smoother sailing with Trump’s imminent return to power. However, studies have shown this bias claim not to hold up, and some critics see Kaplan’s comments as just a smokescreen for conservative disinformation.
CEO Mark Zuckerberg made it clear that Meta will work with the new administration to “push back on governments” trying to rein in American tech companies. In a video post, he didn’t hold back, calling out China, Latin America, and an “ever-increasing number” of European laws that he says are “institutionalizing censorship” and stifling innovation. This could be seen as an attempt to curry favor with the incoming Trump administration.
But Zuckerberg also took aim at “legacy media,” arguing it forced his company to “censor more and more.”
But there’s more at play here than just politics. Content moderation isn’t just unpopular among some people – it’s expensive. Meta says it pours billions of dollars into safety and security each year, employing tens of thousands of people globally to manage content.
Zuckerberg framed this move as a step towards prioritizing free speech. However, by decentralizing fact-checking, the company trims operational costs and appeases those who believe moderation efforts have gone too far. Is community-driven moderation just a fancy way of saying “not our problem”?
This shift inevitably fuels the debate over digital ethics. Is it ethical for platforms to hand off the responsibility of content moderation to users, knowing that many lack the expertise or motivation to police harmful content effectively?
It comes at a time when AI-generated content is flooding the internet, blurring the lines between authentic posts and fabricated narratives. Without robust oversight, digital platforms risk becoming echo chambers for disinformation, with potentially dire consequences for democracy, public health, and social cohesion.
History offers a sobering reminder of what happens when platforms fail to moderate content effectively. Several years ago, Meta (then Facebook) was linked to the spread of hate speech in Myanmar. The UN later said Facebook played a “determining role” in fueling anger against the Rohingya Muslim population by failing to control disinformation and inflammatory content. Meta admitted it didn’t do enough.
The lesson here is clear: left to their own devices, platforms can inadvertently become complicit in real-world harm.
Regulators worldwide have taken note. The EU’s Digital Services Act (DSA) and the UK’s Online Safety Act mandate stronger measures to protect users from harmful content. It is unclear whether Meta’s “hands-off” approach will clash with these frameworks.
While potential fines of up to 6% of global turnover under the DSA and 10% under the Online Safety Act may seem like a deterrent, they are sums that Meta – which reported $156bn in revenue last year – can readily absorb. For Meta, the risk of regulatory backlash may be a price worth paying for the broader goal of streamlining operations and cutting costs.
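As a back-of-the-envelope illustration of that arithmetic – a minimal sketch in Python, using only the figures cited above and assuming the statutory maxima, which regulators rarely impose in full:

```python
# Rough sketch of Meta's maximum regulatory exposure under the two
# regimes cited above (illustrative arithmetic only).
meta_revenue_bn = 156  # Meta's reported annual revenue, in $bn

fine_caps = {
    "EU Digital Services Act": 0.06,  # fines up to 6% of global turnover
    "UK Online Safety Act": 0.10,     # fines up to 10% of global revenue
}

for law, cap in fine_caps.items():
    max_fine_bn = meta_revenue_bn * cap
    print(f"{law}: up to ${max_fine_bn:.1f}bn ({cap:.0%} of revenue)")
```

Even these theoretical maxima – roughly $9bn and $16bn – amount to a few weeks’ to a little over a month’s revenue: painful, but absorbable at Meta’s scale.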
Still, this strategy carries reputational risks. Meta has already weathered its fair share of data privacy scandals. Zuckerberg rolled out third-party fact-checking back in 2016, part of a broader effort to tackle the flood of misinformation swamping Facebook after the platform faced heavy criticism.
It was one of Meta’s first big plays to clean up its act – but now, that chapter seems to be closing.
The elephant in the room is the lack of viable competition. Despite some dissatisfaction with Meta’s policies, “network effects” make it incredibly difficult for users to abandon the platform: the more people join a network, the more valuable it becomes to each user – and the harder it is to leave. Building alternative platforms is prohibitively expensive, and luring users away from entrenched networks is a near-impossible task.
This reality insulates Meta from market pressures, allowing it to experiment with content policies without immediate risk to its dominance.
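One way to make that lock-in concrete is Metcalfe’s law, the rule of thumb that a network’s value grows roughly with the square of its user count. The sketch below is illustrative only – the 30-million-user challenger is hypothetical, and Metcalfe’s law is a simplifying assumption rather than a claim about Meta’s actual economics:

```python
# Toy illustration of network effects via Metcalfe's law:
# network value is assumed proportional to (number of users)^2.

def network_value(users: int) -> int:
    """Metcalfe's law: value grows with the square of the user base."""
    return users ** 2

incumbent = network_value(3_000_000_000)  # ~3bn users across Meta's apps
challenger = network_value(30_000_000)    # hypothetical 30m-user rival

# A network 100x larger is ~10,000x more "valuable" under this
# assumption -- the gap any would-be competitor has to close.
print(f"Incumbent/challenger value ratio: {incumbent // challenger:,}x")
```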
But the real test lies in user experience. If the quality of content deteriorates significantly, users may eventually seek alternatives. For now, the platform’s algorithm-driven engagement model keeps users hooked, even if the content isn’t always reliable. This model mirrors the junk food dilemma – people know it’s bad for them, but it’s hard to resist.
Regulators, particularly those outside the US, may eventually intervene by imposing stricter measures. Proposals for “know your customer” (KYC) rules, like those in banking, are gaining traction in digital policy circles. Such measures could introduce additional safeguards for vulnerable users, forcing platforms to verify identities and limit access to harmful content.
The broader implications of Meta’s decision touch on the core of what digital platforms represent. Are they public utilities with a duty to uphold societal well-being, or private enterprises free to prioritize growth and shareholder returns? The answer may shape the future and safety of online spaces for many years to come.
Meta’s decision to ditch fact-checking in the US and lean into community-driven moderation feels a lot like Elon Musk’s playbook on X. It sounds democratic – let the people decide what’s true – but the reality is often messier.
On the bright side, this model offers transparency. Users see the process, contribute their knowledge, and avoid the heavy-handed feel of top-down censorship. But it’s not all roses. Misinformation spreads fast – faster than most users can flag, fact-check, and rate. By the time a helpful note lands, the damage is often done.
And let’s not forget manipulation. Even with safeguards, bad actors know how to game the system, upvoting misleading notes or burying corrections. On divisive issues, ideological camps could cancel each other out, leaving critical posts unchecked.
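X’s Community Notes tries to blunt exactly this kind of gaming with “bridging-based” ranking: a note surfaces only when raters who normally disagree both find it helpful. The sketch below is a toy simplification of that idea – the real system infers rater viewpoints from the full rating history via matrix factorization, rather than taking camp labels and thresholds as given:

```python
# Toy sketch of "bridging-based" note scoring, loosely inspired by
# X's Community Notes but greatly simplified: a note is shown only
# if raters from *both* sides of a divide tend to find it helpful,
# which blunts simple brigading by one camp.

from statistics import mean

def note_is_shown(ratings: list[tuple[str, bool]],
                  threshold: float = 0.7,
                  min_raters_per_side: int = 3) -> bool:
    """ratings: (rater_camp, found_helpful) pairs; camps are 'A'/'B'."""
    by_camp = {"A": [], "B": []}
    for camp, helpful in ratings:
        by_camp[camp].append(helpful)
    # Require enough raters on each side...
    if any(len(votes) < min_raters_per_side for votes in by_camp.values()):
        return False
    # ...and a high helpfulness rate from BOTH camps, not just one.
    return all(mean(votes) >= threshold for votes in by_camp.values())

# One camp mass-upvoting a note is not enough without cross-camp agreement:
brigaded = [("A", True)] * 10 + [("B", False)] * 4
consensus = [("A", True)] * 5 + [("B", True)] * 4 + [("B", False)]
print(note_is_shown(brigaded))   # False
print(note_is_shown(consensus))  # True
```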
There’s also the normalization factor. When hateful or misleading content sits front and center, even with a cautionary note, people get used to it. Over time, it blends into the background – just part of the scenery.
Still, some argue this approach invites civil discourse. Notes can correct content without the sting of bans or takedowns. Users might engage more thoughtfully when faced with context rather than censorship.
But will it work at scale? Platforms process billions of posts. A handful of dedicated users can’t catch everything. And during high-stakes events – elections, pandemics – slow responses can have real consequences.
The community-driven approach isn’t without merit, but relying on it entirely could open the floodgates. Misinformation might thrive, and platforms risk losing trust. For now, it seems better as a supplement to moderation, not a replacement.
Professor of Digital Strategy and Cybersecurity at IMD
Öykü Işık is Professor of Digital Strategy and Cybersecurity at IMD, where she leads the Cybersecurity Risk and Strategy program and co-directs the Generative AI for Business Sprint. She is an expert on digital resilience and the ways in which disruptive technologies challenge our society and organizations. Named on the Thinkers50 Radar 2022 list of up-and-coming global thought leaders, she helps businesses to tackle cybersecurity, data privacy, and digital ethics challenges, and enables CEOs and other executives to understand these issues.
Professor of Digital Strategy
José Parra Moyano is Professor of Digital Strategy. He focuses on the management and economics of data and privacy, and on how firms can create sustainable value in the digital economy. An award-winning teacher, he also founded his own successful startup, was appointed to the World Economic Forum’s Global Shapers Community of young people driving change, and was named on the Forbes ‘30 under 30’ list of outstanding young entrepreneurs in Switzerland. At IMD, he teaches on AI, strategy, and innovation in a variety of programs, including the MBA and Strategic Finance programs.
Professor of Leadership and Organizational Change at IMD
Michael D. Watkins is Professor of Leadership and Organizational Change at IMD, and author of The First 90 Days, Master Your Next Move, Predictable Surprises, and 12 other books on leadership and negotiation. His book The Six Disciplines of Strategic Thinking explores how executives can learn to think strategically and lead their organizations into the future. A Thinkers50-ranked management influencer and recognized expert in his field, his work features in HBR Guides and HBR’s 10 Must Reads on leadership, teams, strategic initiatives, and new managers. Over the past 20 years, he has used his First 90 Days® methodology to help leaders make successful transitions, both in his teaching at IMD, INSEAD, and Harvard Business School (where he gained his PhD in decision sciences) and through his private consultancy practice, Genesis Advisers. At IMD, he directs the First 90 Days open program for leaders taking on challenging new roles and co-directs the Transition to Business Leadership (TBL) executive program for future enterprise leaders, as well as the Program for Executive Development.