
by Michael Yaziji • Published 31 March 2025 in Technology
Although an oversimplification, examining how AI is used in China and the West through the lens of George Orwell, Aldous Huxley, and Neil Postman is illuminating. In China, AI’s encroachment into public and private life increasingly resembles Orwell’s 1984, reflecting a government-driven expansion of surveillance and control. In the West, AI’s profit-driven design seems to fulfill Huxley’s Brave New World prophecy, offering relentless streams of attention-grabbing content that threaten to trivialize civic discourse. Adding a further dimension is Postman’s warning, articulated in Amusing Ourselves to Death, that the medium by which we consume information shapes not only our habits but our modes of thinking. While it may appear that China and the West diverge sharply, the signs are that these models are converging, combining the most troubling features of both dystopias.
In China, facial recognition technology, predictive policing, and the social credit system exemplify the ubiquitous state power that Orwell feared. Analysts at IHS Markit estimate that over 600 million CCTV cameras are operating nationwide, many equipped with AI-driven facial recognition software that can identify individuals within seconds. These tools monitor citizens’ activities, penalize perceived dissent, and enforce loyalty to officially sanctioned narratives. Although local variations exist, the trajectory remains clear: surveillance and technological governance bolster a top-down order in which potential critics may self-censor or face retribution. Such developments echo 1984’s grim scenario wherein individuals grow accustomed to constant oversight. (It should be noted that cameras attached to AI systems are also rapidly proliferating in the West.)
In the West, AI’s encroachment into the digital marketplace may lack the overt centralization seen in China, but it is no less insidious. Rather than being orchestrated by a state apparatus, this dynamic is driven by a profit model that monetizes user engagement. Social media platforms such as YouTube, Instagram, TikTok, and Facebook use AI algorithms to captivate attention, continually refining recommendation systems that lure users with bursts of compelling or polarizing content. The incentive is straightforward: time spent scrolling translates directly into advertising revenue, fueling an ecosystem that keeps us hooked.
In 1932, Huxley imagined a society that required no overt oppression: its citizens were placated by easy pleasures and made docile by consuming “soma”, a chemical sedative. Now, algorithmically engineered feeds serve as a “digital soma”, tempering any impulse toward rigorous civic engagement by offering streams of digestible entertainment. Political discourse is distilled into memes and sound bites, subtly crowding out thoughtful debate. Instead of an iron-fisted restriction of information, the public is invited to gorge on more content than it can process. As in Huxley’s vision, the result is not overt subjugation but complacency born of constant diversion – a process that undermines the will to question or rebel.
Postman’s critique prefigures how we arrived at this moment. Forty years ago, he argued that when news, politics, and education became “television-friendly”, they necessarily adopted the grammar of entertainment. AI platforms take this further. While traditional television was a one-size-fits-all entertainment mode, modern social media refines and personalizes the “onslaught of superficialities”, ensuring not just amusement but a sense of personal relevance and indispensability. By analyzing billions of data points (likes, clicks, and watch times), these systems can customize headlines or video snippets precisely to a user’s emotional triggers. They amplify fleeting outrage, viral dance challenges, or conspiracy theories while discouraging sustained critical reflection. The endless feed format has become a digital update to Postman’s insight: no longer is “all the world a stage”, but all the world is a personalized distraction.
Globally, we spend an average of 4.5 hours a day on mobile devices – roughly 2.5 of those hours immersed in algorithmically curated social media feeds. Postman warned of precisely this shift from depth to what we might call the “algorithmic shallows”: our modes of thinking have been shaped by the medium itself. We might feel perpetually informed – swiping through endless headlines or updates – yet we rarely achieve the analytical grounding needed to comprehend intricate global challenges. The synergy of Huxley’s sedation-by-pleasure and Postman’s entertainment-driven discourse yields a cultural landscape where trivial concerns often overshadow the substantive, and momentary outrage briefly displaces measured analysis.
Nonetheless, Western societies are not monolithic. Some European countries, for instance, have adopted robust data-protection regulations (e.g., GDPR) and proposed AI governance frameworks (e.g., the EU’s AI Act) to mitigate manipulative tech-driven practices. These measures are nascent, however, and face resistance from corporate interests and political inertia. The bigger picture remains one in which consumer appetites for round-the-clock amusement and profit-driven algorithms align almost too seamlessly, reinforcing a steady diet of distraction that threatens to erode the civic fabric from within.
Orwell’s state oversight and Huxley’s trivializing distraction might seem to occupy separate worlds. In China, media outlets such as the People’s Daily communicate an official line, while in Western countries, infinite digital content caters to users’ tastes with an abandon that appears to champion freedom of choice. When AI is centrally harnessed to unify opinions, as in certain authoritarian contexts, it creates stark information gatekeepers; when unleashed in an unregulated capitalist market, it swamps citizens with so much content that discernment and cohesion become elusive. Both approaches stifle reflection and action – one by limiting the range of permissible ideas, the other by fragmenting the public’s attention into endless micro-entertainments.
Convergence between these models is no longer purely speculative. Authoritarian states can incorporate Western-style consumer seduction into their data-driven ecosystems, just as Western governments look to AI for policing, security, and “predictive” applications that echo the logic of a social credit system. In China, technology giants like Tencent and Alibaba work closely with government oversight, aligning commercial interests with national priorities. In the West, state agencies increasingly rely on or purchase data from private tech firms, reinforcing a dynamic where corporate and government powers intersect. Under these conditions, the notion of a clear divide between Orwellian and Huxleyan paradigms fades.
The future could witness both heavy-handed state oversight and the manipulative power of nonstop entertainment flourishing side by side, forming a hybrid system that leaves citizens both scrutinized and distracted.
It is worth asking how societies can resist these converging forces. In China’s tightly controlled environment, meaningful dissent often must be covert: activists resort to encrypted communication, peer-to-peer networks, VPNs, and coded language, seeking to outmaneuver ever-evolving algorithms that track social media activity. These measures can fail once AI refines its capacity to detect subtle irregularities in communication patterns, leaving little room for error.
In the West, resistance theoretically has wider channels – citizens can push for policy reforms, demand greater transparency in AI algorithms, or call for stronger data protections. Digital literacy campaigns, such as those funded by the European Union’s Digital Education Action Plan and civic education initiatives in several US states, hold the promise of reminding people to reflect on their media consumption. Yet, each countermeasure faces entrenched corporate and political interests; under current business models, outrage and maximal user engagement translate directly into profit. Consumers either need robust willpower to forgo digital addictions or must learn to participate critically, questioning algorithmic biases and the echo chambers that reinforce their viewpoints.
The fate of AI’s influence on society cannot be divorced from human decision-making. While there is an element of inevitability in the proliferation of powerful technologies – nations and industries rarely relinquish tools that give them a competitive advantage – it is still possible to shape AI so it enhances rather than impedes collective freedom. Regulatory frameworks can be strengthened nationally and across transnational bodies like the European Commission to balance incentives toward deeper civic engagement. Open-source AI research can provide meaningful transparency. Grassroots movements, investigative journalists, and ethical technologists can expose manipulative practices and organize to curb AI abuses.
However, the question remains whether citizens, policymakers, and thought leaders have the will and foresight to unite across borders for this cause. China’s model of streamlined control may concentrate power in the hands of a few, while the West’s model disperses it among multinational corporations and political actors who each have reason to maintain the status quo. The confluence of these forces – where government-driven surveillance meets business-driven distraction – could erode human agency to a point beyond what Orwell or Huxley individually imagined. Should these dystopias fuse, the resulting system could be one where everyone is perpetually watched, perpetually entertained, and perpetually malleable.
Societies have a narrowing window to determine whether AI becomes a tool for empowerment or a mechanism for subtle and not-so-subtle domination. In China, the push toward advanced surveillance signals the realization of Orwellian fears; in the West, the unrelenting flow of addictive content demonstrates Huxley’s cautionary tale. It would be a mistake to assume that these paths will remain entirely separate, for the expanding exchange of data, technologies, and practices points to a more ominous amalgamation. Whether the future is shaped by robust, equitable frameworks or surrenders to the creeping alliance of surveillance and distraction is a choice that rests, however precariously, on our collective will to resist.
Michael Yaziji is an award-winning author whose work spans leadership and strategy. He is recognized as a world-leading expert on non-market strategy and NGO-corporate relations and has a particular interest in ethical questions facing business leaders. His research includes the world’s largest survey on psychological drivers, psychological safety, and organizational performance, and explores how human biases and self-deception can impact decision-making and how they can be mitigated. At IMD, he is the co-Director of the Stakeholder Management for Boards training program.