
By Tommaso Giardini • Published August 19, 2024 in Artificial Intelligence • 7 min read
This month, the European Union’s sweeping AI Act starts coming into force, bringing various obligations and prohibitions over the coming years. This is only the tip of the iceberg; other governments around the world are also attempting to regulate AI – and all are doing it differently.
While multinational businesses are well attuned to the challenges of navigating cross-border regulations in foreign markets, these new rules make integrating AI into products and services an especially complex regulatory undertaking. Spanning over 10 policy areas, from data protection to intellectual property, the rules turn AI compliance into a strategy problem for managers.
As details are still emerging, navigating the rules requires a judicious approach to ensure that the benefits of AI integration are not offset by the risks of non-compliance. Here is a brief look at some of the top questions businesses should consider when integrating AI.
The Digital Policy Alert’s tracking of digital policy developments in G20 countries, Europe, and Southeast Asia provides a telling picture: in the past year alone, governments advanced over 440 regulatory developments affecting AI. The United States (171), the European Union and its member states (99), and the United Kingdom (56) were the most active jurisdictions. However, businesses face an evolving patchwork of regulatory approaches around the globe, because governments are still experimenting with AI rules and are not systematically coordinating to harmonize them.
Only a few AI rules are currently on the radar of business executives. The European Union’s AI Act is a salient example. It establishes a risk-based approach, prohibiting AI systems that pose unacceptable risks and imposing a range of compliance obligations for “high-risk AI systems.”
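To make that risk-based logic concrete, here is a minimal triage sketch in Python. The use cases and tier assignments are simplified, commonly cited illustrations of the Act’s four-tier approach, not legal classifications; any real assessment must follow the Act’s text and annexes.

```python
# Simplified sketch of AI Act-style risk triage. The use cases and tiers below are
# commonly cited illustrations of the Act's four-tier approach; actual classification
# is a legal assessment against the Act's text and annexes, not a dictionary lookup.
RISK_TIERS = {
    "social scoring of citizens": "unacceptable risk (prohibited)",
    "CV screening for recruitment": "high risk (compliance obligations apply)",
    "customer service chatbot": "limited risk (transparency obligations)",
    "spam filtering": "minimal risk",
}

def triage(use_case: str) -> str:
    """Return the illustrative risk tier for a known example use case."""
    return RISK_TIERS.get(use_case, "unclassified: requires legal review")

if __name__ == "__main__":
    print(triage("CV screening for recruitment"))  # high risk (compliance obligations apply)
```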
The United States Executive Order on AI, adopted in October 2023, instructs government agencies to draft rules for AI use in the public and private sectors. How these rules will evolve depends on the election result: the Republican Party platform pledges to repeal the Executive Order and instead prioritize AI innovation and development rooted in “free speech and human flourishing.”
China has implemented three technology-specific AI regulations that address generative AI, deep fakes, and recommendation algorithms. Among other obligations, providers must ensure that AI output adheres to government values and register their systems through the government’s “algorithm filing” system.
More countries are expected to join the fray, including Argentina, Brazil, Canada, and South Korea, which are all deliberating AI laws of their own.
The Digital Policy Alert database reveals that governments draw from over 10 policy areas to develop AI rules. The most common policy areas tracked are “design and testing standards,” “data governance,” and “consumer protection.” For example, “data governance” rules require companies that train AI models to rigorously check their training datasets and confirm that personal data is not used without a valid legal basis.
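As a rough illustration of what such a training-data check can involve, the sketch below flags records that appear to contain two common categories of personal data. The patterns and function names are hypothetical and purely illustrative; a genuine data governance review requires far broader PII detection and, above all, a documented legal basis for processing.

```python
import re

# Purely illustrative: naive patterns for two common categories of personal data.
# A genuine review requires much broader PII detection and a documented legal basis.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def flag_personal_data(records: list[str]) -> dict[str, list[int]]:
    """Return indices of training records that appear to contain personal data."""
    hits: dict[str, list[int]] = {name: [] for name in PII_PATTERNS}
    for idx, text in enumerate(records):
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits[name].append(idx)
    return hits

if __name__ == "__main__":
    sample = ["Contact me at jane.doe@example.com", "The weather was mild in March."]
    print(flag_personal_data(sample))  # {'email': [0], 'phone': []}
```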
Other policy areas include “content moderation,” “competition,” and “intellectual property.” For example, “content moderation” rules aim to ensure that AI systems don’t generate illegal or harmful content and require companies that integrate generative AI to implement content removal mechanisms. The compliance obligations triggered by these rules thus vary based on their underlying motives.
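A content removal mechanism can be pictured as a screening step between the model and the user. The sketch below is a minimal, assumed design: PROHIBITED_TERMS, check_prohibited(), and the wrapper are placeholders, since production systems typically rely on dedicated moderation models or vendor moderation services, plus human review and takedown reporting.

```python
# Minimal sketch of an output-screening step for a generative AI integration.
# PROHIBITED_TERMS and check_prohibited() are placeholders: real deployments use
# dedicated moderation models or vendor moderation services plus human review.
PROHIBITED_TERMS = {"example_illegal_term", "example_harmful_term"}

def check_prohibited(text: str) -> bool:
    """Very naive screen: flag output containing any term from the placeholder list."""
    lowered = text.lower()
    return any(term in lowered for term in PROHIBITED_TERMS)

def respond(generate, prompt: str) -> str:
    """Wrap a text generator so flagged output is withheld rather than shown to the user."""
    output = generate(prompt)
    if check_prohibited(output):
        # A compliant integration would also log the incident for review and takedown reporting.
        return "This response was withheld pending review."
    return output

if __name__ == "__main__":
    fake_model = lambda prompt: f"Echo: {prompt}"  # stand-in for a real model call
    print(respond(fake_model, "hello"))            # Echo: hello
```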
Are the rules complex? Yes, because AI rules vary simultaneously across national borders and policy areas. These two dimensions compound the complexity of AI regulation and create a new level of compliance risk for businesses.
If your company integrates AI in products and services across Asia, for example, you must ensure compliance with China’s data regime, India’s consumer protection framework, and Australia’s online safety rules.
Complying with overlapping national rules is challenging enough. Companies are already grappling with a regulatory patchwork regarding rules on data protection and cross-border data transfers. With AI, this complexity is multiplied by the number of relevant policy areas.
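One way to picture how the complexity multiplies is a jurisdiction-by-policy-area matrix. The sketch below condenses only the examples mentioned in this article into an illustrative data structure; a real obligations register would track the specific statute, obligation, deadline, and owner for every cell.

```python
# Illustrative compliance matrix built only from examples mentioned in this article.
# A real obligations register would track the exact statute, obligation, deadline,
# and responsible team for every jurisdiction/policy-area cell.
OBLIGATIONS = {
    "European Union": {"risk classification": "AI Act", "data governance": "GDPR"},
    "China": {
        "data governance": "data regime",
        "content moderation": "generative AI regulation",
        "registration": "algorithm filing system",
    },
    "India": {"consumer protection": "consumer protection framework"},
    "Australia": {"online safety": "online safety rules"},
}

def obligations_for(markets: list[str]) -> dict[str, dict[str, str]]:
    """Collect the policy areas in scope for a given market footprint."""
    return {market: OBLIGATIONS.get(market, {}) for market in markets}

if __name__ == "__main__":
    # An Asia-focused footprint already spans three policy areas across three regimes.
    print(obligations_for(["China", "India", "Australia"]))
```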
Is this a new challenge? Yes, because every company that integrates AI is now exposed to a growing body of global digital regulations. Exposure to digital rules is a new hurdle even for multinational companies that are adept at complying with more established foreign regulations: foreign markets have different, and still emerging, rules across a variety of digital policy areas.
Are AI rules actually enforced? Yes, enforcement agencies are rigorously enforcing AI rules across different policy areas, with costly consequences for businesses. For example, this year California suspended the autonomous vehicle deployment permit of Cruise, the San Francisco-based self-driving-car subsidiary of General Motors, due to non-compliance with quality requirements.
Currently, enforcement focuses on AI-model builders rather than on companies that integrate AI. When ChatGPT was launched, data protection authorities around the world initiated a wave of investigations. Competition authorities are examining partnerships between AI providers to ensure fair competition. Online safety concerns have triggered investigations into generative AI. And US regulators are investigating political bias in Google’s Gemini AI system.
Corporate leaders must treat AI compliance risk as a new component of their expansion strategy. Navigating the AI regulatory labyrinth will demand many of the resources already devoted to complying with global data rules. Compliance teams must clearly understand the multifaceted challenges posed by incongruent rules that span multiple regions and policy areas. This requires an expanded focus, from the well-known area of data protection to the range of other policy areas now relevant to AI compliance. Finally, operational teams must adjust to the idiosyncrasies of each market to ensure compliance at the technical level. As AI regulations proliferate across new jurisdictions, so will the compliance challenges.
Tommaso Giardini is the Associate Director of the Digital Policy Alert, a public, independent, comprehensive and searchable record of policy changes that affect the digital economy. Tommaso’s interests lie in the systematic monitoring and comparative analysis of international digital policy developments from an interdisciplinary perspective. He received a Master’s Degree in Law and Economics from the University of St. Gallen, where he co-founded the student Law Clinic.