
AI compliance is a strategy problem 

Published 19 August 2024 in Artificial Intelligence • 7 min read

A patchwork of emerging AI rules is raising companies’ compliance risk. Are you ready? Here’s how to think strategically about the new risks now.

This month, the European Union’s sweeping AI Act starts coming into force, bringing various obligations and prohibitions over the coming years. This is only the tip of the iceberg; other governments around the world are also attempting to regulate AI – and all are doing it differently.

While multinational businesses are well attuned to the challenges of navigating cross-border regulations in foreign markets, these new rules make integrating AI into products and services an especially complex regulatory challenge. Drawing on up to 10 different policy areas, from data protection to intellectual property, this complexity turns AI compliance into a strategy problem for managers.

As details are still emerging, navigating the rules requires a judicious approach to ensure that the benefits of AI integration are not offset by the risks of non-compliance. Here is a brief look at some of the top questions businesses should consider when integrating AI.


Just how complex is the AI regulatory landscape?

The Digital Policy Alert’s tracking of digital policy developments in G20 countries, Europe, and Southeast Asia provides a telling picture: in the past year alone, governments advanced over 440 regulatory developments affecting AI. The United States (171), the European Union and its member states (99), and the United Kingdom (56) were the most active jurisdictions. However, businesses are being exposed to an evolving patchwork of regulatory approaches around the globe, as governments are still experimenting with AI rules and are not systematically coordinating with each other to harmonize them.


What are some examples of high-profile AI regulations?

Only a few AI rules are currently on the radar of business executives. The European Union’s AI Act is a salient example. It establishes a risk-based approach, prohibiting AI systems that pose unacceptable risks and imposing a range of compliance obligations for “high-risk AI systems.”

The United States Executive Order on AI, adopted in October 2023, instructs government agencies to draft rules for AI use in the public and private sectors. How these rules evolve depends on the election result: the Republican Party platform pledges to repeal the Executive Order and instead prioritize AI innovation and development rooted in “free speech and human flourishing.”

China has implemented three technology-specific AI regulations that address generative AI, deep fakes, and recommendation algorithms. Among other obligations, providers must ensure that AI output adheres to government values and register their systems through the government’s “algorithm filing” system.

More countries are expected to join the fray, including Argentina, Brazil, Canada, and South Korea, which are all deliberating AI laws of their own.


Which policy areas are most relevant to AI rules?

The Digital Policy Alert database reveals that governments draw from over 10 policy areas to develop AI rules. The most common policy areas tracked are “design and testing standards,” “data governance,” and “consumer protection.” For example, “data governance” rules require companies that train AI models to rigorously check their training datasets and confirm that personal data is not used without a valid legal basis.

Other policy areas include “content moderation,” “competition,” and “intellectual property.” For example, “content moderation” rules aim to ensure that AI systems don’t generate illegal or harmful content and require companies that integrate generative AI to implement content removal mechanisms. The compliance obligations triggered by these rules thus vary based on their underlying motives.


Does AI integration increase businesses’ compliance risk?

Yes, because AI rules diverge simultaneously across national borders and across policy areas. These two dimensions compound the complexity of AI regulations and create a new level of compliance risk for businesses.

If your company integrates AI into products and services across Asia, for example, you must ensure compliance with China’s data regime, India’s consumer protection framework, and Australia’s online safety rules.

Complying with overlapping national rules is challenging enough. Companies are already grappling with a regulatory patchwork regarding rules on data protection and cross-border data transfers. With AI, this complexity is multiplied by the number of relevant policy areas.

Do AI rules even apply to “traditional” businesses?

Yes, because every company that integrates AI is now exposed to a growing body of global digital regulations. Exposure to digital rules is a new challenge even for multinational companies that are adept at complying with more established foreign regulations. Foreign markets have different, and still emerging, rules in a variety of digital policy areas.


Are governments enforcing rules on companies that integrate AI?

Yes. Agencies are rigorously enforcing AI rules across different policy areas, with costly consequences for businesses. Last year, for example, California suspended the autonomous vehicle deployment permit of Cruise, the San Francisco-based self-driving-car subsidiary of General Motors, due to non-compliance with quality requirements.

Currently, the enforcement focus lies on AI-model builders rather than on companies that integrate AI. When ChatGPT was launched, data protection authorities around the world initiated a wave of investigations. Competition authorities are investigating partnerships between AI providers to ensure fair competition. Online safety concerns have triggered investigations into generative AI. Finally, US regulators are currently investigating political bias in Google’s Gemini AI system.

How should executives think about AI compliance strategies?

Corporate leaders must treat AI compliance risk as a new component of their expansion strategy. Navigating the AI regulatory labyrinth will demand many of the resources already devoted to complying with global data rules. Compliance teams must clearly understand the multifaceted challenges posed by incongruent rules spanning multiple regions and policy areas. This requires an expanded focus, from the well-known area of data protection to the range of other policy areas now relevant to AI compliance. Finally, operational teams must adjust to the idiosyncrasies of each market to ensure compliance at the technical level. As AI regulations proliferate across new jurisdictions, so will the compliance challenges.

Authors

Tommaso Giardini

Associate Director of the Digital Policy Alert

Tommaso Giardini is the Associate Director of the Digital Policy Alert, a public, independent, comprehensive and searchable record of policy changes that affect the digital economy. Tommaso’s interests lie in the systematic monitoring and comparative analysis of international digital policy developments from an interdisciplinary perspective. He received a Master’s Degree in Law and Economics from the University of St. Gallen, where he co-founded the student Law Clinic.
