Share
Facebook Facebook icon Twitter Twitter icon LinkedIn LinkedIn icon Email

Leading in Turbulent Times

AI regulation: Turning compliance into competitive advantage 

April 12, 2024 in Leading in Turbulent Times

How navigating regulatory changes around AI can not only safeguard your organization against emerging threats but also foster ethical and sustainable innovation ...

In the relentless pursuit of ever-more sophisticated generative artificial intelligence (GenAI), technology companies are now putting fresh productivity tools into the hands of organizations worldwide. However, this surge in AI capability puts the onus on businesses to navigate still-evolving regulations to ensure their innovations are not only groundbreaking but also ethically sound and legally compliant.

My recent insights, shared during a Leading in Turbulent Times (LiTT) webinar, shed light on the strategic importance of not just compliance, but also leveraging regulations to drive innovation and competitive advantage in organizations.

The imperative is clear: according to research by Goldman Sachs , AI investment is expected to surge in the coming years, with leaders of organizations being the primary adopters of AI tools (80% are already regular users), much more so than middle managers or frontline employees. From deciphering ancient texts to diagnosing diseases and challenging journalists, the business applications of AI are diverse and promising.

However, alongside these advancements come very real concerns, particularly over the cybersecurity threats posed by hackers who are among the most enthusiastic early adopters of AI. Emerging technologies are expected to deliver more benefits to cyber attackers, rather than defenders – a fact underscored by a deep fake scammer walking off with $25m in a first-of-its-kind AI heist this year in Hong Kong.

Mitigating risks as GenAI proliferates

The World Economic Forum’s Global Risks Report 2024 highlighted AI-generated misinformation and disinformation as the second most critical danger, behind only extreme weather events. This reflects growing apprehension among participants in my LiTT webinar, who cited behavioral manipulation as the biggest risk they saw posed by AI, in a poll.

There are also reliability issues and hallucinations to contend with; a throng of New York lawyers was sanctioned for using fake ChatGPT cases in a legal brief last year. Additionally, Air Canada’s chatbot promised a consumer a discount that was not available; a court ordered the airline to pay compensation. This underscores the urgency of addressing AI risks.

Moreover, the sustainability of AI is under scrutiny because of its substantial carbon footprint, primarily driven by energy-intensive processes like training large language models (LLMs) that underpin chatbots. Elon Musk, the billionaire entrepreneur, recently used media interviews to highlight the enormous supply of electricity needed to satisfy the demands of increasingly powerful technology. 

Consequently, consumer trust in AI is shifting, with surveys indicating a preference for careful management over blind excitement. Interestingly, polls showing what would make people more comfortable with AI put laws and regulations as the top priorities for consumers.

Understanding the regulatory landscape

For organizations, addressing these risks requires a multifaceted approach. Legal frameworks serve as primary motivators for companies, as highlighted by McKinsey’s State of AI in 2023 report, which showed how inaccuracy, cybersecurity, and intellectual property infringement are the most-cited risks of AI adoption.

Regulators worldwide are increasingly focusing on stemming those problems linked to AI, but approaches taken vary from region to region. In the US, there is no country-wide regulation of AI; instead, an executive order was issued by President Joe Biden intended to make AI safe, secure, and trustworthy. It aims to enforce accountability by setting out best practices, such as standards for detecting AI-generated content, and pushing organizations to create AI safety and security boards. 

China’s approach is different: the world’s second largest economy wants to regulate specific applications of AI, such as social media, deep fakes, or recommendation engines. What matters to Beijing is staying ahead of the innovation curve and maintaining innovation in AI development. 

In Europe, meanwhile, lawmakers have approved the EU AI Act, which classifies AI systems into four risk categories from minimal to unacceptable. The highest-risk AI applications, such as social scoring or mass surveillance, are prohibited. For low-risk applications, such as chatbots, the EU wants to ensure transparency and accounting when these are deployed. 

data
“Data governance framework is the most important capability today for organizations looking to deploy GenAI.”

However, debates persist over the Act’s potential impact on Europeaninnovation. Advocates say it has nowhere near enough bite, while critics say the Act will harm the continent’s global competitiveness. They compare it to GDPR, the EU’s data protection laws, which led to a moderate drop in sales and profits for companies targeting European markets, albeit while safeguarding privacy.

One area increasingly targeted by regulators is intellectual property. AI can be a copyright nightmare, exemplified by the New York Times suing ChatGPT-maker OpenAI and its investor Microsoft, arguing that millions of Times articles were used to train the chatbots that now compete with the legacy newspaper (charges they deny).

AI governance best practices

The challenge for multinational organizations is how to deal with the divergent approaches to AI regulation across the global markets in which they operate.

Fortunately, there is a broad consensus for many AI governance practices. These include conducting pre-deployment risk assessments, as well as dangerous capabilities evaluations, and third-party model audits. Best practice also includes implementing safety restrictions and “red teaming”, which is proactively looking for vulnerabilities before threat actors do, so they can be fixed.

Beyond that, it will also be critical to improve data governance frameworks, which is the most important capability today for organizations looking to deploy GenAI. This might involve training an AI model without ingesting copyrighted content. There are now NGOs pushing for this; some LLMs trained with fair data already exist.

Responsible AI governance also stresses the importance of measuring and strengthening internal governance structures (such as through upskilling and having an ethics advisory board), as well as having a high level of human involvement, and solid AI operations management. Lastly, organizations should ensure there’s good stakeholder interaction and communication, stressing transparency.

Compliance by design is fully achievable, and organizations should start by checking their exposure. Do you have an AI system, as defined by the EU AI Act? If so, create an organization-wide awareness plan, including training materials and testing processes. Finally, it’s important to understand this cannot be achieved in organizational silos; you need a multidisciplinary team that goes beyond the compliance and data-science functions.

Organizations that do so are likely to find that AI regulation can be a friend, rather than a foe.

More to explore

X

Log in or register to enjoy the full experience

Explore first person business intelligence from top minds curated for a global executive audience