In the relentless pursuit of ever-more sophisticated generative artificial intelligence (GenAI), technology companies are now putting fresh productivity tools into the hands of organizations worldwide. However, this surge in AI capability puts the onus on businesses to navigate still-evolving regulations to ensure their innovations are not only groundbreaking but also ethically sound and legally compliant.
My recent insights, shared during a Leading in Turbulent Times (LiTT) webinar, shed light on the strategic importance of not just complying with regulations, but also leveraging them to drive innovation and competitive advantage within organizations.
The imperative is clear: according to research by Goldman Sachs, AI investment is expected to surge in the coming years, with organizational leaders the primary adopters of AI tools (80% are already regular users), far more so than middle managers or frontline employees. From deciphering ancient texts to diagnosing diseases and challenging journalists, the business applications of AI are diverse and promising.
However, alongside these advancements come very real concerns, particularly over the cybersecurity threats posed by hackers, who are among the most enthusiastic early adopters of AI. Emerging technologies are expected to deliver more benefits to cyber attackers than to defenders, a fact underscored by a deepfake scammer who walked away with $25m in a first-of-its-kind AI heist in Hong Kong this year.
Mitigating risks as GenAI proliferates
The World Economic Forum’s Global Risks Report 2024 ranked AI-generated misinformation and disinformation as the second most critical danger, behind only extreme weather events. This reflects growing apprehension among participants in my LiTT webinar, who, when polled, cited behavioral manipulation as the biggest risk posed by AI.
There are also reliability issues and hallucinations to contend with: New York lawyers were sanctioned last year for citing fake cases generated by ChatGPT in a legal brief, and Air Canada’s chatbot promised a customer a discount that did not exist, prompting a tribunal to order the airline to pay compensation. These incidents underscore the urgency of addressing AI risks.
Moreover, the sustainability of AI is under scrutiny because of its substantial carbon footprint, driven primarily by energy-intensive processes such as training the large language models (LLMs) that underpin chatbots. Elon Musk, the billionaire entrepreneur, recently used media interviews to highlight the enormous amounts of electricity needed to satisfy the demands of increasingly powerful technology.
Consequently, consumer trust in AI is shifting, with surveys indicating a preference for careful management over blind excitement. Notably, when polled on what would make them more comfortable with AI, consumers put laws and regulation at the top of the list.
Understanding the regulatory landscape
For organizations, addressing these risks requires a multifaceted approach. Legal frameworks are a primary motivator for companies, as highlighted by McKinsey’s The State of AI in 2023 report, which found that inaccuracy, cybersecurity, and intellectual property infringement are the most frequently cited risks of AI adoption.
Regulators worldwide are increasingly focused on stemming these AI-linked problems, but approaches vary from region to region. In the US, there is no country-wide regulation of AI; instead, President Joe Biden issued an executive order intended to make AI safe, secure, and trustworthy. It aims to enforce accountability by setting out best practices, such as standards for detecting AI-generated content, and by pushing organizations to create AI safety and security boards.
China’s approach is different: the world’s second-largest economy wants to regulate specific applications of AI, such as social media, deepfakes, and recommendation engines. What matters to Beijing is staying ahead of the curve and sustaining momentum in AI development.
In Europe, meanwhile, lawmakers have approved the EU AI Act, which classifies AI systems into four risk categories, from minimal to unacceptable. The highest-risk applications, such as social scoring or mass surveillance, are prohibited. For lower-risk applications, such as chatbots, the EU wants to ensure transparency and accountability when they are deployed.