New tech, different rules
GenAI and other rapidly evolving technologies are creating complex ethical, legal, and social challenges around data privacy, cybersecurity, inequality, and governance. As they pursue the advantages these tools offer, governments and business leaders must keep in mind the need to foster inclusiveness, trust, and resilience.
Currently, AI regulation lacks globally applicable standards. The EU is taking a top-down approach with the EU AI Act, which has been designed to distinguish between “limited-risk” and “high-risk” AI systems and prescribe an appropriate level of transparency for each. The framework is intended to work in alignment with the General Data Protection Regulation (GDPR), which asserts that individuals, rather than governments or corporations, have the right to decide how their data is used.
In contrast, the US has taken more tentative steps towards regulation. The Algorithmic Accountability Act of 2022 would require companies to assess the privacy and transparency impacts of their AI systems, but the federal government has, to date, failed to enact it. Another prominent recent attempt at new legislation, California’s proposed AI safety bill, was vetoed by Governor Gavin Newsom over concerns that overly strict regulation could hinder innovation and drive technology firms out of the state.
While proponents of the US approach often justify their support with the soundbite that “Europe regulates, the US innovates,” this overlooks the fact that effective regulation provides clarity and stability. The evidence suggests regulation and competitiveness can coexist: five EU nations placed in the top ten of the 2023 IMD World Digital Competitiveness Ranking. This progress is partly a result of the EU Data Governance Act (DGA), which establishes protocols for the use of data in the public sector, including trade secrets, personal information, and intellectual property.
Without a clear regulatory framework, fear of contravening legal requirements may deter companies from harnessing data analytics and securing a first-mover advantage from the insights they provide.
CFOs should resist the seemingly safer but complacent “wait-and-see” stance on AI regulation and best practices; the reputational and financial risks of inaction are too high. As a first step, companies should obtain a clear, centralized view of all AI tools in use across their organizations. Establishing governance structures with defined accountability for AI management is essential. Leaders should also introduce employee training programs to ensure the responsible development and use of AI tools across the enterprise.