
Tough on the outside, soft on the inside: The likely evolution of AI governance under Trump 

Published 17 January 2025 in Artificial Intelligence • 6 min read

IMD’s Simon J. Evenett, Johannes Fritz, and Tommaso Giardini argue that the Trump administration will take a more sophisticated approach to AI regulation than some expect.

As the US prepares for a change in administration, one of the most keenly debated policy areas will be artificial intelligence (AI).

AI is reshaping all aspects of human interaction, from consumer services to national security, raising profound questions about privacy, fairness, and safety. For businesses to sustain innovation and continue to attract investment, they require a reliable regulatory environment. Broader society is equally keen to see robust oversight of AI development, conscious that, as well as offering unprecedented opportunities to maximize human potential, the technology poses risks to job security, equality, and even personal and national safety. All eyes, then, are on how the forthcoming Trump government will move AI regulation forward.

Well before the country’s new leaders set foot in the White House, there has been upheaval on this front. On 30 October 2023, President Joe Biden signed a landmark executive order (EO) intended to ensure the US sets the standard both in harnessing AI and in putting safeguards in place against its dangers.

Domestically, the EO seeks to establish rigorous standards for AI safety, requiring developers of high-risk AI systems to share safety test results with the government. It introduces measures to protect privacy, eliminate algorithmic discrimination, and support workers affected by AI-driven changes. Moreover, it promotes “responsible” use of AI in healthcare, education, and government operations, while fostering innovation through initiatives such as the National AI Research Resource (NAIRR) and the development of an expanded AI workforce.

Internationally, the EO directs federal agencies to collaborate with their counterparts in other countries to establish AI safety benchmarks, promote ethical AI deployment, and address cross-border challenges such as cybersecurity and sustainable development. The Biden administration has emphasized the importance of building strong cooperative frameworks with allies, aligning itself with global initiatives such as the G7 Hiroshima Process on Generative Artificial Intelligence and the UK AI Safety Summit. Its goal is to create a shared vision of safe, equitable, and globally beneficial AI governance.

Out with the old?

President-elect Donald Trump set out an alternative position on domestic AI regulation during his election campaign, pledging to repeal what his team characterizes as “Biden’s dangerous Executive Order, which hinders AI innovation” and proposing instead to support “AI development rooted in free speech and human flourishing.” This points to a significant shift from the Biden administration’s strict approach to AI oversight, with its safety standards and protections against discrimination and privacy violations, to a much looser, more open one.

However, a close examination of the incoming administration’s policy documents – particularly the influential Project 2025 report’s chapter on the US Commerce Department – reveals a more nuanced picture. While advocating for reductions in the regulatory burden on AI development, the report maintains a strong focus on national security. It calls for the implementation of AI in commercial processes and the establishment of specialized teams for AI policy, and it emphasizes the potential value of AI for trade enforcement and for strengthening export controls to “safeguard AI innovations and prevent misuse by foreign adversaries.”

This consistent emphasis on security suggests a more reassuring continuity is in prospect when the keys to the White House change hands. Notably, the Trump administration appears poised to maintain – and even intensify – Biden-era policies limiting China’s access to US-developed advanced AI technologies and know-how, suggesting that the new administration understands AI to be a crucial element in the new global power dynamic.

Oversight could fall to the states

If, as seems likely, federal oversight of AI safety is wound down in the years ahead, state governments stand ready to fill the regulatory vacuum – a pattern familiar from previous waves of technology regulation, such as data breach notification and consumer privacy laws. Preparation for this scenario is already underway. In 2024 alone, 45 states considered nearly 700 pieces of AI-related legislation, with 113 bills enacted into law.

Colorado has already passed comprehensive legislation addressing high-risk AI use, while both California and Tennessee have enacted targeted reforms addressing specific issues, such as data provenance and digital replicas. However, these varied approaches, driven by differing sets of social and political priorities, could lead to uneven coverage and a lack of common standards, as each state seeks to balance protecting its citizens with sustaining technological and business innovation.

California has been particularly active, enacting AB 2013, Generative Artificial Intelligence: Training Data Transparency, which requires developers of generative AI (GenAI) systems to document the data used to train them (though this has raised concerns about protecting trade secrets and confidential information). California has also expanded its right of publicity law to apply to AI-generated digital replicas of deceased individuals. However, Governor Gavin Newsom’s veto of SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have regulated large-scale AI models, highlights the delicacy of the ongoing debate over appropriate regulatory boundaries.

As the patchwork of state regulations emerges, businesses operating across state lines will be obliged to navigate diverse and potentially conflicting requirements, exacerbating uncertainty about their regulatory position, with any misstep potentially costly. Ultimately, regulatory fragmentation may drive the formation of an unlikely alliance between business interests seeking regulatory uniformity and Republican congressional leaders who see supporting federal legislation over a maze of state requirements as the lesser of two evils.

Precedent supports the likelihood of this outcome. Previous waves of state-level technology regulation eventually led to pressure for federal standardization, and with AI this process appears to be accelerating: state-level AI legislation has developed notably faster than earlier waves of privacy and online safety regulation. As businesses face increasingly divergent state requirements, pressure on the federal government to impose a measure of harmonization could become impossible to ignore.

A twin-track approach to regulation

These contrasting dynamics suggest we may see a fascinating divergence between domestic and foreign policy. We expect the Trump administration to show strong continuity in treating AI as a critical technology requiring stringent protection from geopolitical rivals. Domestically, by contrast, we may now be entering a period of reduced federal oversight. Over the long term, the uneven state-level regulation that results could generate pressure for new federal standards. Ultimately, then, by implementing overly lax AI governance at the federal level, the Trump administration could invite the very patchwork of state rules that forces heavier federal intervention, stymieing the Republican Party’s long-term goal of structurally light technology governance.

This dual track reflects the multifaceted challenge of AI governance. It also suggests that, while the incoming administration’s approach to AI regulation may differ markedly from that of its predecessor, the final shape of US AI policy may be determined less by executive action than by the interplay of state initiatives, business interests, and geopolitical imperatives.

For businesses and policymakers alike, the key lesson may be that AI regulation, like the technology itself, will continue to evolve in ways that transcend party political discourse. The challenge ahead lies in finding approaches that can balance innovation with responsibility, national security with economic dynamism, and federal consistency with a sensitivity to local strengths and vulnerabilities at state level.

Authors

Simon J. Evenett

Professor of Geopolitics and Strategy at IMD

Simon J. Evenett is Professor of Geopolitics and Strategy at IMD and a leading expert on trade, investment, and global business dynamics. With nearly 30 years of experience, he has advised executives and guided students in navigating significant shifts in the global economy. In 2023, he was appointed Co-Chair of the World Economic Forum’s Global Future Council on Trade and Investment.

Evenett founded the St.Gallen Endowment for Prosperity Through Trade, which oversees key initiatives like the Global Trade Alert and Digital Policy Alert. His research focuses on trade policy, geopolitical rivalry, and industrial policy, with over 250 publications. He has held academic positions at the University of St. Gallen, Oxford University, and Johns Hopkins University.

Johannes Fritz

CEO of the St.Gallen Endowment

Johannes Fritz is the CEO of the St.Gallen Endowment, a Swiss non-profit that champions international openness, collaboration, and exchange. He leads the Digital Policy Alert, a transparency initiative focusing on prominent digital trade issues such as data transfers and AI regulation. Alongside his work for the St.Gallen Endowment, he is a Lecturer in Economic History and Economic Thought at the University of Fribourg, Switzerland. Johannes holds a Ph.D. in Economics, and his work focuses on using technology to bring transparency to public policy choices.

Tommaso Giardini

Associate Director of the Digital Policy Alert

Tommaso Giardini is the Associate Director of the Digital Policy Alert, a public, independent, comprehensive and searchable record of policy changes that affect the digital economy. Tommaso’s interests lie in the systematic monitoring and comparative analysis of international digital policy developments from an interdisciplinary perspective. He received a Master’s Degree in Law and Economics from the University of St. Gallen, where he co-founded the student Law Clinic.
