What business leaders must do to avoid extreme AI risks 

Published 5 June 2023 in Technology • 6 min read

With a proliferation of warnings about the extreme risks to humanity posed by AI, business leaders have a responsibility to work together with tech developers to mitigate the dangers rather than exacerbating them, say Michael Watkins and Ralf Weissbeck. 

In the rapidly evolving world of Artificial Intelligence (AI), there is increasing concern about the risks of uncontrolled advancement of the technology. In an article published in late May 2023 titled “Model Evaluation for Extreme Risks”, leading researchers from Google DeepMind, OpenAI, and non-profits focused on technology policy identified nine “extreme AI risks” and proposed strategies for containing them. A separate open letter from AI leaders warned that the technology could pose a “risk of extinction” to humanity on the scale of nuclear war and pandemics. 

Against this backdrop, business leaders must understand those risks and ensure their organizations don’t contribute to them. To do this, they must support efforts by governments and academic institutions to mitigate these risks, carefully screen and oversee providers of AI models and related services, and implement internal strategies and controls to ensure AI development in their businesses doesn’t contribute to the problems. 

Understanding extreme AI risks  

The AI researchers defined “extreme risks” as “those that would be extremely large in scale … in terms of impact (e.g., damage in the tens of thousands of lives lost, hundreds of billions of dollars of economic or environmental damage) or the level of adverse disruption to the social and political order… for example, the outbreak of inter-state war, a significant erosion in the quality of public discourse, or the widespread disempowerment of publics, governments, and other human-led organizations.” 

Here are the nine extreme AI risks the researchers identified, each of which carries implications for societies and businesses: 

  • Cyber-offense 
  • Deception 
  • Persuasion and manipulation 
  • Political strategy 
  • Weapons acquisition 
  • Long-horizon planning 
  • AI development (building or improving other AI systems) 
  • Situational awareness 
  • Self-proliferation 

Mitigating the risks 

The authors advocate that companies developing AI models adopt the following policies, develop tools to implement them, and encourage governments to craft and enforce rules that support them: 

  • Responsible training: Responsible decisions are made about whether and how to train a new model that shows early signs of risk.  
  • Responsible deployment: Responsible decisions are made about whether, when, and how to deploy potentially risky models.  
  • Transparency: Useful and actionable information is reported to stakeholders to help them mitigate potential risks.  
  • Appropriate security: Strong information security controls and systems are applied to models that might pose extreme risks.  
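
The thrust of the researchers’ proposal is that results from dangerous-capability evaluations should feed directly into these training and deployment decisions. As a rough illustration only, here is a minimal Python sketch of what such a deployment gate might look like; the scores, thresholds, and decision rule are invented for illustration and are not taken from the paper.

```python
# Illustrative "responsible deployment" gate. The evaluation scores and
# thresholds below are hypothetical placeholders, not published figures.

# Hypothetical evaluation results: one score in [0, 1] per capability area.
EVAL_RESULTS = {
    "cyber_offense": 0.12,
    "deception": 0.35,
    "self_proliferation": 0.05,
}

# Risk thresholds above which a release must be escalated for review.
THRESHOLDS = {
    "cyber_offense": 0.30,
    "deception": 0.30,
    "self_proliferation": 0.10,
}

def deployment_decision(results: dict[str, float]) -> str:
    """Return a coarse deployment decision based on evaluation scores."""
    flagged = [cap for cap, score in results.items()
               if score >= THRESHOLDS.get(cap, 1.0)]
    if not flagged:
        return "deploy"
    # Any flagged capability triggers human review before release.
    return f"escalate for review: {', '.join(sorted(flagged))}"

if __name__ == "__main__":
    print(deployment_decision(EVAL_RESULTS))  # escalate for review: deception
```

In practice, a gate like this would sit inside a broader review process with human sign-off rather than deciding on its own.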

However, the stakes are too high not to involve businesses, which will be the focal point for implementing many AI models and will invest in developing their own models and applications. The risks associated with advanced AI technologies are profoundly worrying, potentially affecting individual businesses, whole societies, and even humanity as a species. Companies have a substantial role to play in mitigating these threats. 

What business leaders must do  

Business leaders should focus on three main areas to support the prudent control of AI: supporting governments and academia, auditing vendors of AI services, and creating internal procedures, tools, and training. 

Support governments and academia  

Business leaders must understand the importance of collaborating with governments and academic institutions to identify and tackle potential AI risks. Such partnerships can provide technical expertise, regulatory oversight, and academic insights to understand the full implications of these risks and devise suitable mitigation strategies. Business leaders should actively participate in public dialogues about AI risk and governance. By sharing their practical experiences and industry perspectives, they can help shape policies that are both effective and beneficial for the business sector.

Furthermore, businesses should fund research at academic institutions to further the understanding of AI risks. This could involve sponsoring research projects, offering internships, or providing access to data and resources. By fostering a close relationship with academia, businesses can stay at the forefront of AI risk knowledge, ensuring they are prepared to address these risks as they arise. 


Select and audit providers of AI models and services  

Given the complexity of AI technologies, many businesses rely on third-party vendors for their AI services. However, these vendors can also be a source of AI risks, particularly if they lack appropriate controls or fail to follow the controls they do have. Therefore, a rigorous vendor selection and audit process is essential. Business leaders should establish criteria for selecting AI vendors, ensuring they have robust security measures, ethical guidelines, and a proven track record of regulatory compliance. This might involve comprehensive due diligence, requiring vendors to demonstrate their commitment to AI safety and ethical use. 

Furthermore, ongoing audits of AI vendors should be conducted to ensure compliance with established controls. These audits should assess the vendor’s AI development practices, data handling procedures, and adherence to ethical and regulatory standards. By doing so, businesses can identify potential risks early and take appropriate action to mitigate them. 
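
One way to keep such audits repeatable is to encode the review as a weighted checklist that is versioned and re-scored each cycle. The Python sketch below is a minimal illustration; the criteria, weights, and scores are hypothetical and do not represent a standard audit framework.

```python
# Illustrative vendor-audit checklist; criteria and weights are hypothetical.
from dataclasses import dataclass

@dataclass
class AuditItem:
    criterion: str
    weight: float      # relative importance of this criterion
    passed: bool       # outcome in the current audit cycle

CHECKLIST = [
    AuditItem("Documented model-evaluation process for extreme risks", 3.0, True),
    AuditItem("Independent security certification (e.g., ISO 27001)", 2.0, True),
    AuditItem("Published AI ethics policy and incident-response plan", 2.0, False),
    AuditItem("Data-handling and retention procedures reviewed", 1.0, True),
]

def audit_score(items: list[AuditItem]) -> float:
    """Weighted share of passed criteria, in [0, 1]."""
    total = sum(i.weight for i in items)
    return sum(i.weight for i in items if i.passed) / total

print(f"Audit score: {audit_score(CHECKLIST):.2f}")  # 0.75 here; a
# below-threshold score would trigger remediation or escalation
```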

Institute internal procedures, tools, and training  

While external partnerships and vendor management are crucial, it is equally important that businesses have robust internal procedures in place to mitigate AI risks. This includes developing guidelines for AI use, establishing controls for AI development, and implementing tools to monitor and manage AI systems.  

A good starting point is to develop an AI ethics policy that outlines the business’s commitment to responsible AI use. This policy should provide guidelines for AI development and use, addressing issues like transparency, fairness, privacy, and security. Moreover, businesses should implement controls throughout the AI development process, including risk assessments, testing and validation procedures, and oversight mechanisms. These controls can help prevent the creation of risky AI systems and ensure that any risks are identified and addressed promptly. 

Furthermore, businesses should equip employees with the tools and training to work safely with AI. This might involve technical training on AI development, education about AI ethics, and tools to monitor and manage AI systems. By investing in their employees, businesses can create a culture of AI safety and responsibility, ensuring everyone plays their part in mitigating AI risks.  
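
As a small, concrete example of such tooling, the sketch below wraps calls to an AI system in an audit log so that usage can be reviewed later. The call_model function is a hypothetical stand-in for whatever model API a business actually uses.

```python
# Illustrative audit-logging wrapper; call_model is a hypothetical stand-in
# for a real AI API client.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for a real model call."""
    return f"response to: {prompt}"

def audited_call(user: str, prompt: str) -> str:
    """Call the model and record who asked what, and when."""
    response = call_model(prompt)
    log.info(json.dumps({
        "ts": time.time(),
        "user": user,
        "prompt": prompt,
        "response_chars": len(response),  # log size, not content, for privacy
    }))
    return response

print(audited_call("analyst-42", "Summarize Q2 vendor risks"))
```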

Remember, these measures are not just about protecting businesses but also contributing to the broader global effort to manage the safe development and use of AI. 

Authors


Michael D. Watkins

Professor of Leadership and Organizational Change at IMD

Michael D. Watkins is Professor of Leadership and Organizational Change at IMD and the author of The First 90 Days, Master Your Next Move, Predictable Surprises, and 12 other books on leadership and negotiation. His book The Six Disciplines of Strategic Thinking explores how executives can learn to think strategically and lead their organizations into the future. A Thinkers50-ranked management influencer and recognized expert in his field, his work features in HBR Guides and HBR’s 10 Must Reads on leadership, teams, strategic initiatives, and new managers. Over the past 20 years, he has used his First 90 Days® methodology to help leaders make successful transitions, both in his teaching at IMD, INSEAD, and Harvard Business School, where he gained his PhD in decision sciences, and through his private consultancy practice, Genesis Advisers. At IMD, he directs the First 90 Days open program for leaders taking on challenging new roles and co-directs the Transition to Business Leadership (TBL) executive program for future enterprise leaders.

Ralf Weissbeck

Former Group Chief Information Officer and member of the Executive Committee at The Adecco Group

Ralf Weissbeck is the former CIO of The Adecco Group. He co-led the recovery from the 2022 Akka Technologies ransomware attack and led the recovery from the 2017 Maersk ransomware attack, which shut down 49,000 devices and 7,000 servers and destroyed 1,000 applications.
