As AI systems become more powerful and pervasive across critical domains, incidents of harm to humans are increasing in frequency and severity. MIT’s AI Incident Tracker reports a 47% increase in the number of reported incidents between 2023 and 2024, while Stanford’s most recent AI Index Report puts the increase at 56.4%. The annual increase rises to 74% for incidents in the most severe category of harm, according to MIT.
This development puts business leaders in a difficult position. Historically, companies have relied on government oversight to provide the guardrails that minimize potential harms from new technologies and business activities. Yet governments currently lack the appetite, expertise, and frameworks to regulate AI implementation at a granular level. Increasingly, the responsibility for deploying responsible AI (RAI) systems falls to businesses themselves.
Strategically implemented RAI offers organizations a way to mitigate and manage the risks associated with the technology. The potential benefits are considerable. According to a joint Stanford-Accenture survey of C-suite executives at 1,000 companies, businesses expect RAI adoption to increase revenues by an average of 18%. Similarly, in a 2024 McKinsey survey of senior executives, more than four in 10 respondents (42%) reported improved business operations and nearly three in 10 (28%) reported improved business outcomes after beginning to operationalize RAI.
Yet while leaders increasingly recognize the importance and potential benefits of RAI, many organizations struggle to put an effective governance structure in place. A key reason for the gap between aspiration and action is that few businesses have the internal resources needed to navigate the complex philosophical territory involved in ensuring that AI is implemented in a truly responsible manner.