

OpenAI: the future is looking great for AI, less so for humanity

Published 23 November 2023 in Governance • 9 min read

The ongoing saga at the tech company highlights the urgent need for proper governance and regulation of the development of AI. Is it time for governments to step in?

“Hopelessly idealistic.” “Out of touch with reality.” “Embarrassingly inept.” This is how the OpenAI board has been described in online forums and social media this past week. Indeed, the debacle around the firing and then subsequent rehiring of CEO Sam Altman, with two interim CEOs in between, betrays an air of desperate amateurishness.  

The latest update to this still-unfolding saga is that three of the four OpenAI board members who voted to remove Altman and demote its president and board chairman Greg Brockman have themselves been removed from the board. The three are OpenAI’s chief scientist Ilya Sutskever, robotics engineer and entrepreneur Tasha McCauley, and Georgetown University Center for Security and Emerging Technology head Helen Toner.  

All three are known to be concerned about the potential negative effects of AI. As recently as last month, Toner called on the US government to “take action to protect citizens from AI’s harms and risks.” The fourth previous OpenAI board member, Quora CEO Adam D’Angelo, will remain on the board.  

Just because the four acted clumsily does not mean they were wrong. 

The main role of a corporate board is to represent the interests of shareholders. In the case of OpenAI, it is slightly different because OpenAI is nominally a non-profit foundation. The board is tasked with a fiduciary duty to uphold the organization’s core mission, set out in 2018, to build artificial general intelligence (AGI) that is safe and benefits all of humanity. AGI, in turn, is defined as highly autonomous systems that outperform humans at most economically valuable work. 

But by 2023, the board was overseeing a vastly different organization. Driven by Altman, OpenAI had expanded massively, launching a series of products and platforms. Many of these, like ChatGPT and Dall-E, had become phenomenally successful. It had also spawned a capped-profit subsidiary (returns limited to 100 times the original investment), 49% of which had been sold to Microsoft for $10bn.


We could rightly criticize the OpenAI board for how they handled the situation, but we shouldn’t fault their reasons for acting. If the majority of members truly felt that Altman was steering the organization too far from its mission, then they were right to intervene. Court judges do not rule based on personal feelings; they rule based on the law. If you don’t like their decisions, then challenge the laws, not those applying them. The same logic applies to OpenAI and its board.

It seems likely that the board had become increasingly uneasy with the direction that OpenAI was taking as it steadily diverged from its stated mission. Last Friday, perhaps due to evidence of a new AGI-like breakthrough, these tensions came to a head.  

It is not yet clear who the winners are in this saga. You could point to Microsoft for gaining influence among OpenAI executives and engineers. Or Google, for seeing its main AI competitor wobble uncontrollably before its eyes. Or Altman and Brockman, for cementing their statuses as AI legends.  

While the winners can be debated, there is no question about the loser – AI safety. 

By bungling the governance so badly, the OpenAI board scored a howler of an own goal. While their objective was apparently to slow AI development down to ensure a safe deployment, the result is likely to be the opposite.  

Altman’s camp of AI accelerators and optimists, supported by Microsoft, has emerged squarely on top. In post-saga Silicon Valley, anyone who has serious concerns about AI safety will now have a hard time being taken seriously. The new OpenAI board members, parachuted in to steady the ship, are corporate and government elites. The adults have taken over, and they are likely to support the expansionist and competitive goals championed by Altman.

Time for government to step in 

OpenAI’s convoluted structure – a non-profit focused on the benefit of humanity overseeing a for-profit intent on leading in an industry Bloomberg estimates will grow to $1.3 trillion over the next decade – was an earnest but futile attempt at corporate self-regulation. In many ways, it was the most audacious, grandiose, and perhaps most foolish version of Silicon Valley’s go-to response to any kind of concern about the societal impact of technology and innovation. 

A generation ago, tech entrepreneurs convinced the Clinton Administration that businesses could self-regulate the internet. An eBay score is surely more effective than liability law. If consumers don’t want to share their personal data, they can just opt out. It isn’t music piracy if you are merely sharing your favorite tunes with your peers – even when you have 100,000 “peers”. Why should it be up to tech companies to protect minors from harmful online content when that’s clearly what parents are for? And let’s not get hung up on the law if Meta’s own Oversight Board can “answer some of the most difficult questions around freedom of expression online: what to take down, what to leave up, and why.”


As business school professors, we’re sympathetic to the idea that private sector governance is often preferable to heavy-handed government intervention. That’s particularly true for fast-evolving new technologies. But the history of the internet shows that self-regulation alone is usually not enough. Business doesn’t have a great track record of policing itself (remember FTX?), and the enormous profits that frequently await tech first-movers mean that other considerations often fall by the wayside regardless of initial good intentions. It may take public officials a few years to elbow their way into new industries, but if the stakes are sufficiently high, sooner or later the government will assert itself.

The problem is, when it comes to AI, we may not have the luxury of waiting a few years to see what the private sector can do on its own before the government establishes ground rules. The stakes are too high. The technology is too powerful. The fortunes to be made are too vast. Self-regulation will not suffice. We need to rein in the potential harm that AI can cause, and that means we need government regulation now. 

Unfortunately, the regulatory route today doesn’t look promising.  

Earlier this fall, an international group including the EU, the US, and China issued a strong statement noting the power and danger of AI, yet no concrete actions were taken, or even recommended. The UK has stated publicly that it will not regulate AI in the short term. The EU’s AI bill has also hit a roadblock, as France, Germany, and Italy are no longer happy with its attempt to slow down AI development. The US rushed out an Executive Order on AI that drew pushback for being soft on big tech.

Looming large over Western governments is the fear that regulation could hamper the West’s ability to keep up with China in what already feels like an AI arms race (notably, back in June, now-ex-OpenAI board member Toner penned a Foreign Affairs piece arguing that China was actually trailing and that “regulating AI will not set America back in the technology race”).


Ironically, many Western countries are all too eager to restrict the transfer of sophisticated hardware across borders (think Nvidia chips or ASML lithography machines) but seem unwilling to afford the same attention to powerful software and algorithms these systems enable.  

We need AI regulation, but in what areas? 

AI regulation is urgently required, and there are at least seven distinct areas that should be included in any legislation. The objective should be to ensure the protection of individual rights, and, ultimately, the continuation of humanity. 

1. Harmlessness

AI should not create harm to people or society.

2. Accountability

Owners and developers of AI need to take responsibility for any adverse impacts.

3. Impartiality

AI should take active steps to minimize biases and treat all people equally.

4. Transparency

When an output is created by AI, it should be clearly labeled as such.

5. Explainability

AI systems should be designed to enable researchers to study how these systems operate, make decisions, and reach conclusions.

6. Security

AI systems should include safeguards so that the underlying data and technology cannot be stolen and are safe from attacks.

7. Privacy

AI should respect the privacy rights of individuals and organizations and avoid the use of personally identifiable information.

The situation with OpenAI appears to be a clear victory for AI accelerators over AI ethicists. We are boldly moving into an exciting future of AI-fueled productivity gains. But we need to ask, in the absence of formal regulation and oversight, who is keeping an eye on the dark side?

Authors

Michael R. Wade

Professor of Innovation and Strategy at IMD

Michael R. Wade holds the Tonomus Professorship in Digital Business Transformation and is Director of IMD’s Global Center for Digital Business Transformation. He directs a number of open programs such as Leading Digital Business Transformation, Digital Transformation for Boards, Leading Digital Execution, and the Digital Transformation Sprint. He has written ten books, hundreds of articles, and hosts a popular management podcast. In 2021, he was inducted into the Swiss Digital Shapers Hall of Fame.

David Bach

Professor of Strategy and Political Economy, and Dean of Innovation and Programs, IMD

An expert in strategy and political economy, David Bach is Professor of Strategy and Political Economy, Rio Tinto Chair in Stakeholder Engagement, and Dean of Innovation and Programs. He will assume the Presidency of IMD on 1 September 2024. He is also the Program Director of IMD’s Shaping the Business Environment for Sustainability program. Through his award-winning teaching and writing, Bach helps managers and senior executives develop a strategic lens for the nexus of business and politics.
