
Exploiting the ethically positive potential of AI 

Published 12 December 2024 in Artificial Intelligence • 7 min read

Ethics professor Peter G. Kirchschläger sets out a roadmap for the regulation of AI technology that will satisfy the concerns of governments, businesses, and consumers alike.

The increasing use of generative AI is understandably causing great alarm amongst politicians, policymakers, businesses, and consumers. The unrestrained use of digital systems poses complex and far-reaching threats. Not only is AI greatly increasing global inequality, but tech giants are also enormous consumers of energy, seriously undermining global climate goals. We are seeing unchecked violations of the right to privacy, with Big Tech capturing vast amounts of data to be sold to the highest bidder – usually without our knowledge or consent.

As with any new technology, society needs guardrails to protect its users from those who own and operate it. So, we need rules to regulate the use of AI – but how do we compose those rules, and what should they look like?

Big Tech wants to regulate itself and argues that it is uniquely well-placed to do so. This is tantamount to the poacher not so much turning gamekeeper as performing both jobs simultaneously. Letting it write the global rules for AI and the digital realm would be disastrous, given that it has consistently created dangerous tools that exploit users without regard for their interests and undermine democracy in the name of maximizing profits.

Historical precedents for successful technology regulation

I am optimistic, however, that we can come up with well-functioning global rules to constrain AI systems. One good example is how the world agreed to stop the use of ozone-depleting substances under the Montreal Protocol, which entered into force in 1989 and continues to be amended in light of new scientific, technical, and economic developments. This precedent shows that humans can distinguish between what is technically possible – what we can do – and what we should (or should not) do. Humanity has shown that it is able to make normative assessments and follow them through in its actions.

A necessary precursor to regulating AI is correcting a widespread misconception about the nature of regulation itself; namely, that it hinders innovation. Again, history shows that this is not the case. Take air travel, for example. In its early days, people were highly reluctant to travel by airplane because they did not trust its safety. But precise regulations, strictly enforced, led them to trust the organizations offering the service. This fueled great economic growth in the US and other countries, and constant technological innovation that continues to this day.

Another misconception about regulation is that it is anti-business – but in some fields where AI is applied, we no longer have free markets. In search engine technology, for example, a handful of Big Tech firms dominate, giving us monopolies of a kind we haven't seen in other industries for decades. From a free-market perspective, this serves neither business nor consumers, so it is in all our interests to end these monopolies.

This raises the question of how we should design the rules. I believe we make better-informed decisions – ones more likely to be implemented successfully – if we are aware of the ethical dimension that underpins them and make that dimension transparent in the decision-making process. The people who must live with a decision would then understand the rationale for it. They may not like it, of course, but at least they would know why the rule is in place.

Why AI needs UN oversight

This takes us to the question of who, if not Big Tech, should set the rules. Fortunately, we now have many independent experts and academics able to advise on how best to regulate the use and development of AI and related technologies (or “data-based systems”). The private sector needs to be consulted in policy-making processes, but its voice should not be louder than that of consumers and civil society organizations.

I believe the need for international guardrails calls for the establishment of an agency with global reach under the aegis of the UN. The UN faces criticism on many fronts: its inability to prevent or de-escalate armed conflicts, the composition of its Security Council, where the veto power of the five permanent members has led to decision-making stalemates, and its seeming ineffectiveness in protecting human rights in some conflict situations. Yet the success of the International Atomic Energy Agency (IAEA), established after the Second World War, can serve as a model here. By helping to prevent nuclear conflict and promoting the safe, secure, and peaceful use of nuclear technology, the IAEA proves that we are capable not only of devising rules that prohibit the reckless, unchecked pursuit of technological advances but also of abiding by those rules when the future of the planet is at stake.

Like the IAEA, a UN International Data-Based Systems Agency (IDA) would be a global AI watchdog charged with promoting the safe, secure, peaceful, and sustainable use of AI, ensuring that the technology respects human rights, and encouraging international cooperation in the field.

We are now seeing growing momentum for the establishment of such an agency. There are several reasons for this. First, many influential public figures support the idea. These include UN Secretary-General António Guterres, UN High Commissioner for Human Rights Volker Türk, and Pope Francis, who, as head of the Catholic Church, represents some 1.2 billion people. Sam Altman, co-founder and CEO of OpenAI, also gave his backing to the idea at the World Economic Forum in Davos this year, as have many in industry and civil society.

Another reason for positive momentum originates in the sphere of government. Politicians of all stripes realize that a deepfake can end their careers overnight. AI-generated images can destabilize a country almost as quickly through fake news (a dynamic we saw this summer in the UK, when a few clicks of a mouse sparked rioting by right-wing extremists). This destabilizing potential is not only a problem for democracies: autocrats and dictators, who are, as a rule, even more invested in social harmony, are equally threatened.

Benefits of data-based systems

On a positive note, data-based systems, including blockchain technology, have huge potential to serve the interests of government and citizens alike. Arguably the most significant benefit is that, for the first time in human history, we can deploy technology to enforce human rights in the ‘real’ world – by identifying those who commit human rights violations and by sanctioning such breaches.

Data-based systems can also be used to identify hate speech, racism, and incitement to violence online and on social media. More generally, there is huge potential to promote human rights in the digital space by making information transparent, accessible in real time, and difficult to manipulate.

That said, we are not yet fully exploiting the ethically positive potential of AI – which is all the harder to understand because that potential is so clear. I am optimistic that we can get there and that we can use data-based systems to protect human dignity and ensure a sustainable future for all. But we need to get going. We must start regulating AI systems urgently, because every second, minute, hour, and day that we fail to do so, people and the planet continue to suffer.

Author

Peter G. Kirchschläger

Peter G. Kirchschläger is Professor of Ethics and Director of the Institute of Social Ethics (ISE) at the University of Lucerne, visiting professor at the Chair of Neuroinformatics and Neural Systems at ETH Zurich and at the ETH AI Center, and a research fellow at the University of the Free State in Bloemfontein, South Africa. He is an expert consultant in ethics for international organizations, President a.i. of the Swiss Federal Ethics Committee on Non-Human Biotechnology, and Director of the new master's degree program in ethics at the University of Lucerne.
