Why AI needs UN oversight
This brings us to the question of who, if not Big Tech, should set the rules. Fortunately, we now have many independent experts and academics able to advise on how best to regulate the use and development of AI and related technologies (or “data-based systems”). The private sector should be consulted in policy-making processes, but its voice should not be louder than those of consumers and civil society organizations.
I believe the need for international guardrails calls for the establishment of an agency with global reach under the aegis of the UN. The UN faces criticism on many fronts: its inability to prevent or de-escalate armed conflicts, the composition of its Security Council (where the veto power of the five permanent members has led to decision-making stalemates), and its seeming ineffectiveness in protecting human rights in some conflict situations. Even so, the success of the International Atomic Energy Agency (IAEA) after the Second World War can serve as a model. By helping to prevent nuclear war and promoting the safe, secure, and peaceful use of nuclear technology, the IAEA proves that we are capable not only of devising rules that prohibit the reckless, unchecked pursuit of technological advances but also of abiding by those rules when the future of the planet is at stake.
Like the IAEA, a UN International Data-Based Systems Agency (IDA) would be a global AI watchdog charged with promoting the safe, secure, peaceful, and sustainable use of AI, ensuring that the technology respects human rights, and encouraging cooperation in the field.
We are now seeing growing momentum for the establishment of such an agency, for several reasons. First, many influential public figures support the idea, including UN Secretary-General António Guterres, UN High Commissioner for Human Rights Volker Türk, and Pope Francis, who, as head of the Catholic Church, represents some 1.2 billion people. Sam Altman, co-founder and CEO of OpenAI, also backed the idea at the World Economic Forum in Davos this year, as have many in industry and civil society.
A second source of momentum is government itself. Politicians of all stripes realize that a deepfake can end their careers overnight. AI-generated images can destabilize a country almost as quickly through fake news (a dynamic we saw this summer in the UK, when a few clicks of a mouse sparked rioting by right-wing extremists). This destabilizing potential is not only a problem for democracies: autocrats and dictators, who as a rule are even more invested in social harmony, are equally threatened.