United Nations Tackles AI

Ben O'Connell

The UN General Assembly has recently adopted a resolution to promote ‘safe, secure, and trustworthy’ AI systems. 

The Assembly recognised AI systems’ potential to accelerate and enable progress towards reaching the 17 Sustainable Development Goals.

It’s the first time the Assembly has adopted a resolution on regulating the emerging field. The US National Security Advisor reportedly said earlier this month that the adoption would represent a ‘historic step forward’ for the safe use of AI.

The resolution touches on many common AI concerns, from the need to develop and support effective governance approaches to the risk of AI systems becoming too difficult for humans to manage.

The Assembly further recognised the ‘varying levels’ of technological development between and within countries and the unique challenges developing nations face in keeping up with the rapid pace of innovation, stressing equity and support for closing the digital divide.

Linda Thomas-Greenfield, US Ambassador and Permanent Representative to the UN, highlighted the international community’s opportunity and responsibility “to govern this technology rather than let it govern us”.

“Let us commit to closing the digital gap within and between nations and using this technology to advance shared priorities around sustainable development.”

The Role of the United Nations in AI Governance


AI governance is a timely international issue, but what can the United Nations actually do about it?

In short, the UN is not a world government and cannot force countries to cooperate. However, it acts as a crucial platform for dialogue, negotiation, and joint action on global challenges.

Despite its limitations, the UN has demonstrably achieved progress in some areas, though some argue it could be more effective with reform.

On AI regulation, the new landmark resolution reflects changing times and the shifting priorities of world leaders.

AI presents a fascinating paradox: the same systems that could advance healthcare, climate action, and many other fields also carry risks such as bias, job displacement, and even autonomous weapons.

Striking a balance between nurturing innovation and mitigating risks is the core challenge of AI regulation.

The United Nations and similar world organisations seem to be struggling to keep up with advancing technology; this resolution arrives only in 2024, after all. Effective AI regulation requires global collaboration, which is challenging given countries’ differing priorities and approaches.

Educating the public about AI’s capabilities and limitations, explaining opaque systems, and supporting research on interpretable AI, robust safety measures, and algorithmic bias will be paramount.

But by taking a proactive and collaborative approach, world leaders can harness the power of AI for good while mitigating the potential risks.