The UK Government set out in January 2025 a ‘blueprint to turbocharge’ AI use. But the principles that guide international and state actors’ use of artificial intelligence (AI) have been – and continue to be – at the centre of global conversations. International organisations and governments around the world have been working on both binding and non-binding principles for multilateral AI governance.
The Knowledge for Development and Diplomacy programme (K4DD) – led by IDS – has released a report studying the evidence around ‘Multilateral Technology Governance’. Here is what we found.
The landscape of AI governance
The mechanisms governing AI fall broadly into two categories: binding and non-binding. Non-binding governance plays an important role in the multilateral regulation of technologies, in part because it is quicker to adopt than binding regulation. Because non-binding mechanisms carry no built-in cost for non-compliance, they find it easier to attract participants, and they can adapt more readily to the increasingly fast-paced changes in technology.
The landscape is very rich, and centres primarily on a variety of international organisations, including the Organisation for Economic Co-operation and Development (OECD), the United Nations Educational, Scientific and Cultural Organization (UNESCO) and the International Telecommunication Union (ITU), among others. Of these, UNESCO is unquestionably the most influential actor, both by virtue of the widespread adoption of its recommendations – with 193 signatories – and its extensive cooperation with other governance organisations.
The OECD’s governance efforts are also influential, although its 38 Member States are all high-income countries, so the organisation does not provide a forum for negotiation with low- and middle-income countries. Nevertheless, the OECD’s definition of AI has been widely adopted, endorsed by all of its Member States as well as eight non-member states.
The OECD AI Principles put forward five values-based recommendations for signatories, setting standards for AI that the OECD notes are ‘practical and flexible enough to stand the test of time’. These are:
- Inclusive growth, sustainable development and well-being,
- Human rights and democratic values, including fairness and privacy,
- Transparency and explainability,
- Robustness, security and safety, and
- Accountability.
The G7 has also recently focused a significant portion of its policy guidance on AI, including the Hiroshima Process on Generative Artificial Intelligence, launched in May 2023 and followed in May 2024 by the Hiroshima AI Process Friends Group at the OECD.
Inter-Agency Working Group on Artificial Intelligence
Together with UNESCO, the ITU hosts the Inter-Agency Working Group on Artificial Intelligence (IAWG-AI), which brings together expertise within the United Nations system on AI ethics and the ‘strategic approach and roadmap for supporting capacity development’. The group is open to all interested UN members and observers of UN High-level Committees, and requires interested entities to designate a senior-level focal point from their organisation to contribute to the Group’s work.
Notably, the group is largely responsible for the production of the UN System White Paper on AI Governance. At least within the UN system, the Working Group may serve as an influential actor in shaping UN programming and outreach on AI in the future.
UNESCO’s recommendations
UNESCO’s recommendations on the Ethics of Artificial Intelligence were introduced in 2021; they are the most widely adopted globally and apply to all 194 of UNESCO’s Member States. The recommendations take a broad view of AI, defining it as ‘systems with the ability to process data in a way which resembles intelligent behaviour’, and set out four key values for the effective governance of AI systems:
- Human rights and human dignity;
- Living in peaceful, just, and interconnected societies;
- Ensuring diversity and inclusiveness; and
- Environment and ecosystem flourishing.
The recommendations are wide-ranging, covering issues from surveillance and oversight to data protection and the environment, and highlight the need for governments and the private sector to build AI systems that protect and promote human rights and fundamental freedoms. Although UNESCO’s recommendations are non-binding, their widespread adoption makes them among the most inclusive – and, in theory, the most effective – of the AI governance mechanisms surveyed here.
As part of its work on AI, UNESCO also undertakes State ‘readiness assessments’. These assessments – the most recent of which focused on Mexico – are carried out in cooperation with the State in question, and UNESCO’s reports are the product of collaboration with national bodies responsible for technology and AI governance. In addition to its recommendations and its partnership with the ITU in hosting the IAWG-AI, UNESCO hosts the Global AI Ethics and Governance Observatory, which aims to provide policy guidance on AI through research, best practices, and toolkits. Importantly, this work covers questions of ethics, governance, innovation and standards, and even neurotechnology.
The role of standard-setting organisations
Standards-setting plays an important role in the international governance of any technology, including AI. Like non-binding international agreements, standards are a voluntary mechanism. The two most important standards-setting bodies for AI are the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Both are non-governmental organisations whose membership comprises national standards-setting institutions, and their non-governmental status means they rely on direct financial contributions from their members rather than public financial support.
The ISO and the IEC jointly host the subcommittee on Artificial Intelligence – SC 42 – which plays a central role in shaping AI standardisation. Founded in 2017, SC 42 is a consensus-based body in which 60 countries are currently represented. It operates on a one-country, one-vote basis and has five working groups: foundational standards; big data; trustworthiness; use cases and applications; and computational approaches and characteristics of artificial intelligence.
The importance of non-binding mechanisms
Bodies leveraging non-binding influence – such as the OECD, UNESCO, and standards-setting organisations – will be well positioned to address the challenges and opportunities that arise as AI technologies evolve and proliferate. Non-binding mechanisms are also more likely to bring actors beyond the State to the table, allowing civil society and business alike to play a greater role in the governance process.
Binding mechanisms such as the EU’s AI Act will do the same, although their binding nature means they take longer to negotiate and are slower to adapt. That adaptability seems to make non-binding mechanisms the favoured international governance approach for AI, at least for the moment.