Opinion

How can a new UK government deliver AI for development?

Published on 26 June 2024

Caroline Khene

Digital and Technology Cluster Lead

Kevin Hernandez

Research Officer

As we approach the UK General Election, the IDS Digital and Technology Cluster has been contemplating what the digital development priorities regarding artificial intelligence (AI) should be for an incoming administration. There is enormous potential for AI to benefit development globally, but also many risks that must first be fully understood and addressed.


The current Digital Development Strategy commits to deliver a flagship AI for Development programme and to continue collaboration with the OECD-hosted Global Partnership on Artificial Intelligence. Both proclaim the potential for these technologies to address global challenges. But against the backdrop of the landmark UN General Assembly resolution on artificial intelligence, which warns that “…improper or malicious design, development, deployment and use of artificial intelligence systems, such as without adequate safeguards or in a manner inconsistent with international law” could hinder progress on the SDGs, governments have a vital role in ensuring that decision-making about the introduction of these technologies involves the meaningful participation of all affected communities.

Mitigating risks of new technologies

As IDS digital development researchers and practitioners, we bring to the table decades of research and insights about how we should be thinking about the interventions needed to mitigate the risks of new technologies amplifying existing axes of inequality, introducing new threats to democracy, or creating new risks for vulnerable communities in humanitarian contexts.

We are well aware of the hype cycle of technologies in development – from blockchain to open data – and AI is no different in its overstated promises and lack of enduring applications beyond scattered pilots. The political economy of the technology sector gives Silicon Valley firms and other private actors unprecedented power in driving digital government and shaping agendas and discourse. But we are concerned that few people in the development community have the skills and insights to separate AI snake oil from genuine applications.

Similarly, we fear they don’t have the skills or insights to understand the profound threats from reliance on ChatGPT for research and education, or to push back against government spending on AI driven surveillance technologies that threaten civil liberties and human rights.

AI is a water intensive technology

How many AI champions in the development community are weighing up the profound environmental threats it poses? With two billion people lacking access to safe drinking water, how do we reconcile enthusiasm for AI with the fact that it is an incredibly thirsty technology? Research shows that by 2027 global AI demand could require 4.2–6.6 billion cubic metres of water (around half the UK’s current withdrawal), and training GPT-3 in Microsoft’s US data centres alone can directly evaporate 700,000 litres of clean freshwater.

As governments and donors, including the UK, shape their AI agendas, a priority must be a holistic, transparent and accountable approach that considers the ethical and environmental consequences of their strategies.

As research partners with donors around the globe, including the Swiss Agency for Development and Cooperation (SDC), the German development agency GIZ and the FCDO, we are in a unique position to drive conversations about AI adoption and hope to help shape AI for development agendas which truly serve human rights and development goals.

Democratic values

Too often we have seen development agencies come into a country in a top-down fashion – rather than coming in and working with local actors who can reflect local priorities and contexts. Donors need to approach AI initiatives as long-term and gradually evolving, strengthening local capacities and initiatives in AI design, development, and governance.

We have observed this approach from some donors, who support national and regional AI ecosystems (rather than one-off projects), partner with local universities and companies, and promote locally conceived and created AI solutions.

The goal should be decentralised models that uphold digital self-determination, human agency and democratic values integral to equitable AI deployments in underrepresented regions.

These very factors have been prioritised and recommended in a recently published statement by the T20 (think tank) Brazil Taskforce 5 on inclusive digital transformation, as part of Brazil’s G20 presidency, as well as in an upcoming joint declaration with the Civil20 (C20) aimed at promoting data justice and sustainable digital transformation.

Data challenge

As efforts are made to improve AI ecosystems in LMICs, data scarcity remains a significant challenge, possibly for decades to come. Data is the main machinery of AI, and its scarcity constrains inclusive and contextually sensitive AI practices in many LMICs.

Data scarcity emanates from continued digital exclusion and structural inequalities – how can countries jump onto the next digital innovation without addressing these foundational issues, or adequately incorporating them into the design of digitalisation programmes? Building without foundations invites unethical practice, bias, and the continued dominance of the private sector in shaping the AI digital development and governance agenda.

Priorities for AI in digital development

In summary, the priorities for development agencies when it comes to AI in digital development should be:

  • Promoting inclusive and self-determined AI in low- and middle-income countries.
  • Enabling trustworthy AI in humanitarian action by promoting agile policy environments that allow controlled experimentation and iterative refinement of AI systems.
  • Strengthening in-house capacity in digital/AI governance, building skills and capacities for local partners as well as development agency staff.
  • Facilitating dialogue with private actors in the procurement, deployment and development of standards and rules for AI in digital development.
  • Practising and promoting transparency, plain English, and accountability to foster public understanding of how AI systems function and make decisions that impact people’s lives.
  • Coordinating globally for international cooperation, multilateralism and harmonisation in digital/AI governance – including cyber security standards, digital rights and their inclusion in constitutions, information integrity partnerships for trustworthy digitalisation in democratic life, and public data commons.

Read all the opinion blogs in this series UK election: International development priorities for a new government.

Disclaimer
The views expressed in this opinion piece are those of the author/s and do not necessarily reflect the views or policies of IDS.
