As the AI for Good Global Summit 2025 convenes this week, the development community finds itself at a critical juncture. Digital rights and AI governance have dominated discussions across the international development sector – among agencies, donors, NGOs, and practitioners – yet we risk remaining trapped in reactive cycles that leave the trajectory of AI in the hands of technology companies and powerful governments. To ensure AI truly serves human development priorities, the development community must reclaim its agency in shaping AI strategy and practice.

The limits of reactive positioning
The current landscape of AI for development presents a stark power imbalance. The private sector is setting the agenda for AI integration in development, leaving other actors in Low- and Middle-Income Countries (LMICs) with little autonomy to determine how these systems are used or designed. Governments and development agencies, including multilateral organisations, find themselves with persistently limited entry points to influence both design and implementation processes. This exclusion perpetuates existing inequalities whilst creating new forms of digital dependency.
Our recent engagement with development actors analysing AI applications across peace, human rights, governance, gender equality, and social inclusion themes reveals the extent of this challenge. Whilst AI demonstrates significant strengths – predictive capabilities for rapid decision-making, automated damage assessment using satellite imagery, and enhanced monitoring systems – these capabilities are overshadowed by fundamental weaknesses. The lack of algorithmic transparency creates ‘black box’ effects, especially in neural network-based systems, where decision outcomes remain incomprehensible. This is particularly problematic in sensitive humanitarian contexts.
Digital inequalities in LMICs – and even in high-income countries – stemming from inadequate digital infrastructure exacerbate existing socioeconomic inequalities. Affected populations lack agency and power over how their data is collected. For example, in our DISPACT project investigating the digitalisation of social protection in South Africa, we found that the use of AI-based facial recognition in grant applications presented several barriers. Citizens who fail to authenticate via facial recognition are excluded, with technology becoming a ‘disabler’ rather than an ‘enabler’: the local civil society organisation Black Sash receives 35 calls a week from beneficiaries unable to access verification links because they lack smartphones or data. Similar failures had severe implications in India’s Aadhaar programme, where unsuccessful authentications have been linked to hunger deaths.
AI models encode assumptions about what counts as knowledge, how it should be structured, and what outcomes matter. These assumptions are rarely aligned with Indigenous, local, and community-based knowledge systems across LMICs, creating a form of epistemic injustice within AI systems themselves.
Our mapping of donor approaches to AI systems reveals a concerning pattern: even the most progressive initiatives remain largely reactive, focused on mitigating AI’s harmful impacts rather than fundamentally reshaping how AI systems are conceived and developed. While the fight for digital rights remains essential, our preoccupation with reactive responses has inadvertently limited our capacity to shape the trajectory of AI development itself. For example, a donor may fund projects to fix bias in an existing AI recruitment tool without questioning whether such tools should be used in fragile labour markets in the first place, or working with local communities to design systems that reflect their employment realities.
We have become so focused on addressing the problems generated by AI systems that we have ceded ground on determining what AI systems should be designed to achieve in the first place.
A proactive research agenda: Reclaiming the development discourse
The development community possesses deep expertise in understanding complex social systems, participatory processes, and contextual adaptation – precisely the knowledge required to shape responsible AI development. Rather than simply responding to technological developments, we must actively reshape the elements, tasks, and artefacts that define digital development practice.
Our analysis of current AI applications in development reveals both the potential and the urgent need for this proactive approach. Existing applications span from conflict early warning systems and refugee resettlement matching to gender-based violence detection and governance analytics. However, these tools frequently operate within frameworks designed by technical experts with limited understanding of development contexts, humanitarian principles, or local power dynamics. The result is a landscape where AI tools may deliver technical functionality whilst failing to address – or even exacerbating – the underlying inequalities they purport to solve.
Critical questions for practice and research
Our research agenda centres on several interconnected areas of inquiry, informed by gaps identified in current practice. In our participatory design and user research, we ask questions such as: how should user research methodologies evolve to capture the lived experiences of potential AI users in resource-constrained environments? Who should lead these processes, and how can we ensure meaningful participation rather than tokenistic consultation?
Our work on deliberative and critical design of AI in civic tech demonstrates that traditional design approaches, developed primarily for consumer technologies, require fundamental reconceptualisation for AI systems – particularly agentic AI – that will operate in contexts where digital literacy and technical capacity vary dramatically. Drawing from our experience implementing civic tech in Southern Africa, we have identified that effective AI design requires deliberative frameworks that guarantee access to deliberative settings, enable comprehension across different knowledge systems, accommodate multivocality from diverse stakeholders, and maintain responsiveness to community feedback.
The challenge lies not merely in technical design but in addressing what is termed the ‘technical opacity of AI’ whilst navigating complex power dynamics between technocrats, government actors, civil society, and citizens. Our research shows that for AI to work well in development, we must move beyond quick, surface-level decisions: taking the time to talk with different groups, listening to their views, and building trust so that everyone understands how AI will affect them.
Learning from current practice: A foundation for strategy
Our comprehensive analysis of bilateral donors’ AI initiatives reveals instructive patterns about both the opportunities and the limitations of current approaches. Leading donors like Canada’s IDRC and Germany’s GIZ have developed sophisticated programmes that emphasise locally created solutions and responsible AI development. IDRC’s Artificial Intelligence for Development in Africa (AI4D) programme and its Feminist AI Research Network (FAIR), run in collaboration with the UK’s FCDO, demonstrate how patient, ecosystem-building approaches can foster genuine local capacity. Similarly, GIZ’s FAIR Forward programme prioritises removing barriers to AI access through open datasets and local training whilst supporting policy frameworks for ethical AI governance.
Yet even these exemplary initiatives reveal the constraints of working within existing development paradigms. The emphasis on building local capacity to use AI systems designed elsewhere, whilst valuable, falls short of enabling local communities to determine why AI systems should exist in the first place. The spread of AI tools for tasks like refugee resettlement and disaster response may look impressive, but it raises serious questions about who holds control, how consent is handled, and who really benefits.
The AI for good paradox
Events like the AI for Good Global Summit exemplify both the opportunities and challenges we face. Such platforms showcase impressive applications of AI whilst simultaneously raising concerns about the pace and responsibility of adoption. The enthusiasm for AI solutions often outpaces critical examination of their appropriateness, sustainability, or alignment with development principles.
This creates a paradox: whilst we need spaces to explore AI’s potential, we also need mechanisms to ensure that excitement about technological possibilities does not override careful consideration of social implications. The development community’s role is not to dampen innovation but to ensure it serves human flourishing rather than technological determinism.
AI has the potential to transform development, but only if it is shaped by those who understand the complexities of human development and local contexts. In our next opinion blog, we will explore how the development community can move beyond analysis to action, outlining concrete pathways to reclaim agency and lead in shaping AI for inclusive, equitable development.