Following on from our previous blog, here we lay out critical pathways for the development community to lead in shaping responsible, context-sensitive AI systems for inclusive development. Moving beyond reactive responses requires addressing the epistemological, technological, and governance asymmetries that define the current AI landscape.

As the AI for Good Global Summit comes to an end, we offer three key recommendations for the international development community and its partners based on our research in AI strategy and digital development (ICT4D):
Prioritise AI in local problem-solving
Cultivate contextual adaptation
How do the risks and opportunities of AI manifest across sectors, regions, and cultural contexts? The tendency towards universal AI solutions contradicts decades of development learning about the importance of contextual adaptation. We must resist the allure of scalable solutions that ignore local knowledge systems and power structures.

LMICs face not only technical limitations but also regulatory capacity gaps, and current global AI governance frameworks remain insufficiently inclusive of LMIC perspectives. Building pathways for LMIC actors to shape governance frameworks, standards, and oversight mechanisms is therefore critical. Agentic AI threatens to widen this divide as long as LMIC actors are excluded from shaping how agents act in complex sociotechnical environments. Without local co-design, agentic AI systems risk replicating colonial dynamics in which decisions are made about communities without their input, now automated at scale.
Our research on civic tech implementation reveals the dangers of isomorphic mimicry, where global systems shape local processes and create capability traps that prevent genuine innovation. African digital democracy unfolds differently from Western models, often emphasising ‘watchdog’ initiatives driven by civil society rather than government-sponsored participation platforms. This context demands AI design approaches that recognise and build upon these distinct democratic traditions rather than imposing external frameworks. Our work illustrates how AI can be leveraged for social good in LMIC contexts, for example by using non-traditional data and deep learning to support real-time urban governance and decision-making in cities across the Global South.
Address critical gaps
Our analysis identified several areas where current donor activity is insufficient or entirely absent. These include tackling AI-enabled technology-facilitated gender-based violence (despite UNESCO’s recognition of the issue), governing open-source AI systems, promoting a public AI strategy, and developing AI systems that genuinely serve local rather than external interests. These gaps represent opportunities for the development community to lead rather than follow.
Strengthen local capacity and innovation
Amplify local voices from analysis to action
Amplifying the contributions of local digital innovation communities as authoritative voices in inclusive AI design requires moving beyond extractive research practices towards genuine co-creation approaches that recognise local expertise as fundamental rather than supplementary. Current AI applications in development—from predictive analytics for conflict early warning to generative AI chatbots for humanitarian response—often replicate rather than challenge existing power structures.
Our engagement with development actors revealed that most AI for development initiatives remain concentrated in a handful of African countries with established digital ecosystems (for example, Kenya, South Africa, Ghana, Nigeria, and Uganda), whilst other regions remain largely excluded from both development and implementation processes. Moving beyond this pattern requires fundamental changes in how we approach AI development, deployment, and governance: changes that position affected communities as co-designers rather than end-users of AI systems that shape their lives. LMIC actors need support to actively develop and govern AI systems, ensuring the sovereignty and sustainability of local AI initiatives. Supporting civic tech ecosystems and transdisciplinary local research networks is critical to this goal.
Advance research to guide contextual AI development
Our work in the Digital Cluster at IDS approaches these challenges through two complementary streams: examining how AI can be used responsibly within development research practice and exploring how AI systems themselves can be developed to address global development challenges. This dual focus recognises that we cannot credibly advocate for responsible AI development whilst remaining uncritical about our own AI use.
The theoretical foundations for this work are emerging, influenced by decades of development theory and practice. Yet the rapid pace of AI development requires us to build this theoretical framework whilst simultaneously engaging in practical experimentation. This is uncomfortable but necessary work.
Build inclusive foundations
Foster critical AI literacy
Building critical AI literacy within government agencies, civil society, and local innovation ecosystems is essential for participatory and accountable AI design. The goal is not to create AI experts but to develop the analytical capacity to determine when AI augments rather than replaces human capabilities, and when trust in automated systems is warranted versus dangerous. This literacy empowers communities to engage with AI systems, question their impacts, and co-shape governance frameworks that reflect local values. Without it, even well-intentioned AI initiatives risk reinforcing existing power imbalances and dependency.
Strengthen data security and multilingual inclusion
Beyond compliance with global data protection frameworks, development practice should evolve to address the unique vulnerabilities that AI systems introduce. Generative AI, in particular, presents novel challenges for data privacy and security that existing development protocols are ill-equipped to handle. The question is not merely technical but fundamentally about power relations and control over information – especially in low-resource languages, where users are more vulnerable to data poisoning. Our work underscores the challenges of ensuring data quality and protection within AI systems deployed in low-resource governance contexts.
We also found that addressing these gaps requires a more flexible, outcome-oriented approach to data regulation – aligning protections with specific, high-risk AI use cases rather than generic tool bans – to ensure safe, equitable AI deployment in LMIC contexts. Efforts should also prioritise open, multilingual, multimodal AI resources to enhance accessibility and local relevance, paired with community-led dataset creation practices that embed local values and strengthen trust.
Conclusion: From reaction to leadership
The problems identified through digital rights advocacy have illuminated the focus areas that demand attention. To move beyond slogans, the development community can: champion locally relevant AI that meets community priorities, invest in local leadership and innovation ecosystems, and build inclusive, trusted data and knowledge infrastructures.
This is not about balancing interests; it is about ensuring that AI serves development, not the other way around. As the AI for Good Global Summit calls us to reimagine possibilities, let us reclaim our agency to shape AI systems that are rooted in equity, development priorities, and local contexts, ensuring that technology becomes a partner in human flourishing across the globe.