Case Studies in the Practice of Responsible AI for Development
Examples from Agriculture, Education, Health, and Human Rights
This report documents promising use cases of responsible artificial intelligence (AI) in international development. Drawing on interviews with leaders across eight different organizations in health, education, agriculture, and human rights, the study explores eight distinct implementations of AI for development:
- askNivi, a chatbot for health
- HURIDOCS/UPR Info, an AI-optimized human rights database
- Jacaranda Health’s PROMPTS, an AI message triage tool
- EIDU’s message personalization for Kenyan students
- Digital Green’s Farmer Chat
- RobotsMali, using AI to create local language school books
- ACADIC, using machine learning to track COVID-19 in South Africa
- ICRISAT’s Plantix, an image recognition app for agriculture
Our conversations illustrate how these organizations ensure they are engaging with AI responsibly, implementing practices such as:
- Do no harm (go slowly).
- Keep humans in the loop.
- Pursue fairness for all.
- Share with the community of practice.
These approaches are largely bespoke, depending on where an organization sits in the AI value chain. The cases suggest the need for closer integration between the high-level principles of global communities and the practices of individual implementing organizations.
We observed three kinds of impact: increasing scale and efficiency, improving outcomes, and strengthening the community of practice. While it is still too early to document the impacts of generative AI in the sector, these cases remind us that impact assessment requires investment of time and resources, and that organizations with direct connections to users or communities can measure impact more easily. This, we argue, is a key moment for the community to support evidence gathering.
Contributors Dr. Jonathan Donner, Gratiana Fu
DOI 10.64329/GFWZ9293
This work is licensed under CC BY-NC-SA 4.0.