The United Kingdom has announced a series of projects to support the responsible development and use of artificial intelligence across Africa, following the G20 “AI for Africa Initiative” held in Cape Town.
The initiatives, developed with African and international partners, aim to strengthen democratic resilience, improve development outcomes, and ensure AI technologies are deployed safely and fairly.
The UK’s Foreign, Commonwealth & Development Office (FCDO) is working with Canada’s International Development Research Centre (IDRC) and philanthropic funder Community Jameel to launch the AI Evidence Alliance for Social Impact (AEASI).
The £2.75 million project, with £1 million contributed by the FCDO, will be implemented by the Abdul Latif Jameel Poverty Action Lab (J-PAL) and IDinsight. It will fund experimental evaluations to measure the real-world impacts of AI tools in Africa and Asia, build local research leadership, and provide evidence-based guidance for policymakers. The initiative is part of a wider $7.5 million partnership with Google.org to expand AI impact evaluations.
Separately, the University of Cape Town will host a new African Hub for AI Safety, Security and Peace, becoming the 12th multidisciplinary global AI lab, and the second in South Africa, under the UK-Canada AI for Development (AI4D) programme. The hub will focus on mitigating risks associated with AI, training researchers and policymakers, and ensuring African perspectives are represented in global governance frameworks.
“AI has the power to fuel growth, build trust and transform lives, and every country should share in that,” UK AI Minister Kanishka Narayan said. “By working with countries like South Africa, we’re making AI safer, fairer and more inclusive, and helping communities shape the future on their terms.”
Maggie Gorman Velez, vice president of IDRC, said the alliance will provide “contextually grounded research and evidence on what works and what does not” to support inclusive, responsible AI development.
Community Jameel Director George Richards added that rigorous evaluation is needed to identify effective, safe, and fair AI solutions that can be scaled to benefit communities worldwide.
Google.org AI for Social Good head Alex Diaz stressed the urgency of studying what works, what fails, and why, to maximize AI’s potential for public good.
The African Hub for AI Safety, Security and Peace will produce open-access research, create risk detection tools in multiple African languages, and train students and policymakers. The AI4D Evaluation Partnership will focus on reducing bias, exclusion, and systemic harms in AI adoption across the continent.