Kenya is at a crossroads in harnessing artificial intelligence (AI) for financial inclusion, but expert Jimmie Mwangi warns that without robust governance, AI-driven credit scoring could deepen existing inequities.
A recent Survey on Artificial Intelligence in the Banking Sector, conducted by the Central Bank of Kenya (CBK), shows that half of Kenyan lenders have yet to implement AI. Among the institutions that do use it, about 65 percent deploy AI for credit risk scoring.
AI has the potential to transform risk assessment by leveraging digital footprints, transaction histories, mobile-money data, and behavioral analytics. This shift could help reach unbanked and underserved groups, especially those in the informal economy. Yet, Mwangi cautions, these benefits could backfire without ethical guardrails. The same CBK survey reveals that few institutions using AI have mechanisms for bias detection, algorithm explainability, or customer redress.
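A bias-detection mechanism of the kind the survey finds lacking need not be elaborate. The sketch below is purely illustrative, using hypothetical approval records and group labels that are not drawn from the CBK survey or any lender: it computes per-group approval rates and the ratio of the lowest rate to the highest, a common first-pass screen for disparate impact.

```python
from collections import defaultdict

def disparate_impact(decisions):
    """Compute per-group approval rates and the ratio of the lowest
    rate to the highest (a common first-pass screen for bias)."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, is_approved in decisions:
        total[group] += 1
        approved[group] += int(is_approved)
    rates = {g: approved[g] / total[g] for g in total}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Hypothetical scoring outcomes: (group label, approved?)
sample = [("urban", True), ("urban", True), ("urban", False),
          ("rural", True), ("rural", False), ("rural", False)]

rates, ratio = disparate_impact(sample)
print({g: round(r, 2) for g, r in rates.items()})  # {'urban': 0.67, 'rural': 0.33}
print(round(ratio, 2))                              # 0.5, below the usual 0.8 screen
```

A check this simple will not settle whether a model is fair, but running it at all is what distinguishes institutions that can answer for their algorithms from those that cannot.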
Mwangi emphasizes that denying credit based solely on an opaque algorithm is not just unethical—it’s unwise. Institutions must be able to explain decisions, particularly in a society challenged by inequality and a widening digital divide.
He calls for alignment between Kenya’s National AI Strategy and CBK guidelines. Financial regulations must explicitly incorporate values of inclusivity, ethics, accountability, data integrity, and proportionate human oversight.
Regulatory sandboxes could be used to test AI systems for fairness and transparency, not just predictive accuracy. Equally important is evolving compliance functions: they must shift from post-development review to actively shaping AI systems from the design phase. Compliance teams should scrutinize data sets, modelling assumptions, and the outcomes models are optimized for in order to mitigate risk and uphold ethical standards.
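What such a sandbox test might look like in practice is sketched below. The group labels, the sample data, and the 10 percent fairness threshold are assumptions made for illustration, not anything prescribed by the CBK; the point is that a model can score well on accuracy while still rejecting creditworthy applicants from one group far more often than another.

```python
def sandbox_report(records, max_gap=0.10):
    """Score a credit model on accuracy *and* a simple fairness check:
    the gap between groups in false-rejection rates, i.e. how often
    creditworthy applicants are denied. Each record is
    (group, actually_creditworthy, approved_by_model)."""
    correct = 0
    by_group = {}  # group -> (false rejections, creditworthy applicants)
    for group, good, approved in records:
        correct += int(good == approved)
        if good:
            fr, n = by_group.get(group, (0, 0))
            by_group[group] = (fr + int(not approved), n + 1)
    rates = {g: fr / n for g, (fr, n) in by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return {"accuracy": round(correct / len(records), 2),
            "false_rejection_rates": {g: round(r, 2) for g, r in rates.items()},
            "gap": round(gap, 2),
            "passes_fairness": gap <= max_gap}

# Hypothetical sandbox run: (group, actually creditworthy?, approved?)
run = [("salaried", True, True), ("salaried", True, True), ("salaried", False, False),
       ("informal", True, False), ("informal", True, True), ("informal", False, False)]
print(sandbox_report(run))
# {'accuracy': 0.83, 'false_rejection_rates': {'salaried': 0.0, 'informal': 0.5},
#  'gap': 0.5, 'passes_fairness': False}
```

In this toy run the model is 83 percent accurate yet denies half of the creditworthy informal-sector applicants, exactly the kind of result a sandbox focused solely on predictive accuracy would miss.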
Ultimately, Mwangi argues, the success of AI in credit risk will depend on governance—not technology alone. Kenya must ask: Will AI expand financial access or reinforce exclusion masked as innovation? Real progress hinges on regulators setting enforceable standards and institutions embedding meaningful AI governance into their operations. Without both, the promise of AI may become a threat to inclusion, not an enabler.