Africa’s AI Momentum Is Real. The Governance Beneath It Is Not.
Op-Ed by Tiffany A. Archer, Esq., Founder & President of Eunomia Risk Advisory & Eunomia Global
On 27 April 2026, South Africa’s Communications and Digital Technologies Minister pulled the country’s Draft National AI Policy after it emerged that its reference list contained fictitious academic citations: articles attributed to researchers who had never written them, on topics they had never studied. The most plausible explanation is AI-generated content included without verification. The draft had been approved by Cabinet on 25 March and gazetted for public comment on 10 April.
The irony is hard to miss: a policy designed to govern AI was itself undermined by it. The scandal is embarrassing, but the deeper lesson is more important than the headlines suggest. A governance framework that no one meaningfully interrogated made it all the way to Cabinet. The framework cleared every formal checkpoint. The governance did not hold.
This is exactly the problem I want to examine. Not South Africa’s specifically, but the continent’s. The pace of AI governance activity across Africa is significant and accelerating. Ghana formally launched its National AI Strategy on 24 April 2026, backed by a $250 million national AI computing centre and a proposed National AI Office. Kenya launched its AI Strategy 2025–2030 in March 2025. Nigeria published its National AI Strategy in April 2025. Rwanda and Egypt have been implementing theirs for longer still. The African Union’s Continental AI Strategy, endorsed in 2024, frames Phase I through 2026 as a period of governance framework creation and capacity building across member states.
This is meaningful momentum, but by itself it is insufficient. Governance documents capture intent. They do not control the human decisions that determine whether any of it holds. South Africa’s withdrawal made that gap visible in an unusually public way, but the gap exists everywhere.
What the Frameworks Cannot Reach
A compliance framework is a record of intent. It tells you who is nominally accountable, what processes are supposed to govern high-stakes decisions, and what language an organization uses to describe its values. What it cannot tell you is whether the person who holds that accountability has the organizational standing to stop a deployment, or whether an ethics policy changes anything about how a risk escalation is handled when commercial pressure is present.
Authority bias is not a personality flaw. It is a cognitive default. When a junior compliance officer flags a concern about an AI deployment and a senior commercial leader signals indifference, the concern disappears. Not because anyone made a dishonest calculation, but because deference to perceived authority is how organizations function under uncertainty. Behavioral science has documented this for decades. Better governance documents have not changed it.
Ethical fading compounds the problem. People become so absorbed in the operational dimensions of a decision (the timeline, the budget, the deliverable) that its ethical dimensions stop registering, not through negligence, but through ordinary cognition. An organization can have a dedicated compliance function, a published AI governance framework, and a values statement on the wall, and still produce decisions that none of those things touched.
In conversations with senior technology and regulatory leaders at my executive roundtable series on digital governance in Africa, a consistent observation emerges. Leaders can describe their governance frameworks in precise detail: the policy, the accountable officer, the reporting line. When the question shifts to enforcement, to what actually happened after the last incident and whether the person with authority used it, the answers change register entirely.
A February 2026 PwC survey of more than 150 African CEOs found that only 37% had formal responsible AI and risk management processes in place. Even that likely overstates readiness. Formalization is the easy part; the harder part is whether those processes change anything when a decision is under pressure.
Healthcare, Land, Livelihoods: Where Failures Are Felt
Africa’s AI adoption is accelerating fastest in sectors where the gap carries the greatest consequence. Ghana, Kenya, Nigeria, and Rwanda are all deploying AI in healthcare, land administration, and financial services, the sectors where decisions reach most directly into people’s lives.
These are contexts where AI systems make or inform decisions about which patients are prioritized for care, who qualifies for social protection, who receives land tenure recognition, and which farmers access credit and markets. When those systems fail, the damage is not only reputational. It reaches people who are already navigating asymmetric power relationships with the institutions that serve them. Governance failures in these sectors are trust violations, and trust broken at scale does not recover on the schedule of a corrective action plan.
Meanwhile, capital is arriving faster than governance. Microsoft committed $329 million to expand cloud and AI infrastructure in South Africa in April 2026, building on an earlier $1.2 billion allocation. The investment commitment and the policy withdrawal landed in the same month, in the same country. That juxtaposition indicts neither party. It is simply an accurate picture of the moment: capital moving at commercial speed, while governance frameworks are still being written, reviewed, and in some cases, pulled back and started again.
The AU Continental AI Strategy explicitly prioritizes ethical implementation. What it cannot guarantee is that the organizations charged with implementing it have interrogated the behavioral layer underneath their stated commitments. That interrogation asks harder questions than standard governance reviews tend to reach: whether the person who holds accountability has the organizational standing to use it; whether an ethics policy changes behavior when following it is commercially costly; whether the governance framework was designed to govern decisions or to document intent.
The Questions Governance Frameworks Don’t Ask
The organizations that hold up under pressure are not those with elaborate frameworks. They are those where leaders understand, with specificity, how decisions are made inside their institutions: who people listen to when the policy says one thing and the commercial incentive says another, where concern goes when it surfaces at the wrong level, and whether the governance structure on paper bears any resemblance to the authority structure in the room.
That awareness has to be built deliberately, and it requires asking different questions than governance reviews typically ask. Not “do we have a policy?” but “who can stop a deployment, and have they ever done it?” Not “who is accountable?” but “what happened the last time that accountability was tested?”
South Africa will redraft its AI policy. The substance of its AI governance ambitions, as the law firm Fasken noted in its analysis of the withdrawal, is likely to remain largely intact. The 2024 AI Policy Framework that shaped the draft is still in place, and the fundamental direction is sound. Ghana’s momentum is genuine, the AU framework provides a real foundation, and none of that should be minimized.
But Africa’s position in this moment carries an advantage that most of the world cannot claim: the behavioral patterns that determine whether governance frameworks govern behavior are still forming. The institutional cultures around AI decision-making have not yet hardened into habit. That is a narrow window, and it closes as adoption accelerates.
Africa’s AI moment will not be defined by its strategies. It will be defined by the leaders who ask harder questions than those strategies require, and act on what they find.
Tiffany A. Archer, Esq. is the Founder and President of Eunomia Risk Advisory, Inc. and Eunomia Global LLC, boutique advisory firms specializing in behavioral governance, institutional culture, and AI risk. Her advisory work applies the POWER Scan™, a behavioral diagnostic that examines how authority, organizational voice, and ethical decision-making operate inside institutions. She is an Adjunct Lecturer at Fordham Law School and NYU, a Strategic Partner to the African Corporate Governance Network, a member of the Wall Street Journal Board of Directors Council, and Co-Chair of the Behavioral Governance in Digital Technologies Subcommittee of the New York City Bar Association's AI Task Force.