The inclusion of AI-fabricated research in the draft South Africa National AI Policy was a serious embarrassment for the Department of Communications and Digital Technologies, but the speed with which Minister Solly Malatsi withdrew the document deserves recognition, according to Adams & Adams partner Darren Olivier.
The department published the policy for public comment in early April. The document was subsequently found to contain six citations likely fabricated by large language models, prompting harsh backlash against the department and Malatsi. The minister withdrew the policy, apologized publicly and committed to restoring its integrity before republication.
“Credit where credit is due,” Olivier said. “The decision to withdraw the draft policy quickly was the right one. It showed awareness of the problem and a willingness to act before the issue grew even larger. In a governance context, speed matters.”
The longer a flawed document remains in circulation, he argued, the harder it becomes to rebuild confidence and trust. Most organizations would hesitate to act in similar situations, hoping the issue would fade or go unnoticed. “Experience suggests that this strategy rarely works. Withdrawal, by contrast, signals respect for the process and for the public that relies on it,” he said. “While the situation is clearly embarrassing, the response deserves recognition. It protected the integrity of the policy process and created space to fix the problem properly.”
Olivier said the incident offered a broader lesson for South African organizations now experimenting with AI. “Every professional is under pressure to work faster and deliver more. Every team is learning, often in real time, how to integrate new tools into existing workflows,” he said. “Reputation is fragile and takes years to build. AI can damage it quickly, and once trust is shaken, recovery can take considerable time, and it may never fully happen.”
He argued that the next draft of the policy matters more than the first. “If the next draft reflects careful oversight, disciplined verification, and a clear understanding of local realities, this uncomfortable moment will have served a constructive purpose,” he said. “The demand for clarity on AI governance is only going to grow. Businesses want guidance. Regulators want consistency, and citizens want reassurance that new technologies are managed responsibly.”
Olivier identified three crucial elements for a credible next version. First, the policy should be rooted in local realities. “Borrowing ideas from other jurisdictions is sensible, but copying them wholesale rarely works. Our legal framework, infrastructure constraints, and economic priorities are different from those of Europe, the United States, or Asia,” he said.
Second, the process must show clear human oversight. People are more likely to trust systems when they know who stands behind them. Third, and most importantly, the policy should demonstrate verification. “Sources should be checked carefully. Evidence should be traceable. Assumptions should be tested against real-world conditions. These steps may feel routine, yet they are the foundation on which public trust is built,” he said.
Olivier said South Africa has reached a defining moment in AI governance, with the very body intended to create trust around AI becoming a case study in failure. Governance, he argued, is not about banning the technology — it is about knowing when AI was used, who reviewed the output and how the final decision was reached.
AI use can be sensible for many organizations, including government, and offers genuine benefits. “AI is very good at producing structure, summarising large bodies of information, and serving as a sounding board. In practice, the best results usually come from collaboration. AI accelerates the work. Humans apply judgment,” he said.
His working rule, he said, is never to outsource human thinking. Humans must provide judgment throughout the process. “Then, before anything leaves the building, let humans verify every reference, every citation, and every assumption,” he said. “This final check is where credibility is protected — and where many organisations underestimate the risk.”