South Africa’s withdrawal of its Draft National Artificial Intelligence Policy in late April, after fake AI-generated citations were discovered in the document, marked the first time a government has formally withdrawn an official document over AI hallucinations — but not the first time fabricated AI content has appeared in government or quasi-government material.
The incident tarnished what was set to be a historic moment for South Africa, which was preparing to become the first African nation, and the first outside the West, to adopt a policy establishing a formal AI ethics board. “The most plausible explanation is that AI-generated citations were included without proper verification,” Communications and Digital Technologies Minister Solly Malatsi wrote in a statement. “There will be consequence management for those responsible for drafting and quality assurance.”
The South African case is part of a broader pattern of AI-generated text or citations slipping into official documents over the past two years, raising concerns about accountability and reinforcing the need for human verification.
In South Africa’s case, at least six of the 67 sources in the bibliography of the policy published in April were AI hallucinations, according to a letter from civil rights group Article One. A News24 article reported that several of the academic journals cited in the document were “completely fictitious.”
In May 2025, a Make America Healthy Again report on children’s health released by the Trump administration was found to contain incorrect citations, including nonexistent studies and muddled author and journal attributions. The Washington Post found that some references included “oaicite” attached to URLs — a marker often considered indicative of ChatGPT use. White House Press Secretary Karoline Leavitt downplayed the errors as “formatting issues” and said a corrected report would be uploaded. A revised version was published a few hours later.
In August 2025, the Australian Financial Review raised concerns about suspected AI use in a Deloitte report commissioned by Australia’s Department of Employment and Workplace Relations. Academics alleged the report contained fake academic references and fabricated quotes. In a September 2025 email to Australia’s Department of Finance, Deloitte confirmed that “the use of the generative AI tools had resulted in inaccurate outputs whereby certain citations, in the form of footnotes and sources in the accompanying reference list, contained errors.” The company republished the corrected study in September and, in November, refunded the Australian government $290,000 out of the $440,000 it had charged for the report.
Deloitte was also at the center of a similar incident in Canada. The use of generative AI in a 526-page, $1.2 million healthcare report for the Newfoundland and Labrador government led to the inclusion of fake citations, The Independent reported in November. Deloitte rereleased the report after correcting the citations. The Canadian government has since updated its Request for Proposals contract to require disclosure of “all intended uses of AI and/or machine learning” and reserved the right to assess AI-related risks at any point before or after a contract is awarded.
In Europe, the European Union Agency for Cybersecurity, ENISA, admitted that two of its threat reports published in 2025 contained AI-hallucinated sources. In one report, 26 of 492 footnotes were incorrect, according to researchers from the German public institution Westfälische Hochschule cited by Der Spiegel magazine.
Researchers have stressed that the underlying concern is not the use of AI itself but the failure to verify it. “ENISA let AI touch the one layer it should never touch unguarded: the truth layer,” AI law and data ethics researcher Chiara Gallese wrote on LinkedIn. “That’s how hallucinations turn into institutional publications. And when this happens inside a cybersecurity authority with a €27 million budget, the problem isn’t skill. It’s the process. No mandatory verification step. No provenance checks. No clear rule for AI use. Just speed, convenience, and trust-by-default.”