
TL;DR
South Africa’s communications minister, Solly Malatsi, has withdrawn the country’s draft national AI policy after News24 discovered that at least six of its 67 academic citations were hallucinations produced by artificial intelligence. The policy had been approved by Cabinet in March and published for public comment. Malatsi called the fabrications an “unacceptable failure” and promised consequence management. The scandal leaves South Africa without an AI governance framework and raises questions about its institutional capacity to regulate the technology.
South Africa’s Department of Communications and Digital Technologies spent months developing a national AI policy. The draft proposed a National Artificial Intelligence Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsman, a National AI Security Institute, and an AI Insurance Superfund. It outlined five pillars of AI governance: capacity building, responsible governance, ethical and inclusive AI, cultural preservation, and human-centered deployment. It adopted a risk-based approach modeled on the EU’s AI Act. Cabinet approved the draft on March 25, and it was published in the Government Gazette on April 10 for public comment. Then News24, a South African news outlet, checked the bibliography and found that at least six of the document’s 67 academic references did not exist. The journals were real. The articles were not. Authors credited with substantive research on AI governance never wrote the papers attributed to them. The editors of the South African Journal of Philosophy, AI & Society and the Journal of Ethics and Social Philosophy independently confirmed to News24 that the referenced articles had never appeared in their pages. According to Communications Minister Solly Malatsi, the most plausible explanation is that the drafters used a generative AI tool and published the result without checking the references. A policy written to govern AI was undone by AI the drafters did not govern.
The withdrawal
Malatsi announced the withdrawal on April 27 and called the fabricated citations an “unacceptable failure” that undermines the integrity and credibility of policymaking. He said consequence management would follow for those responsible for the draft’s development and quality assurance. “This failure is not just a technical problem,” the minister said. The chairman of parliament’s portfolio committee offered a more succinct assessment, suggesting the department steer clear of ChatGPT when re-drafting. The document will be revised before it is republished for public comment, but no timeline has been given. South Africa currently lacks a formal AI governance framework at the very moment governments around the world are grappling with how to regulate artificial intelligence, and the country’s credibility as a serious participant in that conversation has taken a hit that will outlast the policy revision.
The scandal is not just that fake citations appeared in a government document. They appeared in a document about artificial intelligence, written by the department responsible for the country’s digital technology strategy, at the very moment Brussels, Washington and Beijing are conducting the world’s most consequential debate on how to govern the technology. The EU’s AI Act, the most ambitious regulatory framework for artificial intelligence, is grappling with delayed standards and an implementation timeline pushed back to 2027 for high-risk systems. The United States has no federal AI legislation, leaving states to enact their own laws even as the White House tries to preempt them. China has adopted rules on artificial intelligence but applies them selectively. Into this picture, South Africa introduced a policy that could not survive a bibliography check.
A wider pattern
South Africa’s hallucinated citations are an extreme case of a problem quietly spreading through institutions that use generative AI for research and drafting. A study published in Nature found that 2.6 percent of academic papers published in 2025 contained at least one potentially hallucinated citation, up from 0.3 percent in 2024. If that rate holds across the roughly seven million scientific papers published in 2025, more than 110,000 invalid references are in circulation. GPTZero, a Canadian detection startup, analyzed more than 4,000 papers accepted at NeurIPS 2025, one of the world’s top artificial intelligence conferences, and found more than 100 hallucinated citations across at least 53 papers. In a separate multi-model study, only 26.5 percent of AI-generated bibliographic references were completely accurate. The problem is structural: large language models generate citations through probabilistic token prediction, not information retrieval. They do not look up the documents. They predict what a citation should look like based on patterns in their training data, and when the prediction is confident enough, they produce a citation that reads as authoritative but points to nothing.
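Because the model predicts citations rather than retrieving them, verification has to happen outside the model. A minimal sketch of the kind of check News24 performed, comparing each claimed title against a journal’s actual table of contents; the function names, threshold and all reference data below are invented for illustration, not drawn from the real bibliography:

```python
from difflib import SequenceMatcher

def title_similarity(a: str, b: str) -> float:
    """Case-insensitive fuzzy match between two article titles."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_suspect_citations(citations, published_titles, threshold=0.85):
    """Return citations whose title has no close match in the journal's
    actual publication record. A miss is not proof of fabrication, only
    a prompt for human follow-up against the publisher's archives."""
    suspects = []
    for cite in citations:
        best = max(
            (title_similarity(cite["title"], t) for t in published_titles),
            default=0.0,
        )
        if best < threshold:
            suspects.append(cite)
    return suspects

# Illustrative data: both titles are invented for this sketch.
published = ["Machine ethics and the frame problem"]
claimed = [
    {"title": "Machine ethics and the frame problem"},
    {"title": "Algorithmic governance in the Global South"},
]
print(flag_suspect_citations(claimed, published))  # flags only the second entry
```

A fuzzy threshold rather than exact equality matters here: hallucinated citations often differ from real ones only in subtitle or punctuation, so an exact-match check would both miss near-duplicates and drown reviewers in false alarms.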
What distinguishes the South African case is not that the technology hallucinated, a well-documented and inherent limitation of generative AI, but that the hallucinations reached an official government policy document that cleared Cabinet without anyone confirming the references. The drafting process involved civil servants, subject-matter consultation and ministerial review. Dumisani Sondlo, the department’s head of AI policy, had previously described developing the policy as “an act of admitting that we don’t know enough”. That humility did not extend to checking whether the tool used to assist the drafting was itself reliable. The six fabricated citations News24 identified are only the ones that were caught. The rest of the document’s 67 references have not been publicly confirmed as genuine. The entire bibliography is now in doubt, and with it the analytical foundation on which the policy’s proposals rest.
The fallout
The immediate consequence is a reset of South Africa’s AI governance timeline. The draft policy, meant to position the country as a leader in responsible AI adoption on the African continent, must be reworked, re-consulted and resubmitted. The damage to institutional trust goes beyond the policy itself. If the department responsible for governing AI cannot verify that the sources in its own policy document are real, it is fair to ask whether it can evaluate the AI systems it proposes to regulate. The policy envisaged a multi-regulator model in which AI oversight and human control would be embedded in existing regulatory frameworks rather than centralized under a single authority. That model requires every participating regulator to have enough technical understanding to evaluate AI systems in its sector. The hallucination scandal does not inspire confidence that the coordinating department itself meets that threshold.
The broader lesson is not that governments should never use AI in policymaking. It is that AI’s failure mode is undramatic. It does not crash. It does not display an error message. It produces fluent, well-formatted, confident text that resembles the output of a competent researcher. The fabricated citations in South Africa’s AI policy were not obviously wrong; they were convincing. They referred to real journals. They attributed work to real people. They followed the formatting conventions of academic referencing. The only way to catch them was to verify that each one actually existed, exactly the kind of methodical human checking the AI was supposed to make unnecessary. The erosion of public confidence in artificial intelligence is not irrational. It is a response to a technology simultaneously powerful enough to help draft national policy and unreliable enough to fabricate the evidence the policy rests on. South Africa’s embarrassment is singular, but the underlying failure, using AI without the capacity to scrutinize its output, is not. It is happening in universities, law firms, newsrooms and government offices around the world. South Africa is simply the first government to have the receipts published. The difficulties of implementing AI regulation are real, but they begin with a precondition the South African department failed to meet: understanding what the technology does before trying to write the rules for it.
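The existence check described above can be partially automated. Crossref’s public works API indexes published scholarly articles and accepts free-text bibliographic queries; a verifier can fetch the top matches for each claimed title and escalate to a human when nothing close comes back. A hedged sketch, building only the query URL since the fetch itself needs network access (the function name is illustrative; the endpoint and `query.bibliographic` parameter are real Crossref API features):

```python
from urllib.parse import urlencode

CROSSREF_WORKS = "https://api.crossref.org/works"

def crossref_query_url(title: str, rows: int = 3) -> str:
    """Build a Crossref works query for a claimed citation title.
    The caller fetches this URL (e.g. with urllib.request), then
    compares the returned titles against the claimed one; an empty
    or unrelated result set marks the citation for human review."""
    params = {"query.bibliographic": title, "rows": rows}
    return f"{CROSSREF_WORKS}?{urlencode(params)}"

print(crossref_query_url("Algorithmic governance in the Global South"))
```

Even automated lookups only narrow the search: indexing gaps and title variants mean a missing result is a flag for follow-up with the publisher, not a verdict, which is why the human verification step the article describes cannot be skipped entirely.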





