Why the Withdrawal of South Africa’s Draft AI Policy is Justified
Ayanda Zulu
– May 9, 2026
4 min read

The Minister of Communications and Digital Technologies, Solly Malatsi, has withdrawn South Africa’s draft artificial intelligence (AI) policy after an internal review confirmed that it included fictitious sources.
The decision follows media reports questioning the accuracy of certain academic sources cited in the document, which prompted a departmental investigation that confirmed the problem.
This withdrawal should be welcomed, not only because the integrity of the draft policy has been compromised, but because the policy itself raises deeper concerns about its design and its approach to AI governance.
This incident involving fictitious sources highlights the limitations of AI, and proponents of regulation will naturally use it to strengthen their case. The case against regulation nevertheless remains valid and is worth unpacking.
EU-Style AI Regulation
It is no secret that the push to regulate AI is not happening in a vacuum. The “Brussels Effect” is a real phenomenon in the background, with the department clearly seeking to align national AI policy with European Union (EU)-style regulation under the EU's Artificial Intelligence Act.
While a degree of alignment is understandable given the nature and size of the European market, that does not justify importing a complex, overly bureaucratic compliance system like the Artificial Intelligence Act, which may lead to unintended consequences.
The first point to underscore is the rapid pace at which AI has evolved and is likely to continue evolving. Large language models, for example, are updated frequently and gain new capabilities in relatively short cycles. In such a context, rigid, law-based regulation will inevitably lag technological development and risk rapid obsolescence or a failure to address the problems it seeks to resolve.
This point should not be read as an argument in favour of more flexible or adaptive regulation, but as a broader critique of the assumption that artificial intelligence can be effectively governed through heavy, rule-based regulation.
The Department itself has acknowledged that regulation could stifle innovation and technological development. Where its reasoning falls short, however, is in assuming that heavy-handed regulation can coexist with innovation. Such regulation exists in tension with innovation; the two are not compatible.
Market Distortion and Innovation Constraints
In many ways AI is a technology that is still being understood, and it is only natural that policy responses are characterised by a degree of caution. However, market liberalisation – rather than heavy-handed regulation – offers a more appropriate framework for governing its development. A growing body of scholarship emphasises the significant potential of artificial intelligence as a driver of development and economic prosperity, and argues that it should be approached with openness, a sense of adventure, and curiosity.
The other key point relates to the market distortions that complex bureaucratic regulation can inevitably create. As with most industries, there are established incumbents and large firms with the resources to absorb extensive compliance requirements, even where they may be burdensome or inefficient. This is not equally true for smaller businesses and start-ups, which often lack the capacity to meet such demands yet still require access to global markets to compete and grow.
This again underscores that heavy-handed regulation is not the appropriate response, and that AI governance requires careful and critical consideration to avoid producing unintended negative consequences.
None of this is to dismiss concerns about AI, including the fictitious sources in this case. Those concerns are valid. The argument here, however, is that heavy-handed regulation is not the solution and may in fact do more harm than good by slowing innovation and limiting technological progress.
Rather, market liberalisation should be considered as a governing approach. While it is not without limitations, it allows for a more measured and adaptive response to AI development, without premature or overly rigid intervention. Where poor models or practices exist, for instance, market competition creates space for better alternatives to emerge and gain traction.
The withdrawal of the draft policy should be seen as a positive development. It provides an opportunity for broader and more deliberate public engagement on the appropriate direction for AI governance in South Africa.
Ayanda Zulu holds a BSocSci in Political Studies from the University of Pretoria and is a policy officer at the Free Market Foundation.