Artificial Intelligence (AI) offers significant benefits while simultaneously presenting complex legal and ethical challenges. At SAICA’s recent Audit Reform webinar on the policies, laws and regulations governing AI, experts discussed the risks of AI, including deepfake scams, AI-driven misinformation, and AI’s influence on decision-making in healthcare, finance, and law enforcement. A central concern was the challenge regulators face in keeping pace with technological advancements while preventing harm. The core dilemma remains: how do we regulate AI effectively without stifling innovation?
The need for AI regulation
‘AI is already embedded in our daily lives,’ said panel chair Nerushka Bowan, founder of the LITT Institute and an emerging technology and privacy-law specialist. ‘It influences everything from personalised content recommendations to financial fraud. It’s no longer a futuristic concept – it’s here, and it needs governance.’
AI is now integral to critical sectors such as autonomous driving, recruitment, and medical diagnostics. However, it also presents significant risks. Deepfake technology has been used to impersonate executives and authorise fraudulent transactions, while biased AI algorithms have been flagged for unfairly discriminating against job applicants and loan seekers. These challenges highlight the need for regulation, but implementation remains a subject of debate.
The panellists agreed that AI regulation should be industry-specific. ‘A general AI law won’t work because the risks differ across sectors,’ said Walter Bhengu, SAICA Project Director for Legislation and Governance. ‘AI in banking, where it influences lending decisions, is different from AI in social media, where misinformation is a major concern.’
The EU AI Act adopts a tiered approach, categorising AI applications by risk level:
- Minimal risk (eg spam filters, AI-enabled video games)
- Limited risk (eg chatbots, which must disclose to users that they are interacting with AI)
- High risk (eg AI in medical diagnostics, AI-powered recruitment tools, automated credit scoring)
- Unacceptable risk (eg social scoring systems, certain forms of real-time biometric surveillance in public spaces)
This model is gaining traction worldwide, but concerns remain about its rigidity. The challenge is ensuring laws remain flexible enough to accommodate emerging AI developments without creating regulatory gaps.
Global AI laws and African businesses
For African businesses, aligning with global AI laws is crucial. ‘Even if your business is based in Johannesburg, global AI laws affect you,’ noted Bhengu. Multinational companies must navigate a patchwork of regulations, ensuring compliance across different legal environments. ‘South African businesses cannot ignore what happens in Brussels or Washington, as we are deeply integrated into global trade. Many companies must comply with GDPR and other foreign laws due to international partnerships.’
Companies like Apple and Meta are already adjusting their operations to comply with EU regulations. African companies must follow suit to avoid legal and commercial barriers. International trade also plays a significant role – firms exporting AI-driven products or handling data from European citizens must comply with GDPR, while those engaging with US businesses must consider AI governance policies emerging under the Trump administration.
South Africa, Nigeria, and Kenya are among the African nations exploring national AI policies, but progress is slow. Without clear local regulations, African firms must navigate an evolving landscape of international compliance requirements.
Deepfake threats and AI-generated misinformation
AI-generated deepfakes present major risks. ‘A financial worker in Hong Kong was tricked into transferring $25 million using AI-generated video impersonation,’ said Bowan. ‘This isn’t science fiction – it’s happening now.’
Deepfakes are also a growing concern in politics. ‘Fake videos of politicians can spread faster than fact-checkers can respond,’ Bowan warned. AI-generated misinformation has been linked to election interference and social unrest, increasing pressure on social media platforms to implement detection mechanisms.
While some governments have proposed criminal penalties for the malicious use of deepfakes, enforcement remains difficult. Platforms are experimenting with AI-powered fact-checking, but misinformation detection often lags behind sophisticated AI manipulation.
Who is responsible when AI fails?
One of the biggest questions in AI regulation is accountability. ‘Businesses adopt AI but often overlook governance,’ said Preeta Bhagattjee, Director and Head of the Technology & Innovation practice at Werksmans Attorneys. ‘AI is embedded in many third-party tools, and companies must assess where it’s being used and its associated risks.’
There is increasing pressure on companies to conduct AI audits – assessments ensuring AI models comply with ethical and legal standards. ‘AI governance should include clear oversight from boards, risk teams, and compliance officers,’ Bhagattjee explained.
A major legal debate revolves around whether AI developers should be held liable for misuse of their technology. Some argue for mandatory watermarking or digital signatures to verify AI-generated content, while others believe that companies deploying AI should take full responsibility for its use.
AI contracts and legal liability
One of the biggest legal risks comes from AI-related contracts. Many organisations rush to adopt AI solutions without fully understanding contractual obligations. Bhengu referenced the global CrowdStrike outage – caused by a faulty software update – which disrupted airlines and financial institutions worldwide. ‘This incident highlights the importance of strong contracts that clearly define liability in case of AI failures. Companies need to include specific clauses on third-party liability and indemnification.’
AI-related contracts must include liability clauses, Bhengu warned. ‘Many companies sign agreements without considering AI liability. If an AI tool causes harm, who is responsible – you or the vendor?’
Professional standards and AI ethics
Professional bodies are incorporating AI ethics into industry guidelines. ‘UK accountants now have AI-specific guidance,’ said Bhengu. ‘Standardising AI use within professions helps avoid future regulatory uncertainty.’
Companies are adopting a ‘compliance-by-design’ approach, integrating ethical principles like transparency and accountability into their AI strategies. Regulators are also exploring AI disclosure requirements, where companies must report the extent of AI’s role in decision-making, particularly in sensitive areas like hiring, credit scoring, and legal analytics.
What’s next?
As AI technology evolves, regulations must keep up. ‘AI today will look different in five years,’ said Bhagattjee. ‘Regulations need to be adaptable to allow for growth without stifling innovation.’
A risk-based approach is gaining support. ‘Regulators should assess AI’s potential harm versus its benefits,’ she suggested. ‘This allows flexibility while ensuring responsible use.’
Another growing concern is AI’s environmental impact. ‘Training large AI models requires vast computational power, consuming significant energy resources,’ said Nevellan Moodley, Head of Financial Services Advisory and Digital Assets Lead, Africa and Middle East, at BDO. Future AI regulations may include sustainability requirements, encouraging energy-efficient model training and responsible AI development.
Preparing for AI regulation
AI is here to stay, and businesses must be proactive, said Moodley. ‘Companies need to understand AI risks, set governance frameworks, and keep up with regulations.’ He pointed to the NIST AI Risk Management Framework (AI RMF), which provides a structured approach to managing AI-related risks, and ISO/IEC 42001, the first international standard for AI management systems (AIMS), which supports responsible AI development and governance.
Key recommendations for businesses include:
- Establishing AI governance committees at board level
- Implementing data-governance frameworks to monitor AI-related data usage
- Developing AI policies that integrate with existing privacy and cybersecurity laws
- Conducting AI impact assessments to identify potential risks
Beyond compliance, AI governance is about maintaining trust with customers, investors, and regulators. Companies should prioritise internal risk management, employee training, and external compliance to stay ahead of the regulatory curve.
‘AI will not replace professionals,’ Moodley said, ‘but professionals who use AI will replace those who don’t.’ While AI can handle repetitive tasks, human judgement, creativity, and ethics remain irreplaceable.
Author
Monique Verduyn