Exploring the promises, pitfalls, and ethical concerns of Artificial Intelligence in the modern world.
SAICA’s webinar on ‘Policies, Laws and Regulations governing Artificial Intelligence’ delved into the frameworks designed to ensure the ethical, transparent, and responsible development and use of AI technologies, and explored how to safeguard against risks such as bias, discrimination, and misuse.
Artificial Intelligence (AI) has woven itself into the fabric of our everyday lives, shaping the way we interact with technology, businesses, and even each other. From the algorithms that recommend what we watch to the tools that create content indistinguishable from human creation, AI is evolving rapidly.
‘Its impact is monumental, yet it raises a slew of questions about privacy, ethics, bias, and governance,’ said Nerushka Bowan, an expert in AI, privacy and innovation law and founder of the LITT Institute. ‘The revolution isn’t just about technology − it’s about what it means to be human in a world that’s increasingly driven by machine learning.’
Understanding AI’s evolution and ubiquity
At its core, AI refers to the development of computer systems that perform tasks requiring human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
‘AI has been with us for years, shaping everything from the apps on our phones to the systems that manage infrastructure,’ Bowan added. ‘However, something new has come into focus in recent years: generative AI brings with it even more powerful capabilities, making it the topic on everyone’s mind.’
Generative AI is a subset of AI that goes beyond analysing data or automating tasks. It creates new, original content, whether that’s a piece of writing, an image, a piece of music, or even a conversation partner. ChatGPT, DALL·E, and other generative AI tools have taken centre stage in this evolution. They don’t just replicate existing work − they generate content that blurs the line between human and machine production. With tools like these advancing so quickly, AI is reaching unprecedented levels of sophistication.
The power and growth of generative AI
The swift adoption of generative AI tools underscores their appeal. For example, ChatGPT reached 100 million users in just two months, a feat that outpaces the growth of apps like TikTok, Spotify, and Uber. The surge in popularity speaks to how effectively these AI systems have tapped into the human desire for efficiency, creativity, and novelty.
Generative AI creates not only text but also music, video, and even complex simulations. In one case, a user tasked an AI system with creating a painting in the style of Salvador Dalí, and the result was indistinguishable from the work of a human artist.
Similar capabilities are emerging in AI voice generation. When a new ChatGPT voice mode sounded strikingly similar to Scarlett Johansson, users mistakenly believed she was involved, though she had declined to participate. Instead, developers used a voice actor with a similar tone. After Johansson’s legal team threatened to sue over the unauthorised resemblance, the company removed the voice from the app. This incident underscores the ethical and legal concerns surrounding AI voice replication, particularly when it mimics recognisable individuals without consent.
While this rapid advancement is impressive, it raises profound ethical concerns. As AI-generated content becomes harder to distinguish from human-made creations, it becomes easier to spread misinformation, disinformation, and deepfakes, all of which can have real-world consequences.
The ethical challenges of generative AI
The more sophisticated AI becomes, the greater the ethical concerns. We’ve already seen AI’s potential to manipulate opinions, perpetuate biases, and blur reality. One of the most worrying examples is how AI systems are being used to create deepfakes − videos and images that convincingly mimic real people, often for nefarious purposes. In the wrong hands, this technology could become a powerful weapon for spreading misinformation, fraud, and propaganda. Even today, AI-fabricated case citations and legal documents have found their way into courts, with lawyers in multiple countries unknowingly relying on false information.
This brings us to one of the key ethical dilemmas facing the AI revolution: bias and discrimination. AI systems are only as good as the data they are trained on, and this data often reflects the biases and prejudices present in society.
‘For example,’ Bowan said, ‘if you prompt an AI image generator like Midjourney to create an image of an “Indian person”, you’re going to get an old man with a grey beard. I’m of Indian heritage myself and do not identify with this bearded man. You can apply that to any stereotype. Think of “CEO” − you’ll probably get a white male, for example.’
Similar biases appear when AI is used for more serious tasks such as credit underwriting or hiring decisions, where biased algorithms can perpetuate systemic inequalities.
In an effort to address these biases, some AI developers have tried to create more ‘woke’ algorithms, ensuring that generated images are more diverse. But these efforts sometimes backfire. For example, a tool designed to generate more diverse images ended up producing unrealistic or historically inaccurate portrayals, further complicating the issue.
AI in the legal and regulatory arena
The challenge of managing AI goes beyond just ethics; it’s also a matter of regulation. ‘While countries like Rwanda are rapidly embracing AI as part of their economic strategies − aiming for 25% of their GDP to be driven by AI by 2030 − others, like South Africa, are just starting to develop comprehensive AI policies,’ said Nevellan Moodley, head of financial services advisory at BDO. ‘South Africa’s new National Artificial Intelligence Policy Framework, released in August 2024, aims to bring the country’s AI strategy closer to those of global leaders. However, much work remains to be done to ensure that AI benefits the economy without exacerbating existing inequalities.’
Globally, the most ambitious attempt to regulate AI comes from the European Union with its AI Act. The legislation takes a risk-based approach, classifying AI systems from minimal to high risk and imposing strict requirements on high-risk applications such as credit scoring and law enforcement. It also prohibits certain AI practices outright, such as social scoring and systems that categorise people on the basis of gender or other protected characteristics. The penalties for non-compliance are steep: companies that breach the Act’s prohibitions can face fines of up to 7% of their global annual turnover. This approach sets a standard that other countries, including South Africa, may soon adopt.
‘The challenge for businesses, particularly in countries without strong AI regulation, is to anticipate future regulations and start implementing AI guardrails now,’ Moodley added. ‘Without these precautions, companies risk falling behind or facing severe penalties when regulations inevitably catch up.’
AI in the workforce: threat or opportunity?
One of the most common concerns about AI is that it will lead to widespread job loss. It’s a fear that has been voiced with every technological revolution, from the rise of computers to the automation of factories. However, history has shown that while some jobs are lost, new opportunities often emerge.
‘AI will likely follow a similar trajectory,’ said Jason O’Shaughnessy, head of international business at Envestnet. ‘Instead of replacing human workers, AI can complement them by automating repetitive tasks and freeing up people to focus on more creative or meaningful work.’
That said, AI’s impact on the workforce is already being felt in some sectors. For example, in 2023 Air Canada faced legal action after its customer service chatbot mistakenly promised a refund policy that didn’t exist. A Canadian tribunal ruled that the company had to honour the policy, highlighting how AI-generated content can have real-world consequences for companies and their customers.
‘As AI becomes integral to decision-making, regular audits are crucial to ensure algorithms function properly, particularly in detecting bias, maintaining data integrity, and complying with regulations like the EU AI Act,’ O’Shaughnessy said. ‘Audits help guarantee fair, accurate outcomes in critical areas like credit scoring, hiring, and legal judgments while supporting transparency and risk mitigation.’
Cases like Air Canada’s underline why such oversight matters: without it, companies face real legal and financial exposure as AI takes on customer-facing and decision-making roles across industries.
As AI continues to reshape industries, new jobs will emerge in areas like AI governance − a field focused on ensuring that AI systems are used responsibly. Experts in AI governance will be tasked with managing risks, preventing biases, and ensuring that AI systems comply with ethical and legal standards. This could become a specialised role within organisations, much like data protection officers are today.
Human relationships with AI: a new frontier
Beyond the workplace, AI is also starting to infiltrate more intimate aspects of our lives. One of the more surprising developments is the rise of AI companions. Apps like Replika allow users to create virtual partners, friends, or even families. These AI companions are highly customisable, offering users an emotionally responsive experience that can sometimes feel more fulfilling than real-world relationships. While AI companionship can provide comfort for those struggling with loneliness, it also raises concerns about fostering unhealthy emotional dependencies or unrealistic expectations for human relationships.
In countries like Japan, where loneliness is a growing social issue, robots are being designed to care for the elderly or simulate the experience of parenting through robotic children. These innovations could help address some of society’s most pressing challenges, but they also blur the line between human and machine, raising questions about the emotional and psychological impact of AI companionship.
The future of AI: a balancing act
The panellists concurred that as AI continues to evolve, it offers both incredible opportunities and significant risks. On the one hand, AI has the potential to transform industries, improve efficiency, and even enhance our personal lives. On the other, without proper governance, it could exacerbate social inequalities, perpetuate biases, and even pose existential risks.
The key to navigating this new landscape lies in balance. We must embrace the benefits of AI while remaining vigilant about its risks. This requires strong regulatory frameworks, ethical guidelines, and a commitment to ensuring that AI serves the greater good.
‘At the same time, we must not lose sight of what makes us human,’ Bowan said. ‘In a world increasingly shaped by algorithms, our human connections, empathy, and creativity will become even more important. AI may be able to replicate certain tasks and even certain emotions, but it can never fully replace the depth and richness of human experience. By leveraging AI as a tool rather than a replacement, we can build a future where technology enhances rather than diminishes our humanity.’
In short, the AI revolution is here, and it’s moving fast. It’s up to us to shape it in a way that aligns with our values and ensures that its benefits are shared by all.
Author
Monique Verduyn