Internal audit often assumes a position of relative safety. After all, who audits the machines if not the auditors? Yet this confidence may be misplaced. If internal audit continues to define itself by control testing, sampling, and retrospective assurance, it may discover that AI is not merely assisting the profession – it is replacing large parts of it.
The uncomfortable truth is that much of what internal audit has traditionally done is already better suited to machines. AI can analyse entire populations of transactions, monitor controls continuously, identify anomalies in real time, and generate exception reports faster and more consistently than any audit team. Compared to this, periodic audits and sample‑based testing look increasingly outdated.
AI does not need to replace internal audit wholesale to render parts of it irrelevant. It only needs to outperform auditors at what they choose to spend most of their time doing. And increasingly, that is exactly what is happening.
Paradoxically, this same technological disruption creates the greatest opportunity internal audit has seen in decades. As organisations race to embed AI into finance, operations, HR, and decision‑making, completely new risk domains emerge – algorithmic bias, opaque models, data misuse, regulatory exposure, ethical drift, and over‑reliance on automated judgement. These risks cannot be mitigated in the traditional sense. They must be governed, challenged, and understood.
This is where internal audit either evolves – or exits. The future of internal audit is not as the auditor of AI outputs, but as the auditor of AI governance. Boards do not need more dashboards; they need confidence. Regulators do not need faster reports; they need accountability. Executives do not need more data; they need informed challenge when technology decisions quietly reshape risk appetite and organisational behaviour.
AI can flag anomalies. It cannot ask whether the organisation should accept them.
The timeline for fully automated internal audit therefore depends less on the pace of AI development and more on the profession’s willingness to redefine itself. If internal audit clings to execution and efficiency, automation will arrive quickly. If it pivots decisively toward governance assurance, ethical oversight, and strategic risk judgement, it becomes not irrelevant but indispensable.
In the age of intelligent machines, internal audit’s future will not be secured by learning how to use AI faster than everyone else. It will be secured by doing what AI cannot: exercising scepticism, context, and courage.
The real question is no longer whether AI will transform internal audit. It is whether internal audit will transform itself before AI makes that decision for it.
Automation
When Mustafa Suleyman, CEO of Microsoft AI, claimed that most white‑collar work could be automated within the next 12 to 18 months, the reaction was predictable: disbelief, defensiveness, and quiet anxiety. Lawyers, accountants, consultants, and auditors all protested: surely not us!*
The real risk
The real risk facing internal audit is not technological – it is conceptual. Automation does not threaten internal audit because AI is powerful; it threatens internal audit if the profession continues to anchor its identity to tasks rather than judgement.
NOTE
* https://www.businessinsider.com/microsoft-ai-ceo-mustafa-suleyman-white-collar-tasks-automation-prediction-2026-2.
AUTHOR
Frans Geldenhuys CA(SA), CISA
One of the founding members of ALICE™ – an automated audit testing platform (www.bidvestalice.com)