The Future of Audit: Governance Key to Safeguarding Quality and Trust in the Age of AI

Auditing has long been the cornerstone of transparency and accountability in business. For much of its history, it has been a manual craft: imagine auditors working in conference rooms, sifting through piles of paperwork, and validating financial statements. This process, which relied on sampling, involved examining a subset of transactions as a proxy for the organisation’s activities — it was effective but limited by time and scale.

That picture is rapidly changing. Digital technologies such as artificial intelligence (AI) are driving large efficiencies across many industries, and the audit profession is no exception. Instead of reviewing a sample of transactions, auditors can now analyse an entire dataset and use technology to flag, or even predict, outliers.

But as these technologies reshape assurance, they also raise a critical question: can society trust AI to uphold the same standards of integrity that auditing was built on?

The answer lies in governance.


Governance as Foundation of Trust

While AI enhances accuracy, efficiency, and fraud detection, it also introduces new risks, from algorithmic bias to data manipulation and deepfakes. Without clear governance, transparency, and oversight, these same technologies could erode the very trust they aim to strengthen.

We have already seen in Asia how fraud is becoming more sophisticated, faster and easier to scale. Last year, a finance worker in Hong Kong transferred more than US$25 million to scammers after they used deepfake technology to pose as the company’s chief financial officer.

Bad actors are using increasingly sophisticated tools to conceal manipulation, and auditors alone cannot uncover fraud deliberately engineered to be hidden. When adversaries evolve, the audit profession must evolve in tandem, and the wider ecosystem must adapt along with it.

This is why stronger governance frameworks are essential across the whole finance function. Boards, regulators, audit committees, and management must ensure that the use of AI across all areas of financial reporting is transparent, explainable, and ethically grounded. This means establishing clear governance over how AI tools are approved, tested for bias, and monitored over time. There is also a need for the industry to work with regulators, standard-setters and academia to develop consistent frameworks and best practices to support management and auditors as they navigate the age of AI.

Technology can automate the “how” of auditing, but governance ensures the “why” and the “what for” are achieved. Human judgement, independence, and accountability must remain at the centre.

In Deloitte’s research on AI-influenced workforce evolution, three operating models are emerging: human-in-the-loop, human-on-the-loop, and human-off-the-loop. An effective audit governance model keeps humans both in the loop — where human judgement is embedded directly in the process flow — and on the loop — where management, boards, and auditors supervise AI outputs, intervening when anomalies or exceptions arise. In contrast, the human-off-the-loop model, which involves limited supervision and intervention, is generally not suitable for auditors, management, or boards.


Good Software vs Bad Software

AI has made the cyber security landscape even more dangerous. Malware as a Service (MaaS) lets criminal operators rent or buy ready-made malware, toolkits, and customer support to run sophisticated attacks without deep technical skills. This democratisation of offensive capability, described in recent reporting as part of a broader “Malware 2.0” era, means fraud and cyber-enabled manipulation can be scaled rapidly and iterated on by adversaries. Attacks can be faster, more targeted, and harder to trace.

Yet technology also provides the defence. Behavioural analytics, endpoint monitoring, and AI-driven anomaly detection allow internal control teams to identify signs of compromise, unusual credential use, or manipulated records that traditional checks would miss. Using AI to automate routine information checks will also redirect professional judgement towards high-risk areas and complex analyses.

AI is already transforming how auditors detect irregularities. Instead of relying on sampling, modern audits can analyse up to 100 percent of transactions, identifying outliers and patterns that might indicate error or fraud. Agentic AI systems capable of executing defined audit workflows under human supervision are now emerging as the next evolution. For example, Deloitte’s auditors can use the firm’s global cloud-based Omnia platform to provide extra assurance that potential audit risk factors have been identified.

These tools are only effective when they operate within governance frameworks that mandate model validation, explainability, and clear escalation paths when suspicious activity is found. Deploying AI with impact means building “Trustworthy AI”: systems that organisations can rely on for critical decisions. This includes providing the business context, incorporating human oversight to check results, and understanding where an AI tool sits within a workflow.

Audit teams will need adequate training and manpower to tackle this new risk landscape. Auditors are increasingly being trained to recognise signs of algorithmic manipulation, trace digital evidence trails, and understand malicious systems such as MaaS. Critical thinking and professional scepticism remain paramount. The profession may also take a more integrated approach to understanding a client’s business, including its strategy and operating models, in order to interpret AI-driven insights and deliver enhanced value through the audit.


Building a Trusted Audit Ecosystem

Trust in AI-enabled auditing cannot come from technology alone; it must come from a trusted ecosystem. Regulators, boards, management, auditors, and technology providers must work together to embed responsible innovation across the corporate landscape. A company’s financial integrity starts long before the audit, through culture, tone from the top, and effective internal controls.

In Thailand, as digital transformation accelerates, this balance between innovation and governance has become increasingly vital. The Securities and Exchange Commission (SEC) continues to emphasise corporate governance as a foundation of sustainable business practices, while the Federation of Accounting Professions (TFAC) drives adoption of global quality management standards such as ISQM 1 to strengthen audit quality. Together, these frameworks underscore that technology must serve integrity, not replace it.

For Thai businesses, aligning AI adoption with these governance principles will help ensure that digital transformation enhances, rather than compromises, investor confidence and market transparency.

AI is not replacing the auditor; it is reshaping the assurance that society can expect. Technology can enhance the depth and focus of audits, but strong governance, commitment to international standards, regulatory alignment and effective board oversight remain the first and best safeguards for high-quality reporting and public trust.


*This article was written by Lee Boon Teck, Audit & Assurance Regional Managing Partner, Deloitte Southeast Asia.