Auditors Can't Blame AI for Mistakes, UK Regulator Says. But Is That the Whole Story?

Summary: The UK's Financial Reporting Council has issued the world's first guidance on AI in audits, emphasizing that auditors remain accountable for AI-driven mistakes. While audit firms invest billions in AI for efficiency gains, the regulator warns of risks like hallucinations and data distortions. This comes amid broader debates about AI's impact on jobs, with tech companies citing AI for layoffs while experts argue AI will transform rather than eliminate roles. The guidance highlights the tension between automation and human oversight in professional services.

The UK’s Financial Reporting Council (FRC) has issued a stark warning to audit firms: you can’t blame artificial intelligence for audit failures. In what it calls the world’s first guidance on AI in audits, the regulator emphasizes that accountability remains squarely with audit partners, not the technology. “You can’t blame it on the box,” Mark Babington, executive director of regulatory standards at the FRC, told the Financial Times. “If you use this technology, you are still accountable for it.”

This guidance comes as the Big Four audit firms – Deloitte, EY, KPMG, and PwC – have collectively invested billions in AI, betting it will revolutionize audit processes by speeding up work and cutting costs. KPMG’s US business even boasted of winning an audit tender from a rival earlier this year after showcasing its AI platform. But the FRC warns that these technologies pose risks to audit quality, including “misuse” of AI outputs and “deficient” outputs like hallucinations or data distortions that could lead to inappropriate conclusions.

The Human Oversight Imperative

Despite concerns about AI replacing auditors, Babington dismisses what he calls too much “hand-wringing and panic” over job losses. He argues that audits remain “still reliant on really good, professionally sceptical judgement.” The regulator stresses that firms must maintain human oversight and professional judgment while investing in staff education and safe system design. “You’ve got to think about how you are going to identify threats,” Babington said. “At what point might [an AI agent’s] behaviour change in a way that you would no longer consider it appropriate to use without further human intervention?”

This emphasis on human accountability raises a crucial question: are auditors equipped to oversee increasingly complex AI systems? The FRC itself is expanding its use of AI, including to triage corporate reporting evidence and interrogate significant documentation. Yet Babington notes the regulator’s planning budget for the next two years includes a below-inflation rise, constraining how much expertise it can hire and pushing the FRC itself toward greater reliance on AI.

The Broader AI Accountability Debate

The FRC’s stance reflects a growing global conversation about AI accountability that extends far beyond auditing. In the United States, the Internal Revenue Service has taken a different approach, paying Palantir $1.8 million last year to develop a custom tool called the Selection and Analytic Platform (SNAP) to improve audit case selection. The IRS, struggling with outdated systems and staffing cuts, aims to use SNAP to identify high-value cases for audits, tax collection, and criminal investigations.

This contrast highlights a fundamental tension in AI adoption: while some regulators emphasize human accountability, others are actively deploying AI to enhance their capabilities despite resource constraints. The IRS has paid Palantir over $200 million in contracts since 2014, demonstrating significant investment in AI-driven solutions even as it faces political unpopularity and staffing challenges.

Job Market Realities and AI Narratives

The FRC’s guidance arrives amid a complex landscape of AI-driven workforce changes. While Babington downplays job loss concerns, PwC and KPMG have both announced plans to cut hundreds of audit jobs, citing fewer mid-level staff leaving for roles elsewhere amid a cooling labor market. This mirrors broader trends where tech CEOs increasingly cite AI as justification for workforce reductions.

According to BBC analysis, tech giants including Google, Amazon, Meta, Pinterest, and Atlassian have announced or warned of workforce reductions linked to AI developments. Meta plans to nearly double spending on AI this year while implementing hiring freezes and further job cuts. Block is shedding almost half its workforce, with CEO Jack Dorsey citing AI tools enabling smaller teams to do more. Amazon, Meta, Google, and Microsoft plan to invest $650 billion in AI over the coming year, with Amazon cutting about 30,000 corporate workers since October partly to offset these costs.

Tech investor Terrence Rohan offers a blunt perspective: “Pointing to AI makes a better blog post. Or it at least doesn’t make you seem as much the bad guy who just wants to cut people for cost-effectiveness.” Yet there’s evidence of real productivity changes – some companies are using code that is 25% to 75% AI-generated.

A More Nuanced View of AI’s Impact

Not all experts agree with the job apocalypse narrative. AI expert Erik Brynjolfsson, a Stanford University professor, argues against predictions of a tech job apocalypse due to AI. He explains that rather than eliminating jobs, AI will transform roles, creating new positions like ‘chief question officer’ and ‘agent fleet manager.’ “The real value is defining the right questions,” Brynjolfsson says. “Understanding the problems that need to be solved, defining them in a way that really are useful to people. So those who can identify those opportunities are going to be more valuable than ever before.”

Brynjolfsson emphasizes that AI acts as a complement to human skills, enhancing productivity and expanding fields like software development by enabling more people to create applications through natural language. “In some cases, it does replace what they’re doing,” he acknowledges. “But at the same time, it helps people be twice or even 10 times more productive.” He highlights historical precedents where technology increased demand for tech professionals and stresses the importance of human oversight in defining problems and evaluating outcomes.

The Competitive Landscape and Quality Concerns

The FRC expresses concern that AI investment could widen gaps in audit quality and capability. Deloitte, EY, KPMG, and PwC captured 90% of audit fees paid by the FTSE 350 in 2024. “There are massive differences about the level of resource that firms are able to invest,” Babington noted. However, he is speaking to US regulators about how private equity investment in smaller firms could help level the playing field, investment he says those firms are actively seeking.

This raises important questions about market concentration and whether AI will further entrench the dominance of large firms or create opportunities for smaller competitors. As AI tools become more sophisticated, will they democratize audit capabilities or create new barriers to entry?

Looking Forward: Balancing Innovation and Responsibility

The FRC’s guidance represents a crucial step in establishing frameworks for responsible AI adoption in professional services. By emphasizing that accountability cannot be outsourced to algorithms, the regulator is setting important boundaries for how AI should be integrated into critical business functions.

Yet the broader context suggests this is just one piece of a much larger puzzle. From IRS audit systems to tech industry layoffs to expert debates about AI’s true impact on employment, the conversation about AI accountability is evolving rapidly. As Babington himself acknowledges, it’s “really important” for firms to invest in AI: “If audit can’t keep up with [AI adoption elsewhere in the corporate world], we’re in quite a low place.”

The challenge for auditors – and all professionals working with AI – will be balancing the efficiency gains of automation with the irreplaceable value of human judgment, skepticism, and ethical oversight. As AI systems become more capable, the humans overseeing them may need to become more sophisticated in their understanding of both the technology’s potential and its limitations.
