Imagine receiving a video call from what appears to be your company’s CEO, urgently requesting a wire transfer. The voice, the mannerisms, the background: everything seems authentic, yet all of it is generated by artificial intelligence. This scenario is no longer speculative fiction; it’s becoming a daily reality as AI-powered scams evolve from simple phishing emails to sophisticated multimedia deceptions that exploit human trust at scale.
The Rise of Professional Deepfake Models
According to a recent Wired investigation, models are now applying for positions as “the face of AI scams,” with some reportedly handling up to 100 video calls per day. These aren’t amateur actors but professionals who understand how to convincingly portray executives, customer service representatives, or trusted contacts. The business model is straightforward: scammers pay these models to record hours of video footage, which is then processed through AI tools to create realistic deepfakes that can be deployed in targeted attacks.
What makes this development particularly concerning is the professionalization of the scam economy. As one anonymous source in the Wired article noted, “It’s not just about creating fake videos anymore – it’s about creating believable personas that can sustain extended interactions.” This represents a significant escalation from earlier AI scams that relied on static images or short video clips.
The Financial Impact: Billions at Stake
The scale of this problem is staggering. A Financial Times analysis reveals that AI scams cost consumers and businesses $12.3 billion in 2023 alone, with projections suggesting losses could more than triple to $40 billion by 2025. These aren’t just individual losses either – they’re affecting corporate bottom lines and undermining trust in digital communications.
Anya Schiffrin, co-director of Technology Policy and Innovation at Columbia University, puts the challenge in stark terms: “It’s unrealistic to expect people to detect AI deep fakes. After all, they are designed to deceive. And, as I always say, we don’t expect people to look at each aspirin in the drugstore and try to figure out whether it’s safe. This is something that needs to be addressed at scale by companies and governments.”
State-Sponsored Threats: North Korea’s AI Workforce
While individual scammers pose significant threats, state actors are leveraging similar technology for more strategic purposes. According to another Financial Times investigation, North Korean operatives have created “fake workers” using AI to infiltrate European and US companies. These operatives use AI-generated digital masks for interviews, forge documents with machine learning assistance, and employ large language models to avoid detection during remote work.
Jamie Collier, lead adviser in Europe at Google Threat Intelligence Group, explains the vulnerability: “Recruitment has not naturally been seen as a security issue, so it’s an area of weakness in companies’ systems and these operatives are targeting that vulnerability.” The investigation found that North Korean operatives infiltrated over 300 US companies between 2020 and 2024, generating at least $6.8 million for Pyongyang.
Corporate Responses and Regulatory Challenges
Companies are beginning to respond, but the solutions remain fragmented. Amazon, for instance, has stopped more than 1,800 suspected North Korean operatives since April 2024. Meanwhile, regulatory approaches vary globally – the EU has implemented the Digital Services Act, the UK has the Online Safety Act, and Singapore has established COSMIC, a platform through which banks share information on customers showing financial-crime risk indicators.
Yet the global nature of these scams presents a fundamental challenge. As Schiffrin notes, “We have local regulations for global scams.” This mismatch between jurisdictional boundaries and borderless digital threats creates enforcement gaps that scammers readily exploit.
The Investment Perspective: Beyond “AI Wrappers”
Interestingly, while scammers are finding innovative ways to misuse AI, legitimate investors are becoming more discerning about AI startups. A recent Google and Accel accelerator program in India reviewed over 4,000 applications and found that roughly 70% were “wrappers” – startups that simply layered AI features on top of existing software without reimagining workflows.
None of these wrapper startups made the final cut. Instead, the program selected companies working on more substantive applications: an AI “co-scientist” for life sciences research, autonomous agents for enterprise systems, voice AI for call centers, platforms for AI-generated content, and industrial automation solutions. This suggests that while surface-level AI applications proliferate, serious investment is flowing toward deeper technological integration.
Looking Ahead: A Multi-Layered Defense
The emerging consensus among security experts points toward several necessary responses. First, companies need to treat recruitment and vendor verification as security issues, not just HR processes. Second, authentication protocols must evolve beyond passwords and security questions to include behavioral biometrics and continuous verification. Third, international cooperation on scam prevention needs strengthening, perhaps through information-sharing frameworks similar to those used in anti-money laundering efforts.
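To make the second recommendation concrete, here is a minimal sketch of what continuous verification might look like, using just two behavioral signals: typing cadence and login hour. The `SessionProfile` class, the weights, and the thresholds are hypothetical illustrations, not any vendor’s actual scoring model.

```python
from dataclasses import dataclass, field


@dataclass
class SessionProfile:
    """Baseline behavioral signals captured at enrollment (hypothetical)."""
    avg_keystroke_interval_ms: float
    usual_login_hours: set[int] = field(default_factory=set)


def risk_score(profile: SessionProfile, observed_interval_ms: float,
               login_hour: int) -> float:
    """Combine simple behavioral deviations into a 0-to-1 risk score."""
    score = 0.0
    # Typing cadence far from the enrolled baseline raises suspicion.
    deviation = abs(observed_interval_ms - profile.avg_keystroke_interval_ms)
    score += min(deviation / profile.avg_keystroke_interval_ms, 1.0) * 0.6
    # A login outside the user's usual hours adds moderate risk.
    if login_hour not in profile.usual_login_hours:
        score += 0.4
    return min(score, 1.0)


def enforce(profile: SessionProfile, interval_ms: float, hour: int) -> str:
    """Decide whether to allow, re-challenge, or end the session."""
    score = risk_score(profile, interval_ms, hour)
    if score < 0.3:
        return "allow"
    if score < 0.7:
        return "step-up"  # e.g. push a fresh MFA challenge mid-session
    return "terminate"


# A user who normally types ~120 ms between keys, logging in at 3 a.m.
alice = SessionProfile(avg_keystroke_interval_ms=120.0,
                       usual_login_hours={8, 9, 10, 14, 15})
print(enforce(alice, interval_ms=310.0, hour=3))  # -> "terminate"
```

Production behavioral-biometrics systems use far richer features, but the design point is the same: scoring continues throughout the session, so a hijacked or impersonated account gets challenged again long after the initial login.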
Perhaps most importantly, businesses must recognize that AI scams aren’t just a technical problem – they’re a human problem. The most sophisticated deepfake can be rendered ineffective if employees are trained to follow verification protocols and if organizations maintain healthy skepticism about unexpected requests, no matter how authentic they appear.
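As a worked example of such a verification protocol, the sketch below encodes one plausible rule set: any large transfer, new payee, or request arriving over live audio or video must be confirmed through a second, independently established channel. The `TransferRequest` type, the $10,000 threshold, and the field names are invented for illustration.

```python
from dataclasses import dataclass

CALLBACK_THRESHOLD_USD = 10_000  # hypothetical policy threshold


@dataclass
class TransferRequest:
    requester: str      # who appears to be asking, e.g. "CEO"
    channel: str        # "video_call", "email", "ticket", ...
    amount_usd: float
    is_payee_new: bool


def requires_out_of_band_check(req: TransferRequest) -> bool:
    """Flag requests that must be confirmed on an independent channel."""
    if req.amount_usd >= CALLBACK_THRESHOLD_USD:
        return True
    if req.is_payee_new:
        return True
    # Live audio and video are now cheap to fake, so on their own
    # they count as unverified channels.
    return req.channel in {"video_call", "voice_call"}


req = TransferRequest("CEO", "video_call", 250_000, is_payee_new=True)
if requires_out_of_band_check(req):
    print("Hold transfer: confirm via a number on file, not the caller's.")
```

The crucial design choice is that the confirmation channel comes from records on file, never from contact details supplied inside the request itself; a deepfaked caller controls everything within the call, including any callback number it offers.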
As the line between real and synthetic media continues to blur, the question isn’t whether your organization will be targeted, but when. The professionalization of AI scams means that defenses must become equally sophisticated – combining technological solutions with human awareness and robust processes. In this new reality, trust must be earned through verification, not assumed through appearance.