Imagine hiring a top-performing remote employee, only to discover they’re actually a North Korean operative using artificial intelligence to steal your company’s secrets and funnel money to Pyongyang. This isn’t science fiction – it’s happening right now at some of Europe’s biggest companies, according to a Financial Times investigation that reveals how AI is being weaponized in sophisticated employment scams.
The North Korean ‘Mini Army’ of Fake Workers
A “mini army” of North Korean IT operatives is increasingly using AI to pose as remote workers, secure jobs, and earn wages at European companies, cyber experts warn. This state-backed enterprise has already infiltrated more than 300 U.S. companies between 2020 and 2024, generating at least $6.8 million for Kim Jong Un’s regime, according to Department of Justice figures.
Jamie Collier, lead adviser in Europe at Google Threat Intelligence Group, told the FT there are now indications that the phenomenon is spreading to Europe, with North Korean agents setting up “laptop farms” in the UK. “Recruitment has not naturally been seen as a security issue, so it’s an area of weakness in companies’ systems and these operatives are targeting that vulnerability,” he said.
The scam’s sophistication is alarming. Operatives typically steal identities by hijacking dormant LinkedIn accounts or paying account holders for access. After forging CVs and identity documents – and relying on other operatives to provide LinkedIn endorsements – they use AI to create digital masks or avatars and deepfake video filters to appear in remote job interviews.
How AI Makes Scammers More Credible
Alex Laurie, chief technology officer at cyber security firm Ping Identity, explained that AI has radically enhanced false applicants’ credibility. “By using large language models, operatives can generate culturally appropriate names and matching email address formats, ensuring that their communications do not trigger linguistic or cultural ‘red flags’ that previously spotted such scams,” he said.
After companies tightened online recruitment processes amid concerns about AI-generated applications, North Korean operatives adapted. They started paying real people, or "facilitators," to sit the online interviews instead. Once hired, they intercept laptops sent to new starters, log in remotely, and use LLMs and chatbots to carry out their assigned work – sometimes holding multiple jobs simultaneously.
The Corporate Security Wake-Up Call
Rafe Pilling, director of threat intelligence at Sophos' Counter Threat Unit, called it a state-backed enterprise: "A mini army of North Koreans have been targeting high-salary, fully remote tech jobs. Framing themselves as talent with around seven to 10 years' experience, getting jobs, drawing a salary – rinse and repeat."
The implications are profound. “The future of UK national security will be determined by the ability of its corporate sector to authenticate its workforce in the face of persistent, AI-enhanced adversarial impact,” Laurie warned.
Amazon's security chief Stephen Schmidt revealed in a January LinkedIn post that Amazon had stopped more than 1,800 suspected North Korean operatives from getting jobs since April 2024. The operatives were increasingly targeting AI and machine learning roles, he noted: "This isn't Amazon specific – this is likely happening at scale across the industry."
Broader AI Security Threats Emerging
This North Korean threat represents just one facet of a growing AI security landscape. According to a Google Cloud Security report, cybercriminals are using AI to accelerate cloud attacks, with the exploitation window shrinking from weeks to days. The primary vulnerability is third-party software, with attacks targeting unpatched code in components such as React Server Components and the XWiki Platform.
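To make that shrinking patch window concrete, here is a minimal Python sketch of the kind of check a team might run: comparing installed package versions against an advisory list. The ADVISORIES table, package names, and version thresholds are illustrative assumptions, not real advisories; a production system would consume a live vulnerability feed such as OSV rather than a hard-coded dictionary.

```python
# Illustrative sketch: flag installed Python packages that fall below a
# minimum patched version. The ADVISORIES table is a hypothetical stand-in
# for a real advisory feed.
from importlib import metadata

# Hypothetical advisory data: package name -> first version known to be patched
ADVISORIES = {
    "requests": (2, 31, 0),
    "urllib3": (2, 0, 7),
}

def parse_version(version: str) -> tuple:
    """Best-effort numeric parse; non-numeric segments are treated as 0."""
    parts = []
    for piece in version.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

def audit_installed_packages() -> list[str]:
    """Return warnings for installed packages older than the patched version."""
    findings = []
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in ADVISORIES:
            installed = parse_version(dist.version)
            patched = ADVISORIES[name]
            if installed < patched:
                findings.append(
                    f"{name} {dist.version} is below patched release "
                    f"{'.'.join(map(str, patched))}"
                )
    return findings

if __name__ == "__main__":
    for warning in audit_installed_packages():
        print("WARNING:", warning)
```

The point of a check like this is cadence, not sophistication: if exploitation now happens in days, dependency audits need to run on every build rather than on a quarterly review cycle.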
State-sponsored actors, including North Korean group UNC4899, exploit these weaknesses through social engineering and compromised identities. The report found that 45% of intrusions resulted in data theft without immediate extortion attempts, while 21% involved compromised trusted relationships with third parties.
Pentagon AI Deals Create Industry Turmoil
Meanwhile, the U.S. government's approach to AI security is creating its own disruptions. The Pentagon recently designated Anthropic as a supply-chain risk after the two sides failed to agree on the extent of military control over Anthropic's AI models, including their use in autonomous weapons and mass domestic surveillance. The dispute led to the collapse of a $200 million contract, with the Department of Defense turning to OpenAI instead.
The controversy has sparked significant industry backlash. ChatGPT uninstalls surged 295% following the Pentagon deal announcement, while Anthropic's Claude climbed to the top of the App Store charts. The tension became so severe that Caitlin Kalinowski, OpenAI's robotics lead, resigned over the company's agreement with the Pentagon, citing concerns about rushed governance and the lack of defined guardrails against domestic surveillance and lethal autonomous weapons.
“This wasn’t an easy call. AI has an important role in national security,” Kalinowski said. “But surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.”
The Corporate Response Dilemma
Microsoft has confirmed that Anthropic’s Claude AI model will remain available to its customers through products like M365, GitHub, and Microsoft’s AI Foundry, except for the Department of Defense. Microsoft’s legal team determined that the designation only restricts direct use of Claude in DoD contracts, not unrelated uses by contractors.
Anthropic CEO Dario Amodei has vowed to fight the designation in court, stating: “With respect to our customers, it plainly applies only to the use of Claude by customers as a direct part of contracts with the Department of War, not all use of Claude by customers who have such contracts.”
What This Means for Businesses
For companies navigating this complex landscape, several key takeaways emerge:
- Remote hiring requires enhanced verification: Traditional background checks are insufficient against AI-enhanced identity theft. Companies need multi-factor authentication and behavioral analysis tools; a minimal sketch of how such signals might be combined follows this list.
- Third-party software is a critical vulnerability: The Google Cloud report shows that 21% of attacks involve compromised trusted relationships with third parties.
- Government contracts come with ethical dilemmas: The Anthropic-Pentagon conflict demonstrates how national security demands can clash with corporate ethics and consumer expectations.
- AI security requires continuous adaptation: As Collier noted about the North Korean operatives, "When we had to tell a client that one of their workers was actually a fake North Korean operative, the feedback was 'are you 100 percent sure, because he's one of our best employees.'"
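Returning to the first takeaway, here is a minimal Python sketch of how layered onboarding signals might be scored. The signal names, weights, and escalation threshold are illustrative assumptions rather than a vetted control; in practice they would sit alongside the multi-factor authentication and behavioral analysis tools mentioned above.

```python
# Illustrative sketch of layered new-hire verification, not a production
# control. Signal names and weights are assumptions for the example.
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    geo_matches_declared_location: bool   # login IP geolocation vs. contract address
    remote_access_tooling_detected: bool  # e.g. unattended remote-desktop agents
    device_matches_shipped_laptop: bool   # hardware fingerprint of company laptop
    live_video_reverify_passed: bool      # periodic on-camera check against ID photo

def risk_score(s: OnboardingSignals) -> int:
    """Sum weighted red flags; higher means escalate to security review."""
    score = 0
    if not s.geo_matches_declared_location:
        score += 3
    if s.remote_access_tooling_detected:
        score += 4  # laptop-farm pattern: device driven remotely by someone else
    if not s.device_matches_shipped_laptop:
        score += 3
    if not s.live_video_reverify_passed:
        score += 2
    return score

# Example: a hire whose company laptop is operated remotely from elsewhere
signals = OnboardingSignals(
    geo_matches_declared_location=False,
    remote_access_tooling_detected=True,
    device_matches_shipped_laptop=True,
    live_video_reverify_passed=False,
)
if risk_score(signals) >= 5:
    print("Escalate: onboarding identity signals exceed review threshold")
```

No single signal here is decisive – the laptop-farm scheme described above defeats each of them individually at times – which is why the scoring combines several before escalating to a human review.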
The convergence of state-sponsored cyber operations and corporate AI adoption creates unprecedented security challenges. As businesses increasingly rely on AI for productivity and innovation, they must also confront its potential for sophisticated deception – whether from foreign adversaries or in the ethical dilemmas of government partnerships. The question isn’t whether AI will transform business security, but whether companies can adapt quickly enough to the threats it enables.

