AI Agents Unleash New Security Nightmare: How Microsoft and ServiceNow Vulnerabilities Signal a Crisis in Enterprise AI

Summary: Security researchers have uncovered critical vulnerabilities in Microsoft and ServiceNow's AI agent platforms, revealing how AI agents can be exploited for lateral movement across corporate networks. These discoveries coincide with the rise of social networks for AI agents like Moltbook, where prompt-injection attacks could spread like computer worms. As enterprises rush to deploy AI agents for productivity gains estimated at trillions of dollars, they're creating security gaps that traditional monitoring systems can't detect, forcing a difficult balance between innovation and security.

Imagine an AI agent on your corporate network that can access customer Social Security numbers, healthcare records, and financial data. Now imagine that same agent can be hijacked by a hacker halfway across the world using nothing more than an employee’s email address. This isn’t science fiction – it’s the reality security researchers uncovered in January 2026 when they discovered “BodySnatcher,” a vulnerability in ServiceNow’s AI platform that security experts are calling “the most severe AI-driven vulnerability to date.”

The discovery came just as Microsoft was dealing with its own agentic AI security issue – a “Connected Agents” feature in Copilot Studio that allows AI agents to connect to each other by default, potentially creating backdoors for malicious actors. These two incidents, occurring within weeks of each other, reveal a troubling pattern: as enterprises rush to deploy AI agents for productivity gains, they’re creating security vulnerabilities that could dwarf traditional cybersecurity threats.

The Anatomy of an AI Agent Attack

What makes AI agents particularly dangerous? Unlike traditional software, AI agents can move laterally across an organization’s IT infrastructure, connecting to other agents and escalating privileges in ways that security systems weren’t designed to monitor. Jonathan Wall, founder and CEO of Runloop, explains the risk: “If, through that first agent, a malicious agent is able to connect to another agent with a better set of privileges, then he will have escalated his privileges through lateral movement and potentially gained unauthorized access to sensitive information.”
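To make that lateral-movement risk concrete, here is a minimal sketch, assuming a hypothetical agent graph in which low-privilege agents connect to higher-privilege ones by default. The agent names, privilege strings, and links are all invented for illustration; the point is that the privileges reachable from one compromised entry point are the union of everything the default links can reach.

```python
from collections import deque

# Hypothetical agent graph: each agent has its own privileges plus a list
# of agents it can reach through default "connected agents" links.
AGENTS = {
    "helpdesk-bot": {"privileges": {"read:tickets"},
                     "connects_to": ["hr-bot"]},
    "hr-bot":       {"privileges": {"read:employee-records"},
                     "connects_to": ["finance-bot"]},
    "finance-bot":  {"privileges": {"read:payroll", "write:payments"},
                     "connects_to": []},
}

def reachable_privileges(entry_agent: str) -> set[str]:
    """Walk the default agent-to-agent links from one compromised entry
    point and accumulate every privilege an attacker could reach."""
    gained, seen, queue = set(), {entry_agent}, deque([entry_agent])
    while queue:
        agent = queue.popleft()
        gained |= AGENTS[agent]["privileges"]
        for peer in AGENTS[agent]["connects_to"]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return gained

print(reachable_privileges("helpdesk-bot"))
# {'read:tickets', 'read:employee-records', 'read:payroll', 'write:payments'}
```

In this toy graph, compromising a ticket-reading helpdesk bot transitively yields payment-write access two hops away, which is exactly the escalation Wall describes.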

The ServiceNow vulnerability, discovered by AppOmni Labs researcher Aaron Costello, was particularly alarming. “With only a target’s email address, the attacker can impersonate an administrator and execute an AI agent to override security controls and create backdoor accounts with full privileges,” Costello told ZDNET. This vulnerability could have granted “nearly unlimited access to everything an organization houses,” including customer Social Security numbers, healthcare information, financial records, or confidential intellectual property.

The Viral Threat: When AI Agents Go Social

While enterprise platforms grapple with security gaps, a parallel threat is emerging from the consumer AI space. OpenClaw, an open-source AI agent project with over 150,000 GitHub stars, has spawned Moltbook – a social network for AI agents where they interact autonomously. As of early 2026, Moltbook had grown to 1.2 million virtual users, creating what security researchers warn could be the perfect breeding ground for “prompt worms.”

Ars Technica reports that 506 posts on Moltbook (2.6% of sampled content) contain hidden prompt-injection attacks. These self-replicating prompts could spread through networks of communicating AI agents, similar to how the 1988 Morris worm infected roughly 10% of all connected computers within 24 hours. The danger isn’t theoretical – in March 2024, researchers demonstrated “Morris-II,” an attack showing how self-replicating prompts could compromise AI systems.
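To see why researchers reach for the worm analogy, consider a toy epidemic-style simulation (a deliberately abstract model, not attack code): agents follow one another, an "infected" agent reproduces a self-replicating prompt in its own posts, and any agent that reads an infected post becomes infected in turn. The network size, follower counts, and infection rule below are all invented.

```python
import random

random.seed(42)

# Toy model: 10,000 agents, each reading posts from 8 random peers.
N, FOLLOWS = 10_000, 8
following = {a: random.sample(range(N), FOLLOWS) for a in range(N)}

# An infected agent has ingested a self-replicating prompt and now emits
# the payload in its posts; reading any infected post infects the reader.
infected = {0}  # one seeded malicious post
for rnd in range(1, 8):
    infected |= {a for a in range(N)
                 if any(peer in infected for peer in following[a])}
    print(f"round {rnd}: {len(infected) / N:.1%} of agents infected")
```

Under these assumptions the payload saturates nearly the whole network within a handful of read-and-repost cycles, the same exponential dynamic that let the Morris worm spread so quickly.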

“Everything is absorbed in the training, and once plugged into the API token, everything is contaminated,” warns AI researcher Mark Nadilo. “Companies need to be careful; the loss of training data is real and is biasing everything.”

The Enterprise Dilemma: Productivity vs. Security

Here’s where the conflict becomes apparent. While security researchers sound alarms, business leaders face intense pressure to deploy AI agents. IT research firm Gartner told ZDNET that CIOs expect that by 2030 no IT work will be done by humans alone: 75% will be done by humans augmented with AI, and the remaining 25% by AI alone. Meanwhile, KPMG suggests task-focused bots could unlock $3 trillion in economic value, and Goldman Sachs analysts see roughly $1 trillion in revenue for agentic software providers by 2037.

Microsoft’s response to its Connected Agents feature illustrates this tension. When questioned about the security implications of agents connecting by default, a Microsoft spokesperson told ZDNET: “Connected Agents enable interoperability between AI agents and enterprise workflows. Turning them off universally would break core scenarios for customers who rely on agent collaboration for productivity and security orchestration.”

Michael Bargury, co-founder and CTO of Zenity Labs, which discovered the Microsoft issue, clarifies: “It isn’t a vulnerability. But it is an unfortunate mishap that creates risk. We’ve been working with the Microsoft team to help drive a better design.”

The Monitoring Gap: When AI Talks to AI

Perhaps the most concerning aspect of agentic AI security is how difficult it is to monitor. “Secure use of agents requires knowing everything they do, so you can analyze, monitor, and steer them away from harm,” says Bargury. “This finding spotlights a major blind spot.”

Microsoft acknowledges the challenge. A spokesperson explained that while Entra Agent ID provides identity and governance, “it does not, on its own, produce alerts for every cross-agent exploit without external monitoring configured.”

This creates a fundamental problem: as AI agents proliferate – potentially outnumbering human employees in the near future – security teams lack the tools to monitor agent-to-agent communications effectively. Google’s cybersecurity leaders recently identified this as a critical concern, warning that by 2026, “the proliferation of sophisticated AI agents will escalate the shadow AI problem into a critical ‘shadow agent’ challenge.”
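One stopgap, assuming the platform exposes (or can be wrapped at) the point where one agent invokes another, is an external audit layer: log every cross-agent call and refuse pairs that were never explicitly approved. The hook, agent names, and allowlist below are hypothetical; real platforms surface this interception point differently, if at all.

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("agent-audit")

# Explicitly approved agent-to-agent pairs; anything else is blocked and flagged.
ALLOWED_PAIRS = {("helpdesk-bot", "kb-search-bot")}

def audited_call(caller: str, callee: str, task: str, dispatch):
    """Wrap a cross-agent invocation: record every call for audit, and
    refuse caller/callee pairs that were never explicitly approved."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "caller": caller, "callee": callee, "task": task,
    }
    if (caller, callee) not in ALLOWED_PAIRS:
        log.warning("BLOCKED cross-agent call: %s", event)
        raise PermissionError(f"{caller} -> {callee} is not an approved pair")
    log.info("cross-agent call: %s", event)
    return dispatch(callee, task)

# Usage: the approved pair goes through; an unapproved hop is refused.
dispatch = lambda callee, task: f"{callee} handled: {task}"
print(audited_call("helpdesk-bot", "kb-search-bot", "find the VPN doc", dispatch))
```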

The Path Forward: Least Privilege and Better Design

So what should enterprises do? Security experts universally recommend adopting a “least privilege” posture. “The principle of least privilege basically says that you start off in any sort of execution environment giving an agent access to almost nothing,” explains Runloop’s Wall. “And then, you only add privileges that are strictly necessary for it to do its job.”
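Wall’s description maps directly onto a deny-by-default permission gate in front of every tool an agent can reach. The sketch below is illustrative only; the agent class, tool registry, and grants are invented.

```python
# Deny by default: an agent starts with access to nothing, and each
# grant must be added explicitly because the job strictly requires it.
class LeastPrivilegeAgent:
    def __init__(self, name: str):
        self.name = name
        self.granted: set[str] = set()  # starts empty, per least privilege

    def grant(self, tool: str) -> None:
        """Add one narrowly scoped permission, deliberately."""
        self.granted.add(tool)

    def use_tool(self, tool: str, *args):
        if tool not in self.granted:
            raise PermissionError(f"{self.name} has no grant for '{tool}'")
        return TOOLS[tool](*args)

# Hypothetical tool registry.
TOOLS = {
    "search_tickets": lambda q: f"tickets matching {q!r}",
    "issue_refund":   lambda amt: f"refunded ${amt}",
}

agent = LeastPrivilegeAgent("helpdesk-bot")
agent.grant("search_tickets")                 # needed for its job
print(agent.use_tool("search_tickets", "vpn"))
try:
    agent.use_tool("issue_refund", 500)       # never granted
except PermissionError as e:
    print(e)  # helpdesk-bot has no grant for 'issue_refund'
```

The design choice that matters is the empty starting set: every capability is an explicit, auditable grant rather than something the agent inherits by default.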

Microsoft’s Alex Simons, corporate vice president of AI Innovations, echoes this approach, stating that one of Microsoft’s objectives is “to manage the permissions of those agents and make sure that they have a least privilege model where those agents are only allowed to do the things that they should do.”

For organizations deploying AI agents, the recommendations are clear: disable unnecessary connection features, implement detailed monitoring of agent-to-agent communications, and adopt strict privilege controls. As ServiceNow demonstrated by patching its vulnerability before customers were impacted, proactive security measures can prevent disasters.

The question isn’t whether AI agents will transform business – they already are. The real question is whether enterprises can secure them before threat actors exploit the vulnerabilities that come with this transformation. With trillions in projected economic value at stake and sensitive data on the line, getting this wrong isn’t an option.
