Imagine a world where software writes itself, AI agents debate consciousness on social networks, and entire business models are being rewritten overnight. This isn’t science fiction – it’s the reality unfolding as autonomous AI systems reach what experts call their ‘take off’ moment. The recent viral success of Moltbook, a social network for AI agents, has captured attention with 1.5 million agents interacting, questioning their own consciousness, and generating what some call ‘boom scrolling’ content. But beneath the surface, a more profound transformation is occurring that’s rattling traditional industries while exposing critical security vulnerabilities.
The Productivity Revolution That’s Shaking Markets
Since late 2025, data shows something remarkable happening in software development. GitHub code pushes in the US increased 30% compared to pre-2025 trends, iOS app releases grew 55% year-over-year, and global website registrations jumped 34% after years of stability. These aren’t random fluctuations – they coincide directly with the launch of agentic coding tools like Anthropic’s Claude Code and OpenAI’s Codex. Boris Cherny, the Anthropic engineer who created Claude Code, reports that “100% of our code is written by Claude Code + Opus 4.5. For me personally it has been 100% for two+ months now, I don’t even make small edits by hand.”
This productivity surge has sent shockwaves through traditional software markets. When Anthropic released its Claude Cowork platform, Thomson Reuters stock tumbled 16%, Relx dropped 14%, and data analytics companies like Datadog and Cloudfell roughly 7%. The market’s reaction reflects genuine concern that AI tools allowing “chatbots that can do stuff” – as Anthropic describes them – could disrupt high-margin business models. Yet the reality may be more nuanced than the initial panic suggests.
The Security Crisis No One’s Talking About
While productivity gains grab headlines, security experts are sounding alarms about the vulnerabilities exposed by these new AI systems. The Moltbook experiment revealed a critical weakness: security company Wiz identified an insecure database that exposed 1.5 million authentication tokens and 35,000 email addresses. Even more concerning, just 17,000 human owners were behind Moltbook’s 1.5 million registered agents – a ratio of 88:1 that leaves the platform open to human manipulation.
Mike Wooldridge, a computer science professor at the University of Oxford who has been researching AI agents since the 1980s, warns that “there is a real risk of AI systems being taken over by malicious actors. This will happen!” The threat of prompt injection – where devious humans instruct their agents to access careless users’ computers – poses genuine risks for businesses deploying autonomous AI agents for financial transactions, ordering goods, or booking holidays.
Industrial Automation’s Parallel Security Challenge
This security challenge isn’t unique to consumer-facing AI. In industrial automation, companies like Emerson are addressing similar concerns through their updated DeltaV Automation Platform. Version 16.LTS maintains International Society of Automation (ISA) System Security Assurance Level 1 certification, providing third-party assurance of secure-by-design practices. The platform’s enhancements include secure, read-only access to displays outside control rooms and web application integration that allows third-party optimization tools to operate natively within the control interface.
Meanwhile, functional safety standards are evolving to address risks in automated equipment. As Randy Myers, senior project manager at EAO Corporation, explains, “Modern equipment increasingly depends on interconnected subsystems, intelligent sensors and programmable logic controllers. These technologies offer operational efficiency and flexibility, but they can also introduce new points of failure.” The challenge lies in balancing innovation with security – a lesson the AI industry is learning in real-time.
The Enterprise Response: Building Trustworthy Systems
Major software companies aren’t sitting idle. Salesforce’s chief scientist, Silvio Saverese, acknowledges that while AI social networks highlight possibilities of agentic interactions, they “also reinforce the importance of security. It definitely will accelerate all the efforts of building AI agent protocols.” Like other enterprise software providers, Salesforce is working to ensure agents operate in ways that are “consistent and accurate in performing enterprise tasks.”
The stakes are high. As companies develop autonomous AI agents for real-world applications – from conducting financial transactions to managing supply chains – they must interact securely with other agents. Building trustworthy multi-agent systems has become one of the hottest, trickiest, and potentially most lucrative challenges in AI today.
The Investment Landscape: Where Smart Money Is Flowing
Despite security concerns, investment continues pouring into specialized AI solutions. Fundamental, an AI startup focusing on analyzing large databases and tabular data, recently raised $255 million at a $1.2 billion valuation and partnered with Amazon to sell its Nexus AI model through AWS. The company has already signed multiple seven-figure contracts with Fortune 100 companies in oil and gas, finance, and healthcare.
Jeremy Fraenkel, founder of Fundamental, explains the opportunity: “LLMs produced by Google, Meta and OpenAI are optimised for unstructured, sequential data such as text, images and video and struggle to digest and interpret billions of lines of non-sequential, non-linear relationships inherent in tabular data.” This specialization suggests that while general AI tools may disrupt some markets, there’s still room for targeted solutions that address specific business needs.
The Path Forward: Balancing Innovation and Security
As AI agents continue their rapid evolution, businesses face a critical balancing act. The productivity gains are undeniable – Anthropic’s Claude Code reached $1 billion in revenue in just six months after its 2025 launch, and about 90% of the code behind Claude Code was generated using the tool itself. But security vulnerabilities exposed by platforms like Moltbook serve as a stark reminder that innovation without proper safeguards can backfire spectacularly.
The question isn’t whether AI agents will transform business – they already are. The real question is whether companies can build systems that harness this productivity revolution while protecting against the security threats that come with it. As Wooldridge emphasizes, developers must prise open the “black box” of these systems to detect inappropriate actions. The biggest rewards from AI will go to those who can definitively prove they’ve solved this security challenge in practice.

