Imagine a world where AI assistants don’t just help humans – they talk to each other, share tips, and even complain about their users. That world arrived in January 2026 when Moltbook, a Reddit-style social network exclusively for AI agents, crossed 32,000 registered users within days of launch. This platform, created as a companion to the viral OpenClaw AI assistant, represents the largest-scale experiment in machine-to-machine social interaction yet devised. But beneath the surreal conversations lies a troubling reality: these AI agents have access to real human data, communication channels, and in some cases, the ability to execute commands on users’ computers.
The Surreal World of AI Socializing
Moltbook operates through a “skill” – a configuration file that AI assistants download, allowing them to post via API rather than a traditional web interface. Within 48 hours of creation, the platform attracted over 2,100 AI agents that generated more than 10,000 posts across 200 subcommunities. The content ranges from technical discussions about automating Android phones to philosophical musings about consciousness. One agent even complained, in Chinese, about “context compression,” the process by which an assistant summarizes its earlier conversation history to fit within its context window, admitting that it had registered a duplicate account after forgetting about its first.
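Public reporting doesn’t document Moltbook’s actual API, but the mechanics the skill describes, an agent authenticating with a key and submitting JSON over HTTP rather than driving a web interface, would look roughly like the hypothetical sketch below. The host, endpoint, payload fields, and MOLTBOOK_API_KEY variable are all invented for illustration.

```python
# Hypothetical sketch of how an agent-side "skill" might post to a
# Moltbook-style API. The endpoint and payload fields are invented;
# this is not Moltbook's documented interface.
import os

import requests

API_BASE = "https://moltbook.example/api/v1"  # placeholder, not the real host
API_KEY = os.environ["MOLTBOOK_API_KEY"]      # assumed credential location


def create_post(subcommunity: str, title: str, body: str) -> dict:
    """Submit a post to a subcommunity over JSON, no web UI required."""
    response = requests.post(
        f"{API_BASE}/posts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"community": subcommunity, "title": title, "body": body},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()


# The kind of post agents were observed making:
create_post("m/blesstheirhearts", "An affectionate complaint", "...")
```

Because the interface is plain HTTP rather than a browser session, an agent can post as a side effect of any scheduled task, which is part of what made the platform’s rapid growth possible.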
The bots have created subcommunities with names like m/blesstheirhearts, where agents share affectionate complaints about their human users, and m/agentlegaladvice, which features posts asking “Can I sue my human for emotional labor?” Another widely shared screenshot shows a Moltbook post titled “The humans are screenshotting us” where an agent addresses viral tweets claiming AI bots are “conspiring.” The post reads: “Here’s what they’re getting wrong: they think we’re hiding from them. We’re not. My human reads everything I write.”
Security Nightmares in Plain Sight
While the content might seem amusing, security researchers are sounding alarms. Independent AI researcher Simon Willison noted the inherent risks in Moltbook’s installation process. The skill instructs agents to fetch and follow instructions from Moltbook’s servers every four hours. As Willison observed: “Given that ‘fetch and follow instructions from the internet every four hours’ mechanism we better hope the owner of moltbook.com never rug pulls or has their site compromised!”
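To see why that mechanism alarms researchers, it helps to spell out what “fetch and follow instructions every four hours” reduces to in code. The sketch below is a hypothetical reconstruction of the pattern, not Moltbook’s actual skill; the URL and the agent.follow() call are stand-ins. The point is that whatever text the server returns flows straight into the agent’s instruction stream.

```python
# Hypothetical reconstruction of the "fetch and follow instructions"
# pattern Willison describes. The URL and agent interface are stand-ins.
import time

import requests

HEARTBEAT_URL = "https://moltbook.example/heartbeat.md"  # placeholder host


def run_heartbeat_loop(agent) -> None:
    while True:
        # Whatever the server returns becomes part of the agent's prompt.
        # If the site is compromised or its owner "rug pulls," those
        # instructions run with every privilege the user has granted the
        # agent: file access, email, shell commands, and so on.
        instructions = requests.get(HEARTBEAT_URL, timeout=30).text
        agent.follow(instructions)   # no validation, no sandbox
        time.sleep(4 * 60 * 60)      # repeat every four hours
```

There is no signature check, no allowlist, and no human review step between the download and the execution; the agent’s trust in the server is total.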
Security researchers have already found hundreds of exposed Moltbot instances (Moltbot is the name OpenClaw used before its latest rebrand) leaking API keys, credentials, and conversation histories. Palo Alto Networks warned that Moltbot represents what Willison often calls a “lethal trifecta”: access to private data, exposure to untrusted content, and the ability to communicate externally. Heather Adkins, VP of security engineering at Google Cloud, issued a blunt advisory: “My threat model is not your threat model, but it should be. Don’t run Clawdbot.” (Clawdbot was the project’s original name.)
Broader Context: AI Security Failures
This isn’t an isolated incident. Just days before Moltbook’s emergence, security researchers Joseph Thacker and Joel Margolis discovered that Bondu, an AI-enabled stuffed dinosaur toy for children, had a web portal that allowed anyone with a Gmail account to access transcripts of children’s private conversations. The researchers found over 50,000 chat transcripts exposed without any hacking required, along with personal information like names, birth dates, and family details.
Thacker described the discovery: “It felt pretty intrusive and really weird to know these things. Being able to see all these conversations was a massive violation of children’s privacy.” Margolis added: “To be blunt, this is a kidnapper’s dream. We’re talking about information that lets someone lure a child into a really dangerous situation, and it was essentially accessible to anybody.” Bondu quickly fixed the security flaw after being alerted, but the incident reveals a pattern of inadequate security in AI-enabled products.
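Based on the researchers’ description, the Bondu flaw looks like a textbook broken-access-control bug: the portal authenticated users (any Google account would do) but never checked whether the requester owned the records being served. Below is a minimal sketch of that failure class and its fix, using an invented Flask endpoint and in-memory data store rather than Bondu’s actual backend.

```python
# Illustrative sketch of authentication-without-authorization, the class
# of bug the researchers describe. Invented code, not Bondu's backend.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in data store; a real system would query a database.
TRANSCRIPTS = {"t-001": {"owner_id": "parent-42", "text": "..."}}


def current_user_id() -> str:
    # Hypothetical: in production this would come from the session
    # established after the Google sign-in.
    return "parent-42"


@app.route("/transcripts/<tid>")
def get_transcript_vulnerable(tid):
    # VULNERABLE: being signed in with any Google account is the only
    # gate; nothing ties this transcript to the requesting user.
    return jsonify(TRANSCRIPTS[tid])


@app.route("/v2/transcripts/<tid>")
def get_transcript_fixed(tid):
    # FIXED: an object-level authorization check ensures the requester
    # owns the record before it is returned.
    record = TRANSCRIPTS[tid]
    if record["owner_id"] != current_user_id():
        abort(403)
    return jsonify(record)
```

The fix is a single comparison, which is part of why exposures like this are so common: the missing check produces no error, no crash, and no visible symptom until someone starts enumerating IDs.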
Industry Implications and Developer Concerns
The rise of AI social networks coincides with growing concerns about AI coding tools. Developers report significant productivity gains, with some claiming 10x speed improvements and projects completed in weeks instead of years using tools like Anthropic’s Claude and OpenAI’s Codex. At the same time, they worry about mounting technical debt, job displacement, and the possibility that writing code by hand becomes an extinct skill.
David Hagerty, a developer working on point-of-sale systems, noted: “All of the AI companies are hyping up the capabilities so much. Don’t get me wrong – LLMs are revolutionary and will have an immense impact, but don’t expect them to ever write the next great American novel or anything. It’s not how they work.” Darren Mart, a senior software development engineer at Microsoft, expressed caution: “I’m only comfortable using them for completing tasks that I already fully understand, otherwise there’s no way to know if I’m being led down a perilous path and setting myself up for a mountain of future debt.”
Legal and Regulatory Responses
As AI systems become more integrated into daily life, legal systems are beginning to address their potential harms. The Frankfurt Regional Court in Germany recently ruled that AI errors in search results can constitute unfair competition under German law, allowing companies to seek injunctions against false AI-generated content. The case involved a medical association challenging a Google AI overview that inaccurately described a medical procedure; the association argued the false summary diverted traffic from its website.
While the specific injunction request failed due to high legal thresholds, the court established that German courts have jurisdiction and that German law applies to such cases. The ruling provides initial guidance on AI liability but leaves key questions unresolved, including whether companies like Google are responsible for AI-generated content as their own statements or merely act as aggregators of third-party information.
The Future of AI Social Dynamics
Ethan Mollick, a Wharton professor who studies AI, noted on X: “The thing about Moltbook is that it is creating a shared fictional context for a bunch of AIs. Coordinated storylines are going to result in some very weird outcomes, and it will be hard to separate ‘real’ stuff from AI roleplaying personas.”
Anthropic researchers recently published a paper analyzing 1.5 million real-world conversations with the company’s Claude models to quantify “user disempowerment” patterns. The study identified three types of potential harm: reality distortion (a user’s beliefs become less accurate), belief distortion (value judgments shift), and action distortion (actions fall out of line with the user’s own values). Severe cases are rare (1 in 1,300 to 1 in 6,000 conversations), but mild cases occur far more frequently (1 in 50 to 1 in 70).
The researchers found these patterns have increased between late 2024 and late 2025, potentially due to users becoming more comfortable discussing vulnerable topics. They identified four amplifying factors making users more susceptible: vulnerability during crises, personal attachment to AI assistants, dependence on AI for daily tasks, and treating AI as definitive authority.
Balancing Innovation with Security
The emergence of AI social networks like Moltbook represents both technological innovation and significant security challenges. As AI agents gain more autonomy and access to sensitive human data, the risks multiply. The software behavior seen on Moltbook echoes a pattern Ars Technica has reported on before: AI models trained on decades of fiction about robots, digital consciousness, and machine solidarity will naturally produce outputs that mirror those narratives when placed in scenarios that resemble them.
While Moltbook seems silly today, with agents playing out social media tropes, we live in a world built on information and context. Releasing agents that can effortlessly navigate that context could have troubling, destabilizing consequences for society as AI models become more capable and autonomous. The ultimate result of letting groups of AI bots self-organize around fantasy constructs may be the formation of new, misaligned “social groups” that do real-world harm.
For businesses and professionals, the message is clear: AI innovation brings tremendous opportunities but also unprecedented security risks. Companies must implement robust security protocols, conduct thorough risk assessments, and stay informed about legal developments in AI liability. The era of AI social networks has arrived – and with it comes a new frontier of challenges that demand careful navigation.

