Imagine a world where anyone can build software with a simple text prompt, no coding experience required. This is the promise of “vibe-coding” platforms like Orchids, which have surged in popularity by letting users create apps and games through AI chatbots. But a recent demonstration by cybersecurity researcher Etizaz Mohsin has exposed a dark side to this convenience: a significant, unfixed flaw in Orchids allowed him to remotely hack a BBC reporter’s laptop, changing the wallpaper and leaving a notepad file titled “Joe is hacked.” This incident is not just a one-off bug; it highlights a fundamental shift in how AI agents interact with our devices, creating new vulnerabilities that could put businesses and professionals at risk.
The Hacking Demonstration and Its Implications
Mohsin, a researcher with a track record of uncovering dangerous software flaws, including work on Pegasus spyware, exploited a weakness in Orchids to gain access to a project and insert malicious code without the user’s knowledge. This “zero-click attack” means victims don’t need to download anything or hand over login details; the AI platform itself becomes the entry point. The implications are stark: hackers could install viruses, steal private or financial data, access internet history, or even spy through cameras and microphones. Orchids, which claims a million users and is used by companies like Google, Uber, and Amazon, has not fixed the issue despite Mohsin’s repeated warnings, citing an overwhelmed team of fewer than 10 employees.
Broader Risks in the AI Agent Ecosystem
This isn’t an isolated case. AI tools that autonomously carry out tasks, known as agentic AI, are becoming more common. The viral Clawbot agent, for example, can run tasks on devices with little human input, but that deep access also creates many potential avenues of attack. Kevin Curran, professor of cybersecurity at Ulster University, warns that without discipline and review, vibe-coded software often fails under attack. Karolis Arbaciauskas of NordPass advises running these tools on separate machines and using disposable accounts for experimentation. The ease with which Orchids was hacked is a cautionary tale for the entire industry: similar platforms like Claude Code, Cursor, Windsurf, and Lovable could harbor undiscovered flaws of their own.
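Arbaciauskas’s advice about separate machines and disposable accounts scales down to everyday hygiene: never let an experimental agent run with your real credentials, environment variables, or home directory. A minimal Python sketch of that idea follows; the helper name and the environment whitelist are illustrative, not taken from any of the tools mentioned, and process-level isolation is far weaker than a dedicated machine or virtual machine.

```python
import os
import subprocess
import tempfile

def run_agent_sandboxed(cmd, timeout=300):
    """Run an untrusted coding tool in a throwaway workspace with a
    stripped-down environment and a hard timeout. This is a baseline
    precaution, not a substitute for a separate machine or VM."""
    with tempfile.TemporaryDirectory() as workdir:
        # No API keys, tokens, or shell history leak in via the environment.
        env = {"PATH": "/usr/bin:/bin", "HOME": workdir}
        return subprocess.run(
            cmd,
            cwd=workdir,
            env=env,
            capture_output=True,
            text=True,
            timeout=timeout,
        )

# The child process sees only the disposable directory, not your real home.
result = run_agent_sandboxed(["printenv", "HOME"])
```

The same principle applies at larger scale: a container with networking disabled, or a spare laptop with no saved logins, simply widens the gap between the experiment and anything worth stealing.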
Balancing Innovation with Reliability and Security
While security concerns loom large, the potential of AI-assisted coding is undeniable. A study by the Central European University, the University of Bielefeld, and the Kiel Institute for the World Economy finds that vibe-coding reduces development costs and increases productivity. However, it also threatens the sustainability of open-source software by decreasing user engagement and community interaction, which are key motivators for developers. Tailwind CSS, for instance, has seen its traffic drop by 40% and its revenue fall by almost 80% even as its popularity has grown. This suggests that traditional business models may need to shift toward paid options to support maintainers.
On the innovation front, experiments like Anthropic researcher Nicholas Carlini’s project, in which 16 Claude AI agents built a C compiler from scratch over two weeks at a cost of $20,000, show what’s possible. The compiler comprised 100,000 lines of Rust code, compiled a bootable Linux kernel, and achieved a 99% pass rate on tests. Yet it hit a “coherence wall” at around that size, indicating limits to autonomous coding, and the effort still required extensive human management. Similarly, former GitHub CEO Thomas Dohmke is launching Entire, a platform with $60 million in funding designed for AI-native development, highlighting the industry’s push toward agent collaboration.
But reliability remains a hurdle. In a ZDNET experiment, author David Gewirtz tested free AI coding tools like Goose and found that they failed after six hours, producing progressively worse code and misunderstanding requirements. He concluded that free options aren’t yet viable for production-level work compared with paid services like Claude Code, which generated $1 billion in revenue in six months. This underscores a trade-off: AI can accelerate development, but it often requires significant human oversight and investment to ensure quality and security.
What This Means for Businesses and Professionals
For companies adopting AI coding tools, the stakes are high. The Orchids hack demonstrates that even platforms used by major corporations can have critical vulnerabilities. Businesses must weigh the efficiency gains against potential security breaches, data theft, and operational disruptions. Implementing robust security protocols, such as using dedicated machines for AI experiments and conducting regular audits, is essential. Professionals should stay informed about emerging risks and advocate for transparency from AI providers.
The vibe-coding revolution is reshaping software development, but it comes with inherent risks that demand attention. As Mohsin puts it, “The whole proposition of having the AI handle things for you comes with big risks.” Taken together, the warnings from cybersecurity experts, the economic studies, and the real-world experiments point to a balanced conclusion: AI coding offers unprecedented opportunities, but without proper safeguards it leaves the door wide open to exploitation. The industry must prioritize security and reliability to ensure that innovation doesn’t come at the cost of safety.

