Imagine a world where software developers don’t just write code – they collaborate with AI agents that generate massive amounts of code, maintain perfect context across projects, and document every decision. This isn’t science fiction; it’s the vision driving Thomas Dohmke, former GitHub CEO, as he launches Entire, a platform designed specifically for the AI era. But as this new frontier emerges, it’s creating waves across the industry, from talent exoduses at major AI labs to fundamental questions about how we should integrate these powerful tools into our workflows.
The GitHub Veteran’s AI-First Vision
After nearly four years leading GitHub as CEO, Thomas Dohmke left Microsoft in September 2025 with an ambitious plan: to build a platform where developers and AI agents work hand-in-hand. His startup, Entire, has already raised $60 million in seed funding at a $300 million valuation, with backing from Microsoft’s investment arm M12, Felicis Ventures, Y Combinator’s Garry Tan, and Yahoo co-founder Jerry Yang. What makes this different from existing platforms? Dohmke argues that current software ecosystems are “hindered by a manual production system that was never designed for the AI age.”
Entire’s approach centers on three key components: a Git-compatible database that combines code, objectives, constraints, and reasoning chains; a universal semantic reasoning layer for coordinating multiple agents; and an AI-native software development lifecycle. The platform’s “Checkpoints” feature addresses a critical limitation of current AI agents – their tendency to lose context in long-term projects. These checkpoints document complete sessions, including logs, prompts, affected files, and tool calls, allowing developers to trace decisions years later. “Even three years later, you can still understand why the agent wrote a line of code that way,” Dohmke explains.
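Entire hasn’t published the internal format of a Checkpoint, but the idea described here – a durable record tying a commit to the prompt, reasoning, files, and tool calls behind it – can be sketched as a simple data structure. Every name and field below is a hypothetical illustration, not Entire’s actual schema:

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ToolCall:
    # One tool invocation made by the agent during the session
    tool: str
    arguments: dict
    result_summary: str

@dataclass
class Checkpoint:
    # A full agent session, stored alongside the commit it produced.
    # Field names are illustrative guesses, not Entire's published schema.
    commit_sha: str
    prompt: str
    objective: str
    affected_files: list[str] = field(default_factory=list)
    tool_calls: list[ToolCall] = field(default_factory=list)
    reasoning: str = ""
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    def to_json(self) -> str:
        # Serialize the whole session (nested tool calls included) so it
        # can be read back years later, independent of the agent that ran.
        return json.dumps(asdict(self), indent=2)

# Example: record why a line of retry logic was written the way it was
cp = Checkpoint(
    commit_sha="a1b2c3d",
    prompt="Refactor the retry logic to use exponential backoff",
    objective="Reduce API rate-limit failures",
    affected_files=["client/retry.py"],
    tool_calls=[ToolCall("run_tests", {"path": "tests/"}, "42 passed")],
    reasoning="Chose jittered backoff to avoid thundering-herd retries.",
)
record = json.loads(cp.to_json())
print(record["affected_files"])
```

The point of a record like this is exactly the traceability Dohmke describes: a reviewer reading `client/retry.py` later can recover not just the diff but the objective and reasoning chain that produced it.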
The Talent Exodus at xAI: A Sign of Things to Come?
As Dohmke builds his AI-native platform, another story is unfolding at Elon Musk’s xAI. In a single week, at least nine engineers – including two co-founders – publicly announced their departures. More than half of xAI’s founding team has now left, with several indicating they’re starting new ventures. Yuhuai (Tony) Wu, an xAI co-founder and reasoning lead, captured the sentiment in his resignation post: “It’s time for my next chapter. It is an era with full possibilities: a small team armed with AIs can move mountains and redefine what’s possible.”
These departures come amid significant controversy for xAI, including regulatory scrutiny after its Grok AI created nonconsensual explicit deepfakes and personal controversy surrounding Musk. While xAI maintains over 1,000 employees, making the departures unlikely to affect short-term capabilities, they raise broader questions about governance and stability in frontier AI companies. As one departing engineer, Vahid Kazemi, noted: “All AI labs are building the exact same thing, and it’s boring. I think there’s room for more creativity.”
The Hidden Costs of AI Adoption
While platforms like Entire promise increased efficiency, research suggests the reality might be more complex. A Harvard Business Review study conducted over eight months at a 200-person tech company found that employees who embraced AI tools ended up working longer hours as expectations rose. An engineer at the studied company confessed: “You had thought that maybe, oh, because you could be more productive with AI, then you save some time, you can work less. But then really, you don’t work less. You just work the same amount or even more.”
This finding is supported by a National Bureau of Economic Research study showing AI adoption led to just 3% time savings with no impact on earnings or hours worked. The gap between promised productivity gains and actual workplace outcomes highlights a critical challenge: Are we simply automating tasks, or are we fundamentally rethinking how work gets done?
The Security Blind Spot: Shadow AI
As AI adoption accelerates, security risks are growing in parallel. Microsoft’s “Cyber Pulse Report” reveals that over 80% of Fortune 500 companies use AI assistants for programming, but only 47% have specific security controls for generative AI. More concerning: 29% of employees use unauthorized AI agents, creating what Microsoft researchers call “shadow AI” – the unauthorized use of AI tools without IT department knowledge.
Microsoft warns that “the rapid deployment of AI agents can bypass security and compliance controls and increase the risk of shadow AI.” The company recently discovered a “Memory Poisoning” attack campaign targeting AI assistants, highlighting the vulnerabilities in current implementations. This creates a paradox: as companies rush to adopt AI for competitive advantage, they may be opening new security vulnerabilities that could undermine those very advantages.
Rethinking Our Relationship with AI Agents
Perhaps the most fundamental question emerging from this AI transformation is how we conceptualize these tools. Sangeet Paul Choudary, a senior fellow at Berkeley’s Haas School of Business, argues that we’re making a critical mistake by humanizing AI agents. “There’s been too much framing of AI as an alternative to humans, and hence job losses and all of those aspects,” he says. “And there’s too little framing of AI just as technology, and how do you leverage it, just as you would leverage any technology.”
Choudary’s perspective challenges the very premise of platforms like Entire. Instead of creating AI agents that act like colleagues, he suggests we need to reorganize work structures around technological capabilities. “As the AI improves, and as our ability to adopt AI constantly improves, what machines do and what humans do is constantly changing,” he explains. “We’ve never had the machines improve at such rapid rates.”
The European Alternative: Mistral’s Sovereign AI Push
While American companies dominate the AI landscape, European alternatives are gaining ground. French AI startup Mistral has seen its annualized revenue run rate soar from $20 million to over $400 million in the past year, with projections to surpass $1 billion in annual recurring revenue by year-end. The company, valued at nearly €12 billion, is investing €1.2 billion to build AI data centers in Sweden – its first facility outside France.
CEO Arthur Mensch highlights the geopolitical dimension: “Europe has realized that its dependency on US digital services was excessive and at breaking point today. We bring them leverage because we bring them models, software and compute that is fully independent from US players.” With customers including ASML, TotalEnergies, HSBC, and several European governments, Mistral represents a growing trend toward sovereign AI infrastructure.
The Road Ahead: More Than Just Better Tools
The emergence of platforms like Entire, the talent exodus from xAI, and the security warnings from Microsoft all point to a larger truth: we’re not just building better tools; we’re redefining the very nature of software development. The question isn’t whether AI will transform coding – it already is. The real question is how we’ll navigate the human, organizational, and security challenges that come with this transformation.
Will platforms that treat AI agents as colleagues lead to more efficient development, or will they simply amplify existing problems? Can we build security frameworks that keep pace with AI adoption? And perhaps most importantly, as smaller teams armed with AI “move mountains,” what happens to the institutional knowledge and stability that larger organizations provide? The answers to these questions will shape not just the future of software development, but the future of work itself.