Imagine a future where the regulations keeping airplanes in the sky, preventing gas pipelines from exploding, and stopping toxic chemical trains from derailing are drafted not by seasoned experts with decades of experience, but by artificial intelligence in under 30 minutes. That future is now, according to a ProPublica investigation that has sent shockwaves through Washington and Silicon Valley alike. The U.S. Department of Transportation is pushing forward with plans to use Google’s Gemini AI to draft critical safety regulations – a move staffers call “wildly irresponsible” but that top officials defend as necessary modernization.
The ‘Good Enough’ Standard
At the heart of this controversy lies a fundamental question: what standard should apply when AI drafts regulations that could mean life or death? DOT’s top lawyer, Gregory Zerzan, provided a startling answer in December meeting notes: “We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough.” This philosophy represents a seismic shift in regulatory approach, prioritizing speed over precision in an arena where errors could have catastrophic consequences.
Staffers granted anonymity by ProPublica expressed deep skepticism about Gemini’s capabilities. They emphasized that DOT rulemaking requires intricate expertise in existing statutes, regulations, and case law – knowledge that sometimes takes decades to develop. Their concerns were amplified when a demonstration of Gemini’s rule-drafting produced documents missing key text that staffers would need to fill in manually. “It seems wildly irresponsible,” one staffer told ProPublica, capturing the anxiety permeating the department.
The Hidden Risks of AI Reliance
This controversy arrives at a particularly precarious moment for AI governance. As ZDNET reports in a separate analysis, AI models are quietly poisoning themselves through a phenomenon called “model collapse.” When AI systems are trained on AI-generated content – which is proliferating across corporate systems and public sources – their outputs drift increasingly from reality. Gartner predicts 50% of organizations will adopt zero-trust data governance by 2028 to combat this growing problem.
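The mechanism behind model collapse can be illustrated with a toy statistical simulation (a hypothetical sketch for intuition, not drawn from the ZDNET analysis): fit a simple Gaussian "model" to data, then train each successive generation only on synthetic samples drawn from the previous generation's fitted model. Finite-sample noise compounds, and the fitted distribution degenerates away from the original data.

```python
import random
import statistics

def collapse_demo(generations=50, sample_size=5, seed=7):
    """Toy 'model collapse': each generation fits a Gaussian only to
    synthetic data sampled from the previous generation's fitted model.
    Sampling noise compounds, so the fitted spread drifts away from
    the true value and tends toward degeneracy."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0          # generation 0: the "real" data distribution
    sigmas = [sigma]
    for _ in range(generations):
        # Draw synthetic training data from the current model ...
        data = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        # ... then refit the model to that synthetic data alone.
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        sigmas.append(sigma)
    return sigmas

sigmas = collapse_demo()
print(f"fitted sigma, gen 0: {sigmas[0]:.3f}  gen 50: {sigmas[-1]:.4g}")
```

The deliberately tiny sample size exaggerates the effect for illustration; real language-model collapse is analogous but far more complex, which is why the zero-trust data-governance measures Gartner describes focus on keeping verified human-generated data in the training loop.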
IBM distinguished engineer Phaedra Boinodiris warns: “Just having the data is not enough. Understanding the context and the relationships of the data is key. This is why you need to have an interdisciplinary approach to who gets to decide what data is correct.” This insight strikes at the core of DOT’s challenge: can an AI system truly understand the complex contextual relationships in transportation safety regulations that human experts have spent careers mastering?
A Contrarian Approach Emerges
While DOT embraces large language models like Gemini, another vision for AI development is gaining momentum. Turing Award-winning AI scientist Yann LeCun recently founded AMI Labs, a startup focused on developing “world models” that understand the real world rather than just processing language. As TechCrunch reports, AMI Labs is in talks to raise funding at a $3.5 billion valuation and aims to apply its technology to high-stakes fields like healthcare, industrial process control, and robotics.
LeCun’s approach represents a contrarian bet against the current LLM-dominated landscape, emphasizing reliability, controllability, and safety. AMI Labs CEO Alex LeBrun told TechCrunch that a big reason he took the role was the prospect of applying its world models to healthcare – another high-stakes field where AI errors could prove disastrous. This alternative vision raises an important question: should government agencies rely on technology that leading AI researchers are moving away from for critical applications?
The Economic Context
The DOT’s AI initiative doesn’t exist in a vacuum. As The Financial Times reports in “Humans vs bots,” AI adoption is creating complex economic tensions. While AI boosts productivity and corporate profits, it simultaneously reduces labor’s share of economic output. Workers now take home only 53.8% of America’s economic output, the lowest since records began in the 1940s, down from around 65% in the 1950s.
Tim O’Reilly, founder of O’Reilly Media, offers a crucial perspective: “The narrative from the AI labs is that when they build artificial general intelligence (AGI), it will unlock astonishing productivity and GDP will surge. It sounds compelling, especially if you’re the one building or investing in AI. But an economy isn’t just production. It is production matched to demand, and demand requires broadly distributed purchasing power.” This economic reality adds another layer to the DOT controversy: if AI reduces the need for human expertise in rulemaking, what happens to the institutional knowledge that ensures transportation safety?
The Political Dimension
ProPublica’s investigation reveals that President Donald Trump is “very excited” about the DOT initiative, with Zerzan telling staffers that Trump sees DOT as the “point of the spear” and expects other agencies to follow its lead. The White House has already credited DOT with “replacing decades-old rules with flexible, innovation-friendly frameworks,” including fast-tracking rules to allow for more automated vehicles on the roads.
Google, meanwhile, has remained silent on this specific use case for Gemini, though the company posted a blog Monday pitching Gemini for government more broadly. The tech giant has been competing aggressively for government contracts, undercutting OpenAI’s and Anthropic’s $1 deals by offering a year of access to Gemini for $0.47. In December, Google celebrated that DOT was “the first cabinet-level agency to fully transition its workforce away from legacy providers to Google Workspace with Gemini.”
The Path Forward
DOT expects that Gemini can handle 80 to 90 percent of the work of writing regulations, with federal workers eventually receding into a purely supervisory role, monitoring “AI-to-AI interactions.” But this vision raises fundamental questions about accountability and expertise. If AI drafts regulations and other AI systems monitor them, where does human judgment – and responsibility – enter the equation?
The department has already used Gemini to draft a still-unpublished Federal Aviation Administration rule, according to a DOT staffer briefed on the matter. As this initiative moves forward, transportation professionals, safety experts, and the public must ask: in our rush to modernize, are we trading proven human expertise for unproven AI efficiency? And when it comes to keeping planes in the sky and pipelines from exploding, is “good enough” really good enough?

