Imagine this: the person responsible for protecting America’s critical infrastructure from cyberattacks accidentally uploads sensitive government documents to a public AI chatbot. That’s exactly what happened last summer when Madhu Gottumukkala, acting director of the Cybersecurity and Infrastructure Security Agency (CISA), used ChatGPT to process “for official use only” contracting documents, triggering multiple internal security warnings. This incident, first reported by Politico, reveals a troubling disconnect between Washington’s rush to adopt AI and the practical security risks that come with it.
The Leak That Shouldn’t Have Happened
According to four Department of Homeland Security officials with knowledge of the incident, Gottumukkala’s uploads happened soon after he joined CISA and sought special permission to use OpenAI’s popular chatbot – a tool most DHS staffers are blocked from accessing. Instead, agency employees use approved AI-powered tools like DHSChat, which are configured to prevent queries or documents from leaving federal networks. The leaked information wasn’t classified but carried the “for official use only” designation, which DHS documents explain covers unclassified information that, if shared without authorization, could adversely impact privacy, welfare, or national programs.
Why would the nation’s top cyber defense official need a public AI tool when secure alternatives exist? One official told Politico it seemed like Gottumukkala “forced CISA’s hand into making them give him ChatGPT, and then he abused it.” The concern now is that this sensitive information could surface in answers to prompts from any of ChatGPT’s roughly 700 million weekly active users. Cyber News reports that experts have warned that “using public AI tools poses real risks because uploaded data can be retained, breached, or used to inform responses to other users.”
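DHS hasn’t published how DHSChat enforces that boundary, but the basic idea of keeping marked documents inside federal networks can be illustrated with a simple pre-submission screen. The Python sketch below is a minimal, hypothetical example: the markings list, function name, and matching logic are assumptions for illustration, not DHS code.

```python
# Hypothetical pre-submission screen for an agency AI gateway.
# Illustrative only: the markings list and logic are assumptions,
# not how DHSChat actually works.
import re

# Dissemination-control markings that should never leave federal networks.
RESTRICTED_MARKINGS = [
    r"FOR OFFICIAL USE ONLY",
    r"\bFOUO\b",
    r"CONTROLLED UNCLASSIFIED INFORMATION",
    r"\bCUI\b",
]

def safe_for_external_ai(text: str) -> bool:
    """Return False if the text carries a restricted dissemination marking."""
    return not any(re.search(p, text, re.IGNORECASE) for p in RESTRICTED_MARKINGS)

document = "FOR OFFICIAL USE ONLY\nContract statement of work ..."
if not safe_for_external_ai(document):
    print("Blocked: marked document may not be sent to a public AI service.")
```

A real gateway would need far more than string matching, since markings can be missing or inconsistent, which is part of why approved tools are locked to federal networks in the first place.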
A Pattern of Government AI Missteps
Gottumukkala’s ChatGPT incident isn’t an isolated case of government AI adoption gone wrong. Just weeks earlier, the US Department of Transportation began using Google’s Gemini AI to draft safety regulations for transportation systems, aiming to compress a drafting process that normally takes weeks or months into under 30 days. According to Ars Technica, DOT staffers and experts worried that AI hallucinations and errors could lead to flawed regulations, injuries, or deaths.
DOT’s top lawyer, Gregory Zerzan, argued for “good enough” rules over perfection, telling staffers: “We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough.” But an anonymous DOT staffer called the approach “wildly irresponsible,” highlighting the tension between AI’s promised efficiency and the real-world consequences of getting safety regulations wrong.
The Quality Problem in AI Training Data
These government AI incidents point to a broader issue: the quality and reliability of information that AI systems are trained on and can access. TechCrunch recently reported that ChatGPT is now pulling answers from Elon Musk’s Grokipedia, an AI-generated encyclopedia developed by xAI that has been criticized for conservative bias and inaccuracies. GPT-5.2 cited Grokipedia nine times in response to various queries, though it avoided citing Grokipedia on topics where the site’s inaccuracies are widely documented, such as the January 6 insurrection and HIV/AIDS.
An OpenAI spokesperson stated that the company “aims to draw from a broad range of publicly available sources and viewpoints,” but this raises questions about how AI companies vet their sources and what happens when government officials inadvertently feed sensitive information into these systems. If public AI tools are incorporating biased or inaccurate information from sources like Grokipedia, what does that mean for government agencies relying on these same tools?
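Neither the reporting nor OpenAI explains how that vetting works internally. Purely as an illustration of the concept, a retrieval pipeline could apply a domain-level policy to citations before surfacing them; everything in the sketch below (the lists, labels, and function) is a hypothetical example, not OpenAI’s process.

```python
# Illustrative sketch of domain-level source vetting for a retrieval pipeline.
# A guess at one possible mechanism, not OpenAI's actual process.
from urllib.parse import urlparse

# Hypothetical policy lists; a real system would be far more nuanced.
BLOCKED_DOMAINS = {"example-unreliable.test"}
REVIEW_DOMAINS = {"grokipedia.com"}  # flag for human review rather than ban

def classify_source(url: str) -> str:
    """Label a citation URL as 'allow', 'review', or 'block' by its domain."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in BLOCKED_DOMAINS:
        return "block"
    if domain in REVIEW_DOMAINS:
        return "review"
    return "allow"

print(classify_source("https://grokipedia.com/page/Some_Topic"))  # "review"
```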
Political Pressure and Security Compromises
The timing of Gottumukkala’s ChatGPT use is particularly noteworthy. CISA’s director of public affairs, Marci McCarthy, suggested that the ChatGPT use aligned with Donald Trump’s order to deploy AI across government. This comes as Sriram Krishnan, a former Silicon Valley engineer and venture capitalist, has become Trump’s key AI adviser, shaping the administration’s light-touch regulatory approach, according to the Financial Times.
Krishnan has shaped policy on “woke” AI, guided chip export rules to China, and drafted executive orders to counter state-level AI regulation. His role bridges Silicon Valley and Washington, earning praise from tech circles but raising questions about whether political pressure is pushing government agencies to adopt AI tools before proper security protocols are in place.
The Human Cost of Rapid AI Adoption
Gottumukkala’s tenure at CISA has been marked by controversy beyond the ChatGPT incident. Congress recently grilled him about mass layoffs that shrank CISA from about 3,400 staffers to 2,400, with lawmakers warning that steep cuts threatened national security and election integrity. At a House Homeland Security Committee hearing, Gottumukkala failed to forecast “how many cyber intrusions CISA expects from foreign adversaries as part of the 2026 midterm elections,” according to Federal News Network.
Perhaps most concerning, Politico reported that Gottumukkala failed a polygraph while seeking access to other “highly sensitive cyber intelligence.” While failing such a test isn’t necessarily damning (anxiety or technical problems can produce false results), Gottumukkala’s response was telling: he called the test “unsanctioned” and refused to discuss the results.
Balancing Innovation with Security
The fundamental question these incidents raise is how government agencies can harness AI’s potential without compromising security. The answer may lie in purpose-built, secure AI tools rather than public platforms. DHS already has DHSChat for this purpose, but Gottumukkala’s insistence on using ChatGPT suggests either a lack of awareness of the available secure alternatives or a belief that public tools offer superior capabilities.
As AI becomes increasingly integrated into government operations, agencies need clear protocols for when and how to use these tools. They also need robust training for officials at all levels about the security risks involved. The fact that CISA’s acting director – someone with “more than 24 years of experience in information technology” according to the agency’s press release – made such a basic security error suggests that even experienced professionals may not fully understand the risks of public AI tools.
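In its simplest form, such a protocol could be a deny-by-default routing table that ties each sensitivity tier to the tools allowed to process it. The sketch below is hypothetical: the tiers, tool names, and policy mappings are illustrative assumptions, not actual DHS rules.

```python
# Hypothetical routing protocol: map a document's sensitivity tier to the
# tool classes allowed to see it. Tiers and names are illustrative only.
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"            # releasable information
    FOUO = "fouo"                # for official use only / CUI
    CLASSIFIED = "classified"    # national security information

# Allowed destinations per tier; anything absent is implicitly denied.
ROUTING_POLICY = {
    Sensitivity.PUBLIC: {"public_chatbot", "internal_llm"},
    Sensitivity.FOUO: {"internal_llm"},        # e.g., a DHSChat-style tool
    Sensitivity.CLASSIFIED: set(),             # no AI tool at all
}

def route(sensitivity: Sensitivity, tool: str) -> bool:
    """Return True only if policy explicitly allows this tool for this tier."""
    return tool in ROUTING_POLICY.get(sensitivity, set())

assert route(Sensitivity.FOUO, "internal_llm")
assert not route(Sensitivity.FOUO, "public_chatbot")
```

The design point is the deny-by-default posture: exceptions require changing the written policy in the open rather than granting the kind of one-off special permission that preceded this incident.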
What’s clear from these incidents is that Washington’s enthusiasm for AI needs to be tempered with practical security considerations. As government agencies race to adopt AI tools, they must ensure that security protocols keep pace with technological adoption. Otherwise, the very tools meant to enhance government efficiency could become its greatest vulnerability.

