Imagine a world where military commanders can identify and strike enemy targets not in hours or days, but in minutes. This isn’t science fiction – it’s the reality unfolding inside the Pentagon’s Project Maven, an artificial intelligence system that has transformed from a controversial experiment into what Vice Admiral Frank “Trey” Whitworth now calls the “marquee targeting program of record.” The journey from internal skepticism to widespread adoption reveals both the promise and peril of integrating AI into life-and-death decisions.
The Skeptic Who Became a Believer
In early September 2024, at a private retreat for tech investors and defense leaders, two men stood face-to-face with dramatically different perspectives on AI warfare. Vice Admiral Whitworth, an exacting former SEAL Team 6 intelligence director, had previously grilled Marine Colonel Drew Cukor, Project Maven’s founding leader, about whether the AI system was moving too fast and bending targeting rules. “Tell me about what happens after the bad drop when we go through a congressional hearing and we’re getting hard questions?” Whitworth had demanded during a tense meeting.
Yet by that September evening, Whitworth had undergone a remarkable transformation. “Drew, this is important work,” he assured Cukor, praising the Maven Smart System platform. What changed? According to Cukor, Whitworth had “reasoned his way to endorsing Maven” after seeing how easily it integrated into combat scenarios. The system’s adaptability – it could work with any platform and update with each software release – proved compelling enough to convert one of its most prominent skeptics.
From Protests to Production
The program’s evolution is particularly striking given its controversial origins. In 2018, more than 3,000 Google employees protested the company’s involvement in “the business of war” when they discovered Google was part of Project Maven. Workers feared the AI could eventually be used for lethal targeting – fears that proved prescient. Today, Maven Smart System, built by Palantir, is being used in US operations against Iran and has become central to military targeting worldwide.
The numbers tell a story of explosive growth. By 2025, close to 25,000 US personnel were using Maven across more than 130 sites globally, from Australia and Bahrain to Uzbekistan and Yemen. The system had accumulated 1 billion AI detections in its computer vision data store and was detecting objects nearly five times faster than before. Perhaps most dramatically, Maven helped increase the US military’s targeting capacity from under 100 targets per day to 5,000 – a roughly fiftyfold increase enabled by AI acceleration.
The Business of War Meets Silicon Valley
Project Maven’s expansion reflects a broader trend in defense technology that extends beyond traditional military contractors. While Palantir secured contracts potentially worth $1.3 billion for Maven Smart System through 2029, other tech companies are navigating complex relationships with the Pentagon. According to TechCrunch reporting, Anthropic refused to allow its AI to be used for mass surveillance or autonomous weapons, leading to a supply chain risk designation and a lawsuit. Meanwhile, OpenAI secured a Pentagon deal despite public backlash that saw ChatGPT uninstalls jump 295% day-over-day.
The defense sector’s appetite for AI technology is creating unprecedented business opportunities. The U.S. Army recently signed a 10-year contract with defense tech startup Anduril potentially worth up to $20 billion, consolidating over 120 separate procurement actions. As Gabe Chiulli, Chief Technology Officer at the Department of Defense’s Office of the Chief Information Officer, noted: “The modern battlefield is increasingly defined by software. To maintain our advantage, we must be able to acquire and deploy software capabilities with speed and efficiency.”
The Human Element in Automated Warfare
Despite the rapid automation, human concerns persist. Emelia Probasco, a former Navy lieutenant who advocates for scaling up Maven’s use, argues the system needs proper training protocols. “I think if you’re going to be making lethal decisions through engagement,” she says, “then soldiers should be trained as if it were a weapons system.” Her concern stems from experience – as a fire control officer, she was responsible for determining whether her ship would launch Tomahawk cruise missiles. “Nobody wants to be the guy that shot down the Iranian civilian aircraft,” she notes, referencing the 1988 tragedy when a US Navy warship using the AEGIS system mistakenly shot down a passenger plane, killing 290 civilians.
These concerns are amplified by ongoing debates about AI’s fundamental capabilities. As noted in Financial Times analysis, current AI models often lack true understanding of cause and effect, merely mimicking correlations rather than comprehending interventions and counterfactuals. This limitation becomes particularly dangerous in domains like autonomous weapons, where AI hallucinations or misinterpretations could have catastrophic consequences.
The Accountability Challenge
As AI systems become more integrated into military operations, questions of accountability and oversight grow more urgent. Senator Elizabeth Warren has expressed concern about the Pentagon granting Elon Musk’s xAI access to classified networks, citing risks from Grok’s “disturbing outputs” including advice on violence and antisemitic content. In a letter to Defense Secretary Pete Hegseth, Warren demanded details on security safeguards, writing: “It is unclear what assurances or documentation xAI has provided to the Department of Defense about Grok’s security safeguards, data-handling practices, or safety controls.”
Within the military itself, officials acknowledge the risks while pushing forward. Brigadier General John Cogbill gave Maven Smart System a “C+” grade in August 2024, noting concerns about AI hallucinations, ethics, and the chance that data inputs could lead operators “to the wrong conclusion.” Yet the push for automation continues. General Christopher Donahue, when asked if Maven was a weapons system, responded: “Oh, absolutely. And ultimately all this stuff will become automated.”
The Global Expansion
Project Maven’s influence now extends far beyond traditional battlefields. The system is being used for border control, detecting people trying to cross the southern border, and has assisted in “the detection, classification, and ultimate interdiction of more than three dozen vessels suspected of engaging in illicit or clandestine activity,” according to an NGA official. Some observers worry this represents what the 1950s intellectual Aimé Césaire called the “imperial boomerang” – the tendency for colonial powers to eventually bring oppressive techniques used overseas back home.
Internationally, NATO has become a customer for Maven Smart System, with ten NATO countries considering purchases for their own militaries. The UK reportedly signed a £750 million (roughly $1 billion) deal for Palantir’s military AI tools during a high-profile state visit. As the system spreads, so do concerns about standardization and interoperability. James Rizzo, chief data officer for NORAD and NORTHCOM, notes that while Maven helps commands “talk” to each other with a common view of the world, “there’s also a chance that relying so much on Maven for a common view of America – and the world – could go very wrong.”
The Future of AI Warfare
As Project Maven continues to evolve, its proponents and critics agree on one thing: this is just the beginning. The integration of large language models into the platform is already speeding up processes, and the arrival of autonomous AI agents – which can carry out tasks unsupervised – will accelerate automation further. The Pentagon’s new military AI strategy focuses on becoming an “‘AI-First’ warfighting force,” eager to enlist AI agents for campaign planning, kill chains, and more.
Yet for all the technological advancement, the fundamental questions remain human ones. As Joe O’Callaghan, NGA’s director of AI mission, put it: “Maven is a movement. We’ve drunk the Kool-Aid.” The phrase, with its grim origin referencing a cyanide-laced drink that killed cult followers in 1978, serves as an unintentionally apt metaphor for the high-stakes world of AI warfare – where technological enthusiasm must constantly be balanced against ethical responsibility and human judgment.