DHS's Mobile Fortify App: When Facial Recognition Fails to Verify Identity

Summary: The Department of Homeland Security's Mobile Fortify facial recognition app, deployed to immigration agents nationwide, cannot actually verify identities despite being marketed as such. Technical limitations, field failures, and dismantled privacy safeguards raise serious concerns about its use in enforcement operations, highlighting broader challenges in government AI deployment and oversight.

In the spring of 2025, the Department of Homeland Security (DHS) launched Mobile Fortify, a facial recognition app deployed to Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) agents across the United States. The app was framed as a tool to “determine or verify” identities during immigration enforcement operations, but internal records and expert analysis reveal it cannot actually verify who people are – a critical limitation that raises serious questions about its deployment and use.

The Verification Gap

Despite DHS’s framing, Mobile Fortify is designed to generate candidate matches, not confirm identities. Nathan Wessler, deputy director of the ACLU’s Speech, Privacy, and Technology Project, notes: “Every manufacturer of this technology, every police department with a policy makes very clear that face recognition technology is not capable of providing a positive identification, that it makes mistakes, and that it’s only for generating leads.” This fundamental gap between capability and application has real-world consequences.
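
A minimal sketch helps make the distinction concrete. The snippet below is illustrative only and is not drawn from DHS or NEC systems; the `verify`/`identify` functions, the 128-dimensional templates, and the gallery are all assumptions. The structural point it shows: 1:1 verification answers a yes/no question about a single claimed identity, while a 1:N gallery search of the kind described here can only rank the closest enrolled records – and it returns “best matches” even for someone who is not in the gallery at all.

```python
# Illustrative sketch (not DHS's or NEC's actual code) contrasting 1:1 verification
# with the 1:N "lead generation" search. Templates are unit-length embedding
# vectors; the gallery contents and sizes are hypothetical.
import numpy as np

def verify(probe: np.ndarray, claimed: np.ndarray, threshold: float) -> bool:
    """1:1 verification: a yes/no answer about ONE claimed identity."""
    return float(probe @ claimed) >= threshold

def identify(probe: np.ndarray, gallery: dict[str, np.ndarray], top_k: int = 3):
    """1:N identification: a ranked list of candidate LEADS, never a confirmation."""
    scores = {name: float(probe @ template) for name, template in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
gallery = {f"record_{i}": unit(rng.normal(size=128)) for i in range(1000)}
probe = unit(rng.normal(size=128))  # a person who may not be enrolled at all

# Even when the probe matches no one, identify() still returns its closest records.
print(identify(probe, gallery))
```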

Field Failures and Human Cost

In testimony given in Oregon last year, an agent described how two photos of a woman in custody produced two different identities through the app. Because the woman was handcuffed and looking downward, the agent physically repositioned her, causing her to yelp in pain. The app returned the name and photo of a woman named Maria, a result the agent rated “a maybe.” When she did not respond to agents calling “Maria, Maria,” they took another photo, and the second result was deemed “possible.” The agent testified that probable cause rested on the woman speaking Spanish, her presence with others who appeared to be noncitizens, and a “possible match” via facial recognition.

Technical Limitations in Uncontrolled Environments

Testing by the National Institute of Standards and Technology shows face-recognition accuracy drops sharply when images are taken outside controlled settings. Mobile Fortify relies on algorithms from NEC Corporation of America, whose patents describe systems designed to operate at scale under imperfect conditions rather than conclusively verify identity. The technology converts face images into biometric templates and compares them against stored records using similarity scores and adjustable thresholds, with explicit trade-offs between speed, scale, and accuracy.
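
To see why the adjustable threshold matters, here is a small, purely hypothetical simulation; the score distributions and noise levels are invented for illustration and do not describe NEC’s algorithms or NIST’s data. It shows the trade-off: loosening the threshold produces more false matches, tightening it produces more missed matches, and degraded field images widen both error rates at once.

```python
# Hypothetical sketch of why threshold choice trades false matches against
# missed matches, and why uncontrolled images hurt. Distributions are invented.
import numpy as np

rng = np.random.default_rng(0)

def simulate_scores(n: int, noise: float) -> tuple[np.ndarray, np.ndarray]:
    """Toy similarity scores for 'genuine' pairs (same person) vs 'impostor' pairs.
    Higher `noise` stands in for poor field conditions (angle, lighting, motion)."""
    genuine = rng.normal(loc=0.80 - noise, scale=0.05 + noise, size=n)
    impostor = rng.normal(loc=0.40, scale=0.05 + noise, size=n)
    return genuine, impostor

for label, noise in [("controlled setting", 0.00), ("uncontrolled field image", 0.15)]:
    genuine, impostor = simulate_scores(100_000, noise)
    for threshold in (0.55, 0.70):
        fnmr = np.mean(genuine < threshold)    # genuine pairs wrongly rejected
        fmr = np.mean(impostor >= threshold)   # strangers wrongly "matched"
        print(f"{label}, threshold={threshold}: miss rate={fnmr:.1%}, false match rate={fmr:.1%}")
```

Running the sketch with the noisier “field” settings makes both error rates jump at every threshold, which mirrors the controlled-versus-uncontrolled accuracy gap NIST’s testing describes.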

Broader DHS AI Integration

Mobile Fortify is part of a larger pattern of AI adoption within DHS. According to a 2025 inventory, ICE is using Palantir’s generative AI tools to sort and summarize immigration enforcement tips from public submissions. This expansion of AI capabilities across enforcement functions suggests a strategic shift toward automated decision-making in immigration processes.

Privacy Safeguards Dismantled

Records show DHS dismantled centralized privacy reviews and removed department-wide limits on facial recognition prior to Mobile Fortify’s deployment. The last enterprise-wide directive governing facial recognition use disappeared from DHS’s website three weeks after President Trump’s 2025 inauguration. That directive had prohibited facial recognition as the sole basis for enforcement actions and required US citizens to have opt-out rights when collection wasn’t for law enforcement purposes.

Data Retention and Watch Lists

Data collected through Mobile Fortify may be stored in the Seizure and Apprehension Workflow (SAW), described as a “biometric gallery of individuals for whom CBP maintains derogatory information.” Unlike systems used at ports of entry, SAW is designed for intelligence purposes and lead generation, with records retained for up to 15 years. A separate “Fortify the Border Hotlist” watch list exists, though criteria for inclusion and any removal process remain unclear.

Expert Warnings and Legislative Response

Mario Trujillo, a senior staff attorney at the Electronic Frontier Foundation, warns: “Facial recognition can be wrong, and it has been wrong in the past. Here, the safeguards you’d expect – confidence scores, clear thresholds, multiple candidate photos – don’t appear to be there.” In response, Senator Ed Markey and colleagues have introduced legislation aimed at prohibiting ICE and CBP from using certain facial-recognition and biometric surveillance tools, citing concerns about a “sweeping surveillance apparatus” used without consent, accountability, or clear legal limits.

The Bigger Picture: AI Governance Challenges

The Mobile Fortify case illustrates broader challenges in government AI deployment. Similar concerns have emerged around other federal AI implementations, including a recent incident where the acting director of the Cybersecurity and Infrastructure Security Agency accidentally uploaded sensitive government information to ChatGPT. These cases highlight the tension between rapid technology adoption and necessary governance frameworks.

As DHS continues to expand its use of AI tools – from facial recognition to generative AI for tip processing – the Mobile Fortify example serves as a cautionary tale about deploying technologies without adequate testing, transparency, or oversight. The fundamental question remains: When technology cannot reliably perform its stated function, what safeguards ensure it doesn’t cause harm?
