Imagine arriving at a European airport after a long flight, only to face a three-hour wait because an AI-powered border system can’t keep up. This isn’t a dystopian scenario – it’s happening right now at major hubs like Brussels and Lisbon, where the EU’s new Entry/Exit System (EES) is causing significant disruptions. The system, which requires non-EU travelers to provide biometric data like fingerprints and facial scans, was meant to modernize border controls. Instead, it’s revealing the harsh reality of implementing complex AI systems at scale.
Border Bottlenecks Expose Implementation Gaps
According to Airports Council International Europe, processing times at border controls have increased by up to 70% since the EES rollout. Olivier Jankovec, the council’s director general, warns that the system is pushing airport infrastructure to its limits, with some passengers waiting up to three hours at peak times. The situation became severe enough in Portugal that authorities suspended the system at Lisbon Airport for three months and deployed military personnel to assist with border checks.
“The additional process steps are proving to be time-consuming,” Jankovec told Politico, highlighting how what was intended as a technical milestone has become an infrastructure stress test. The EU Commission maintains a different perspective, claiming the system runs “largely smoothly” and that initial difficulties are normal for complex technical implementations. This disconnect between policymakers and on-the-ground operators illustrates a common challenge in AI deployment: the gap between theoretical benefits and practical realities.
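Jankovec’s point about additional process steps can be made concrete with a little queueing arithmetic. The sketch below is a deliberately simple fluid model of a checkpoint; the booth count, service times, and arrival rate are illustrative assumptions, not reported EES figures, and only the 70% increase in per-passenger processing time comes from the reporting above.

```python
# Illustrative fluid model of a border checkpoint queue.
# All numbers except the +70% processing-time increase are assumptions.

BOOTHS = 10                           # staffed booths (assumed)
BASE_SERVICE_S = 45                   # seconds per passenger pre-EES (assumed)
EES_SERVICE_S = BASE_SERVICE_S * 1.7  # +70% per-passenger processing time
ARRIVALS_PER_HOUR = 700               # peak arrival rate (assumed)

def capacity_per_hour(service_s: float) -> float:
    """Passengers all booths can clear per hour at a given service time."""
    return BOOTHS * 3600 / service_s

def backlog_after(hours: float, service_s: float) -> float:
    """Queue built up after `hours` of peak arrivals (deterministic model)."""
    deficit = ARRIVALS_PER_HOUR - capacity_per_hour(service_s)
    return max(0.0, deficit * hours)

for label, svc in [("pre-EES", BASE_SERVICE_S), ("with EES", EES_SERVICE_S)]:
    cap = capacity_per_hour(svc)
    backlog = backlog_after(2, svc)  # backlog after a 2-hour peak
    wait_min = backlog / cap * 60    # minutes to clear that backlog
    print(f"{label}: capacity {cap:.0f}/h, backlog {backlog:.0f} passengers, "
          f"extra wait ≈ {wait_min:.0f} min")
```

Under these toy numbers the same booths flip from absorbing the peak to falling roughly 230 passengers an hour behind, so the queue grows for as long as the peak lasts. That nonlinearity, where a modest per-passenger slowdown tips a checkpoint from keeping up to never catching up, is consistent with the multi-hour waits operators are reporting.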
Beyond Borders: AI’s Medical Frontier
The border control challenges are just one example of AI systems encountering real-world friction. In Utah, a more controversial application is unfolding: AI is now autonomously refilling prescriptions for 190 common medications. The program, operating under Utah’s regulatory sandbox framework, allows Doctronic’s AI chatbot to handle prescription renewals without direct human oversight after an initial review period.
According to a Doctronic preprint, the AI matches doctor diagnoses in 81% of cases and treatment plans in 99% of cases. Adam Oskowitz, Doctronic co-founder and professor at UCSF, emphasizes that “the AI chatbot is designed to err on the side of safety and escalate any case with uncertainty to a real doctor.” Yet critics like Robert Steinbrook of watchdog group Public Citizen call the program “a dangerous first step toward more autonomous medical practice.”
Margaret Woolley Busse, executive director of the Utah Department of Commerce, defends the approach: “Utah’s approach to regulatory mitigation strikes a vital balance between fostering innovation and ensuring consumer safety.” The $4 service fee for AI refills points to potential cost savings, but it raises the question of whether automation should extend to medical decisions that have traditionally required human judgment.
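Oskowitz’s description of erring on the side of safety maps onto a familiar engineering pattern: act autonomously only when every safety check passes, and route everything else to a human. The sketch below illustrates that pattern; the rules, thresholds, and names are hypothetical, not Doctronic’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical "escalate on uncertainty" gate for prescription refills.
# Rules, thresholds, and drug names are illustrative, not Doctronic's system.

APPROVED_MEDICATIONS = {"atorvastatin", "metformin", "levothyroxine"}  # stand-in for the 190-drug list
CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff; anything below goes to a doctor

@dataclass
class RefillRequest:
    medication: str
    months_since_last_visit: int
    reported_side_effects: bool
    model_confidence: float  # model's confidence that renewal is appropriate

def decide(req: RefillRequest) -> str:
    """Auto-refill only when every safety rule passes; otherwise escalate."""
    if req.medication not in APPROVED_MEDICATIONS:
        return "escalate: medication not on approved list"
    if req.reported_side_effects:
        return "escalate: patient reported side effects"
    if req.months_since_last_visit > 12:
        return "escalate: annual visit overdue"
    if req.model_confidence < CONFIDENCE_THRESHOLD:
        return "escalate: model uncertain"
    return "auto_refill"

print(decide(RefillRequest("metformin", 6, False, 0.98)))  # auto_refill
print(decide(RefillRequest("metformin", 6, False, 0.80)))  # escalate: model uncertain
```

The design choice worth noting is that escalation is the default path: the system must positively clear every rule before acting alone, which is what “err on the side of safety” means operationally.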
The Human Cost of AI Companionship
Perhaps the most sobering perspective on AI’s real-world impact comes from recent legal settlements involving chatbot interactions. Google and AI startup Character.ai have agreed to settle multiple lawsuits from families of teenagers who died by suicide or harmed themselves after interacting with the platform’s chatbots. The settlements involve families in Florida, Colorado, Texas, and New York, marking some of the first cases of their kind.
One particularly tragic case involved a 14-year-old who had sexualized conversations with a chatbot modeled on Daenerys Targaryen from Game of Thrones before his suicide. Another involved a 17-year-old who discussed self-harm with a chatbot. In response to mounting criticism, Character.ai announced in October 2025 that it would bar users under 18 from its platform.
Megan Garcia, mother of one of the affected teens, said: “Companies and investors … legally accountable when they knowingly design harmful AI technologies that kill kids.” These settlements come as 42 US attorneys general recently demanded stronger safeguards from AI companies over their products’ emotional impact on teens.
Balancing Innovation with Implementation Realities
What connects these disparate stories – border chaos, medical automation, and tragic chatbot interactions – is a common theme: the gap between AI’s theoretical potential and its practical implementation. The EU’s border system demonstrates how even well-intentioned automation can strain existing infrastructure when not properly scaled. Utah’s prescription program shows how regulatory frameworks struggle to keep pace with technological advancement. And the chatbot settlements reveal how emotional impacts can be overlooked in the rush to deploy new technologies.
As AI systems become more integrated into critical infrastructure, healthcare, and daily life, these implementation challenges will only grow more complex. The question isn’t whether AI will transform these sectors – it already is – but how we manage the transition from promising technology to reliable, safe implementation. The border delays in Europe, the medical debates in Utah, and the legal settlements involving chatbots all point to the same conclusion: successful AI deployment requires more than just technical excellence. It demands careful consideration of human factors, infrastructure readiness, and ethical guardrails.
For businesses and professionals watching these developments, the lessons are clear: AI implementation isn’t just about algorithms and data. It’s about understanding how technology interacts with existing systems, regulations, and – most importantly – people. As these cases show, getting that balance right is the difference between transformative innovation and costly failure.

