Imagine deploying an AI agent to handle customer service, only to discover it’s making decisions based on outdated or inaccurate data. This isn’t a hypothetical scenario – it’s the reality facing nearly half of businesses adopting agentic AI today. A new survey of 600 chief data officers reveals that while 69% of companies with revenues over $500 million are using generative AI, up from 48% in 2025, the rush to implement these systems is outpacing the infrastructure needed to support them.
The Trust Gap in AI Implementation
The CDO Insights 2026 report, conducted by Informatica, Wakefield Research, and Deloitte, paints a concerning picture: 50% of companies planning to use agentic AI cite data quality and retrieval issues as major deployment barriers. Even more striking, 76% of data leaders report that governance hasn’t kept pace with AI adoption. “We’re building skyscrapers on shaky foundations,” one anonymous CDO told researchers, capturing the precarious nature of current AI implementations.
Despite these challenges, confidence appears paradoxically high: 65% of data leaders believe their employees trust the data used for AI, a figure that rises to 74% among companies using agentic AI. But is that trust warranted? As the report notes, 75% of CDOs believe their workforce needs upskilling in data literacy, and 74% say the same of AI literacy. This suggests the trust may stem more from ignorance than from genuine data quality.
The Investment Response
Recognizing these challenges, 86% of data leaders plan to increase investment in data management over the next year. The priorities are clear: improving data privacy and security (43%), enhancing data and AI governance (41%), and boosting data and AI literacy (39%). Meanwhile, 61% of CDOs agreed that better data makes it easier to adopt AI, underscoring the direct link between data quality and AI success.
The stakes are particularly high for agentic AI – systems that can autonomously perform tasks and make decisions. With 47% of companies already using these systems and another 31% planning adoption within 12 months, the pressure to get data right is intensifying. The primary benefits driving this adoption include enhanced customer experience (29%), improved business intelligence and decision-making (28%), and better regulatory compliance (27%).
The Cybersecurity Connection
This data governance crisis intersects with another critical business concern: cybersecurity. According to EY’s analysis, the biggest AI threats come from within organizations due to ungoverned employee use of AI tools. “Organizations should absolutely take a top-down approach to implementing security guardrails around employees’ use of AI,” says Dan Mellen, EY’s global cyber chief technology officer.
This internal threat is amplified by the fact that over nine in ten businesses’ AI initiatives have failed to produce meaningful results, according to MIT research. The combination of poor data governance and inadequate security measures creates a perfect storm for corporate risk.
The Broader Industry Context
Meanwhile, the AI industry itself is undergoing significant turbulence that affects corporate adoption strategies. Nvidia CEO Jensen Huang recently announced that his company is likely making its last investments in OpenAI and Anthropic, explaining that such investment opportunities close once those companies go public. This pullback comes amid tensions between AI companies and government entities, particularly around military applications.
Anthropic CEO Dario Amodei has been particularly vocal about ethical boundaries, dismissing OpenAI’s messaging around its Department of Defense contract as “straight up lies” and “safety theater.” The controversy has had tangible business impacts: ChatGPT uninstalls jumped 295% after OpenAI’s DoD deal was announced, while Anthropic’s Claude rose to #2 in the App Store in the aftermath.
The Global Impact
The data governance challenge extends beyond individual companies to entire industries and regions. India’s $300 billion IT outsourcing sector, which employs over 6 million people, faces existential questions as AI automates traditional services. At least 20,000 jobs have been lost in the past six months, according to industry experts, even as companies like Tata partner with OpenAI and Infosys with Anthropic to navigate the transition.
This global context matters because it influences how companies approach their own AI strategies. The regulatory environment is also shifting, with Europe’s NIS-2 directive requiring companies to strengthen cybersecurity measures – a requirement that directly impacts AI governance frameworks.
The Path Forward
So what should businesses do? First, recognize that data quality isn’t just an IT issue – it’s a strategic business priority. The survey shows that 57% of organizations view data reliability as a key barrier to moving AI projects from pilot to production. Addressing this requires both technological investment and cultural change.
Second, implement governance before scaling. With 86% of companies increasing data management investments, the focus should be on creating frameworks that ensure AI systems operate within defined parameters. This includes both technical controls and employee training programs.
Finally, maintain ethical vigilance. As the Anthropic-OpenAI controversy demonstrates, public perception matters. Companies that prioritize transparent, ethical AI implementation may gain competitive advantages in an increasingly skeptical market.
The message from data leaders is clear: successful AI adoption requires more than just cutting-edge algorithms. It demands reliable data, robust governance, and ethical consideration. As one CDO put it, “Trust must be the number one core value for businesses becoming agentic businesses.” In the race to implement AI, those who prioritize this foundation will likely emerge as the true winners.