At the World Economic Forum in Davos this week, Google DeepMind chief Demis Hassabis delivered a sobering warning that cut through the usual tech optimism: parts of the artificial intelligence industry are showing “bubble-like” characteristics. His comments come as AI has transformed the annual gathering of global elites into what many are calling a tech conference, with conversations about machine learning overshadowing traditional topics like climate change and global poverty.
The Bubble Question
“Multibillion-dollar seed rounds in new start-ups that don’t have a product or technology or anything yet do seem a little bit unsustainable,” Hassabis told the Financial Times, suggesting this may lead to “corrections in some parts of the market.” This isn’t just theoretical speculation. The AI startup Humans& recently raised a staggering $480 million seed round without having a product on the market, while former OpenAI executive Mira Murati’s Thinking Machines Lab was valued at $10 billion just six months after its founding, despite giving few details about what it’s actually building.
Yet Hassabis’s warning stands in contrast to the views of other tech leaders at Davos. Nvidia’s Jensen Huang and Microsoft’s Satya Nadella brushed aside concerns about over-investment, creating a striking tension at the heart of the industry’s current moment. This divergence of opinion points to a fundamental question: Are we witnessing the early stages of the most transformative technology ever invented, or are we in another tech bubble waiting to burst?
Google’s Position in the Race
From Google’s perspective, the company appears well-positioned regardless of what happens next. “If the bubble bursts we will be fine,” Hassabis said. “We’ve got an amazing business that we can add AI features to and get more productivity out of.” This confidence stems from Google’s recent rebound in the AI race. After a difficult period following OpenAI’s ChatGPT release in 2022, Google’s AI models now outperform those of its smaller rival, and the search giant is closing the gap in chatbot users.
Google’s Gemini app now boasts 650 million monthly users, while AI Overviews has reached two billion users, making it what Hassabis calls “the most used AI product in the world.” The momentum has driven parent company Alphabet’s valuation past $4 trillion, making it the second-largest company in the world after chipmaker Nvidia. But beyond these impressive numbers lies a strategic decision that sets Google apart from competitors: the company has confirmed it has no plans to introduce advertising into Gemini, unlike OpenAI, which is preparing to introduce ads in ChatGPT’s free version starting in February.
The China Factor and AGI Timeline
Another critical dimension of the AI race involves China. About a year ago, Chinese group DeepSeek surprised Silicon Valley by developing a powerful and free-to-access AI model for a fraction of the price of its American competitors. Hassabis argues there was “overreaction in the west” to DeepSeek, maintaining that “the Chinese labs haven’t proven they can innovate beyond the frontier yet.” He estimates US tech companies still maintain a lead of “six months or so.”
This timeline becomes particularly interesting when considering artificial general intelligence (AGI) – machines that can surpass human abilities. Hassabis maintains his consistent prediction that AGI is about five to ten years away, with 2030 being the earliest it could arrive. “Maybe it’s now about four to nine years,” he noted, suggesting a 50% chance over that time frame. This puts him at odds with other AI leaders who have been more aggressive in their timelines, though he notes some are now “updating to be a little bit longer and a little bit more realistic.”
Beyond the Hype: Practical Applications and Risks
While the AGI debate captures headlines, more immediate concerns are shaping the industry’s direction. Hassabis emphasized the need to focus on safe and responsible AI development, particularly in light of recent controversies. OpenAI faced lawsuits over claims its chatbot encouraged young users to take their own lives, while Elon Musk’s xAI was heavily criticized after it emerged its Grok chatbot had been used to generate sexualized images of women and children.
“For us, that’s doubling down on our AI for science and AI for medicine work and things like that, which are kind of unequivocal goods in the world,” Hassabis said. This practical approach extends to Google’s partnerships with pharmaceutical giants J&J, Eli Lilly, and Novartis, spanning about 17 drug programs in total. The company is also building a materials science lab in the UK to test theoretical compounds that AI systems design for semiconductors, superconductors, and batteries.
Meanwhile, a growing problem threatens to undermine AI’s progress: model collapse. As AI-generated content proliferates across corporate systems and public sources, models trained on this synthetic data risk drifting from reality. Gartner predicts 50% of organizations will adopt zero-trust data governance by 2028 to combat this issue, emphasizing the need for human oversight and verification in AI systems.
The Next Frontier: Smart Glasses and World Models
Looking ahead, Hassabis sees smart glasses as a potential breakthrough application. “Maybe we were a bit too ahead of our time when we first started this 10-plus years ago at Google with the devices,” he admitted. “What was missing was a killer app for that. I think a universal digital assistant that helps you in your everyday life could well be that killer app.” Google has announced partnerships with eyewear brands such as Warby Parker to introduce new AI-infused spectacles.
This vision aligns with emerging approaches to AI development. Yann LeCun, the Turing Award-winning AI scientist who recently left Meta, has founded AMI Labs to develop “world models” – intelligent systems that understand the real world. The startup, reportedly in talks to raise funding at a $3.5 billion valuation, aims to apply its technology to high-stakes fields like healthcare, industrial process control, and robotics. This represents a contrarian bet against large language models, emphasizing reliability, controllability, and safety.
The Talent War and Industry Future
Underpinning all these developments is an intense competition for talent. “Some researchers are getting offers for $100 million,” Hassabis noted, though he emphasized that top researchers are motivated by more than money. “These are phenomenally smart people. They could do anything with their skills. Are you doing good in the world?”
As for his own future, Hassabis dismissed speculation that he might succeed Alphabet chief Sundar Pichai. “No, I’m very happy with what I’m doing. I love being close to the science and the research,” he said, adding, “there’s only so much one can do in the day and still leave enough time for serious thinking.”
The Davos conversations reveal an industry at a crossroads. On one hand, unprecedented investment and rapid progress suggest we’re witnessing something truly transformative. On the other, warning signs of a bubble and growing concerns about safety and misuse suggest the need for caution. What’s clear is that the companies that balance ambitious vision with practical applications – and maintain their focus during what may be turbulent times ahead – will likely shape AI’s future impact on businesses and society.