As Nvidia’s market capitalization soars past $4.5 trillion, making it the world’s most valuable company, investors and industry observers are grappling with a fundamental question: How do you value a company at the center of an AI revolution that’s still unfolding? The answer reveals a complex landscape where technological innovation, market competition, and practical constraints are reshaping the future of artificial intelligence hardware.
The Nvidia Conundrum: Valuation in an AI World
Nvidia’s journey from gaming chip specialist to AI powerhouse is nothing short of remarkable. Since 2020, its share price has surged over 3,000%, driven by its dominance in AI training chips. The company’s operating margins have climbed from the mid-teens a decade ago to about 60% last year, with expected operating profits exceeding $120 billion this year – more than the total market value of all but the four biggest FTSE 100 companies. This success stems from co-founder Jensen Huang’s early bet on parallel processing, which made Nvidia’s chips faster and more energy-efficient than competitors’.
But is this valuation sustainable? Simon Edelsten, fund manager at Goshawk Asset Management, offers a cautious perspective: “I like Nvidia, but I don’t think it’s a sensible price. And I think there are better alternatives from an investment perspective.” This skepticism isn’t unfounded. The AI hardware landscape is evolving rapidly, and Nvidia faces challenges on multiple fronts.
The Competition Heats Up: Beyond the GPU Monopoly
While Nvidia currently dominates the high-performance AI chip market, competitors are making significant strides. AMD recently unveiled its Instinct MI455X AI accelerator at CES 2026, featuring 320 billion transistors – 70% more than its predecessor – and manufactured using TSMC’s advanced 2nm and 3nm processes. This represents a serious challenge to Nvidia’s technological leadership.
Meanwhile, major tech companies are developing their own solutions. Alphabet has created Tensor chips that power its Gemini 3 AI model, which outperforms OpenAI’s ChatGPT in some benchmarks. Anthropic relies largely on chips designed by Alphabet and Amazon’s AWS. Even OpenAI, Nvidia’s largest client, presents a valuation challenge: Nvidia intends to supply over $100 billion worth of chips to OpenAI’s data centers, but OpenAI isn’t expected to turn a profit this decade, making the arrangement part revenue, part loan.
The Power Problem: A Physical Limit to AI Growth
Perhaps the most significant challenge facing the entire AI industry is energy consumption. The current training phase for AI models involves processing massive datasets and uses enormous amounts of electricity. As Edelsten notes, “Power limitations could pull the plug on the heady revenue growth numbers underpinning many AI stock forecasts.”
This isn’t just theoretical. Data center companies are already exploring expensive workarounds, including repurposing jet engines, because traditional power solutions are insufficient. If power constraints limit AI development, they could substantially undermine the valuations of companies like Anthropic, which may float on the Nasdaq this year based on hoped-for future cash flows.
China’s AI Chip Ambitions: Myth or Reality?
The geopolitical dimension adds another layer of complexity. Chinese AI chipmakers like Biren Technology, Moore Threads, and MetaX have seen dramatic stock surges following their public offerings, with some jumping 425% to 700% on their debuts. Bernstein analysts project that China’s domestic chip producers will capture 53% of the market this year, up from 29% in 2024.
However, these companies face significant challenges. While U.S. restrictions on Nvidia chip exports to China initially boosted domestic production, the industry contends with intense competition, loss-making companies, and high valuations. Cambricon and Biren trade at enterprise values of more than 40 times projected 2026 sales, compared with Nvidia’s peak forward multiple of 24 last year. As one Financial Times analysis put it, China’s AI chip “dragons’ firepower is mostly mythical,” drawing parallels to previous boom-and-bust cycles in industries like electric vehicles and solar panels.
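To see how stark that valuation gap is, the forward multiple comparison can be sketched in a few lines. The figures below are hypothetical round numbers chosen only to reproduce the article’s 40x and 24x multiples, not actual company data:

```python
# Hypothetical illustration of the forward EV/sales comparison in the text.
# All dollar figures are invented for the example.

def ev_to_forward_sales(enterprise_value_bn: float, projected_sales_bn: float) -> float:
    """Enterprise value divided by next year's projected revenue."""
    return enterprise_value_bn / projected_sales_bn

# A chipmaker valued at $200bn with $5bn of projected 2026 sales trades at
# 40x forward sales -- the multiple attributed to Cambricon and Biren.
china_multiple = ev_to_forward_sales(200, 5)      # 40.0

# The same $200bn valuation on $8.3bn of projected sales gives roughly the
# 24x peak forward multiple the article cites for Nvidia.
nvidia_multiple = ev_to_forward_sales(200, 8.3)   # ~24.1

print(china_multiple, round(nvidia_multiple, 1))
```

The point of the arithmetic is that a 40x buyer is paying for far more future growth per dollar of revenue than even Nvidia commanded at its frothiest.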
The Next Phase: Efficiency and New Architectures
As the AI boom matures, efficiency is becoming increasingly important. The emergence of DeepSeek’s AI engine last year caused Nvidia’s shares to sell off sharply because DeepSeek’s results seemed comparable to OpenAI’s while apparently requiring fewer expensive, high-end chips. DeepSeek has since published research offering more efficient ways to train large language models, potentially reducing reliance on the most powerful chips.
This shift toward efficiency aligns with broader trends in AI research. Yann LeCun, the Turing Award-winning computer scientist who recently left Meta, argues that large language models are fundamentally limited. “I’m sure there’s a lot of people at Meta who would like me to not tell the world that LLMs basically are a dead end when it comes to superintelligence,” LeCun stated in an interview. He advocates for world models like V-JEPA, which learn from videos and spatial data to understand the physical world, and is now fundraising for Advanced Machine Intelligence Labs to pursue this research.
Investment Implications: Beyond the Hype Cycle
For investors, the AI chip sector presents both opportunities and risks. As Edelsten observes, “Valuing semiconductor stocks is always tricky. An old mate of mine compares it with valuing cement companies – both industries are capital-intensive and cyclical. These are not buy-and-hold stocks.”
The key insight is timing: buy at the bottom of the cycle when earnings have plunged and price/earnings ratios look ridiculously high, and sell at the cycle peak when price/earnings ratios look deceptively low. Nvidia has some advantages in this regard – it outsources manufacturing, avoiding the eye-watering capital costs of building fabrication plants – but it’s not immune to cyclical pressures.
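The counterintuitive P/E logic above can be made concrete with invented numbers. This sketch uses hypothetical prices and earnings purely to show why the ratio inverts the usual buy/sell signal in cyclical chip stocks:

```python
# Hypothetical cycle illustration: why P/E misleads in cyclical semiconductors.
# All prices and earnings below are invented for the example.

def pe_ratio(price: float, earnings_per_share: float) -> float:
    """Trailing price/earnings ratio."""
    return price / earnings_per_share

# At the trough: the share price has halved, but earnings have collapsed
# even harder, so the P/E looks "ridiculously high" -- the time to buy.
trough_pe = pe_ratio(price=50, earnings_per_share=1.0)    # 50x

# At the peak: the price has tripled from the trough, yet earnings sit at a
# cyclical maximum, so the P/E looks "deceptively low" -- the time to sell.
peak_pe = pe_ratio(price=150, earnings_per_share=10.0)    # 15x

print(trough_pe, peak_pe)
```

In other words, the stock looked three times "cheaper" at the exact moment the cycle was about to turn down, which is the trap Edelsten is warning against.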
Edelsten sees opportunities beyond Nvidia: “The AI boom is moving to its next phase, which could benefit other AI-related stocks we own, such as Broadcom and Taiwan Semiconductor.” He also notes that some “AI losers” could turn a corner in 2026, with software companies like Salesforce and SAP potentially using AI to enhance their offerings rather than being threatened by it.
The Road Ahead: A More Nuanced AI Future
As we look to 2026 and beyond, the AI hardware landscape is becoming more complex and nuanced. Nvidia’s recent unveiling of its Rubin AI supercomputing platform at CES 2026 – designed to reduce the cost of training large language models by up to 10x – shows the company isn’t standing still. But neither are its competitors, nor the broader ecosystem of AI researchers pushing beyond current limitations.
The fundamental question isn’t whether AI will transform industries – it already is – but which companies will capture the most value from this transformation. With power constraints, increasing competition, and shifting technological paradigms, the answer is far from certain. What’s clear is that the AI chip race has entered a critical new phase, one where efficiency, innovation, and practical constraints will determine the winners and losers in this multi-trillion-dollar market.

