Imagine trying to deploy cutting-edge artificial intelligence in your business, only to discover the computational costs could bankrupt you. This is the reality many companies face with today’s massive AI models – until now. A Spanish startup called Multiverse Computing is shaking up the industry by releasing free compressed AI models that promise enterprise-grade performance at half the size and cost.
The Compression Breakthrough
Multiverse Computing’s HyperNova 60B model, now available for free on Hugging Face, represents a significant leap in AI efficiency. At just 32GB, it’s roughly half the size of OpenAI’s gpt-oss-120B, the model from which it derives, while maintaining comparable accuracy and performance. The secret lies in CompactifAI, a compression technology inspired by quantum computing principles that reduces memory usage and latency without sacrificing capability.
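CompactifAI’s actual method reportedly uses quantum-inspired tensor networks, whose details are not public. As a rough illustration of the underlying idea of replacing a large weight matrix with a cheaper factorized form, here is a minimal sketch using a truncated SVD on a toy layer; the matrix size, rank, and the SVD approach itself are illustrative assumptions, not Multiverse’s technique:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix standing in for one dense layer of a large model.
W = rng.standard_normal((512, 512))

# Truncated SVD: keep only the top-k singular directions.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
k = 64  # illustrative rank, chosen for a ~4x parameter reduction
U_k, s_k, Vt_k = U[:, :k], s[:k], Vt[:k, :]

# The layer is now stored as three small factors instead of one big matrix.
W_approx = (U_k * s_k) @ Vt_k

original_params = W.size
compressed_params = U_k.size + s_k.size + Vt_k.size
ratio = compressed_params / original_params
print(f"storage ratio: {ratio:.2f}")  # ~0.25 of the original parameters
```

The trade-off is reconstruction error versus storage: a lower rank `k` shrinks the layer further but discards more of the matrix, which is why real compression pipelines pick ranks per layer and typically fine-tune afterwards.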
The updated HyperNova 60B 2602 version specifically enhances tool calling and agentic coding support – areas where inference costs typically skyrocket. According to the company, their compressed models have already outperformed competitors like Mistral Large 3 in certain benchmarks, positioning them as a viable alternative to U.S. tech giants.
Beyond Technology: Geopolitical Implications
What makes this development particularly noteworthy isn’t just the technology, but the geopolitical context. Multiverse positions itself as offering “sovereign solutions across the AI stack” – a clear nod to European concerns about over-reliance on American and Chinese AI technologies. This positioning recently helped secure a collaboration with the regional government of Aragón and participation from the Spanish Agency for Technological Transformation in their $215 million Series B funding round.
The startup’s rumored €500 million funding round at a valuation exceeding €1.5 billion suggests investors see significant potential in this approach. While Multiverse’s reported €100 million annual recurring revenue pales next to OpenAI’s $20 billion, it’s approaching the scale of Mistral AI’s $400 million ARR – demonstrating growing demand for regional alternatives.
The Infrastructure Arms Race
Multiverse’s compression technology arrives amid an unprecedented AI infrastructure arms race. Meta’s recent multi-billion dollar chip deal with AMD reveals the staggering scale of investment required. Under that agreement, AMD will supply Meta with customized AI chips totaling 6 gigawatts of computing capacity – enough to power about 5 million U.S. households for a year.
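The households comparison checks out arithmetically: spreading 6 gigawatts across 5 million homes gives each about 1.2 kW of continuous power, which over a year is roughly the average annual electricity use of a U.S. household (about 10,500 kWh, per EIA figures). A quick sanity check:

```python
# Sanity-check the "6 GW ≈ 5 million U.S. households" comparison.
power_w = 6e9          # 6 gigawatts, in watts
households = 5e6       # 5 million households

per_household_kw = power_w / households / 1e3   # continuous kW per home
annual_kwh = per_household_kw * 8760            # 8,760 hours in a year

print(per_household_kw)  # 1.2 kW
print(annual_kwh)        # 10512.0 kWh/year, close to the ~10,500 kWh U.S. average
```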
Meta’s Santosh Janardhan, head of infrastructure, explained the strategic thinking: “We don’t believe that a single silicon solution will work for all of our workloads. There’s a place for Nvidia, there’s a place for AMD and… there’s a place for our own custom silicon as well. We need all three.” This diversification strategy, mirrored by other tech giants, highlights the industry’s recognition that no single approach will dominate.
Enterprise Adoption and Practical Impact
For businesses, the implications are profound. Multiverse already counts Iberdrola, Bosch, and the Bank of Canada among its enterprise customers – organizations that need reliable, cost-effective AI solutions without compromising on performance. The free availability of compressed models could democratize access to advanced AI capabilities, particularly for European companies seeking alternatives to U.S.-dominated platforms.
Consider Uber’s approach: engineers have built an AI version of CEO Dara Khosrowshahi to help teams prepare presentations. “About 90% of Uber’s software engineers are using AI in their work,” Khosrowshahi revealed, with 30% being “power users” who are “completely rethinking the architecture of the company.” This level of integration demonstrates how AI is becoming fundamental to business operations – and why efficient, affordable models matter.
Regulatory and Ethical Dimensions
The push for compressed, efficient AI models intersects with growing regulatory pressures. Recent fines against Reddit for age verification failures (£14.47 million from the UK’s Information Commissioner’s Office) and Discord’s delayed age verification rollout highlight increasing scrutiny of how platforms handle user data and safety. While these regulations primarily target social media, they signal broader trends that could eventually impact AI deployment.
Meanwhile, tensions between AI ethics and national security continue to escalate. The Pentagon’s dispute with Anthropic over military access to AI models reveals fundamental disagreements about appropriate use cases. As Defense Secretary Pete Hegseth pushes for broader military applications, companies like Anthropic resist allowing their technology for mass surveillance or autonomous weapons – creating a complex landscape for AI developers to navigate.
The Road Ahead
Multiverse Computing plans to open-source more compressed models throughout 2026, potentially accelerating adoption across industries. Their approach represents a third way between massive, expensive frontier models and limited, specialized solutions. By focusing on compression and efficiency, they address one of the most significant barriers to widespread AI adoption: cost.
As AMD CEO Lisa Su noted about the Meta deal, “Each gigawatt of compute is worth double-digit billions.” In this context, Multiverse’s compression technology isn’t just a technical achievement – it’s a potential game-changer for businesses seeking to leverage AI without astronomical infrastructure investments. The question isn’t whether AI will transform industries, but which approaches will prove most sustainable and accessible in the long run.

