The massive scale of frontier large language models (LLMs) has created a significant barrier to enterprise deployment: they are often too expensive and resource-hungry for practical use. Spanish “soonicorn” Multiverse Computing is tackling this efficiency gap head-on. With CompactifAI, a quantum-inspired compression technology, the Basque firm shrinks massive models into manageable, high-performance tools without sacrificing accuracy.
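CompactifAI's exact method is proprietary, so the following is only an illustrative sketch of the general family of ideas behind quantum-inspired compression: replacing a large weight matrix with a product of much smaller factors. A truncated SVD is the simplest relative of such factorizations (the synthetic matrix and rank here are hypothetical, not from the announcement):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a synthetic 1024x1024 "weight matrix" that is exactly low rank,
# standing in for the approximate low-rank structure of trained LLM weights.
A = rng.standard_normal((1024, 64))
B = rng.standard_normal((64, 1024))
W = A @ B

def truncated_svd_compress(W, rank):
    """Factor W into two thin matrices, keeping the top `rank` singular values."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U[:, :rank] * s[:rank], Vt[:rank]

L, R = truncated_svd_compress(W, rank=64)

original_params = W.size                       # 1,048,576 entries
compressed_params = L.size + R.size            # 131,072 entries
ratio = original_params / compressed_params    # 8x fewer parameters
error = np.linalg.norm(W - L @ R) / np.linalg.norm(W)  # near zero here
```

Because the synthetic matrix has rank 64, the rank-64 factorization is essentially lossless while storing 8x fewer numbers; real model weights are only approximately low rank, which is why production systems like CompactifAI need far more sophisticated tensor-network machinery.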
The Power of HyperNova 60B
The company’s latest offering, HyperNova 60B 2602, is now available for free on Hugging Face. Derived from OpenAI’s gpt-oss-120b, the HyperNova model weighs in at a lean 32GB, roughly half the size of the original. Despite the smaller footprint, it preserves accuracy while offering lower latency and significantly reduced memory usage.
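The 32GB figure implies an aggressive per-parameter memory budget. Assuming the “60B” in the name reflects roughly 60 billion parameters and that 32GB means 32 GiB (both assumptions, not stated in the announcement), the back-of-envelope arithmetic is:

```python
# Back-of-envelope memory budget. Assumptions (not from the announcement):
# ~60 billion parameters, and "32GB" interpreted as 32 GiB on disk.
params = 60e9
size_bytes = 32 * 1024**3

bytes_per_param = size_bytes / params   # ~0.57 bytes per parameter
bits_per_param = bytes_per_param * 8    # ~4.6 bits per parameter

# For comparison, an uncompressed bfloat16 checkpoint of the same model:
bf16_gib = params * 2 / 1024**3         # ~112 GiB
```

Under these assumptions the model averages well under 5 bits per parameter, several times smaller than a 16-bit checkpoint of the same size, which is consistent with the lower-latency, reduced-memory claims.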
Specialized for Agentic Tasks
The updated 2602 version is specifically optimized for high-cost inference tasks, such as tool calling and agentic coding. Multiverse claims this model even outperforms Mistral AI’s Mistral Large 3 in key areas, positioning itself as a top-tier European alternative to American tech giants.
Business Growth and Sovereign AI
Multiverse is rapidly expanding its global footprint with offices in North America and Europe, serving a prestigious roster of clients including Bosch, Iberdrola, and the Bank of Canada. While not yet officially a unicorn, reports suggest the company is in active discussions for a €500 million funding round that could value the startup at over €1.5 billion.
This growth is fueled by an increasing demand for “sovereign AI” solutions. By providing localized, efficient technology stacks, Multiverse has secured strategic partnerships with the regional government of Aragón and previous investment from the Spanish Agency for Technological Transformation (SETT). With a rumored annual recurring revenue (ARR) reaching €100 million in early 2026, Multiverse is proving that in the competitive world of LLMs, efficiency is the new frontier.