The race to develop specialized hardware for Large Language Models (LLMs) has a well-funded new contender. MatX, a startup founded by former Google hardware veterans, has announced a massive $500 million Series B funding round. The company aims to disrupt the market currently dominated by Nvidia by developing chips specifically optimized for the next generation of AI.
The Mission: A 10x Performance Leap
MatX isn’t aiming for incremental improvements. The startup’s core objective is to produce processors that are 10 times more efficient than existing GPUs at both training and running LLMs. By stripping away the general-purpose overhead of conventional GPUs, MatX believes it can deliver a radical leap in performance and cost-effectiveness for AI developers.
Engineering Pedigree and Heavyweight Backing
The company’s leadership brings significant weight to the venture. CEO Reiner Pope previously led AI software development for Google’s proprietary TPUs, while co-founder Mike Gunter served as a lead designer for the TPU hardware itself.
This expertise has attracted a high-profile roster of investors. The Series B was led by Jane Street and Situational Awareness, an investment fund established by former OpenAI researcher Leopold Aschenbrenner. Other notable participants in the round include:
- Marvell Technology
- Spark Capital
- NFDG
- Stripe co-founders Patrick and John Collison
Market Context and Future Roadmap
This latest capital injection follows a $100 million Series A in 2024 that valued the company at over $300 million. While MatX has not disclosed its current valuation, its closest rival, Etched, recently secured its own $500 million round at a $5 billion valuation, signaling intense investor appetite for specialized Nvidia alternatives.
MatX plans to use the new funds to move into the manufacturing phase with TSMC, with a roadmap to begin shipping its first chips to customers in 2027.