High-throughput chips for LLMs
MatX designs high-throughput AI chips purpose-built for large language model training and inference. Founded by former Google TPU architects, the company's MatX One chip targets frontier AI labs with industry-leading FLOPS per mm², SRAM-resident weights for ultra-low latency, and massive scale-out interconnect supporting hundreds of thousands of chips. MatX raised a $500M Series B in February 2026 to accelerate chip production.