Positron AI, the leader in energy-efficient AI inference hardware, today announced an oversubscribed $230 million Series B ...
Every ChatGPT query, every AI agent action, every generated video is based on inference. Training a model is a one-time ...
Microsoft’s new Maia 200 inference accelerator enters this overheated market with a chip that aims to cut the price ...
Maia 200 is the most efficient inference system Microsoft has ever deployed, with 30% better performance per dollar than the latest ...
Using AI models (inference) will be far more valuable than AI training. AI training feeds large amounts of data into a learning algorithm to produce a model that can make predictions. AI training is how we make ...
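The training/inference split described above can be sketched in a few lines. This is a minimal illustrative example (not any vendor's actual workload): "training" fits a tiny model's parameters once from data, and "inference" then reuses the fitted model for every subsequent prediction — the repeated step that chips like inference accelerators are built to make cheap.

```python
# Illustrative sketch of the training vs. inference split.
# Training: a one-time fit of parameters from data.
# Inference: many cheap reuses of the fitted model.

def train(xs, ys):
    """One-time step: fit y = w*x + b by ordinary least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

def infer(model, x):
    """Repeated step: apply the trained model to a new input."""
    w, b = model
    return w * x + b

model = train([1, 2, 3, 4], [2, 4, 6, 8])  # train once
print(infer(model, 10))                    # prints 20.0; infer many times
```

The same asymmetry holds at datacenter scale: the fit happens once, but the prediction path runs for every query, which is why its cost per operation dominates.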
Microsoft unveiled its A.I. chip, Maia 200, calling it “the most efficient inference system” the company has ever built. The Satya Nadella-led tech ...
SEOUL, South Korea and SANTA CLARA, Calif., Sept. 11, 2025 /PRNewswire/ -- Moreh, an AI infrastructure software company, unveiled its distributed inference system on AMD and showcased the progress of ...
The mighty SoC is coming for the datacenter with inference as a prime target, especially given cost and power limitations. With multiple form factors stretching from edge to server, any company that ...
Calling it the highest-performance chip of any custom cloud accelerator, the company says Maia is optimized for AI inference across multiple models.