Ask anyone what Nvidia makes, and they’re likely to first say “GPUs.” For decades, the chipmaker has been defined by advanced parallel computing, and the emergence of generative AI and the resulting surge in demand for GPUs have been a boon for the company.
But Nvidia’s recent moves signal that it’s looking to lock in more customers at the less compute-intensive end of the AI market—customers who don’t necessarily need the beefiest, most powerful GPUs to train AI models, but who are instead looking for the most efficient ways to run agentic AI software. Nvidia recently spent billions to license technology from a chip startup focused on low-latency AI computing.
→ Continue reading at WIRED