Nvidia moved to reassure investors after its shares fell around 3%, stating its AI platform remains “a generation ahead of the industry” despite growing attention on Google’s in-house tensor processing units. The company emphasized that its Blackwell GPUs can run every major AI model “everywhere computing is done,” while TPUs are described as application-specific chips optimized for narrower use cases.
The response follows reports that Meta is exploring a deal to use Google’s TPUs in its data centers, raising questions about future demand for Nvidia hardware. Nvidia countered that its GPUs offer greater performance, versatility, and fungibility than rival ASIC-based designs and noted that Google itself continues to purchase Nvidia chips.
Nvidia CEO Jensen Huang has also pointed to "scaling laws" in AI, the view that more compute and data produce better models, as a key driver of sustained long-term demand for its systems, even as hyperscalers develop custom silicon.