OpenAI has reaffirmed its commitment to Nvidia and AMD for AI model training, despite testing Google's tensor processing units (TPUs). Reuters reports that while OpenAI is trialing some lower-tier TPUs, it has no plans to deploy them at scale, opting instead for the proven performance of its existing suppliers. Google's most advanced TPUs, designed for its Gemini models, remain reserved for internal use.

OpenAI's recent deal with Google Cloud supports its infrastructure needs but will not lead to a significant shift toward TPUs in the near term. Investors had speculated that a TPU agreement might signal diversification away from Nvidia, but OpenAI's continued reliance on established chip partners underscores the difficulty of switching hardware at scale.

This strategy suggests that Nvidia and AMD will continue to dominate OpenAI's chip purchases, potentially limiting Google's share of the AI hardware market. Investors are now watching future updates from OpenAI and Google Cloud for any change in TPU adoption.