OpenAI is broadening its network of compute providers beyond Microsoft, adding Oracle and CoreWeave and potentially Google. Although OpenAI is experimenting with Google's tensor processing units (TPUs), it has no immediate plans to deploy them at scale, the company told Reuters. That statement followed headlines suggesting a shift toward Google's accelerators to reduce dependence on Microsoft and Nvidia hardware.

Historically, OpenAI has run on diverse hardware, from Nvidia's DGX systems to AMD's Instinct MI300 accelerators, which offer cost advantages. The company is also developing its own AI chip to improve computing efficiency.

Google's seventh-generation TPUs are capable hardware, yet OpenAI may hold off on adopting them over concerns about performance, availability, and cost. Adapting OpenAI's software stack to run efficiently on TPUs would require significant engineering effort, making GPUs the more practical choice for now. The broader strategy reflects OpenAI's push for greater operational flexibility.