OpenAI has confirmed it has no plans to use Google’s Tensor Processing Units (TPUs) in its products, despite reports suggesting otherwise. While OpenAI is running early tests with TPUs, it does not intend to deploy them at scale, citing the complexity of large-scale integration. The company continues to rely on Nvidia GPUs and AMD chips to meet its computing needs and has partnered with Google Cloud to expand its infrastructure. Even so, Nvidia GPU capacity from CoreWeave still supplies the majority of OpenAI’s computing power.

As competition in AI accelerates, OpenAI is developing its own custom AI processor, with a critical manufacturing milestone expected later this year. Taiwanese media report that the in-house chip, developed with Broadcom and TSMC, could launch by Q4 2025. As demand for AI chips grows, companies such as Google are also opening access to their TPUs for external clients, intensifying competition across the industry.