
OpenAI Adopts Google AI Chips to Reduce Costs and Decrease Dependence on Microsoft and Nvidia, According to Reports


OpenAI is reportedly beginning to use Google’s tensor processing units (TPUs), aiming to cut costs and lessen its reliance on partners Microsoft and Nvidia. The shift, first reported by Reuters, lets OpenAI run AI applications such as ChatGPT on Google Cloud. By using TPUs, OpenAI hopes to lower the cost of inference, a key expense in scaling AI products. The move marks a significant departure from Nvidia’s GPUs, which OpenAI has traditionally used for both training and inference.

The deal signals a surprising collaboration between two AI competitors, as Google expands TPU access to external clients. Google is, however, reportedly withholding its most advanced TPUs from OpenAI to preserve its own competitive edge.

Tensions between OpenAI and Microsoft have also surfaced, with reported discussions of potential antitrust action prompting negotiations over revenue sharing and operational control. OpenAI plans to cut the share of revenue it pays Microsoft from 20% to 10% by 2030, further complicating the relationship as the AI landscape evolves.
