OpenAI Turns to Google TPUs: Can Enemies Become Friends?
OpenAI begins leasing Google TPUs to power ChatGPT, signaling a move away from Nvidia's dominance, an unlikely collaboration with a major rival, and a broader shift in AI infrastructure dynamics.

According to reports from Reuters and other outlets, a source familiar with the matter revealed that OpenAI has recently started renting Google's AI chips to power ChatGPT and other products.
OpenAI is currently one of Nvidia's largest customers; its GPUs are essential for both training and inference of large AI models.
It appears that OpenAI is not only trying to distance itself from Microsoft but is also beginning to move away from Nvidia.
But collaborating with Google? That’s quite surprising, given that Google’s Gemini series models make it one of OpenAI’s most direct and formidable competitors.

If you consider that OpenAI previously hired Richard Ho, formerly a Senior Engineering Director for Google Cloud TPU, as its hardware lead, and that OpenAI is rumored to be pursuing its own AI chip project, this collaboration becomes even more astonishing.

Richard Ho spent nearly nine years at Google, where he worked on the development of the TPU series and served as a Senior Engineering Director. He later joined Lightmatter as a VP, and in 2023 he moved to OpenAI.
Why would OpenAI choose this route?
One reason is the rapid growth of OpenAI's user base (recently reported at 3 million paying enterprise users), which has caused a severe GPU shortage. To keep ChatGPT's inference capacity unaffected, OpenAI needs alternative suppliers.
Another reason may be reducing dependence on Microsoft, a recent priority for OpenAI; the two companies have also had some disagreements lately.

Summary of recent reports by @ns123abc on X
This is reportedly the first time OpenAI has made meaningful use of non-Nvidia chips, a move that could establish TPUs as a cheaper alternative to Nvidia GPUs.
As for the specifics, The Information reports that OpenAI hopes to rent TPUs through Google Cloud, though Google employees have indicated that, because the two companies compete directly in the AI race, Google will not rent its most powerful TPUs to OpenAI.

Google Cloud TPU pricing
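What makes the rental arrangement above practical is that modern ML frameworks abstract over the accelerator. As a minimal, hypothetical sketch (this is not OpenAI's actual stack, and the layer shapes are illustrative), a JAX program runs the same matmul-heavy inference code on whatever device the runtime exposes, whether a Cloud TPU, a GPU, or a plain CPU:

```python
# Illustrative only: shows how a JAX workload targets TPUs when available
# (e.g. rented on a Cloud TPU VM) and falls back to GPU/CPU otherwise.
import jax
import jax.numpy as jnp

# jax.default_backend() reports what the runtime found:
# "tpu" on a Cloud TPU VM, otherwise "gpu" or "cpu".
backend = jax.default_backend()
print(f"Running on: {backend} ({len(jax.devices())} device(s))")

@jax.jit  # XLA compiles this once per input shape, for any backend
def dense_layer(x, w, b):
    # A single dense layer: the matmul-heavy op that dominates
    # large-model inference cost.
    return jax.nn.relu(x @ w + b)

key = jax.random.PRNGKey(0)
x = jax.random.normal(key, (8, 512))          # small batch of activations
w = jax.random.normal(key, (512, 256)) * 0.02  # toy weights
b = jnp.zeros((256,))

y = dense_layer(x, w, b)
print(y.shape)  # (8, 256)
```

Because nothing in the model code names a device, moving inference from Nvidia GPUs to rented TPUs is largely an infrastructure decision rather than a rewrite, which is part of what makes TPUs a credible drop-in alternative.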
What does this mean for Google?
Google is currently expanding access to its Tensor Processing Units (TPUs) and has already gained clients like Apple, Anthropic, and Safe Superintelligence.
For years, AI model training and inference relied almost exclusively on Nvidia GPUs. Now one of the world's leading AI companies is buying Google's TPUs, a significant shift: it commercializes chips Google long reserved for internal use and gives them a heavyweight endorsement. That could boost Google's standing in the high-end AI cloud market and attract more large-model companies to migrate, and it suggests that TPU performance, stability, and ecosystem tooling have met OpenAI's high bar.

Google has also released Ironwood, its seventh-generation TPU; see the report: 42.5 Exaflops: Google’s new TPU outperforms the strongest supercomputers by 24 times, and the A2A collaboration protocol is released
This also sends a clear market signal: AI infrastructure is no longer exclusive to Nvidia, and diversification is beginning to take hold.
References
https://www.theinformation.com/articles/google-convinces-openai-use-tpu-chips-win-nvidia
https://www.reuters.com/business/openai-turns-googles-ai-chips-power-its-products-information-reports-2025-06-27/