A deal between Meta and Google suggests a big change may be coming in the artificial intelligence (AI) infrastructure supply chain. Google Cloud data centres run on Google’s custom tensor processing units (TPUs). Meta has opted to rent TPU capacity in Google Cloud data centres from 2026, and to deploy TPUs in its own AI data centres from 2027. Currently, NVIDIA’s graphics processing units (GPUs) hold around 80 per cent of the AI compute market, with AMD a distant second, and Meta is NVIDIA’s biggest customer.
Until now, Google’s TPUs have been deployed only in Google’s own data centres, where they train its Gemini models and where Google rents out TPU time to cloud customers. The decision to sell the chips outright to Meta and other customers marks a major shift in strategy: Google will now challenge NVIDIA directly.
The stakes are high. NVIDIA’s revenues exceeded $165 billion over the twelve months to mid-2025, and its near-monopoly on high-end AI chips has made it the world’s most valuable company, with a market value of about $4.3 trillion.
Meta has earmarked capital expenditure of some $72 billion for AI infrastructure, and the Google deal may absorb a large share of that spending. The deal also validates Google’s hardware ambitions: Anthropic, which runs Claude, reportedly plans to deploy TPUs, and Apple trains its Apple Intelligence models on them. Google recently leveraged its high credit rating to guarantee a lease for customers TeraWulf and FluidStack, helping them secure long-term, low-cost financing for hundreds of megawatts of TPU-based AI capacity; in exchange, Google received warrants for 8 per cent of TeraWulf’s equity.
A single-supplier market changes dramatically once that supplier’s grip is broken. There is a long waiting list for NVIDIA’s GPUs, and its dominance lets it charge top dollar. Google is not alone: Amazon, with its line-up of custom Trainium chips, and Microsoft, which has a big stake in OpenAI, are both reportedly pushing custom silicon for faster, cheaper deployment in their own data centres.
Google’s TPUs are not a like-for-like replacement for GPUs. GPUs are more flexible, general-purpose accelerators; TPUs are specialised for the matrix mathematics of deep learning. But deep learning is foundational to large language models, and at that one crucial task TPUs may be more cost-effective.
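To see why specialising in matrix mathematics covers so much of the AI workload, consider that the workhorse of a neural network, a dense layer, reduces to a single matrix multiplication plus an activation. The sketch below is a generic NumPy illustration with made-up shapes, not Google's or Meta's code; hardware that accelerates this one operation, as a TPU's matrix units do, accelerates the bulk of model training and inference.

```python
import numpy as np

# A dense layer computes: outputs = activation(inputs @ weights + bias).
# Illustrative shapes only -- not drawn from any real model.
rng = np.random.default_rng(0)
batch, d_in, d_out = 32, 512, 256

x = rng.standard_normal((batch, d_in))   # a batch of input vectors
w = rng.standard_normal((d_in, d_out))   # learned weight matrix
b = np.zeros(d_out)                      # learned bias vector

y = np.maximum(x @ w + b, 0.0)           # matrix multiply + ReLU activation
print(y.shape)                           # one layer's output for the batch
```

Stacking many such layers is, computationally, stacking many such multiplications, which is why a chip built around them can win on cost per operation even while losing on flexibility.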
Google has a long AI history: Search, Maps and Translate have used AI for years. Google also owns DeepMind, whose researchers shared the 2024 Nobel Prize in Chemistry after their AlphaFold model revolutionised the prediction of protein folding.
NVIDIA claims, “NVIDIA is the only platform that runs every AI model and does it everywhere.” This is true, but Google’s Gemini 3, which was developed on TPUs, is a strong proof point for the hardware: the model is said to beat OpenAI’s ChatGPT and Anthropic’s Claude in some areas.
The stock market responded sharply to the Meta-Google deal. NVIDIA stock was sold down by 2 per cent in the first week, while Alphabet (Google’s parent) stock was bid up by 8 per cent. Some analysts reckon TPUs could quickly capture up to 10 per cent of the AI compute market.
Competition is always a good thing. This could be a boon for downstream users, lowering compute costs for hyperscalers and data centres and, by extension, for telecom-linked sectors generally.