Google is developing an internal software project, code-named “TorchTPU,” to make its Tensor Processing Units (TPUs) more compatible with PyTorch, the open-source deep learning framework that Reuters described as the framework most AI developers use.

PyTorch is an open-source machine learning framework built to move from research prototyping to production deployment. TPUs are Google’s custom-developed, application-specific integrated circuits (ASICs) designed to accelerate machine-learning workloads, and Cloud TPU is the Google Cloud service that makes them available as scalable compute.

A Google Cloud spokesperson told Reuters the company is seeing “massive, accelerating demand” for both TPU and GPU infrastructure and is focused on giving developers flexibility “regardless of the hardware they choose.”

The effort targets a software layer that Reuters said has helped Nvidia maintain an advantage in AI workloads. Nvidia’s documentation describes CUDA as a “parallel computing platform and programming model” that boosts performance by harnessing the power of the GPU.

Reuters reported that PyTorch’s history has been closely tied to CUDA, with Nvidia engineers working to ensure PyTorch runs efficiently on Nvidia chips, leaving CUDA deeply embedded in how many PyTorch environments are optimized.
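That embedding shows up in everyday PyTorch code, where device selection typically checks for CUDA first. The snippet below is an illustrative sketch of that common idiom, not code from Google’s or Nvidia’s projects:

```python
import torch

# The usual device-selection idiom in PyTorch codebases:
# CUDA is the first (and often the only) accelerator checked for.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
logits = model(x)  # executes on the GPU whenever CUDA is present
```

Code written this way silently falls back to the CPU on non-Nvidia hardware, which is part of why alternative accelerators need their own integration layer to feel native.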

Google’s TPU path has been shaped by a different default stack. Reuters said Google has historically tuned TPUs around JAX, a Python library used heavily inside Google, rather than the PyTorch tooling that many external customers standardize on.

Google’s own developer messaging has reinforced that approach: a Nov. 19 Google Developers Blog post introduced a “JAX AI Stack,” which Google described as a modular, production-ready platform built on JAX and co-designed with Cloud TPUs for scalable machine learning workloads.
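For context on what that stack looks like in practice, JAX programs are written as pure, NumPy-style Python functions that are then transformed: compiled through XLA, differentiated, and so on. The toy loss function below is purely illustrative:

```python
import jax
import jax.numpy as jnp

def loss(w, x, y):
    # Squared-error loss written in JAX's NumPy-style API.
    pred = jnp.dot(x, w)
    return jnp.mean((pred - y) ** 2)

# JAX's core idea: composable function transformations.
grad_fn = jax.jit(jax.grad(loss))  # differentiate, then compile via XLA

w = jnp.zeros(3)
x = jnp.ones((8, 3))
y = jnp.ones(8)
print(grad_fn(w, x, y))  # the same code runs on CPU, GPU, or TPU backends
```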

PyTorch can already run on TPUs through PyTorch/XLA (torch_xla). The official PyTorch/XLA documentation says developers can create and train PyTorch models on TPUs with “only minimal changes required.”
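The documented single-device pattern looks like ordinary PyTorch with an XLA device swapped in. The sketch below follows the pattern in the PyTorch/XLA docs; the model and batch are placeholders, and none of this is the TorchTPU API:

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # the TPU-backed XLA device, analogous to "cuda"

model = torch.nn.Linear(128, 10).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = torch.nn.CrossEntropyLoss()

# Placeholder batch; a real job would iterate over a DataLoader.
x = torch.randn(32, 128).to(device)
target = torch.randint(0, 10, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(x), target)
loss.backward()
optimizer.step()
xm.mark_step()  # materializes the lazily recorded XLA graph on the device
```

Because XLA tensors are evaluated lazily and compiled at step boundaries, code with dynamic shapes or frequent host synchronization can trigger repeated recompilation, which is one concrete way such ports can demand extra engineering work.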

Reuters, however, reported that earlier TPU efforts still left many teams doing significant extra engineering work, and said TorchTPU is intended to make TPUs more compatible and developer-friendly for organizations already built around PyTorch. Reuters also reported Google is working closely with Meta, a key PyTorch backer, and is considering open-sourcing parts of TorchTPU to speed adoption.

The timing also coincides with Google Cloud’s push for larger enterprise commitments. In remarks tied to Alphabet’s Q3 2025 results, CEO Sundar Pichai said Google Cloud signed more deals worth over $1 billion in the first nine months of 2025 than in the previous two years combined.

Reuters added that Google began materially expanding external access to TPUs after 2022, when Google Cloud gained greater oversight of TPU sales, and that Google is now also selling TPUs directly for customer data center deployments.

Large TPU commitments are already becoming public. Anthropic said in October it plans to expand its use of Google Cloud technologies, including access to up to one million TPUs, bringing “well over a gigawatt” of capacity online in 2026. Google Cloud CEO Thomas Kurian cited TPUs’ “price-performance and efficiency.”