Google Trained Gemini 3 Entirely Using JAX on Its TPUs: Here Is Why It Matters
Discover Google's edge in training Gemini 3 entirely on JAX and TPUs, and learn to build your first neural network using Google’s JAX AI stack.

Google truly has an edge in building AI.
Check out its wide range of models across different categories compared to those of other prominent players in the market.

It’s also the only company vertically integrated end-to-end in the AI value chain.
Be it foundation models (Gemini), applications (ImageFX, Search with Gemini, NotebookLM), cloud infrastructure (Google Cloud, Vertex AI), or hardware (TPUs), Google is ahead in all of them.

Google recently announced the release of Gemini 3, its most capable LLM to date. I don’t want to go into the benchmarks that show how good it is. Instead, I want to draw your attention to something far more important.

There’s an ecosystem you might not be aware of, since most media and consulting companies rarely touch on it. This tweet by Jeff Dean, Chief Scientist at Google DeepMind and Google Research, points you towards it.

Google trained Gemini 3 using their JAX software stack and their TPUs.

This isn’t new. Google has long been working to build out its in-house AI infrastructure and reduce its reliance on NVIDIA GPUs.

And they have been gradually building what is now the JAX AI stack.
This stack is used not just at Google but also by leading LLM providers such as Anthropic, xAI, and Apple (search for the keyword ‘JAX’ in each of these links).
Let’s talk about it in more detail.

What’s the JAX AI Stack?
The JAX AI stack is an end-to-end, open-source platform for machine learning at extreme scales.
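At the heart of the stack is the JAX library itself, whose core idea is composable function transformations: you write plain Python functions on arrays, and JAX can differentiate and JIT-compile them for CPUs, GPUs, or TPUs. A minimal sketch (the toy `loss` function here is just an illustration, not anything from Gemini's training code):

```python
import jax
import jax.numpy as jnp

# A plain Python function operating on arrays
# (a made-up toy loss, purely for illustration).
def loss(w):
    return jnp.sum((w * 2.0 - 1.0) ** 2)

# Composable transforms: jax.grad differentiates the function,
# jax.jit compiles it with XLA for the available accelerator.
grad_loss = jax.jit(jax.grad(loss))

w = jnp.array([0.0, 1.0])
print(grad_loss(w))  # gradient of loss with respect to w
```

The same code runs unchanged on a laptop CPU or a TPU pod slice; that hardware portability is a large part of why the stack scales.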
There are four core components of this stack, as described below.

