NVIDIA and Google have deepened their long-running collaboration with new developments centered on the Gemini family of AI models and the introduction of NVIDIA’s Blackwell platform. The partnership focuses on tighter software and hardware integration across Google Cloud services, including Vertex AI, Google Kubernetes Engine (GKE), and Cloud Run. Key advancements include improved support for community-developed tools such as JAX and OpenXLA, along with optimizations to NVIDIA software such as NeMo and TensorRT-LLM. Google Cloud now offers NVIDIA HGX B200 and GB200 NVL72 hardware in its A4 and A4X virtual machines, respectively, which are purpose-built for high-scale AI workloads through advanced networking and cooling technologies.
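
To make the JAX and OpenXLA support concrete, here is a minimal sketch that shards a matrix multiply across the GPUs of a single VM using JAX's named-sharding API. The mesh layout, tensor sizes, and the assumption of an eight-GPU A4-class machine are illustrative choices for the example, not details from the announcement.

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Build a 1-D device mesh over whatever accelerators are visible
# (e.g. the eight GPUs in a single A4-class VM).
devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("data",))

# Illustrative tensors; sizes are arbitrary but divisible by the device count.
x = jnp.ones((8192, 4096), dtype=jnp.bfloat16)
w = jnp.ones((4096, 4096), dtype=jnp.bfloat16)

# Shard the batch dimension of x across the mesh; replicate w on every device.
x = jax.device_put(x, NamedSharding(mesh, P("data", None)))
w = jax.device_put(w, NamedSharding(mesh, P()))

@jax.jit
def forward(x, w):
    # XLA compiles this into a partitioned computation across the mesh.
    return jnp.dot(x, w)

y = forward(x, w)
print(y.shape, y.sharding)
```

The same program runs unchanged on one device or many; XLA's partitioner inserts the per-device communication, which is the kind of workload the improved JAX/OpenXLA support on NVIDIA GPUs is aimed at.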
With the integration of Blackwell into Google Distributed Cloud, organizations with strict data regulations, such as those in healthcare and finance, can now deploy Gemini models in private, secure environments. The Blackwell platform supports confidential computing, helping users maintain control over sensitive data while using Gemini models for complex reasoning and multimodal tasks. Additionally, the Gemma family of smaller, open models has been optimized for efficient inference and flexible deployment. As part of their shared commitment to developers, NVIDIA and Google have also introduced a joint developer community to promote knowledge exchange and streamline the development and scaling of AI solutions across diverse computing environments.
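
As a rough illustration of that deployment flexibility, the sketch below loads one of the smaller, open Gemma checkpoints on a local NVIDIA GPU through the Hugging Face Transformers library. The specific model ID, precision, and single-GPU setup are assumptions made for the example rather than details from the announcement.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed checkpoint: a small instruction-tuned Gemma model on Hugging Face.
model_id = "google/gemma-2b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # reduced precision for lighter, faster inference
    device_map="auto",            # place the weights on the available GPU(s)
)

prompt = "In one sentence, why does confidential computing matter for regulated industries?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

In a production setting on Google Cloud, the same weights would more likely sit behind Vertex AI, GKE, or Cloud Run than a standalone script, but the local flow shows how compactly the open models can be run on NVIDIA hardware.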