
Experience the NVIDIA® H200 NVL Tensor Core GPU

Experience firsthand the new Colfax CX51060s‑X6H2, equipped with 4x NVIDIA® H200 NVL Tensor Core GPUs, and supercharge your AI and HPC workloads today.

Best of all, it’s free.

AI Acceleration for Mainstream Enterprise Servers with H200 NVL

NVIDIA H200 NVL is ideal for lower-power, air-cooled enterprise rack designs that require flexible configurations, delivering acceleration for every AI and HPC workload regardless of size. With up to four GPUs connected by NVIDIA NVLink™ and a 1.5X memory increase, large language model (LLM) inference can be accelerated by up to 1.7X, and HPC applications achieve up to 1.3X more performance, compared with the H100 NVL.

Enterprise‑Ready: AI Software Streamlines Development and Deployment

NVIDIA H200 NVL comes with a five-year NVIDIA AI Enterprise subscription and simplifies the way you build an enterprise AI-ready platform. H200 accelerates AI development and deployment for production-ready generative AI solutions, including computer vision, speech AI, retrieval-augmented generation (RAG), and more. NVIDIA AI Enterprise includes NVIDIA NIM™, a set of easy-to-use microservices designed to speed up enterprise generative AI deployment. Deployments gain enterprise-grade security, manageability, stability, and support, resulting in performance-optimized AI solutions that deliver faster business value and actionable insights.
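NIM microservices expose an OpenAI-compatible HTTP API, so an LLM deployed on the test-drive system can be queried with standard tooling. The sketch below builds such a chat-completion request; the endpoint URL and model name are assumptions for illustration, not details from this page.

```python
import json

# Hypothetical endpoint and model id for a locally running NIM container;
# NIM serves an OpenAI-compatible API, but these exact values are assumptions.
NIM_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "meta/llama-3.1-8b-instruct"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Summarize the benefits of RAG in one sentence.")
print(json.dumps(payload, indent=2))

# Against a running NIM container, the payload could be POSTed like this
# (not executed here, since it requires a live server):
# import urllib.request
# req = urllib.request.Request(NIM_URL, data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

Because the API follows the OpenAI schema, existing client libraries and scripts typically work unchanged once pointed at the NIM URL.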

Meet Your Test Drive System

Colfax CX51060s‑X6H2

  • 2x Intel® Xeon® 6900 Series Processors with P‑cores
  • Up to 10x NVIDIA H200 NVL PCIe GPUs
  • 24x DDR5 DIMM Sockets
  • 24x 2.5″ PCIe 5.0 NVMe Drive Bays
  • NVIDIA CUDA® Toolkit 12.8
  • Ubuntu 24.04 LTS
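Once signed in to the test-drive system, a quick way to confirm the installed H200 NVL GPUs are visible is to parse the CSV output of `nvidia-smi`. A minimal sketch follows; the sample output and its memory figures are illustrative assumptions, and `list_gpus()` assumes the NVIDIA driver tools are on the PATH.

```python
import csv
import io
import subprocess

def parse_gpu_csv(text: str) -> list[dict]:
    """Parse `nvidia-smi --query-gpu=... --format=csv` output into row dicts."""
    reader = csv.reader(io.StringIO(text))
    header = [h.strip() for h in next(reader)]
    return [dict(zip(header, (v.strip() for v in row))) for row in reader if row]

def list_gpus() -> list[dict]:
    """Query the local GPUs; requires nvidia-smi from the NVIDIA driver."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_gpu_csv(out)

# Sample output as it might appear on a 4-GPU configuration
# (illustrative values, not measured on this system):
sample = """name, memory.total [MiB]
NVIDIA H200 NVL, 143771 MiB
NVIDIA H200 NVL, 143771 MiB
NVIDIA H200 NVL, 143771 MiB
NVIDIA H200 NVL, 143771 MiB
"""
gpus = parse_gpu_csv(sample)
print(f"{len(gpus)} GPUs: {gpus[0]['name']}")  # → 4 GPUs: NVIDIA H200 NVL
```

On the live system, calling `list_gpus()` instead of parsing the sample string would report the actual installed GPUs.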

Intel® Xeon® 6900 series processor with P‑cores

With more cores, larger cache sizes, increased memory bandwidth as the first server processor to support Multiplexed Rank DIMMs (MRDIMMs), and more PCIe 5.0 lanes, Intel® Xeon® 6 processors with P-cores are uniquely architected to deliver significant performance gains in key compute-intensive workloads such as AI, HPC, and databases.

Xeon® 6 is also the foundational central processing unit (CPU) for AI systems, pairing exceptionally well with GPUs as a host-node CPU.

NVIDIA Hopper Architecture

The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities. As the first GPU with HBM3e, the H200’s larger and faster memory fuels the acceleration of generative AI and large language models (LLMs) while advancing scientific computing for HPC workloads.