
PyTorch CUDA Version Compatibility Matrix

Complete PyTorch CUDA compatibility matrix. Know which CUDA toolkit, NVIDIA driver, and cuDNN versions work with each PyTorch release on your GPU server.

Why Version Compatibility Matters

You have a GPU server with NVIDIA driver 535 and you want to install PyTorch 2.3. Will it work? The answer depends on a chain of version requirements connecting PyTorch, CUDA toolkit, cuDNN, and the NVIDIA driver. Getting any link wrong produces silent failures or hard crashes.

This reference page gives you the exact version combinations that work, saving you from the trial-and-error cycle that plagues most PyTorch GPU server setups.

PyTorch CUDA Compatibility Matrix

Each PyTorch release is compiled against specific CUDA versions. You must install the matching wheel:

PyTorch 2.4   → CUDA 12.4, CUDA 12.1, CUDA 11.8
PyTorch 2.3   → CUDA 12.1, CUDA 11.8
PyTorch 2.2   → CUDA 12.1, CUDA 11.8
PyTorch 2.1   → CUDA 12.1, CUDA 11.8
PyTorch 2.0   → CUDA 11.8, CUDA 11.7
PyTorch 1.13  → CUDA 11.7, CUDA 11.6
PyTorch 1.12  → CUDA 11.6, CUDA 11.3
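The matrix above is easy to encode as a lookup table. The sketch below transcribes it into a small helper (WHEEL_TAGS, newest_wheel, and index_url are illustrative names, not part of any PyTorch API); the index URL pattern matches the install commands shown later in this guide:

```python
# Supported CUDA wheel tags per PyTorch release, newest first,
# transcribed from the compatibility matrix above.
WHEEL_TAGS = {
    "2.4":  ["cu124", "cu121", "cu118"],
    "2.3":  ["cu121", "cu118"],
    "2.2":  ["cu121", "cu118"],
    "2.1":  ["cu121", "cu118"],
    "2.0":  ["cu118", "cu117"],
    "1.13": ["cu117", "cu116"],
    "1.12": ["cu116", "cu113"],
}

def newest_wheel(pytorch_version: str) -> str:
    """Return the newest CUDA wheel tag built for a PyTorch release."""
    return WHEEL_TAGS[pytorch_version][0]

def index_url(pytorch_version: str) -> str:
    """Build the pip --index-url for that release's newest CUDA wheel."""
    return "https://download.pytorch.org/whl/" + newest_wheel(pytorch_version)
```

For example, index_url("2.3") yields https://download.pytorch.org/whl/cu121, the same URL used in the install commands below.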

Minimum NVIDIA Driver for Each CUDA Version

The NVIDIA driver must support the CUDA version that PyTorch was compiled against. The CUDA version reported by nvidia-smi is the newest runtime the installed driver supports; it must be greater than or equal to PyTorch’s compiled CUDA version:

CUDA 12.4  → Driver ≥ 550.54
CUDA 12.3  → Driver ≥ 545.23
CUDA 12.2  → Driver ≥ 535.86
CUDA 12.1  → Driver ≥ 530.30
CUDA 12.0  → Driver ≥ 525.60
CUDA 11.8  → Driver ≥ 520.61
CUDA 11.7  → Driver ≥ 515.43

To check your current driver version:

nvidia-smi | head -3
# Look for "Driver Version: XXX.XX"

# Or query it directly:
nvidia-smi --query-gpu=driver_version --format=csv,noheader
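The driver check can also be automated. A minimal sketch, assuming the minimums in the table above (parse_driver_version and best_wheel are hypothetical helpers, and the MIN_DRIVER table only covers the three wheel tags PyTorch currently publishes):

```python
import re

# Minimum driver (major, minor) per CUDA wheel tag, from the table above.
MIN_DRIVER = {"cu124": (550, 54), "cu121": (530, 30), "cu118": (520, 61)}

def parse_driver_version(smi_output: str) -> tuple:
    """Extract (major, minor) from nvidia-smi's 'Driver Version: 535.183.01' line."""
    m = re.search(r"Driver Version:\s*(\d+)\.(\d+)", smi_output)
    if m is None:
        raise RuntimeError("no NVIDIA driver version found in nvidia-smi output")
    return int(m.group(1)), int(m.group(2))

def best_wheel(driver: tuple) -> str:
    """Pick the newest CUDA wheel tag the driver can run, newest first."""
    for tag in ("cu124", "cu121", "cu118"):
        if driver >= MIN_DRIVER[tag]:
            return tag
    raise RuntimeError("driver too old for any currently published wheel")

# On a live server you would feed it real output, e.g.:
#   import subprocess
#   out = subprocess.run(["nvidia-smi"], capture_output=True, text=True).stdout
#   print(best_wheel(parse_driver_version(out)))
```

With driver 535.183, for instance, best_wheel returns cu121: the driver is too old for the cu124 wheel but comfortably above the 530.30 minimum for CUDA 12.1.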

Installing the Correct Combination

Once you know your driver version, pick the highest compatible CUDA build of PyTorch:

# For CUDA 12.4 (Driver 550+)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124

# For CUDA 12.1 (Driver 530+)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# For CUDA 11.8 (Driver 520+)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118

Using the wrong index URL is the most common mistake. The default pip install torch without an index URL may install a CPU-only build, which makes the GPU invisible even though the driver and hardware are fine.
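One way to spot a mismatched wheel is the local version suffix: wheels from the download.pytorch.org indexes carry tags like 2.3.1+cu121 or 2.3.1+cpu, while conda builds and some PyPI wheels omit the suffix entirely. A small sketch under that assumption (build_flavor is an illustrative helper, not a PyTorch function):

```python
def build_flavor(torch_version: str) -> str:
    """Classify a PyTorch build by its local version suffix, e.g. '2.3.1+cu121'."""
    if "+cpu" in torch_version:
        return "cpu-only"
    if "+cu" in torch_version:
        return "cuda"
    # Conda builds and some wheels carry no suffix; fall back to
    # checking torch.version.cuda at runtime instead.
    return "unknown"

# Usage on a live install:
#   import torch
#   print(build_flavor(torch.__version__))
```

When the suffix is absent, the verification script in the next section is the authoritative check: torch.version.cuda is None on any CPU-only build.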

Verification Script

Run this after installation to confirm all components agree:

import torch
import sys

print(f"Python:       {sys.version.split()[0]}")
print(f"PyTorch:      {torch.__version__}")
print(f"CUDA (torch): {torch.version.cuda}")
print(f"cuDNN:        {torch.backends.cudnn.version()}")
print(f"GPU avail:    {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU name:     {torch.cuda.get_device_name(0)}")
    print(f"GPU count:    {torch.cuda.device_count()}")
    cap = torch.cuda.get_device_capability(0)
    print(f"Compute cap:  {cap[0]}.{cap[1]}")

Every line should produce a sensible value. If CUDA (torch) shows None, you have a CPU-only build. If GPU avail is False despite a valid CUDA version, the driver is too old — check the matrix above.

Common Pitfalls

  • Mixing conda and pip. Installing PyTorch via conda and then pip-installing updates can create version conflicts. Pick one package manager and stick with it.
  • Docker image mismatch. The CUDA version inside a Docker container must be compatible with the host driver. The container does not need the driver installed, but its CUDA runtime version must not exceed the host driver’s capability.
  • Virtual environment shadows. A system-wide PyTorch install can mask a virtualenv install. Always verify with pip show torch | grep Location.
  • Multiple CUDA toolkits. Having both /usr/local/cuda-11.8 and /usr/local/cuda-12.4 is fine — the PATH and LD_LIBRARY_PATH determine which one is active. PyTorch ships its own CUDA runtime, so the system toolkit version matters less than the PyTorch build.
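The virtual-environment pitfall above can also be checked from Python itself. A minimal sketch (in_virtualenv is an illustrative helper built on the standard sys.prefix/sys.base_prefix convention):

```python
import sys

def in_virtualenv() -> bool:
    """True when running inside a venv: sys.prefix differs from the base interpreter's."""
    return sys.prefix != getattr(sys, "base_prefix", sys.prefix)

# After importing torch, torch.__file__ reveals which install actually loaded:
#   import torch
#   print(torch.__file__)
# If that path sits outside sys.prefix while in_virtualenv() is True,
# a system-wide install is shadowing the virtualenv copy.
```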

Cross-Framework Notes

If your dedicated server runs multiple frameworks:

  • TensorFlow has its own CUDA requirements that differ from PyTorch. TensorFlow 2.16+ requires CUDA 12.3.
  • vLLM typically tracks PyTorch’s CUDA version since it depends on PyTorch.
  • Stable Diffusion UIs like ComfyUI and A1111 bundle their own PyTorch, which may differ from your system install.
  • For maximum compatibility, use Docker containers with each workload pinned to its tested combination.

Refer to our CUDA installation guide when setting up a new server, and our tutorials section for framework-specific setup walkthroughs.

Pre-Matched GPU Stacks

GigaGPU servers ship with driver, CUDA, and cuDNN versions that are tested together. Install PyTorch and start building.

Browse GPU Servers
