# CUDA Setup in WSL2

## How GPU Access Works in WSL2
WSL2 uses GPU Paravirtualization (GPU-PV), not PCI passthrough. The Windows
NVIDIA driver handles all GPU operations. WSL2 communicates with it via a
dxgkrnl kernel module over VMBus.
```
WSL2 app --> libcuda.so (stub at /usr/lib/wsl/lib/) --> dxgkrnl --> VMBus --> Windows NVIDIA driver --> GPU
```
## What's Already There (No Action Needed)
Windows automatically mounts paravirtualized driver libraries into WSL2 at
/usr/lib/wsl/lib/:
- `libcuda.so`, `libcuda.so.1`
- `libnvidia-ml.so`, `libnvidia-ml.so.1`
- `libnvidia-encode.so`, `libnvidia-opticalflow.so`
- `nvidia-smi`
- `libdxcore.so`, `libd3d12.so`
Verify GPU access:
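The mounted `nvidia-smi` stub is the simplest check (this assumes your Windows NVIDIA driver is recent enough to support WSL2 GPU-PV):

```shell
# Run the Windows-provided nvidia-smi stub from inside WSL2.
# It should list the GPU, driver version, and supported CUDA version.
nvidia-smi

# Confirm the paravirtualized stub libraries are mounted:
ls -l /usr/lib/wsl/lib/libcuda.so*
```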
## What to Install (CUDA Toolkit for Compilation)
If you need the CUDA libraries (cuDNN, cuBLAS) or want to compile CUDA code, install the toolkit from NVIDIA's WSL-specific repository:
```shell
# Add NVIDIA's WSL-specific repo
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.6.0/local_installers/cuda-repo-wsl-ubuntu-12-6-local_12.6.0-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-12-6-local_12.6.0-1_amd64.deb
sudo cp /var/cuda-repo-wsl-ubuntu-12-6-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update

# ONLY install the toolkit -- NOT the full cuda metapackage
sudo apt-get install -y cuda-toolkit-12-6
```
Add to ~/.bashrc:
```shell
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/lib/wsl/lib:$LD_LIBRARY_PATH"
```
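After editing, reload the shell configuration and confirm the toolkit is picked up:

```shell
source ~/.bashrc
which nvcc                # expected: /usr/local/cuda/bin/nvcc
echo "$LD_LIBRARY_PATH"   # should include /usr/local/cuda/lib64 and /usr/lib/wsl/lib
```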
## What NOT to Install (CRITICAL)
NEVER install these inside WSL2:
| Package | Why |
|---|---|
| `cuda` metapackage | Includes the Linux GPU driver, breaks GPU-PV |
| `cuda-12-x` metapackage | Same -- includes the driver |
| `cuda-drivers` | This IS the Linux GPU driver |
| `nvidia-driver-*` | Overwrites the `/usr/lib/wsl/lib` stubs |
| Ubuntu's `nvidia-cuda-toolkit` | May pull in driver dependencies |
Installing any Linux NVIDIA driver overwrites the Windows-provided stub libraries and breaks GPU paravirtualization entirely.
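A quick way to audit for an accidentally installed driver (the purge command is a sketch -- double-check the matched package names on your system before running it):

```shell
# List any Linux NVIDIA driver packages that may have slipped in:
dpkg -l | grep -E '^ii\s+(nvidia-driver|cuda-drivers)' || echo "no Linux driver packages found"

# If any show up, purge them, then run `wsl --shutdown` from Windows;
# the stub libraries are re-mounted on the next WSL2 start:
# sudo apt-get purge 'nvidia-driver-*' cuda-drivers
```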
## Verify CUDA
```shell
nvcc --version   # CUDA compiler version
nvidia-smi       # GPU info (partial -- see limitations.md)
python3 -c "import torch; print(torch.cuda.is_available())"  # if PyTorch installed
```
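A minimal end-to-end smoke test, assuming `nvcc` is on your PATH: compile and run a tiny program that counts CUDA devices through the paravirtualized driver (`devcount.cu` is a throwaway filename):

```shell
cat > devcount.cu <<'EOF'
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n = 0;
    // Goes through the stub libcuda.so -> dxgkrnl -> Windows driver path
    cudaError_t err = cudaGetDeviceCount(&n);
    if (err != cudaSuccess) {
        std::printf("CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    std::printf("CUDA devices visible: %d\n", n);
    return 0;
}
EOF
nvcc devcount.cu -o devcount && ./devcount
```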
## RTX 4060 Specifics
- Architecture: Ada Lovelace (compute capability 8.9)
- Fully supported by CUDA 12.x
- No special configuration needed beyond standard setup
- 8 GB VRAM (slightly less available in practice due to Windows desktop compositing overhead)
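When compiling, you can target the card's architecture explicitly (`sm_89` corresponds to compute capability 8.9; `kernel.cu` is a placeholder filename):

```shell
nvcc -arch=sm_89 kernel.cu -o kernel
# or embed both SASS and PTX for forward compatibility:
nvcc -gencode arch=compute_89,code=sm_89 kernel.cu -o kernel
```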
## cuDNN Installation (for deep learning)
```shell
# Download the cuDNN .deb from https://developer.nvidia.com/cudnn
# (requires an NVIDIA Developer account)
sudo dpkg -i cudnn-local-repo-ubuntu2204-9.x.x.x_1.0-1_amd64.deb
sudo cp /var/cudnn-local-repo-*/cudnn-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get install -y libcudnn9-cuda-12 libcudnn9-dev-cuda-12
```
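To confirm the cuDNN libraries are visible to the dynamic linker (the PyTorch one-liner is optional and assumes a CUDA build of PyTorch):

```shell
ldconfig -p | grep libcudnn    # expect libcudnn.so.9 entries
# Or, if PyTorch is installed:
python3 -c "import torch; print(torch.backends.cudnn.version())"
```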