Toggle GPU Driver Script: Switching GPU Binding Live (vfio-pci ↔ nvidia)

A single NVIDIA GPU cannot:

  • Be passed through to a VM (via vfio-pci)
  • Be used on the host or in LXC at the same time

Why?

Because when you bind the GPU to vfio-pci at boot, it becomes invisible to the host and cannot be claimed by NVIDIA's kernel driver (nvidia.ko).
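
You can check which driver currently owns the card with lspci (the PCI address 0a:00.0 is the example used throughout this post; substitute your own):

lspci -nnk -s 0a:00.0
# "Kernel driver in use: vfio-pci" → reserved for VM passthrough
# "Kernel driver in use: nvidia"  → usable by the host / LXC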

Switch Between VM and LXC Use (Rebind on demand)

If you don’t need both at the same time, you can manually switch the GPU between:

  1. Passthrough to VM (bind to vfio-pci)
  2. Use on host / LXC (bind to nvidia)

This lets you:

  • Use the GPU for nvidia-smi or CUDA in an LXC container
  • Then give it back to the VM later

Here is a single script that checks which driver is currently bound to your GPU and automatically toggles between:

  • vfio-pci (for passthrough to VM)
  • nvidia (for use on host or LXC)
#!/bin/bash

# === CONFIGURATION ===
GPU="0000:0a:00.0"
AUDIO="0000:0a:00.1"
VMID=131         # Your Windows VM ID
LXCID=115        # Your LXC container ID using the GPU

# === FUNCTIONS ===

get_driver() {
    # Name of the driver currently bound to a PCI device (empty if unbound)
    basename "$(readlink /sys/bus/pci/devices/$1/driver 2>/dev/null)"
}

unbind_driver() {
    # Writing the device address to its driver's unbind file releases it;
    # skip silently if no driver is currently bound
    if [ -e "/sys/bus/pci/devices/$1/driver" ]; then
        echo "$1" > "/sys/bus/pci/devices/$1/driver/unbind"
    fi
}

bind_driver() {
    # vfio-pci has no static PCI ID table, so set driver_override first;
    # clear it after binding so future rebinds are not forced to this driver
    echo "$2" > "/sys/bus/pci/devices/$1/driver_override"
    echo "$1" > "/sys/bus/pci/drivers/$2/bind"
    echo "" > "/sys/bus/pci/devices/$1/driver_override"
}

switch_to_nvidia() {
    echo "→ Switching to NVIDIA driver (LXC use)..."

    echo "Stopping VM $VMID..."
    qm stop $VMID
    sleep 3

    echo "Unbinding GPU from current driver..."
    unbind_driver "$GPU"
    unbind_driver "$AUDIO"

    echo "Loading NVIDIA modules..."
    # -a loads every listed module; without it the extra names would be
    # treated as parameters for the first module
    modprobe -a nvidia nvidia_uvm nvidia_drm nvidia_modeset

    echo "Binding GPU to nvidia..."
    bind_driver "$GPU" nvidia
    bind_driver "$AUDIO" snd_hda_intel

    echo "Starting LXC container $LXCID..."
    pct start $LXCID

    echo "✔ Switched to NVIDIA mode."
}

switch_to_vfio() {
    echo "→ Switching to VFIO (VM passthrough)..."

    echo "Stopping LXC container $LXCID..."
    pct stop $LXCID
    sleep 3

    echo "Unbinding GPU from current driver..."
    unbind_driver "$GPU"
    unbind_driver "$AUDIO"

    echo "Loading VFIO modules..."
    modprobe vfio-pci

    echo "Binding GPU to vfio-pci..."
    bind_driver "$GPU" vfio-pci
    bind_driver "$AUDIO" vfio-pci

    echo "Starting VM $VMID..."
    qm start $VMID

    echo "✔ Switched to VFIO mode."
}

# === MAIN ===

MODE="$1"
CURRENT_DRIVER=$(get_driver "$GPU")
echo "Detected GPU driver: ${CURRENT_DRIVER:-none}"

case "$MODE" in
    --to-nvidia)
        switch_to_nvidia
        ;;
    --to-vfio)
        switch_to_vfio
        ;;
    "")
        if [ "$CURRENT_DRIVER" == "vfio-pci" ]; then
            switch_to_nvidia
        elif [ "$CURRENT_DRIVER" == "nvidia" ]; then
            switch_to_vfio
        elif [ -z "$CURRENT_DRIVER" ]; then
            echo "⚠️ No driver bound. Defaulting to NVIDIA..."
            switch_to_nvidia
        else
            echo "❌ Unknown driver bound: $CURRENT_DRIVER"
            exit 1
        fi
        ;;
    *)
        echo "Usage: $0 [--to-nvidia | --to-vfio]"
        exit 1
        ;;
esac

# === FINAL STATUS DISPLAY ===
echo
echo "🔍 Final GPU driver status:"
SHORT_GPU=$(echo "$GPU" | cut -d':' -f2-)
lspci -k | grep "$SHORT_GPU" -A 3

Auto-toggle based on current driver

./toggle-gpu.sh

Force switch to NVIDIA for LXC

./toggle-gpu.sh --to-nvidia 

Force switch to VFIO for VM passthrough

./toggle-gpu.sh --to-vfio 
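These examples assume the script was saved as toggle-gpu.sh; make it executable once before the first run:

chmod +x ./toggle-gpu.sh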

Step-by-Step NVIDIA Driver Installation for Proxmox Users

Use GPU in a VM (Passthrough only)

If you’re intentionally passing the GPU to a VM (e.g. Windows or Linux VM with GPU acceleration), then:

You should not install the NVIDIA driver on the Proxmox host.

Instead, install it inside the VM, and keep vfio-pci bound on the host.
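
For reference, the passthrough itself is a hostpci entry in the VM config (or the equivalent GUI setting). A minimal sketch, assuming the VM ID 131 and GPU address 0a:00 used by the toggle script above:

# /etc/pve/qemu-server/131.conf (excerpt)
hostpci0: 0000:0a:00,pcie=1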

Use GPU on the Proxmox Host or in LXC

Start by finding the correct NVIDIA driver:

https://www.nvidia.com/en-us/drivers
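
If you are unsure which model you have, you can identify the card from the host first:

lspci -nn | grep -i nvidia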

On the Proxmox host:
sudo apt update
sudo apt install pve-headers-$(uname -r) build-essential dkms

wget https://us.download.nvidia.com/XFree86/Linux-x86_64/570.144/NVIDIA-Linux-x86_64-570.144.run
chmod +x ./NVIDIA-Linux-x86_64-570.144.run

sudo ./NVIDIA-Linux-x86_64-570.144.run --dkms

nvidia-smi

Make the modules load at boot by adding them to /etc/modules-load.d/modules.conf:

nano /etc/modules-load.d/modules.conf

nvidia
nvidia_uvm
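
You can load the modules immediately instead of rebooting:

modprobe -a nvidia nvidia_uvm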
Check that the device nodes exist:

root@pve:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Apr 29 08:21 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Apr 29 08:21 /dev/nvidiactl
crw-rw-rw- 1 root root 511,   0 Apr 29 08:21 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511,   1 Apr 29 08:21 /dev/nvidia-uvm-tools

/dev/nvidia-caps:
total 0
drwxr-xr-x  2 root root     80 Apr 29 08:21 .
drwxr-xr-x 20 root root   4800 Apr 29 08:21 ..
cr--------  1 root root 236, 1 Apr 29 08:21 nvidia-cap1
cr--r--r--  1 root root 236, 2 Apr 29 08:21 nvidia-cap2
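
The first number on each line (195, 236 and 511 here) is the character-device major number that the cgroup rules below must allow. Since these can vary per system (and even across reboots, see the footnote), it is worth double-checking yours:

grep nvidia /proc/devices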
Expose the devices to the container by editing its config [1]:

nano /etc/pve/lxc/115.conf
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 236:* rwm
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps dev/nvidia-caps none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
Then push the installer into the container:

pct push 115 ./NVIDIA-Linux-x86_64-570.144.run /root/NVIDIA-Linux-x86_64-570.144.run

Inside the LXC container:

sh NVIDIA-Linux-x86_64-570.144.run --no-kernel-module
nvidia-smi

For Docker:

# Add the NVIDIA repository key
apt install -y gpg curl
mkdir -p /etc/apt/keyrings
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/3bf863cc.pub | gpg --dearmor -o /etc/apt/keyrings/nvidia-archive-keyring.gpg

# Add Nvidia repository
echo "deb [signed-by=/etc/apt/keyrings/nvidia-archive-keyring.gpg] https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/ /" | tee /etc/apt/sources.list.d/nvidia-cuda-debian12.list

# Update package lists
apt update

# Install Nvidia container toolkit
apt install nvidia-container-toolkit

nano /etc/docker/daemon.json

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

Alternatively, nvidia-ctk can write the runtime entry for you:

sudo nvidia-ctk runtime configure --runtime=docker

nano /etc/nvidia-container-runtime/config.toml

# Set no-cgroups to true (required in unprivileged LXC, where the runtime
# cannot manage device cgroups itself)
no-cgroups = true
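
Restart Docker so the runtime changes take effect:

systemctl restart docker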

For testing:

# Run a test Docker container to verify GPU usage
docker run --gpus all nvidia/cuda:12.6.1-base-ubuntu24.04 nvidia-smi

If needed, purge any old NVIDIA driver first:

sudo apt remove --purge '^nvidia-.*'
sudo apt autoremove

Sources:

https://yomis.blog/nvidia-gpu-in-proxmox-lxc

https://hostbor.com/gpu-passthrough-in-lxc-containers

  1. Be aware that the cgroup2 major number changed once after a full reboot (510 → 511). ↩︎

Access the GPU through Docker (alternative method)

Installing NVIDIA Container Runtime

curl -s -L https://nvidia.github.io/nvidia-container-runtime/gpgkey | \
  sudo apt-key add -
distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/nvidia-container-runtime/$distribution/nvidia-container-runtime.list | \
  sudo tee /etc/apt/sources.list.d/nvidia-container-runtime.list
sudo apt-get update
sudo apt-get install nvidia-container-runtime

Add it to the Docker runtimes:

sudo tee /etc/docker/daemon.json <<EOF
{
    "runtimes": {
        "nvidia": {
            "path": "/usr/bin/nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker.service

and/or override the service unit directly:

sudo systemctl edit docker.service

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
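
After editing the unit, restart Docker so the new ExecStart takes effect:

sudo systemctl restart docker.service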

Install the driver:

sudo apt install nvidia-utils-525-server
sudo apt install nvidia-driver-525 nvidia-dkms-525
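
One way to verify the setup, using the nvidia runtime registered above and the same CUDA test image as earlier:

docker run --rm --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all nvidia/cuda:12.6.1-base-ubuntu24.04 nvidia-smi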