Switching GPU Binding (Live): Toggle GPU Driver Script (vfio-pci ↔ nvidia)

A single NVIDIA GPU cannot:

  • Be passed through to a VM (via vfio-pci)
  • And be used on the host or in LXC at the same time

Why?

Because when the GPU is bound to vfio-pci at boot, it is invisible to the host and cannot be claimed by NVIDIA's kernel driver (nvidia.ko).
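
A quick way to confirm which driver currently owns the device (assuming the GPU sits at 0000:0a:00.0, as in the script below):

lspci -nnk -s 0a:00.0

The "Kernel driver in use:" line of the output will read vfio-pci in passthrough mode, or nvidia in host/LXC mode.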

Switch Between VM and LXC Use (Rebind on demand)

If you don’t need both at the same time, you can manually switch the GPU between:

  1. Passthrough to VM (bind to vfio-pci)
  2. Use on host / LXC (bind to nvidia)

This lets you:

  • Use the GPU for nvidia-smi or CUDA in an LXC container
  • Then later give it back to the VM

Here’s a single script that checks which driver is currently bound to your GPU and automatically toggles between:

  • vfio-pci (for passthrough to VM)
  • nvidia (for use on host or LXC)
#!/bin/bash

# === CONFIGURATION ===
GPU="0000:0a:00.0"
AUDIO="0000:0a:00.1"
VMID=131         # Your Windows VM ID
LXCID=115        # Your LXC container ID using the GPU

# === FUNCTIONS ===

get_driver() {
    # Resolve the driver symlink for a PCI device (empty if none is bound)
    basename "$(readlink /sys/bus/pci/devices/$1/driver 2>/dev/null)"
}

unbind_driver() {
    # Unbind the device from its current driver; skip if nothing is bound
    if [ -e "/sys/bus/pci/devices/$1/driver" ]; then
        echo "$1" > "/sys/bus/pci/devices/$1/driver/unbind"
    fi
}

bind_driver() {
    # Bind device $1 to driver $2 through the driver's sysfs bind file
    echo "$1" > "/sys/bus/pci/drivers/$2/bind"
}

switch_to_nvidia() {
    echo "→ Switching to NVIDIA driver (LXC use)..."

    echo "Stopping VM $VMID..."
    qm stop $VMID
    sleep 3

    echo "Unbinding GPU from current driver..."
    unbind_driver "$GPU"
    unbind_driver "$AUDIO"

    echo "Loading NVIDIA modules..."
    modprobe nvidia nvidia_uvm nvidia_drm nvidia_modeset

    echo "Binding GPU to nvidia..."
    bind_driver "$GPU" nvidia
    bind_driver "$AUDIO" snd_hda_intel

    echo "Starting LXC container $LXCID..."
    pct start $LXCID

    echo "✔ Switched to NVIDIA mode."
}

switch_to_vfio() {
    echo "→ Switching to VFIO (VM passthrough)..."

    echo "Stopping LXC container $LXCID..."
    pct stop $LXCID
    sleep 3

    echo "Unbinding GPU from current driver..."
    unbind_driver "$GPU"
    unbind_driver "$AUDIO"

    echo "Loading VFIO modules..."
    modprobe vfio-pci

    echo "Binding GPU to vfio-pci..."
    bind_driver "$GPU" vfio-pci
    bind_driver "$AUDIO" vfio-pci

    echo "Starting VM $VMID..."
    qm start $VMID

    echo "✔ Switched to VFIO mode."
}

# === MAIN ===

MODE="$1"
CURRENT_DRIVER=$(get_driver "$GPU")
echo "Detected GPU driver: ${CURRENT_DRIVER:-none}"

case "$MODE" in
    --to-nvidia)
        switch_to_nvidia
        ;;
    --to-vfio)
        switch_to_vfio
        ;;
    "")
        if [ "$CURRENT_DRIVER" == "vfio-pci" ]; then
            switch_to_nvidia
        elif [ "$CURRENT_DRIVER" == "nvidia" ]; then
            switch_to_vfio
        elif [ -z "$CURRENT_DRIVER" ]; then
            echo "⚠️ No driver bound. Defaulting to NVIDIA..."
            switch_to_nvidia
        else
            echo "❌ Unknown driver bound: $CURRENT_DRIVER"
            exit 1
        fi
        ;;
    *)
        echo "Usage: $0 [--to-nvidia | --to-vfio]"
        exit 1
        ;;
esac

# === FINAL STATUS DISPLAY ===
echo
echo "🔍 Final GPU driver status:"
lspci -k -s "$GPU"
lspci -k -s "$AUDIO"

Auto-toggle based on current driver:

./toggle-gpu.sh

Force switch to NVIDIA for LXC:

./toggle-gpu.sh --to-nvidia

Force switch to VFIO for VM passthrough:

./toggle-gpu.sh --to-vfio
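
Make the script executable first (the filename toggle-gpu.sh used above is just an example):

chmod +x toggle-gpu.sh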

PiHole in a VM

Following the DHCP configuration problem in Kubernetes, I removed the PiHole pod and installed PiHole on an Ubuntu VM in Proxmox.

Immediate result, with a 100% success rate.

Creating an Ubuntu template

During my attempts to install Rancher, I reinstalled Ubuntu several times.

So I'm creating a template to put a deployment procedure in place, rather than reinstalling every time.

First, I install an Ubuntu VM from the ISO, enabling the "ssh" option during installation.
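
For reference, the same VM can also be created from the Proxmox CLI. A minimal sketch, assuming VM ID 9000, the local-lvm storage, and an Ubuntu server ISO already uploaded to local storage (adjust names and sizes to your setup):

qm create 9000 --name ubuntu-template \
  --memory 2048 --cores 2 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --net0 virtio,bridge=vmbr0 \
  --ide2 local:iso/ubuntu-22.04-live-server-amd64.iso,media=cdrom \
  --boot order='scsi0;ide2' --ostype l26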

I add qemu-guest-agent so that Proxmox can retrieve information from the VM:

 sudo apt-get install qemu-guest-agent
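
On some images the service does not start automatically, and the agent also has to be enabled on the Proxmox side (VM ID 9000 is an example):

sudo systemctl enable --now qemu-guest-agent

# on the Proxmox host:
qm set 9000 --agent enabled=1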

Following this video, I use cloud-init to make a deployable template:

sudo apt install cloud-init

Once that's done, to create a new Ubuntu VM, you do a full clone of the template (sketched below).
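
A sketch of the remaining steps, assuming VM ID 9000 for the template and 9001 for the clone (IDs and names are examples):

# inside the VM, before shutting it down, so each clone gets a fresh identity:
sudo cloud-init clean
sudo truncate -s 0 /etc/machine-id
sudo poweroff

# on the Proxmox host:
qm set 9000 --ide2 local-lvm:cloudinit            # attach a cloud-init drive
qm set 9000 --ciuser ubuntu --ipconfig0 ip=dhcp   # example cloud-init settings
qm template 9000                                  # convert the VM into a template
qm clone 9000 9001 --name ubuntu-clone --full     # full clone, not a linked clone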