Step-by-Step NVIDIA Driver Installation for Proxmox Users

Start by finding the correct NVIDIA driver:

https://www.nvidia.com/en-us/drivers

On the Proxmox host:

wget https://us.download.nvidia.com/XFree86/Linux-x86_64/570.144/NVIDIA-Linux-x86_64-570.144.run
chmod +x ./NVIDIA-Linux-x86_64-570.144.run
sudo ./NVIDIA-Linux-x86_64-570.144.run --dkms
nvidia-smi
Load the driver modules at boot:

nano /etc/modules-load.d/modules.conf
nvidia
nvidia_uvm
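A quick way to confirm the modules are actually loaded after a reboot (nothing Proxmox-specific, just a sanity check):

lsmod | grep nvidia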
Check that the device nodes exist:

ls -al /dev/nvidia*
root@pve:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Apr 19 19:40 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Apr 19 19:40 /dev/nvidiactl
crw-rw-rw- 1 root root 510,   0 Apr 19 19:40 /dev/nvidia-uvm
crw-rw-rw- 1 root root 510,   1 Apr 19 19:40 /dev/nvidia-uvm-tools

/dev/nvidia-caps:
total 0
drwxr-xr-x  2 root root     80 Apr 19 19:40 .
drwxr-xr-x 20 root root   4760 Apr 19 19:40 ..
cr--------  1 root root 236, 1 Apr 19 19:40 nvidia-cap1
cr--r--r--  1 root root 236, 2 Apr 19 19:40 nvidia-cap2
Allow and bind-mount the devices in the container config:

nano /etc/pve/lxc/103.conf
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 236:* rwm
lxc.cgroup2.devices.allow: c 510:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
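The character-device major numbers in the allow rules (195, 236, 510) must match what the driver registered on this particular host; the nvidia-uvm major is allocated dynamically and can differ, so check it before copying these lines:

grep nvidia /proc/devices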
Push the installer into the container:

pct push 103 ./NVIDIA-Linux-x86_64-570.144.run /root/NVIDIA-Linux-x86_64-570.144.run

Inside the LXC container (the kernel module already runs on the host, so install only the user-space components):

sh NVIDIA-Linux-x86_64-570.144.run --no-kernel-module
nvidia-smi

For Docker:

# Add the NVIDIA repository key (create the keyring directory first if it does not exist)
mkdir -p /etc/apt/keyrings
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/3bf863cc.pub | gpg --dearmor -o /etc/apt/keyrings/nvidia-archive-keyring.gpg

# Add Nvidia repository
echo "deb [signed-by=/etc/apt/keyrings/nvidia-archive-keyring.gpg] https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/ /" | tee /etc/apt/sources.list.d/nvidia-cuda-debian12.list

# Update package lists
apt update

# Install Nvidia container toolkit
apt install nvidia-container-toolkit

nano /etc/docker/daemon.json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
# nvidia-ctk registers the nvidia runtime in daemon.json as well (it does not set default-runtime)
sudo nvidia-ctk runtime configure --runtime=docker

nano /etc/nvidia-container-runtime/config.toml

# Set no-cgroups to true
no-cgroups = true
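Because device cgroups are already managed by LXC on the host, the toolkit must not try to set them up again inside the container. The same edit can be made non-interactively, and Docker needs a restart to pick up the config changes (a sketch against the stock config file):

sed -i 's/^#\?no-cgroups = false/no-cgroups = true/' /etc/nvidia-container-runtime/config.toml
systemctl restart docker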

For testing:

# Run a test Docker container to verify GPU usage
docker run --gpus all nvidia/cuda:12.6.1-base-ubuntu24.04 nvidia-smi

If needed, purge any old NVIDIA packages first:

sudo apt remove --purge '^nvidia-.*'
sudo apt autoremove

Sources:

https://yomis.blog/nvidia-gpu-in-proxmox-lxc

https://hostbor.com/gpu-passthrough-in-lxc-containers

Optimized Home Lab

Hardware used

  • Raspberry Pi: lightweight services (Pi-hole, Gatus).
  • Synology NAS: storage, media (Emby, Transmission), and document management (Paperless-ngx).
  • Proxmox PC: virtualization of the heavier services (VM/LXC).

Software Architecture

1. Raspberry Pi

  • Pi-hole: blocks ads and trackers.
  • Gatus: monitors service availability.

Raspberry Pi emulation (Proxmox):

VM | Application | Role
VM-104 | Pi-hole | Blocks ads and trackers.
VM-10X | Gatus | Monitors service availability.

2. Synology NAS (DSM)

  • Media: Emby, Transmission, SickChill, NZBGet, FileBot.
  • Documents: Paperless-ngx (via Docker).
  • Backup: Duplicati.

3. Proxmox (main PC)

Container/VM | Applications | Role | VLAN
VM-101 | Home Assistant | Home automation | 10
VM-103 | Frigate (+ Ollama) | Video analysis (GPU) + local AI | 10
VM-106 | Docker | Development, supervision and monitoring | 10
VM-109 | Firefly III | Personal finance | 10
VM-102 | pfSense | Router/firewall (optional) | -
VM-108 | Web server (WordPress) | Website/blog | 20
VM-105 | Jenkins | Continuous integration/deployment (CI/CD) | 10

Connected Devices (IoT)

  • Google Nest and Smart TV:
    • Isolated in an IoT VLAN for security.
    • Interact with:
      • Home Assistant (voice commands, automations).
      • Emby (streaming from the NAS).
    • Routed through Pi-hole to block ads.

Good Practices

  • Network:
    • Separate VLANs (Trusted, IoT, Web, Media).
    • Firewall (pfSense) to isolate the traffic flows.
  • GPU:
    • Shared between Frigate and Ollama via Docker in a dedicated LXC.
  • Backups:
    • Back up Paperless, WordPress, and the Docker configurations.

Network & Application Diagram


graph TD
  %% Entry Point
  Internet --> OrangeBox --> pfSense

  %% VLAN Zones from pfSense
  pfSense --> VLAN10
  pfSense --> VLAN20
  pfSense --> VLAN30
  pfSense --> VLAN40

  %% VLAN 10 - Trusted
  subgraph "VLAN 10 - Trusted"
    direction TB
    VLAN10 --- VM101["VM-101: Home Assistant"]
    VM101 --- LXC103["LXC-103: Frigate + Ollama"]
    LXC103 --- VM106["VM-106: Docker Host"]
    VM106 --- VM109["VM-109: Firefly III"]
    VM109 --- VM105["VM-105: Jenkins"]
    VLAN10 --- RPi[(Raspberry Pi)]
    RPi --- Pihole
    Pihole --- Gatus
  end

  %% VLAN 20 - Web
  subgraph "VLAN 20 - Web"
    direction TB
    VLAN20 --- VM108["VM-108: Web Server - WordPress"]
  end

  %% VLAN 30 - IoT
  subgraph "VLAN 30 - IoT WIP"
    direction TB
    VLAN30 --- GoogleNest[Google Nest]
    GoogleNest --- SmartTV[Smart TV]
  end

  %% VLAN 40 - Media
  subgraph "VLAN 40 - Media"
    direction TB
    VLAN40 --- NAS[(NAS - Synology DSM)]
    NAS --- Emby --- Paperless --- Transmission --- Duplicati
  end

  %% Styling
  style VLAN10 fill:#d5f5e3,stroke:#27ae60
  style VLAN20 fill:#d6eaf8,stroke:#3498db
  style VLAN30 fill:#fadbd8,stroke:#e74c3c
  style VLAN40 fill:#fdedec,stroke:#f39c12

Detailed Legend

Element | Description
🟠 pfSense (VM-102) | Router/firewall managing the VLANs and security.
🟢 Raspberry Pi | Runs Pi-hole (DNS) + Gatus (monitoring).
🔵 Synology NAS | Central storage + media apps (Emby) and documents (Paperless).
VLAN 10 (Trusted) | Critical services: HA, Frigate, Ollama, Dev (Docker, Jenkins).
VLAN 20 (Web) | Exposed services: WordPress.
VLAN 30 (IoT) | Connected devices (Google Nest, Smart TV), isolated for security.
VLAN 40 (Media) | Media access (Emby) from the Smart TV.

Key Flows to Remember

  1. Google Nest/Smart TV → talk to Home Assistant (VLAN 10) through precise firewall rules.
  2. Frigate (VLAN 10) → sends alerts to Home Assistant and the Smart TV (through the allowed VLAN 30 path).
  3. WordPress (VLAN 20) → reachable from the Internet (port forwarding controlled by pfSense).
  4. Paperless (NAS) → used through a web interface that is NOT exposed.

pfSense

Example pfSense Configuration (VLAN 30 → VLAN 10 rules)

Action | Source | Destination | Port | Description
✅ Allow | VLAN30 | VM-101 (HA) | 8123 | Access to the HA interface.
✅ Allow | VLAN30 | LXC-103 (Frigate) | 5000 | Video stream for display on the TV.
🚫 Block | VLAN30 | VLAN10 | * | Block all other access.

Good Practices

For the Nest devices

  • Firmware updates: check regularly through the Google Home app.
  • Isolation: block access to the other VLANs except for:
    • Home Assistant (port 8123).

For the Smart TV

  • Custom DNS: point it at Pi-hole (Raspberry Pi) to block ads.
    • In pfSense: DHCP → DNS option = Pi-hole IP.
  • Disable tracking: turn off ACR (Automatic Content Recognition) in the TV settings.

Smart TV Integration

Network Configuration

  • VLAN: same IoT VLAN (30) as the Nest devices, to keep things simple.
  • pfSense rules:
    • Allow the TV to reach:
      • The Internet (Netflix/YouTube streaming).
      • Emby/Jellyfin (NAS) through the Media VLAN (e.g. VLAN 40 if it exists).

Interaction with the Home Lab

  • For Emby/Jellyfin (NAS):
    • Mount a shared Synology folder over SMB/NFS so the TV can reach the media.
    • Example Emby configuration (docker-compose.yml on the NAS):
      volumes:
        - /volume1/medias:/media
  • Control via Home Assistant:
    • Integrate the TV through HDMI-CEC or a vendor API (e.g. Samsung Tizen, LG webOS).
    • Possible automations:
      • Turn the TV on/off when Frigate detects motion.
      • Show the cameras on the TV through a dashboard.

Google Nest Integration (Google Assistant)

Network Configuration

  • Recommended VLAN: isolate them in an IoT VLAN (e.g. VLAN 30) to limit access to the rest of the network.
    • On pfSense (VM-102): create VLAN 30 → dedicated interface → firewall rules:
      – Allow OUT to the Internet (HTTPS/DNS).
      – Block access to the other VLANs (except exceptions such as Home Assistant).

Communication with Home Assistant (VM-101)

  • Over the local protocol:
    • Enable the Google Assistant SDK in Home Assistant.
    • Use Nabu Casa (or a custom domain with HTTPS) for the secure link.
  • Scenarios:
    • Control lights/plugs through voice commands.
    • Sync with your calendars/reminders.

DNS

💡 Network Overview (Goal)

  • Orange Box (ISP Router/Gateway):
    • IP: 192.168.1.1
    • LAN/Internet Gateway
  • pfSense (Firewall/Router):
    • WAN Interface: Gets IP from 192.168.1.0/24 (e.g. 192.168.1.2)
    • LAN Interface: New network 192.212.5.0/24 (e.g. 192.212.5.1)
  • Home Lab Devices:
    • On VLANs behind pfSense
  • Pi-hole:
    • Installed behind pfSense (e.g. 192.212.5.2)

🧠 What You Want:

  1. Devices on the lab network use Pi-hole for DNS.
  2. pfSense uses Pi-hole for DNS too (optional but recommended).
  3. Internet access for lab network is through pfSense ➝ Orange Box ➝ Internet.
  4. Lab network stays isolated from home network.

✅ Step-by-Step DNS Configuration

1. Install and Set Up Pi-hole

  • Install Pi-hole on a device behind pfSense (VM, Raspberry Pi, etc.).
  • Give it a static IP, e.g. 192.212.5.2
  • During setup, don’t use DHCP (let pfSense handle that).
  • Choose public upstream DNS (Cloudflare 1.1.1.1, Google 8.8.8.8, etc.)

2. Configure pfSense to Use Pi-hole as DNS

a. Set DNS Server
  • Go to System > General Setup in pfSense.
  • In the DNS Server Settings, add: DNS Server 1: 192.212.5.2 (your Pi-hole IP)
  • Uncheck “Allow DNS server list to be overridden by DHCP/PPP on WAN” — this avoids getting ISP’s DNS from the Orange Box.
b. Disable DNS Resolver (Optional)

If you don’t want pfSense to do any DNS resolution, you can:

  • Go to Services > DNS Resolver, and disable it.
  • Or keep it enabled for pfSense’s internal name resolution, but forward to Pi-hole.

3. Configure DHCP on the pfSense VLANs

  • Go to Services > DHCP Server > LAN
  • Under “DNS Servers”, set: DNS Server: 192.212.5.2
  • Now, all clients getting IPs from pfSense will also use Pi-hole as DNS.

4. (Optional) Block DNS Leaks

To prevent clients from bypassing Pi-hole (e.g., hardcoded DNS like 8.8.8.8):

  • Go to Firewall > NAT > Port Forward
  • Create rules to redirect all port 53 (DNS) traffic to Pi-hole IP.

Example:

  • Interface: LAN
  • Protocol: TCP/UDP
  • Destination Port: 53
  • Redirect target IP: 192.212.5.2 (Pi-hole)
  • Redirect Port: 53

A few useful reminders

  • pfSense is the gateway for each VLAN → so one IP per VLAN
  • The DNS of every client in every VLAN must point to the Pi-hole
  • pfSense can redirect DNS requests to the Pi-hole with a NAT rule (port 53) if needed

IP / pfSense / Proxmox configuration

Element | IP (192.212.x.x) | pfSense rule / notes
pfSense | .5.1 | Interfaces: LAN -> .5.X, LAN_VLAN10 -> .10.X, LAN_VLAN20 -> .20.X, LAN_VLAN30 -> .30.X, LAN_VLAN40 -> .40.X
Pi-hole | .5.2 | NAT: LAN * address:53 -> 192.212.5.2:53
Raspberry | .5.3 | Pi-hole and Gatus
Home Assistant | .10.105 | NAT redirect for the old Home Assistant: LAN .30.105:1883 -> .10.105:8123
Frigate | .10.115 |
Kubuntu | .10.140 |
Docker | .10.200 |
Jenkins | .10.140 |
Firefly III | .10.155 |
Web server | .20.200 |
Synology | .40.111 | On the Proxmox host (/etc/network/interfaces): auto vmbr2.40 / iface vmbr2.40 inet static / address 192.212.40.245/24 / vlan-raw-device vmbr2
qt21101l | .5.101 |
px30_evb | .5.102 |
Octoprint | .5.110 |
Doorbell | .5.150 |
dome01 | .5.151 |
dome02 | .5.152 |

nano /etc/network/interfaces

auto vmbr2.10
iface vmbr2.10 inet static
    address 192.212.10.245/24
    vlan-raw-device vmbr2
auto vmbr2.20
iface vmbr2.20 inet static
    address 192.212.20.245/24
    vlan-raw-device vmbr2
auto vmbr2.30
iface vmbr2.30 inet static
    address 192.212.30.245/24
    vlan-raw-device vmbr2
auto vmbr2.40
iface vmbr2.40 inet static
    address 192.212.40.245/24
    vlan-raw-device vmbr2
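To apply the new VLAN sub-interfaces without a reboot (assuming ifupdown2, the Proxmox default):

ifreload -a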

Complete installation of Kubernetes, minikube, Prometheus and Grafana (and pods)

Starting Kubernetes over

After endlessly tweaking the VM and Kubernetes settings, I bricked my dashboard and the CPU now sits at a constant 60% while idle.

I decided to recreate an Ubuntu VM from my template and to try minikube instead of microk8s.

The pods I want:

  • emby
  • sickchill
  • transmission
  • filebot
  • prometheus
  • grafana

Cloning an Ubuntu VM

I do a full clone of my Ubuntu template.

Switch to a static IP

sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens18:
      dhcp4: no
      addresses: [192.168.1.20/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
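Apply and verify the new address (assuming netplan as configured above):

sudo netplan apply
ip a show ens18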

Recovering my filesystem

On the PVE host:

I attach the disk from my previous microk8s clone to my new minikube VM so I can copy its contents into the new VM.

nano /etc/pve/qemu-server/105.conf

scsi1: cyclops:vm-100-disk-0,size=32G
scsi2: cyclops:vm-105-disk-0,size=32G

In my new VM (id: 105):

I look for my two disks (100 and 105):

ls -lh /dev/disk/by-id/
ata-QEMU_DVD-ROM_QM00001 -> ../../sr0
ata-QEMU_DVD-ROM_QM00003 -> ../../sr1
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0 -> ../../sda
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 -> ../../sda1
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part2 -> ../../sda2
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 -> ../../sdc
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1 -> ../../sdc1
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 -> ../../sdb

scsi2 (disk 105) has no partition, so I create one:

sudo fdisk /dev/sdb
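A non-interactive alternative (a sketch; it wipes /dev/sdb and creates a single partition spanning the disk):

sudo parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%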

Format the partition:

sudo mkfs.ext4 /dev/sdb1

Create the mount points:

sudo mkdir /usr/kubedata
sudo mkdir /usr/old_kubedata

Add the new disk (105) to fstab so it mounts automatically:

sudo nano /etc/fstab

/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 /usr/kubedata ext4 defaults    0 0

sudo mount -a

And mount the old disk (100) manually:

sudo mount /dev/sdc1 /usr/old_kubedata/

Copy the contents of the old disk (100) to the new disk (105):

sudo cp -r /usr/old_kubedata/* /usr/kubedata/

Unmount the old disk (100) and remove the temporary mount point:

sudo umount /usr/old_kubedata
sudo rm -r /usr/old_kubedata

On the PVE host:

I detach the old disk (100) from my new VM (105):

nano /etc/pve/qemu-server/105.conf

Delete the line:

scsi1: cyclops:vm-100-disk-0,size=32G
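Alternatively (a sketch), the disk can be detached from the Proxmox CLI instead of editing the file by hand; it then shows up as an unused disk on the VM:

qm set 105 --delete scsi1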

Installing minikube

Install Docker [1]:

sudo apt-get update

sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release
	
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io

Install minikube [2]:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
sudo usermod -aG docker $USER && newgrp docker
minikube start
minikube kubectl -- get po -A
nano ./.bashrc
alias kubectl="minikube kubectl --"

Change the default editor:

sudo nano /etc/environment
KUBE_EDITOR="/usr/bin/nano"

Install Prometheus and Grafana

Source: [3]

Install Helm:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version

Install Prometheus and Grafana on Kubernetes using Helm 3

helm repo add stable https://charts.helm.sh/stable
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install stable prometheus-community/kube-prometheus-stack
kubectl edit svc stable-kube-prometheus-sta-prometheus

Change ClusterIP to LoadBalancer/NodePort

kubectl edit svc stable-grafana

Change ClusterIP to LoadBalancer/NodePort
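The same change can be made without opening an editor (a sketch using a merge patch; use LoadBalancer instead of NodePort if preferred):

kubectl patch svc stable-kube-prometheus-sta-prometheus -p '{"spec":{"type":"NodePort"}}'
kubectl patch svc stable-grafana -p '{"spec":{"type":"NodePort"}}'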

Grafana web UI

UserName: admin
Password: prom-operator

Otherwise, retrieve the Grafana admin password:

kubectl get secret --namespace default stable-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo

Install SickChill

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sickchillserver 
  namespace: default
  labels:
    app: sickchill
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sickchill
  template:
    metadata:
      labels:
        run: sickchillserver 
        app: sickchill
    spec:
      containers:
      - name: sickchillserver 
        image: lscr.io/linuxserver/sickchill
        env:
          - name: "PUID"
            value: "1000"
          - name: "PGID"
            value: "1000" 
        ports:
        - containerPort: 8081
          name: tr-http
        volumeMounts:
          - mountPath: /config
            name: tr-config
          - mountPath: /downloads
            name: tr-videoclub
            subPath: 00-Tmp/sickchill/downloads
          - mountPath: /tv
            name: tr-videoclub
            subPath: 30-Series
          - mountPath: /anime
            name: tr-videoclub
            subPath: 40-Anime
      volumes:
      - name: tr-videoclub
        nfs:
          server: 192.168.1.40
          path: /mnt/Magneto/9-VideoClub
      - name: tr-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/sickchillserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: sickchill-svc
spec:
  selector:
    app: sickchill
  ports:
    - name: "http"
      port: 8081
      targetPort: 8081
  type: NodePort
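Each of these manifests is applied the same way; a sketch, assuming the SickChill manifest above was saved as sickchill.yaml:

kubectl apply -f sickchill.yaml
kubectl get pods -l app=sickchill
kubectl get svc sickchill-svc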

Install Transmission

apiVersion: apps/v1
kind: Deployment
metadata:
  name: transmissionserver 
  namespace: default
  labels:
    app: transmission
spec:
  replicas: 1
  selector:
    matchLabels:
      app: transmission
  template:
    metadata:
      labels:
        run: transmissionserver 
        app: transmission
    spec:
      containers:
      - name: transmissionserver 
        image: lscr.io/linuxserver/transmission
        env:
          - name: "PUID"
            value: "1000"
          - name: "PGID"
            value: "1000" 
        ports:
        - containerPort: 9091
          name: tr-http
        - containerPort: 51413
          name: tr-https
        volumeMounts:
        - mountPath: /config
          name: tr-config
        - mountPath: /downloads-sickchill
          name: tr-media-sickchill
        - mountPath: /script
          name: tr-script
        - mountPath: /watch
          name: tr-watch
      volumes:
      - name: tr-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/transmissionserver/config
      - name: tr-media-sickchill
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/sickchill/downloads
      - name: tr-script
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/transmission/script
      - name: tr-watch
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/transmission/watch
---
apiVersion: v1
kind: Service
metadata:
  name: transmission
spec:
  selector:
    app: transmission
  ports:
    - name: "http"
      port: 9091
      targetPort: 9091
    - name: "https"
      port: 51413
      targetPort: 51413
  type: NodePort

Install Emby

apiVersion: apps/v1
kind: Deployment
metadata:
  name: embyserver 
  namespace: default
  labels:
    app: emby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emby
  template:
    metadata:
      labels:
        run: embyserver 
        app: emby
    spec:
      containers:
      - name: embyserver 
        image: emby/embyserver:latest
        env:
          - name: "UID"
            value: "1000"
          - name: "GID"
            value: "100" 
          - name: "GIDLIST"
            value: "100" 
        ports:
        - containerPort: 8096
          name: emby-http
        - containerPort: 8920
          name: emby-https
        volumeMounts:
        - mountPath: /config
          name: emby-config
        - mountPath: /mnt/videoclub
          name: emby-media
      volumes:
      - name: emby-media
        nfs:
          server: 192.168.1.40
          path: /mnt/Magneto/9-VideoClub
      - name: emby-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/embyserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: emby
spec:
  selector:
    app: emby
  ports:
    - name: "http"
      port: 8096
      targetPort: 8096
    - name: "https"
      port: 8920
      targetPort: 8920
  type: NodePort

Install FileBot

apiVersion: apps/v1
kind: Deployment
metadata:
  name: filebot-node 
  namespace: default
  labels:
    app: filebot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: filebot
  template:
    metadata:
      labels:
        run: filebot-node 
        app: filebot
    spec:
      containers:
      - name: filebot-node 
        image: maliciamrg/filebot-node-479
        ports:
        - containerPort: 5452
          name: filebot-http
        volumeMounts:
        - mountPath: /data
          name: filebot-data
        - mountPath: /videoclub
          name: filebot-media
      volumes:
      - name: filebot-data
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/filebot-node/data
      - name: filebot-media
        nfs:
          server: 192.168.1.40
          path: /mnt/Magneto/9-VideoClub
---
apiVersion: v1
kind: Service
metadata:
  name: filebot
spec:
  selector:
    app: filebot
  ports:
    - name: "http"
      port: 5452
      targetPort: 5452
  type: NodePort

Result

david@legion2:~$ kubectl get pods -A -o wide
NAMESPACE        NAME                                                     READY   STATUS      RESTARTS        AGE     IP             NODE       NOMINATED NODE   READINESS GATES
default          alertmanager-stable-kube-prometheus-sta-alertmanager-0   2/2     Running     2 (4h21m ago)   14h     172.17.0.2     minikube   <none>           <none>
default          embyserver-56689875b4-wmxww                              1/1     Running     0               53m     172.17.0.12    minikube   <none>           <none>
default          filebot-node-7786dfbf67-fh7s8                            1/1     Running     0               47m     172.17.0.13    minikube   <none>           <none>
default          prometheus-stable-kube-prometheus-sta-prometheus-0       2/2     Running     2 (4h21m ago)   14h     172.17.0.7     minikube   <none>           <none>
default          sickchillserver-7494d84848-cwjkm                         1/1     Running     0               4h15m   172.17.0.8     minikube   <none>           <none>
default          stable-grafana-5dcdf4bbc6-q5shg                          3/3     Running     3 (4h21m ago)   14h     172.17.0.3     minikube   <none>           <none>
default          stable-kube-prometheus-sta-operator-5fd44cc9bf-nmgdq     1/1     Running     1 (4h21m ago)   14h     172.17.0.6     minikube   <none>           <none>
default          stable-kube-state-metrics-647c4868d9-f9vrb               1/1     Running     2 (4h19m ago)   14h     172.17.0.5     minikube   <none>           <none>
default          stable-prometheus-node-exporter-j6w5f                    1/1     Running     1 (4h21m ago)   14h     192.168.49.2   minikube   <none>           <none>
default          transmissionserver-7d5d8c49db-cxktx                      1/1     Running     0               62m     172.17.0.11    minikube   <none>           <none>
ingress-nginx    ingress-nginx-admission-create--1-nzdhc                  0/1     Completed   0               3h51m   172.17.0.10    minikube   <none>           <none>
ingress-nginx    ingress-nginx-admission-patch--1-mxxmc                   0/1     Completed   1               3h51m   172.17.0.9     minikube   <none>           <none>
ingress-nginx    ingress-nginx-controller-5f66978484-w8cqj                1/1     Running     0               3h51m   172.17.0.9     minikube   <none>           <none>
kube-system      coredns-78fcd69978-cq2hn                                 1/1     Running     1 (4h21m ago)   15h     172.17.0.4     minikube   <none>           <none>
kube-system      etcd-minikube                                            1/1     Running     1 (4h21m ago)   15h     192.168.49.2   minikube   <none>           <none>
kube-system      kube-apiserver-minikube                                  1/1     Running     1 (4h21m ago)   15h     192.168.49.2   minikube   <none>           <none>
kube-system      kube-controller-manager-minikube                         1/1     Running     1 (4h21m ago)   15h     192.168.49.2   minikube   <none>           <none>
kube-system      kube-ingress-dns-minikube                                1/1     Running     0               3h44m   192.168.49.2   minikube   <none>           <none>
kube-system      kube-proxy-d8m7r                                         1/1     Running     1 (4h21m ago)   15h     192.168.49.2   minikube   <none>           <none>
kube-system      kube-scheduler-minikube                                  1/1     Running     1 (4h21m ago)   15h     192.168.49.2   minikube   <none>           <none>
kube-system      storage-provisioner                                      1/1     Running     4 (4h19m ago)   15h     192.168.49.2   minikube   <none>           <none>
metallb-system   controller-66bc445b99-wvdnq                              1/1     Running     0               3h44m   172.17.0.10    minikube   <none>           <none>
metallb-system   speaker-g49dw                                            1/1     Running     0               3h44m   192.168.49.2   minikube   <none>           <none>
david@legion2:~$ kubectl get svc -A -o wide
NAMESPACE       NAME                                                 TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                          AGE     SELECTOR
default         alertmanager-operated                                ClusterIP      None             <none>          9093/TCP,9094/TCP,9094/UDP       14h     app.kubernetes.io/name=alertmanager
default         emby                                                 LoadBalancer   10.101.254.121   192.168.1.102   8096:30524/TCP,8920:30171/TCP    55m     app=emby
default         filebot                                              LoadBalancer   10.106.51.20     192.168.1.103   5452:31628/TCP                   48m     app=filebot
default         kubernetes                                           ClusterIP      10.96.0.1        <none>          443/TCP                          15h     <none>
default         prometheus-operated                                  ClusterIP      None             <none>          9090/TCP                         14h     app.kubernetes.io/name=prometheus
default         sickchill-svc                                        LoadBalancer   10.107.60.50     192.168.1.100   8081:32026/TCP                   4h16m   app=sickchill
default         stable-grafana                                       LoadBalancer   10.102.236.29    192.168.1.104   80:31801/TCP                     15h     app.kubernetes.io/instance=stable,app.kubernetes.io/name=grafana
default         stable-kube-prometheus-sta-alertmanager              ClusterIP      10.105.89.179    <none>          9093/TCP                         15h     alertmanager=stable-kube-prometheus-sta-alertmanager,app.kubernetes.io/name=alertmanager
default         stable-kube-prometheus-sta-operator                  ClusterIP      10.99.183.242    <none>          443/TCP                          15h     app=kube-prometheus-stack-operator,release=stable
default         stable-kube-prometheus-sta-prometheus                NodePort       10.110.38.166    <none>          9090:32749/TCP                   15h     app.kubernetes.io/name=prometheus,prometheus=stable-kube-prometheus-sta-prometheus
default         stable-kube-state-metrics                            ClusterIP      10.104.176.119   <none>          8080/TCP                         15h     app.kubernetes.io/instance=stable,app.kubernetes.io/name=kube-state-metrics
default         stable-prometheus-node-exporter                      ClusterIP      10.106.253.56    <none>          9100/TCP                         15h     app=prometheus-node-exporter,release=stable
default         transmission                                         LoadBalancer   10.104.43.182    192.168.1.101   9091:31067/TCP,51413:31880/TCP   64m     app=transmission
ingress-nginx   ingress-nginx-controller                             NodePort       10.107.183.72    <none>          80:31269/TCP,443:30779/TCP       3h52m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx   ingress-nginx-controller-admission                   ClusterIP      10.97.189.150    <none>          443/TCP                          3h52m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system     kube-dns                                             ClusterIP      10.96.0.10       <none>          53/UDP,53/TCP,9153/TCP           15h     k8s-app=kube-dns
kube-system     stable-kube-prometheus-sta-coredns                   ClusterIP      None             <none>          9153/TCP                         15h     k8s-app=kube-dns
kube-system     stable-kube-prometheus-sta-kube-controller-manager   ClusterIP      None             <none>          10257/TCP                        15h     component=kube-controller-manager
kube-system     stable-kube-prometheus-sta-kube-etcd                 ClusterIP      None             <none>          2379/TCP                         15h     component=etcd
kube-system     stable-kube-prometheus-sta-kube-proxy                ClusterIP      None             <none>          10249/TCP                        15h     k8s-app=kube-proxy
kube-system     stable-kube-prometheus-sta-kube-scheduler            ClusterIP      None             <none>          10251/TCP                        15h     component=kube-scheduler
kube-system     stable-kube-prometheus-sta-kubelet                   ClusterIP      None             <none>          10250/TCP,10255/TCP,4194/TCP     14h     <none>

Pi-hole in a VM

After the DHCP configuration problems in Kubernetes, I removed the Pi-hole pod and installed Pi-hole on an Ubuntu VM in Proxmox instead.

Immediate result, with a 100% success rate.

Kubernetes and Emby Server

Preparation

Hand 32 GB of the Proxmox ZFS pool over to the Ubuntu VM that hosts Kubernetes.

In TrueNAS, expose /mnt/Magneto/9-VideoClub over SMB.

Pass the graphics card through to Ubuntu.

Filesystem setup

Install cifs-utils and edit fstab so the TrueNAS share mounts automatically:

sudo apt install cifs-utils
sudo nano /etc/fstab

//192.168.1.46/9-VideoClub /Videoclub cifs uid=0,credentials=/home/david/.smb,iocharset=utf8,noperm 0 0

Create a partition on the 32 GB disk coming from Proxmox with fdisk, format it as ext4, and mount it at /usr/kubedata:

sudo nano /etc/fstab

/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1 /usr/kubedata ext4 defaults    0 0
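Mount everything and check the two mounts are in place (a quick verification):

sudo mount -a
df -h /Videoclub /usr/kubedata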

Deployment

The docker run command, using the filesystem prepared above:

sudo docker run -d \
    --name embyserver \
    --volume /usr/kubedata/embyserver/config:/config \
    --volume /Videoclub:/mnt/videoclub \
    --net=host \
    --device /dev/dri:/dev/dri \
    --publish 8096:8096 \
    --publish 8920:8920 \
    --env UID=1000 \
    --env GID=100 \
    --env GIDLIST=100 \
    emby/embyserver:latest

Translated into a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: embyserver 
  namespace: default
  labels:
    app: emby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emby
  template:
    metadata:
      labels:
        run: embyserver 
        app: emby
    spec:
      containers:
      - name: embyserver 
        image: emby/embyserver:latest
        env:
          - name: "UID"
            value: "1000"
          - name: "GID"
            value: "100" 
          - name: "GIDLIST"
            value: "100" 
        ports:
        - containerPort: 8096
          name: emby-http
        - containerPort: 8920
          name: emby-https
        volumeMounts:
        - mountPath: /config
          name: emby-config
        - mountPath: /mnt/videoclub
          name: emby-media
      volumes:
      - name: emby-media
        hostPath:
          type: Directory
          path: /Videoclub
      - name: emby-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/embyserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: emby
spec:
  selector:
    app: emby
  ports:
    - name: "http"
      port: 8096
      targetPort: 8096
    - name: "https"
      port: 8920
      targetPort: 8920
  type: NodePort

Then retrieve the exposed port:

kubectl get all --all-namespaces | grep emby

Result: the dashboard is reachable at https://<master-ip>:30647

Creating an Ubuntu template

During my attempts to install Rancher I reinstalled Ubuntu several times.

So I am creating a template, to have a repeatable deployment procedure instead of reinstalling every time.

First I install an Ubuntu VM from the ISO, enabling the "ssh" option during installation.

I add qemu-guest-agent so Proxmox can pull information from the VM:

sudo apt-get install qemu-guest-agent

Following this video, I use cloud-init to make the template deployable:

sudo apt install cloud-init

Once that is done, creating a new Ubuntu VM is just a full clone of the template.
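From the Proxmox CLI the clone looks like this (a sketch with placeholder IDs: 9000 for the template, 110 for the new VM):

qm clone 9000 110 --name ubuntu-clone --full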

First TrueNAS VM

For this first VM I want to test attaching drives, which is why I format my two salvaged USB SSDs (husk, 300 GB, and penance, 500 GB) and attach them to the server.

After downloading the TrueNAS-12.0-U6.iso image, I create a new VM.

Passthrough Physical Disk to Virtual Machine (VM)

https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM)

List the disks on the server:

lsblk |awk 'NR==1{print $0" DEVICE-ID(S)"}NR>1{dev=$1;printf $0" ";system("find /dev/disk/by-id -lname \"*"dev"\" -printf \" %p\"");print "";}'|grep -v -E 'part|lvm'
NAME                         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT DEVICE-ID(S)
sda                            8:0    0 111.8G  0 disk   /dev/disk/by-id/wwn-0x50026b738058fd3e /dev/disk/by-id/ata-KINGSTON_SA400S37120G_50026B738058FD3E
sdb                            8:16   0   1.8T  0 disk   /dev/disk/by-id/wwn-0x50014ee2b5e069bb /dev/disk/by-id/ata-WDC_WD20EZRX-22D8PB0_WD-WCC4M7UHY201
sdc                            8:32   0 465.8G  0 disk   /dev/disk/by-id/ata-HGST_HTS545050A7E680_RBF50FM52MY7UR /dev/disk/by-id/wwn-0x5000cca7a3e53fde
sdd                            8:48   0 931.5G  0 disk   /dev/disk/by-id/wwn-0x50014ee261364daf /dev/disk/by-id/ata-WDC_WD10EZRX-00D8PB0_WD-WCC4M0EE37RE
sde                            8:64   0 298.1G  0 disk   /dev/disk/by-id/usb-JMicron_Generic_0123456789ABCDEF-0:0
sdf                            8:80   0   3.6T  0 disk   /dev/disk/by-id/wwn-0x50014ee26394dace /dev/disk/by-id/ata-WDC_WD40EZRZ-00GXCB0_WD-WCC7K5HL7SHY

Then attach a disk to the VM:

qm set 102 -scsi1 /dev/disk/by-id/ata-HGST_HTS545050A7E680_RBF50FM52MY7UR
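And confirm the disk is attached to the VM (a quick check):

qm config 102 | grep scsi1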

Infrastructure Modernization

The overhaul and modernization of my local infrastructure is built around a hypervisor and a containerization layer.

Bare metal

  • Proxmox

Virtualization (hypervisor)

  • Windows + Blue Iris (security)
  • Firewall (OPNsense)
  • Ubuntu (Docker, Kubernetes, Rancher)
  • FreeNAS (file sharing)

Containers

  • Home Assistant / Homebridge (automation)
  • Pi-hole
  • Emby
  • Heimdall Application Dashboard
  • ZoneMinder (security)
  • UniFi controller (home network)
  • Nextcloud (cloud files)
  • Syncthing (backup)
  • Web server (WordPress, DokuWiki, SickChill)
  • Personal API
  • MySQL
  • Nginx proxy (reverse proxy)
  • Prometheus + Grafana (reporting)
  • FTP (FileZilla)
  • Document Management System
  • Transmission
  • NZBGet
  • Redmine
