Mounting an NFS Share to an Unprivileged LXC
Source: https://forum.proxmox.com/threads/tutorial-mounting-nfs-share-to-an-unprivileged-lxc.138506/
#host
sudo nano /etc/fstab
192.212.40.111:/volume1/10-System /mnt/NAS_10-System nfs defaults 0 0
192.212.40.111:/volume2/30-Mail /mnt/NAS_30-Mail nfs defaults 0 0
192.212.40.111:/volume1/70-Photocopie /mnt/NAS_70-Photocopie nfs defaults 0 0
192.212.40.111:/volume1/80-Photo /mnt/NAS_80-Photo nfs defaults 0 0
192.212.40.111:/volume3/90-VideoClub /mnt/NAS_90-VideoClub nfs defaults 0 0
192.212.40.111:/volume3/99-Ftp /mnt/NAS_99-Ftp nfs defaults 0 0
sudo systemctl daemon-reload
sudo mount -a
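Optional sanity check from the host (showmount assumes nfs-common is installed on the Proxmox host):
showmount -e 192.212.40.111
df -h | grep /mnt/NAS_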
#host for LXC200
nano /etc/pve/lxc/200.conf
mp0: /mnt/NAS_10-System/docker/VM200,mp=/System
mp1: /mnt/NAS_80-Photo,mp=/Photo
#host for LXC125
nano /etc/pve/lxc/125.conf
mp0: /mnt/NAS_10-System/docker/VM125,mp=/System
mp1: /mnt/NAS_90-VideoClub,mp=/VideoClub
mp2: /mnt/NAS_30-Mail,mp=/Mail
#host for LXC103
nano /etc/pve/lxc/103.conf
mp0: /mnt/NAS_10-System/docker/RASP103,mp=/System
#inside each LXC (LXC***)
sudo apt update
sudo apt install cifs-utils smbclient nfs-common passwd -y
sudo groupadd -g 10000 lxc_shares
sudo usermod -aG lxc_shares root
sudo usermod -aG lxc_shares david
sudo reboot
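A quick verification from inside the container once it is back up (the /System and /Photo paths come from the mp entries above; the write test is only a hedged example):
ls -ld /System /Photo
id david
touch /System/.write-test && rm /System/.write-test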
Create a Self-Signed SSL Certificate on Windows
You can create a self-signed certificate using PowerShell.
- Open PowerShell as Administrator.
- Run these commands to create a new self-signed cert and export the key and certificate as .pem files:
# Define file paths
$certPath = "C:\Users\<YourUser>\bolt-certs"
New-Item -ItemType Directory -Path $certPath -Force
# Create self-signed cert
$cert = New-SelfSignedCertificate -DnsName "localhost" -CertStoreLocation "cert:\LocalMachine\My"
# Export certificate (public part)
Export-Certificate -Cert $cert -FilePath "$certPath\cert.pem"
# Export private key as PFX
$pfxPath = "$certPath\cert.pfx"
$password = ConvertTo-SecureString -String "YourStrongPassword" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath $pfxPath -Password $password
- Convert the .pfx file to .key and .pem files (Docker usually wants .key and .crt or .pem separately).
You can do this using OpenSSL (if you have it installed, e.g. via Git Bash or WSL):
# Navigate to cert folder (adjust path)
cd /c/Users/<YourUser>/bolt-certs
# Extract key
openssl pkcs12 -in cert.pfx -nocerts -out key.pem -nodes -password pass:YourStrongPassword
# Extract cert
openssl pkcs12 -in cert.pfx -clcerts -nokeys -out cert.pem -password pass:YourStrongPassword
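A hypothetical way to hand these files to a container afterwards (the image name, ports, and in-container paths here are placeholders, not part of the original setup):
docker run -d -p 443:443 \
  -v /c/Users/<YourUser>/bolt-certs/cert.pem:/certs/cert.pem:ro \
  -v /c/Users/<YourUser>/bolt-certs/key.pem:/certs/key.pem:ro \
  some-app-image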
Mirror pve-root Using LVM RAID1
What this means:
You configure a mirror (RAID1) so that any write to pve-root is also written to sda. If your NVMe dies, you can still boot from sda.
🧱 Requirements:
- sda must be equal to or larger than pve-root (96 GB in your case)
- You must convert pve-root into a RAID1 logical volume (LVM mirror)
- Some downtime or maintenance mode is required
🧰 How-To (Overview Only):
- Backup first! (Always)
- Check current setup:
lvdisplay pve/root
- Wipe and prep the spare disk (the whole disk is used here):
pvcreate /dev/sda
vgextend pve /dev/sda
- Convert pve-root to RAID1:
lvconvert --type mirror -m1 --mirrorlog core pve/root /dev/sda
This mirrors pve/root from your NVMe disk onto sda.
Option | Meaning |
---|---|
--type mirror | Convert the LV to a mirror (RAID1) |
-m1 | Use 1 mirror copy = total of 2 devices |
--mirrorlog core | Store mirror log in RAM |
pve/root | The logical volume to convert (your root) |
/dev/sda | The new disk to mirror onto |
- Confirm with:
lvs -a -o +devices
root@pve:~# lvs -a -o +devices
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert Devices
data pve twi-aotz-- 794.30g 0.01 0.24 data_tdata(0)
[data_tdata] pve Twi-ao---- 794.30g /dev/nvme0n1p3(26624)
[data_tmeta] pve ewi-ao---- 8.10g /dev/nvme0n1p3(229966)
[lvol0_pmspare] pve ewi------- 8.10g /dev/nvme0n1p3(232040)
root pve mwi-aom--- 96.00g 100.00 root_mimage_0(0),root_mimage_1(0)
[root_mimage_0] pve iwi-aom--- 96.00g /dev/nvme0n1p3(2048)
[root_mimage_1] pve iwi-aom--- 96.00g /dev/sda(0)
swap pve -wi-ao---- 8.00g /dev/nvme0n1p3(0)
- Optional but smart: update your bootloader (grub) so it can boot from either disk:
update-initramfs -u
update-grub
grub-install /dev/sda
✅ Pros:
- Real-time mirroring (RAID1)
- Transparent failover if one device fails (bootable if configured)
⚠️ Cons:
- Adds complexity
- If misconfigured, can break boot
- Doesn’t protect against file deletion or config mistakes (RAID is not a backup)
How to check the mirror status, detect failures, and know when maintenance is needed:
Use lvdisplay for more detail:
lvdisplay /dev/pve/root
Look for:
- Mirror status: OK (or similar)
- If a device has failed, you'll see something like "failed" or "inconsistent"
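A minimal check sketch (assumes pve/root is the mirrored LV and that local mail delivery for root is configured; adjust to your alerting):
#!/bin/bash
# Warn if the pve/root mirror is not fully in sync
SYNC=$(lvs --noheadings -o copy_percent pve/root | tr -d ' ')
if [ "$SYNC" != "100.00" ]; then
  echo "WARNING: pve/root mirror sync at ${SYNC}%" | mail -s "LVM mirror degraded on $(hostname)" root
fi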
Suggested Disk Layout Strategy
Use Case | Current Disk | Suggested Role |
---|---|---|
Proxmox Root FS | nvme0n1 | ✅ Keep for now (fast, wear low) |
VM/LXC storage | sdc1 (SSD) | ✅ Good, isolate high I/O loads |
Backup / ISOs | sdb1 (HDD) | ✅ Archive/slow storage |
Spare/Buffer | sda1 (SSD) | ⚠️ Could mirror root or use as L2ARC/ZIL (if ZFS) |
2. Watch NVMe Write Wear Over Time
Your NVMe shows:
Percentage Used: 0%
That’s excellent — you’re still early in the wear cycle. But with Proxmox, check every few months using:
smartctl -a /dev/nvme0n1 | grep -i percentage
3. Add Log Management
To reduce wear:
- Use tmpfs for /var/log (if RAM allows); a sample fstab line is sketched after the journald settings below
- Limit journald persistence:
# /etc/systemd/journald.conf
Storage=volatile
SystemMaxUse=200M
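For the tmpfs option mentioned above, a sample /etc/fstab line (logs are lost on every reboot; the 200M size is an assumption):
tmpfs /var/log tmpfs defaults,noatime,nosuid,nodev,size=200M,mode=0755 0 0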
4. Consider Backup OS Snapshots or Mirroring Root
Use sda1 to:
- Mirror pve-root using LVM RAID1
- Or just use it as a backup location via rsync or LVM snapshots (a sketch follows)
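A rough sketch of the rsync/snapshot alternative (assumes free extents in the pve VG and a filesystem from sda1 mounted at /mnt/sda1_backup; both are assumptions):
lvcreate -s -n root_snap -L 5G pve/root
mkdir -p /mnt/root_snap
mount -o ro /dev/pve/root_snap /mnt/root_snap
rsync -aAXH --delete /mnt/root_snap/ /mnt/sda1_backup/root/
umount /mnt/root_snap
lvremove -y pve/root_snap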
Mount SMB NAS into Proxmox
mkdir /mnt/NAS_10-System
mkdir /mnt/NAS_30-Mail
mkdir /mnt/NAS_70-Photocopie
mkdir /mnt/NAS_80-Photo
mkdir /mnt/NAS_90-VideoClub
mkdir /mnt/NAS_99-Ftp
nano /etc/fstab
//192.212.40.111/10-System /mnt/NAS_10-System cifs rw,credentials=/root/.sharelogin,nobrl,uid=101000,gid=101000 0 0
//192.212.40.111/30-Mail /mnt/NAS_30-Mail cifs rw,credentials=/root/.sharelogin,nobrl,uid=101000,gid=101000 0 0
//192.212.40.111/70-Photocopie /mnt/NAS_70-Photocopie cifs rw,credentials=/root/.sharelogin,nobrl,uid=101000,gid=101000 0 0
//192.212.40.111/80-Photo /mnt/NAS_80-Photo cifs rw,credentials=/root/.sharelogin,nobrl,uid=101000,gid=101000 0 0
//192.212.40.111/90-VideoClub /mnt/NAS_90-VideoClub cifs rw,credentials=/root/.sharelogin,nobrl,uid=101000,gid=101000 0 0
//192.212.40.111/99-Ftp /mnt/NAS_99-Ftp cifs rw,credentials=/root/.sharelogin,nobrl,uid=101000,gid=101000 0 0
systemctl daemon-reload
mount -a
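The fstab entries above reference /root/.sharelogin; a typical cifs credentials file looks like this (placeholder values), and it should be protected with chmod 600 /root/.sharelogin:
username=your_nas_user
password=your_nas_password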
Switching GPU Binding (Live): Toggle GPU Driver Script (vfio-pci ↔ nvidia)
A single NVIDIA GPU cannot:
- Be passed through to a VM (via vfio-pci)
- And be used on the host or in an LXC at the same time
Why? Because when you bind the GPU to vfio-pci on boot, it is invisible to the host and cannot be used by NVIDIA's kernel driver (nvidia.ko).
Switch Between VM and LXC Use (Rebind on Demand)
If you don't need both at the same time, you can manually switch the GPU between:
- Passthrough to a VM (bind to vfio-pci)
- Use on the host / in an LXC (bind to nvidia)
This lets you use the GPU for nvidia-smi or CUDA in an LXC container, then later give it back to the VM.
Here's a single script that checks which driver is currently bound to your GPU and automatically toggles between:
- vfio-pci (for passthrough to a VM)
- nvidia (for use on the host or in an LXC)
#!/bin/bash
# === CONFIGURATION ===
GPU="0000:0a:00.0"
AUDIO="0000:0a:00.1"
VMID=131 # Your Windows VM ID
LXCID=115 # Your LXC container ID using the GPU
# === FUNCTIONS ===
get_driver() {
basename "$(readlink /sys/bus/pci/devices/$1/driver 2>/dev/null)"
}
unbind_driver() {
echo "$1" > "/sys/bus/pci/devices/$1/driver/unbind"
}
bind_driver() {
echo "$1" > "/sys/bus/pci/drivers/$2/bind"
}
switch_to_nvidia() {
echo "→ Switching to NVIDIA driver (LXC use)..."
echo "Stopping VM $VMID..."
qm stop $VMID
sleep 3
echo "Unbinding GPU from current driver..."
unbind_driver "$GPU"
unbind_driver "$AUDIO"
echo "Loading NVIDIA modules..."
modprobe nvidia nvidia_uvm nvidia_drm nvidia_modeset
echo "Binding GPU to nvidia..."
bind_driver "$GPU" nvidia
bind_driver "$AUDIO" snd_hda_intel
echo "Starting LXC container $LXCID..."
pct start $LXCID
echo "✔ Switched to NVIDIA mode."
}
switch_to_vfio() {
echo "→ Switching to VFIO (VM passthrough)..."
echo "Stopping LXC container $LXCID..."
pct stop $LXCID
sleep 3
echo "Unbinding GPU from current driver..."
unbind_driver "$GPU"
unbind_driver "$AUDIO"
echo "Loading VFIO modules..."
modprobe vfio-pci
echo "Binding GPU to vfio-pci..."
bind_driver "$GPU" vfio-pci
bind_driver "$AUDIO" vfio-pci
echo "Starting VM $VMID..."
qm start $VMID
echo "✔ Switched to VFIO mode."
}
# === MAIN ===
MODE="$1"
CURRENT_DRIVER=$(get_driver "$GPU")
echo "Detected GPU driver: ${CURRENT_DRIVER:-none}"
case "$MODE" in
--to-nvidia)
switch_to_nvidia
;;
--to-vfio)
switch_to_vfio
;;
"")
if [ "$CURRENT_DRIVER" == "vfio-pci" ]; then
switch_to_nvidia
elif [ "$CURRENT_DRIVER" == "nvidia" ]; then
switch_to_vfio
elif [ -z "$CURRENT_DRIVER" ]; then
echo "⚠️ No driver bound. Defaulting to NVIDIA..."
switch_to_nvidia
else
echo "❌ Unknown driver bound: $CURRENT_DRIVER"
exit 1
fi
;;
*)
echo "Usage: $0 [--to-nvidia | --to-vfio]"
exit 1
;;
esac
# === FINAL STATUS DISPLAY ===
echo
echo "🔍 Final GPU driver status:"
SHORT_GPU=$(echo "$GPU" | cut -d':' -f2-)
lspci -k | grep "$SHORT_GPU" -A 3
# Auto-toggle based on the current driver
./toggle-gpu.sh
# Force switch to NVIDIA for LXC use
./toggle-gpu.sh --to-nvidia
# Force switch to VFIO for VM passthrough
./toggle-gpu.sh --to-vfio
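Make the script executable first (assuming it is saved as toggle-gpu.sh, matching the calls above):
chmod +x ./toggle-gpu.sh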
Passing NVIDIA GPUs to Windows VMs
Passing an NVIDIA GPU to a Windows VM can fail because the NVIDIA driver detects that it is running in a virtualized environment and blocks itself. The config below hides the hypervisor from the guest (the -hypervisor CPU flag) and overrides the PCI vendor/device IDs and SMBIOS strings to work around this.
The /etc/pve/qemu-server/131.conf:
args: -cpu host,hv_vapic,hv_stimer,hv_time,hv_synic,hv_vpindex,+invtsc,-hypervisor
bios: ovmf
boot:
cores: 16
cpu: host
efidisk0: local-lvm:vm-131-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
hostpci0: 0000:0a:00,device-id=0x2882,pcie=1,vendor-id=0x10de,x-vga=1
ide0: PveSsd900:131/vm-131-disk-0.qcow2,size=180G
kvm: 1
machine: pc-q35-9.0
memory: 16384
meta: creation-qemu=9.2.0,ctime=1747334710
name: win10Gaming
net0: virtio=BC:24:11:77:A3:BC,bridge=vmbr2,firewall=1
numa: 0
onboot: 1
ostype: win10
scsihw: virtio-scsi-single
smbios1: uuid=45849243-d81c-4be4-9528-4620ee509da8,manufacturer=QkVTU1RBUiBURUNIIExJTUlURUQ=,product=SE04MA==,version=NS4xNg==,serial=RGVmYXVsdCBzdHJpbmc=,sku=RGVmYXVsdCBzdHJpbmc=,family=RGVmYXVsdCBzdHJpbmc=,base64=1
sockets: 1
tags: 5;sharegpu;windows
usb0: host=1532:0083
usb1: host=145f:0316
usb2: host=0a12:0001
vga: none
vmgenid: 87821f0a-458f-45da-8691-62fcd515c190
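Before starting this VM, a quick check that the GPU from hostpci0 is still bound to vfio-pci (the address comes from the config above):
lspci -nnk -s 0a:00.0 | grep -i 'in use'
qm start 131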
Bind the Proxmox Interface to a Specific IP
To achieve this setup — where Proxmox’s web interface on port 8006 is only accessible via one specific NIC and not the other — you need to bind the Proxmox web GUI to a specific IP address.
Here’s how you can do that:
🔧 Steps to Bind Ports 8006 and 3128 (SPICE) to a Specific NIC/IP
- Identify the NIC/IPs. Run:
ip a
Let's assume:
- NIC1 (management): 192.212.5.245 (this one should allow port 8006)
- NIC2 (isolated): 10.10.10.10 (this one should block port 8006)
- Edit the Proxmox web GUI service config. Open this file:
nano /etc/default/pveproxy
- Bind it to a specific IP (management interface). Find or add the line:
LISTEN_IP="192.212.5.245"
- Restart the pveproxy and spiceproxy services:
systemctl restart pveproxy
systemctl restart spiceproxy
This change makes the Proxmox GUI listen only on 192.212.5.245, and not on all interfaces.
✅ Optional: Confirm It’s Working
You can test by running:
ss -tuln | grep 8006
You should see:
LISTEN 0 50 192.212.5.245:8006 ...
And not 0.0.0.0:8006 or 10.10.10.10:8006.
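The same check applies to the SPICE proxy on port 3128:
ss -tuln | grep 3128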
Step-by-Step NVIDIA Driver Installation for Proxmox Users
Use GPU in a VM (Passthrough only)
If you’re intentionally passing the GPU to a VM (e.g. Windows or Linux VM with GPU acceleration), then:
You should not install the NVIDIA driver on the Proxmox host.
Instead, install it inside the VM, and keep vfio-pci
bound on the host.
Use GPU on the Proxmox Host or in LXC
Start by finding the correct NVIDIA driver:
https://www.nvidia.com/en-us/drivers
On the Proxmox host:
sudo apt update
sudo apt install pve-headers-$(uname -r) build-essential dkms
wget https://us.download.nvidia.com/XFree86/Linux-x86_64/570.144/NVIDIA-Linux-x86_64-570.144.run
chmod +x ./NVIDIA-Linux-x86_64-570.144.run
sudo ./NVIDIA-Linux-x86_64-570.144.run --dkms
nvidia-smi
nano /etc/modules-load.d/modules.conf
nvidia
nvidia_uvm
ls -al /dev/nvidia*
root@pve:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195, 0 Apr 29 08:21 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Apr 29 08:21 /dev/nvidiactl
crw-rw-rw- 1 root root 511, 0 Apr 29 08:21 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511, 1 Apr 29 08:21 /dev/nvidia-uvm-tools
/dev/nvidia-caps:
total 0
drwxr-xr-x 2 root root 80 Apr 29 08:21 .
drwxr-xr-x 20 root root 4800 Apr 29 08:21 ..
cr-------- 1 root root 236, 1 Apr 29 08:21 nvidia-cap1
cr--r--r-- 1 root root 236, 2 Apr 29 08:21 nvidia-cap2
nano /etc/pve/lxc/115.conf
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 236:* rwm
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps dev/nvidia-caps none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
pct push 115 ./NVIDIA-Linux-x86_64-570.144.run /root/NVIDIA-Linux-x86_64-570.144.run
On the LXC:
sh NVIDIA-Linux-x86_64-570.144.run --no-kernel-module
nvidia-smi
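If nvidia-smi fails inside the container, first confirm the device nodes were bound through (this mirrors the host-side listing above):
ls -al /dev/nvidia*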
For Docker:
# Add Nvidia repository key
apt install -y gpg
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/3bf863cc.pub | gpg --dearmor -o /etc/apt/keyrings/nvidia-archive-keyring.gpg
# Add Nvidia repository
echo "deb [signed-by=/etc/apt/keyrings/nvidia-archive-keyring.gpg] https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/ /" | tee /etc/apt/sources.list.d/nvidia-cuda-debian12.list
# Update package lists
apt update
# Install Nvidia container toolkit
apt install nvidia-container-toolkit
nano /etc/docker/daemon.json
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
}
}
sudo nvidia-ctk runtime configure --runtime=docker
nano /etc/nvidia-container-runtime/config.toml
# Set no-cgroups to true
no-cgroups = true
For testing
# Run a test Docker container to verify GPU usage
docker run --gpus all nvidia/cuda:12.6.1-base-ubuntu24.04 nvidia-smi

If needed, purge the old NVIDIA driver first:
sudo apt remove --purge '^nvidia-.*'
sudo apt autoremove
Note: the cgroup2 device major number changed once after a full reboot (510 -> 511); if the GPU stops working in the container, re-check ls -al /dev/nvidia* and update the lxc.cgroup2.devices.allow entries accordingly.
Optimized Home Lab
Hardware Used
- Raspberry Pi: lightweight services (Pi-hole, Gatus).
- Synology NAS: storage, media (Emby), and document management (Paperless-ngx).
- Proxmox PC: virtualization of the resource-hungry services (VM/LXC).
Software Architecture
1. Raspberry Pi
- Pi-hole: blocks ads and trackers.
- Gatus: monitors service availability.
Raspberry Pi Emulation (Proxmox)
VM-103 | Raspberry: Pi-Hole, Dashy, Gatus | Blocks ads and trackers, monitors service availability. | — |
2. Synology NAS (DSM)
Via Docker:
- Media: Emby.
- Documents: Paperless-ngx.
- Backup: Duplicati.
Synology apps:
- Photos: SynoPhotos ????
3. Proxmox (Main PC)
Container/VM | Applications | Role | VLAN |
---|---|---|---|
VM-105 | Home Assistant | Home automation. | 10 |
VM-101 | pfSense | Router/firewall. | — |
LXC-115 | Frigate + Ollama (Docker) | Video analysis (GPU) + local AI. | 10 |
LXC-200 | DevBox (Docker): Jenkins, development | Development, integration/deployment (CI/CD). | 10 |
LXC-125 | Services (Docker): Firefly III, Transmission, SickChill, NZBGet, FileBot, Paperless-ai | Services (financial management, video). | 10 |
VM-250 | Web server: WordPress, Bounce Weather | Website/blog. | 20 |
Connected Devices (IoT)
- Google Nest and Smart TV:
- Isolated in an IoT VLAN for security.
- Interact with:
- Home Assistant (voice commands, scenarios).
- Emby (streaming from the NAS).
- Controlled via Pi-hole to block ads.
Best Practices
- Network:
- Separate VLANs (Trusted, IoT, Web, Media).
- Firewall (pfSense) to isolate the traffic flows.
- GPU:
- Shared between Frigate and Ollama via Docker in a dedicated LXC.
- Backups:
- Back up Paperless, WordPress, and the Docker configurations.
Network & Application Diagram
graph TD
  %% Entry Point
  Internet --> OrangeBox --> pfSense
  %% VLAN Zones from pfSense
  pfSense --> VLAN10
  pfSense --> VLAN20
  pfSense --> VLAN30
  pfSense --> VLAN40
  pfSense --> RPi[(Raspberry Pi)]
  RPi --- Pihole
  Pihole --- Gatus
  %% VLAN 10 - Trusted
  subgraph "VLAN 10 - Trusted"
    direction TB
    VLAN10 --- VM101["VM-105: Home Assistant"]
    VM101 --- LXC103["LXC-115: Frigate + Ollama"]
    LXC103 --- VM106["LXC-200: Docker DevBox"]
    VM106 --- VM109["LXC-125: Docker Services"]
  end
  %% VLAN 20 - Web
  subgraph "VLAN 20 - Web"
    direction TB
    VLAN20 --- VM108["VM-250 : Web Server - WordPress"]
  end
  %% VLAN 30 - IoT
  subgraph "VLAN 30 - IoT WIP"
    direction TB
    VLAN30 --- GoogleNest[Google Nest]
    GoogleNest --- SmartTV[Smart TV]
  end
  %% VLAN 40 - Media
  subgraph "VLAN 40 - Media"
    direction TB
    VLAN40 --- NAS[(NAS - Synology DSM)]
    NAS --- Emby --- Paperless --- Duplicati
  end
  %% Styling
  style VLAN10 fill:#d5f5e3,stroke:#27ae60
  style VLAN20 fill:#d6eaf8,stroke:#3498db
  style VLAN30 fill:#fadbd8,stroke:#e74c3c
  style VLAN40 fill:#fdedec,stroke:#f39c12
Detailed Legend
Element | Description |
---|---|
🟠 pfSense (VM1) | Router/firewall managing the VLANs and security. |
🟢 Raspberry Pi | Runs Pi-hole (DNS) + Gatus (monitoring). |
🔵 Synology NAS | Central storage + media apps (Emby) and documents (Paperless). |
VLAN 10 (Trusted) | Critical services: HA, Frigate, Ollama, Dev (Docker, Jenkins). |
VLAN 20 (Web) | Exposed services: WordPress |
VLAN 30 (IoT) | Connected devices (Google Nest, Smart TV) isolated for security. |
VLAN 40 (Media) | Media access (Emby) from the Smart TV. |
Key Flows to Remember
- Google Nest/Smart TV → communicate with Home Assistant (VLAN 10) via precise firewall rules.
- Frigate (VLAN 10) → sends alerts to Home Assistant and the Smart TV (via allowed VLAN 30).
- WordPress (VLAN 20) → accessible from the Internet (port forwarding controlled by pfSense).
- Paperless (NAS) → consumed by the user via a web interface that is NOT exposed.
pfSense
Example pfSense Configuration (VLAN 30 → VLAN 10 Rules)
Action | Source | Destination | Port | Description |
---|---|---|---|---|
✅ Allow | VLAN30 | VM-105 (HA) | 8123 | Access to the HA interface. |
✅ Allow | VLAN30 | LXC-115 (Frigate) | 5000 | Video stream for TV display. |
🚫 Block | VLAN30 | VLAN10 | * | Block all other access. |
Best Practices
For the Nest devices
- Firmware updates: check regularly via the Google Home app.
- Isolation: block access to the other VLANs except for:
- Home Assistant (port 8123).
For the Smart TV
- Custom DNS: point it at Pi-hole (Raspberry Pi) to block ads.
- In pfSense: DHCP → DNS option = Pi-hole IP.
- Disable tracking: turn off ACR (Automatic Content Recognition) in the TV settings.
Smart TV Integration
Network Configuration
- VLAN: same IoT VLAN (30) as the Nest devices, to keep things simple.
- pfSense rules:
- Allow the TV to access:
- The Internet (Netflix/YouTube streaming).
- Emby/Jellyfin (NAS) via the Media VLAN (e.g. VLAN 40 if it exists).
Interaction with the Home Lab
- For Emby/Jellyfin (NAS):
- Mount a Synology shared folder over SMB/NFS so the TV can reach it.
- Example Emby configuration: docker-compose.yml on the NAS with volumes: /volume1/medias:/media (sketched below).
- Control via Home Assistant:
- Integrate the TV via HDMI-CEC or a vendor-specific API (e.g. Samsung Tizen, LG webOS).
- Possible automations:
- Turn the TV on/off when Frigate detects motion.
- Show the cameras on the TV via a dashboard.
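A minimal sketch of that Emby volume mapping in a docker-compose.yml on the NAS (image tag and port mapping are assumptions):
services:
  emby:
    image: emby/embyserver:latest
    ports:
      - "8096:8096"
    volumes:
      - /volume1/medias:/media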
Google Nest Integration (Google Assistant)
Network Configuration
- Recommended VLAN: isolate them in an IoT VLAN (e.g. VLAN 30) to limit access to the rest of the network.
- For pfSense (VM-101): create VLAN 30 → dedicated interface → firewall rules: allow OUT to the Internet (HTTPS/DNS); block access to the other VLANs (with exceptions such as Home Assistant).
Communication with Home Assistant (VM-105)
- Via the local protocol:
- Enable the Google Assistant SDK in Home Assistant.
- Use Nabu Casa (or a custom domain with HTTPS) for the secure link.
- Scenarios:
- Control lights/plugs via voice commands.
- Sync with your calendars/reminders.
DNS
💡 Network Overview (Goal)
- Orange Box (ISP Router/Gateway):
  - IP: 192.168.1.1
  - LAN/Internet gateway
- pfSense (Firewall/Router):
  - WAN interface: gets an IP from 192.168.1.0/24 (e.g. 192.168.1.2)
  - LAN interface: new network 192.212.5.0/24 (e.g. 192.212.5.1)
- Home Lab Devices:
  - On VLANs behind pfSense
- Pi-hole:
  - Installed behind pfSense (e.g. 192.212.5.2)
🧠 What You Want:
- Devices on the lab network use Pi-hole for DNS.
- pfSense uses Pi-hole for DNS too (optional but recommended).
- Internet access for lab network is through pfSense ➝ Orange Box ➝ Internet.
- Lab network stays isolated from home network.
✅ Step-by-Step DNS Configuration
1. Install and Set Up Pi-hole
- Install Pi-hole on a device behind pfSense (VM, Raspberry Pi, etc.).
- Give it a static IP, e.g. 192.212.5.2
- During setup, don't use DHCP (let pfSense handle that).
- Choose a public upstream DNS (Cloudflare 1.1.1.1, Google 8.8.8.8, etc.)
2. Configure pfSense to Use Pi-hole as DNS
a. Set DNS Server
- Go to System > General Setup in pfSense.
- In the DNS Server Settings, add:
DNS Server 1: 192.212.5.2 (your Pi-hole IP)
- Uncheck “Allow DNS server list to be overridden by DHCP/PPP on WAN” — this avoids getting ISP’s DNS from the Orange Box.
b. Disable DNS Resolver (Optional)
If you don’t want pfSense to do any DNS resolution, you can:
- Go to Services > DNS Resolver, and disable it.
- Or keep it enabled for pfSense’s internal name resolution, but forward to Pi-hole.
3. Configure DHCP on the pfSense VLANs
- Go to Services > DHCP Server > LAN
- Under “DNS Servers”, set:
DNS Server: 192.212.5.2
- Now, all clients getting IPs from pfSense will also use Pi-hole as DNS.
4. (Optional) Block DNS Leaks
To prevent clients from bypassing Pi-hole (e.g., hardcoded DNS like 8.8.8.8):
- Go to Firewall > NAT > Port Forward
- Create rules to redirect all port 53 (DNS) traffic to Pi-hole IP.
Example:
- Interface: LAN
- Protocol: TCP/UDP
- Destination Port: 53
- Redirect target IP:
192.212.5.2
(Pi-hole) - Redirect Port: 53
A Few Useful Reminders
- pfSense is the gateway of each VLAN → so one IP per VLAN
- The DNS of every client in every VLAN must point to the Pi-hole
- pfSense can redirect DNS requests to the Pi-hole via a NAT rule (port 53) if necessary
IP / pfSense / Proxmox Configuration
Element | IP Scheme | pfSense Rule |
---|---|---|
PfSense | .5.1 (VM-101) | interfaces: LAN -> .5.X, LAN_VLAN10 -> .10.X, LAN_VLAN20 -> .20.X, LAN_VLAN30 -> .30.X, LAN_VLAN40 -> .40.X |
PiHole | .5.2 (VM-102) | NAT: LAN*** address:53 -> 192.212.5.2:53 |
Raspberry | .5.3 (VM-103) | pihole and gatus |
HomeAssistant | .10.105 | NAT redirect for the old Home Assistant address: LAN .30.105:1883 -> .10.105:8123 |
FrigateOllama | .10.115 | |
DockerServices | .10.125 | |
Kubuntu | .10.135 | |
DockerDevbox | .10.200 | |
WebServer | .20.250 | |
synology | .40.111 | (network/interfaces) auto vmbr2.40 |
qt21101l | .5.101 | |
px30_evb | .5.102 | |
Octoprint | .5.110 | |
Doorbell | .5.150 | |
dome01 | .5.151 | |
dome02 | .5.152 | |
ipcam_dome | .5.160 | |
ipcam_0001 | .5.161 | |
ipcam_0002 | .5.162 | |
nano /etc/network/interfaces
auto vmbr2.10
iface vmbr2.10 inet static
address 192.212.10.245/24
vlan-raw-device vmbr2
auto vmbr2.20
iface vmbr2.20 inet static
address 192.212.20.245/24
vlan-raw-device vmbr2
auto vmbr2.30
iface vmbr2.30 inet static
address 192.212.30.245/24
vlan-raw-device vmbr2
auto vmbr2.40
iface vmbr2.40 inet static
address 192.212.40.245/24
vlan-raw-device vmbr2
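To apply the new VLAN sub-interfaces without a reboot (Proxmox uses ifupdown2, which provides ifreload):
ifreload -a
ip -br addr show | grep vmbr2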