Create a Self-Signed SSL Certificate on Windows

You can create a self-signed certificate using PowerShell.

  1. Open PowerShell as Administrator.
  2. Run this command to create a new self-signed cert and export the key and certificate as .pem files:
# Define file paths
$certPath = "C:\Users\<YourUser>\bolt-certs"
New-Item -ItemType Directory -Path $certPath -Force

# Create self-signed cert
$cert = New-SelfSignedCertificate -DnsName "localhost" -CertStoreLocation "cert:\LocalMachine\My"

# Export certificate (public part)
Export-Certificate -Cert $cert -FilePath "$certPath\cert.pem"

# Export private key as PFX
$pfxPath = "$certPath\cert.pfx"
$password = ConvertTo-SecureString -String "YourStrongPassword" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath $pfxPath -Password $password
  3. Convert the .pfx file to separate key and certificate files (Docker usually wants a .key and a .crt/.pem):
    You can do this using OpenSSL (if you have it installed, e.g. via Git Bash or WSL):
# Navigate to the cert folder (adjust the path)
cd /c/Users/<YourUser>/bolt-certs

# Extract key
openssl pkcs12 -in cert.pfx -nocerts -out key.pem -nodes -password pass:YourStrongPassword

# Extract cert
openssl pkcs12 -in cert.pfx -clcerts -nokeys -out cert.pem -password pass:YourStrongPassword
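
To sanity-check that the extracted key and certificate belong together, you can compare their modulus hashes (optional; assumes the RSA default of New-SelfSignedCertificate and the file names used above):

# Both commands should print the same MD5 hash
openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in key.pem | openssl md5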

Mirror pve-root Using LVM RAID1

What this means:

You configure a mirror (RAID1) so that any write to pve-root is also written to sda1. If your NVMe dies, you can still boot from sda1.

🧱 Requirements:

  • sda1 must be equal to or larger than pve-root (96 GB in your case)
  • You must convert pve-root into a RAID1 logical volume (LVM mirror)
  • Some downtime or maintenance mode required

🧰 How-To (Overview Only):

  • Backup First! (Always)
  • Check current setup:
    • lvdisplay pve/root
  • Wipe and prep the spare disk (here /dev/sda):
    • pvcreate /dev/sda
    • vgextend pve /dev/sda
  • Convert pve-root to RAID1:
    • lvconvert --type mirror -m1 --mirrorlog core pve/root /dev/sda

This mirrors pve/root from your NVMe disk onto sda.

Option | Meaning
--type mirror | Convert the LV to a mirror (RAID1)
-m1 | Use 1 mirror copy = 2 devices in total
--mirrorlog core | Store the mirror log in RAM
pve/root | The logical volume to convert (your root)
/dev/sda | The new disk to mirror onto
  • Confirm with:
    • lvs -a -o +devices
root@pve:~# lvs -a -o +devices
  LV              VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert Devices
  data            pve twi-aotz-- 794.30g             0.01   0.24                             data_tdata(0)
  [data_tdata]    pve Twi-ao---- 794.30g                                                     /dev/nvme0n1p3(26624)
  [data_tmeta]    pve ewi-ao----   8.10g                                                     /dev/nvme0n1p3(229966)
  [lvol0_pmspare] pve ewi-------   8.10g                                                     /dev/nvme0n1p3(232040)
  root            pve mwi-aom---  96.00g                                    100.00           root_mimage_0(0),root_mimage_1(0)
  [root_mimage_0] pve iwi-aom---  96.00g                                                     /dev/nvme0n1p3(2048)
  [root_mimage_1] pve iwi-aom---  96.00g                                                     /dev/sda(0)
  swap            pve -wi-ao----   8.00g                                                     /dev/nvme0n1p3(0)

  • Optional but smart: Update your bootloader (grub) to know how to boot from either disk:
    • update-initramfs -u
    • update-grub
    • grub-install /dev/sda

✅ Pros:

  • Real-time mirroring (RAID1)
  • Transparent failover if one device fails (bootable if configured)

⚠️ Cons:

  • Adds complexity
  • If misconfigured, can break boot
  • Doesn’t protect against file deletion or config mistakes (RAID is not a backup)

How to check the mirror status, detect failures, and know when maintenance is needed (a small check script is sketched below):

Use lvdisplay for more detail:

lvdisplay /dev/pve/root

Look for:

  • Mirror status: OK (or similar)
  • If a device has failed, you’ll see something like “failed” or “inconsistent”
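
If you want this checked automatically, here is a minimal sketch that could run from cron; copy_percent and lv_health_status are standard lvs report fields, and the logger call is just a placeholder for whatever alerting you use:

#!/bin/bash
# Warn if the pve/root mirror is out of sync or reports a health problem
SYNC=$(lvs --noheadings -o copy_percent pve/root | tr -d ' ')
HEALTH=$(lvs --noheadings -o lv_health_status pve/root | tr -d ' ')
if [ "$SYNC" != "100.00" ] || [ -n "$HEALTH" ]; then
    logger -t lvm-mirror-check "pve/root mirror degraded (sync=${SYNC:-?}% health=${HEALTH:-?})"
fi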

Suggested Disk Layout Strategy

Use Case | Current Disk | Suggested Role
Proxmox Root FS | nvme0n1 | ✅ Keep for now (fast, low wear)
VM/LXC storage | sdc1 (SSD) | ✅ Good, isolate high I/O loads
Backup / ISOs | sdb1 (HDD) | ✅ Archive/slow storage
Spare/Buffer | sda1 (SSD) | ⚠️ Could mirror root or serve as L2ARC/ZIL (if ZFS)

2. Watch NVMe Write Wear Over Time

Your NVMe shows:

Percentage Used: 0%

That’s excellent — you’re still early in the wear cycle. But with Proxmox, check every few months using:

smartctl -a /dev/nvme0n1 | grep -i percentage
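
To make this a periodic check rather than something to remember, a small cron entry works; the schedule and log path below are arbitrary assumptions, e.g. in /etc/cron.d/nvme-wear:

# Log NVMe wear on the 1st of each month at 06:00
0 6 1 * * root smartctl -a /dev/nvme0n1 | grep -i percentage >> /var/log/nvme-wear.log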

3. Add Log Management

To reduce wear:

  • Use tmpfs for /var/log (if RAM allows; see the sketch below)
  • Limit journald persistence:
# /etc/systemd/journald.conf
Storage=volatile
SystemMaxUse=200M
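
A possible way to apply both measures (the tmpfs size is an assumption, and logs kept in RAM are lost on reboot):

# /etc/fstab - keep /var/log in RAM
tmpfs  /var/log  tmpfs  defaults,noatime,size=256m  0 0

# Apply the journald.conf changes
systemctl restart systemd-journald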

4. Consider Backup OS Snapshots or Mirroring Root

Use sda1 to:

  • Mirror pve-root using LVM RAID1
  • Or just use it as a backup target via rsync or LVM snapshots (a sketch follows below)
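
A minimal rsync-based sketch, assuming sda1 gets an ext4 filesystem mounted at /mnt/root-backup (the mount point and exclude list are assumptions, and mkfs destroys whatever is on sda1):

mkfs.ext4 /dev/sda1                 # one-time, wipes sda1
mkdir -p /mnt/root-backup
mount /dev/sda1 /mnt/root-backup
rsync -aAXv --delete \
  --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*"} \
  / /mnt/root-backup/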

Switching GPU Binding (Live): Toggle GPU Driver Script (vfio-pci ↔ nvidia)

A single NVIDIA GPU cannot:

  • Be passed through to a VM (via vfio-pci)
  • And be used on the host or in LXC at the same time

Why?

Because when you bind the GPU to vfio-pci on boot, it’s invisible to the host and cannot be used by NVIDIA’s kernel driver (nvidia.ko).

Switch Between VM and LXC Use (Rebind on demand)

If you don’t need both at the same time, you can manually switch the GPU between:

  1. Passthrough to VM (bind to vfio-pci)
  2. Use on host / LXC (bind to nvidia)

This lets you:

  • Use the GPU for nvidia-smi or CUDA in an LXC container
  • Then later give it back to the VM

Here's a single script that checks which driver is currently bound to your GPU and automatically toggles between:

  • vfio-pci (for passthrough to VM)
  • nvidia (for use on host or LXC)
#!/bin/bash

# === CONFIGURATION ===
GPU="0000:0a:00.0"
AUDIO="0000:0a:00.1"
VMID=131         # Your Windows VM ID
LXCID=115        # Your LXC container ID using the GPU

# === FUNCTIONS ===

get_driver() {
    basename "$(readlink /sys/bus/pci/devices/$1/driver 2>/dev/null)"
}

unbind_driver() {
    echo "$1" > "/sys/bus/pci/devices/$1/driver/unbind"
}

bind_driver() {
    echo "$1" > "/sys/bus/pci/drivers/$2/bind"
}

switch_to_nvidia() {
    echo "→ Switching to NVIDIA driver (LXC use)..."

    echo "Stopping VM $VMID..."
    qm stop $VMID
    sleep 3

    echo "Unbinding GPU from current driver..."
    unbind_driver "$GPU"
    unbind_driver "$AUDIO"

    echo "Loading NVIDIA modules..."
    modprobe nvidia nvidia_uvm nvidia_drm nvidia_modeset

    echo "Binding GPU to nvidia..."
    bind_driver "$GPU" nvidia
    bind_driver "$AUDIO" snd_hda_intel

    echo "Starting LXC container $LXCID..."
    pct start $LXCID

    echo "âś” Switched to NVIDIA mode."
}

switch_to_vfio() {
    echo "→ Switching to VFIO (VM passthrough)..."

    echo "Stopping LXC container $LXCID..."
    pct stop $LXCID
    sleep 3

    echo "Unbinding GPU from current driver..."
    unbind_driver "$GPU"
    unbind_driver "$AUDIO"

    echo "Loading VFIO modules..."
    modprobe vfio-pci

    echo "Binding GPU to vfio-pci..."
    bind_driver "$GPU" vfio-pci
    bind_driver "$AUDIO" vfio-pci

    echo "Starting VM $VMID..."
    qm start $VMID

    echo "âś” Switched to VFIO mode."
}

# === MAIN ===

MODE="$1"
CURRENT_DRIVER=$(get_driver "$GPU")
echo "Detected GPU driver: ${CURRENT_DRIVER:-none}"

case "$MODE" in
    --to-nvidia)
        switch_to_nvidia
        ;;
    --to-vfio)
        switch_to_vfio
        ;;
    "")
        if [ "$CURRENT_DRIVER" == "vfio-pci" ]; then
            switch_to_nvidia
        elif [ "$CURRENT_DRIVER" == "nvidia" ]; then
            switch_to_vfio
        elif [ -z "$CURRENT_DRIVER" ]; then
            echo "⚠️ No driver bound. Defaulting to NVIDIA..."
            switch_to_nvidia
        else
            echo "❌ Unknown driver bound: $CURRENT_DRIVER"
            exit 1
        fi
        ;;
    *)
        echo "Usage: $0 [--to-nvidia | --to-vfio]"
        exit 1
        ;;
esac

# === FINAL STATUS DISPLAY ===
echo
echo "🔍 Final GPU driver status:"
SHORT_GPU=$(echo "$GPU" | cut -d':' -f2-)
lspci -k | grep "$SHORT_GPU" -A 3
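
Assuming you save the script as toggle-gpu.sh (the name used in the examples below), make it executable first:

chmod +x toggle-gpu.sh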

Auto-toggle based on current driver

./toggle-gpu.sh

Force switch to NVIDIA for LXC

./toggle-gpu.sh --to-nvidia 

Force switch to VFIO for VM passthrough

./toggle-gpu.sh --to-vfio 

Bind the Proxmox Interface to a Specific IP

To achieve this setup — where Proxmox’s web interface on port 8006 is only accessible via one specific NIC and not the other — you need to bind the Proxmox web GUI to a specific IP address.

Here’s how you can do that:


🔧 Steps to Bind Ports 8006 and 3128 (SPICE) to a Specific NIC/IP

  1. Identify the NICs/IPs: run ip a. Let's assume:
    • NIC1 (management): 192.212.5.245 — this should allow port 8006
    • NIC2 (isolated): 10.10.10.10 — this should block port 8006
  2. Edit the Proxmox web GUI service config: open this file: nano /etc/default/pveproxy
  3. Bind it to a specific IP (management interface): find or add the line LISTEN_IP="192.212.5.245" (see the example file below).
  4. Restart the services: systemctl restart pveproxy and systemctl restart spiceproxy. This makes the Proxmox GUI listen only on 192.212.5.245, not on all interfaces.
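
For reference, the resulting /etc/default/pveproxy only needs that single line (the IP being the management address from step 1):

# /etc/default/pveproxy
LISTEN_IP="192.212.5.245"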

✅ Optional: Confirm It’s Working

You can test by running:

ss -tuln | grep 8006

You should see:

LISTEN  0  50  192.212.5.245:8006  ...

And not 0.0.0.0:8006 or 10.10.10.10:8006.

Step-by-Step NVIDIA Driver Installation for Proxmox Users

Use GPU in a VM (Passthrough only)

If you’re intentionally passing the GPU to a VM (e.g. Windows or Linux VM with GPU acceleration), then:

You should not install the NVIDIA driver on the Proxmox host.

Instead, install it inside the VM, and keep vfio-pci bound on the host.

Use GPU on the Proxmox Host or in LXC

Start by finding the correct NVIDIA driver:

https://www.nvidia.com/en-us/drivers

On the Proxmox host:


sudo apt update
sudo apt install pve-headers-$(uname -r) build-essential dkms

wget https://us.download.nvidia.com/XFree86/Linux-x86_64/570.144/NVIDIA-Linux-x86_64-570.144.run
chmod +x ./NVIDIA-Linux-x86_64-570.144.run

# Install the driver with DKMS so it survives kernel upgrades
sudo ./NVIDIA-Linux-x86_64-570.144.run --dkms

nvidia-smi

# Load the NVIDIA modules at boot: add these two module names to /etc/modules-load.d/modules.conf
nano /etc/modules-load.d/modules.conf
nvidia
nvidia_uvm

# Check that the device nodes exist
ls -al /dev/nvidia*
root@pve:~# ls -al /dev/nvidia*
crw-rw-rw- 1 root root 195,   0 Apr 29 08:21 /dev/nvidia0
crw-rw-rw- 1 root root 195, 255 Apr 29 08:21 /dev/nvidiactl
crw-rw-rw- 1 root root 511,   0 Apr 29 08:21 /dev/nvidia-uvm
crw-rw-rw- 1 root root 511,   1 Apr 29 08:21 /dev/nvidia-uvm-tools

/dev/nvidia-caps:
total 0
drwxr-xr-x  2 root root     80 Apr 29 08:21 .
drwxr-xr-x 20 root root   4800 Apr 29 08:21 ..
cr--------  1 root root 236, 1 Apr 29 08:21 nvidia-cap1
cr--r--r--  1 root root 236, 2 Apr 29 08:21 nvidia-cap2
nano /etc/pve/lxc/115.conf   # add the lines below [1]
lxc.cgroup2.devices.allow: c 195:* rwm
lxc.cgroup2.devices.allow: c 236:* rwm
lxc.cgroup2.devices.allow: c 511:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-caps dev/nvidia-caps none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm-tools dev/nvidia-uvm-tools none bind,optional,create=file
pct push 115 ./NVIDIA-Linux-x86_64-570.144.run /root/NVIDIA-Linux-x86_64-570.144.run

In the LXC container:

sh NVIDIA-Linux-x86_64-570.144.run  --no-kernel-module
nvidia-smi

For Docker:

# Add Nvidia repository key
apt install -y gpg
curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/3bf863cc.pub | gpg --dearmor -o /etc/apt/keyrings/nvidia-archive-keyring.gpg

# Add Nvidia repository
echo "deb [signed-by=/etc/apt/keyrings/nvidia-archive-keyring.gpg] https://developer.download.nvidia.com/compute/cuda/repos/debian12/x86_64/ /" | tee /etc/apt/sources.list.d/nvidia-cuda-debian12.list

# Update package lists
apt update

# Install Nvidia container toolkit
apt install nvidia-container-toolkit

nano /etc/docker/daemon.json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
sudo nvidia-ctk runtime configure --runtime=docker

nano /etc/nvidia-container-runtime/config.toml

# Set no-cgroups to true
no-cgroups = true

For testing

# Run a test Docker container to verify GPU usage
docker run --gpus all nvidia/cuda:12.6.1-base-ubuntu24.04 nvidia-smi

If needed, purge the old NVIDIA driver first:

sudo apt remove --purge '^nvidia-.*'
sudo apt autoremove

Sources:

https://yomis.blog/nvidia-gpu-in-proxmox-lxc

https://hostbor.com/gpu-passthrough-in-lxc-containers

  1. Be aware that the cgroup2 device number changed once after a full reboot (510 -> 511). ↩︎

Optimized Home Lab

Hardware Used

  • Raspberry Pi: lightweight services (Pi-hole, Gatus).
  • Synology NAS: storage, media (Emby), and document management (Paperless-ngx).
  • Proxmox PC: virtualization of the heavier services (VM/LXC).

Software Architecture

1. Raspberry Pi

  • Pi-hole: blocks ads and trackers.
  • Gatus: monitors service availability.

Raspberry Pi emulation (Proxmox):

VM-103 (Raspberry): Pi-Hole, Dashy, Gatus; blocks ads and trackers, monitors service availability.

2. Synology NAS (DSM)

Via Docker:

  • Media: Emby.
  • Documents: Paperless-ngx.
  • Backup: Duplicati.

Synology apps:

  • Photos: SynoPhotos (to be confirmed)

3. Proxmox (Main PC)

Container/VM | Applications | Role | VLAN
VM-105 | Home Assistant | Home automation | 10
VM-101 | pfSense | Router/firewall |
LXC-115 | Frigate + Ollama (Docker) | Video analysis (GPU) + local AI | 10
LXC-200 | DevBox (Docker): Jenkins, development | Development, integration/deployment (CI/CD) | 10
LXC-125 | Services (Docker): Firefly III, Transmission, SickChill, NZBGet, FileBot, Paperless-ai | Services (financial management, video) | 10
VM-250 | Web server: WordPress, Bounce Weather | Website/blog | 20

Connected Devices (IoT)

  • Google Nest and Smart TV:
    • Isolated in an IoT VLAN for security.
    • Interact with:
      • Home Assistant (voice commands, automations).
      • Emby (streaming from the NAS).
    • Filtered through Pi-hole to block ads.

Best Practices

  • Network:
    • Separate VLANs (Trusted, IoT, Web, Media).
    • Firewall (pfSense) to isolate the traffic flows.
  • GPU:
    • Shared between Frigate and Ollama via Docker in a dedicated LXC.
  • Backups:
    • Back up Paperless, WordPress, and the Docker configurations.

Network & Application Diagram


graph TD
  %% Entry Point
  Internet --> OrangeBox --> pfSense

  %% VLAN Zones from pfSense
  pfSense --> VLAN10
  pfSense --> VLAN20
  pfSense --> VLAN30
  pfSense --> VLAN40
  pfSense --> RPi[(Raspberry Pi)]
  RPi --- Pihole
  Pihole --- Gatus

  %% VLAN 10 - Trusted
  subgraph "VLAN 10 - Trusted"
    direction TB
    VLAN10 --- VM101["VM-105: Home Assistant"]
    VM101 --- LXC103["LXC-115: Frigate + Ollama"]
    LXC103 --- VM106["LXC-200: Docker DevBox"]
    VM106 --- VM109["LXC-125: Docker Services"]

  end

  %% VLAN 20 - Web
  subgraph "VLAN 20 - Web"
    direction TB
    VLAN20 --- VM108["VM-250 : Web Server - WordPress"]
  end

  %% VLAN 30 - IoT
  subgraph "VLAN 30 - IoT WIP"
    direction TB
    VLAN30 --- GoogleNest[Google Nest]
    GoogleNest --- SmartTV[Smart TV]
  end

  %% VLAN 40 - Media
  subgraph "VLAN 40 - Media"
    direction TB
    VLAN40 --- NAS[(NAS - Synology DSM)]
    NAS --- Emby --- Paperless --- Duplicati
  end

  %% Styling
  style VLAN10 fill:#d5f5e3,stroke:#27ae60
  style VLAN20 fill:#d6eaf8,stroke:#3498db
  style VLAN30 fill:#fadbd8,stroke:#e74c3c
  style VLAN40 fill:#fdedec,stroke:#f39c12

Detailed Legend

Element | Description
🟠 pfSense (VM1) | Router/firewall managing the VLANs and security.
🟢 Raspberry Pi | Runs Pi-hole (DNS) + Gatus (monitoring).
🔵 Synology NAS | Central storage + media (Emby) and document (Paperless) apps.
VLAN 10 (Trusted) | Critical services: HA, Frigate, Ollama, Dev (Docker, Jenkins).
VLAN 20 (Web) | Exposed services: WordPress.
VLAN 30 (IoT) | Connected devices (Google Nest, Smart TV) isolated for security.
VLAN 40 (Media) | Media access (Emby) from the Smart TV.

Key Flows to Remember

  1. Google Nest / Smart TV → communicate with Home Assistant (VLAN 10) through specific firewall rules.
  2. Frigate (VLAN 10) → sends alerts to Home Assistant and the Smart TV (via allowed VLAN 30).
  3. WordPress (VLAN 20) → reachable from the Internet (port forwarding controlled by pfSense).
  4. Paperless (NAS) → used through a web interface that is NOT exposed.

pfSense

Example pfSense Configuration (VLAN 30 → VLAN 10 Rules)

Action | Source | Destination | Port | Description
✅ Allow | VLAN30 | VM-105 (HA) | 8123 | Access to the HA interface.
✅ Allow | VLAN30 | LXC-115 (Frigate) | 5000 | Video stream for TV display.
🚫 Block | VLAN30 | VLAN10 | * | Block all other access.

Best Practices

For the Nest devices

  • Firmware updates: check regularly via the Google Home app.
  • Isolation: block access to the other VLANs except for:
    • Home Assistant (port 8123).

For the Smart TV

  • Custom DNS: point it to Pi-hole (Raspberry Pi) to block ads.
    • In pfSense: DHCP → DNS option = Pi-hole IP.
  • Disable tracking: turn off ACR (Automatic Content Recognition) in the TV settings.

Smart TV Integration

Network Configuration

  • VLAN: same IoT VLAN (30) as the Nest devices, to keep things simple.
  • pfSense rules:
    • Allow the TV to access:
      • The Internet (Netflix/YouTube streaming).
      • Emby/Jellyfin (NAS) via the Media VLAN (e.g. VLAN 40 if it exists).

Interaction with the Home Lab

  • For Emby/Jellyfin (NAS):
    • Mount a Synology shared folder over SMB/NFS that the TV can reach.
    • Example Emby configuration in docker-compose.yml (on the NAS): volumes: - /volume1/medias:/media
  • Control via Home Assistant:
    • Integrate the TV via HDMI-CEC or a vendor-specific API (e.g. Samsung Tizen, LG webOS).
    • Possible automations:
      • Turn the TV on/off when Frigate detects motion.
      • Show the cameras on the TV via a dashboard.

Google Nest Integration (Google Assistant)

Network Configuration

  • Recommended VLAN: isolate them in an IoT VLAN (e.g. VLAN 30) to limit access to the rest of the network.
    • For pfSense (VM-101): create VLAN 30 → dedicated interface → firewall rules: allow OUT to the Internet (HTTPS/DNS); block access to the other VLANs (except exceptions such as Home Assistant).

Communication with Home Assistant (VM-101)

  • Via the local protocol:
    • Enable the Google Assistant SDK in Home Assistant.
    • Use Nabu Casa (or a custom domain with HTTPS) for the secure link.
  • Scenarios:
    • Control lights/plugs with voice commands.
    • Sync with your calendars/reminders.

DNS

💡 Network Overview (Goal)

  • Orange Box (ISP Router/Gateway):
    • IP: 192.168.1.1
    • LAN/Internet Gateway
  • pfSense (Firewall/Router):
    • WAN Interface: Gets IP from 192.168.1.0/24 (e.g. 192.168.1.2)
    • LAN Interface: New network 192.212.5.0/24 (e.g. 192.212.5.1)
  • Home Lab Devices:
    • On VLANs behind pfSense
  • Pi-hole:
    • Installed behind pfSense (e.g. 192.212.5.2)

🧠 What You Want:

  1. Devices on the lab network use Pi-hole for DNS.
  2. pfSense uses Pi-hole for DNS too (optional but recommended).
  3. Internet access for the lab network is through pfSense → Orange Box → Internet.
  4. Lab network stays isolated from home network.

✅ Step-by-Step DNS Configuration

1. Install and Set Up Pi-hole

  • Install Pi-hole on a device behind pfSense (VM, Raspberry Pi, etc.).
  • Give it a static IP, e.g. 192.212.5.2
  • During setup, don’t use DHCP (let pfSense handle that).
  • Choose public upstream DNS (Cloudflare 1.1.1.1, Google 8.8.8.8, etc.)

2. Configure pfSense to Use Pi-hole as DNS

a. Set DNS Server
  • Go to System > General Setup in pfSense.
  • In the DNS Server Settings, add: DNS Server 1: 192.212.5.2 (your Pi-hole IP)
  • Uncheck “Allow DNS server list to be overridden by DHCP/PPP on WAN” — this avoids getting ISP’s DNS from the Orange Box.
b. Disable DNS Resolver (Optional)

If you don’t want pfSense to do any DNS resolution, you can:

  • Go to Services > DNS Resolver, and disable it.
  • Or keep it enabled for pfSense’s internal name resolution, but forward to Pi-hole.

3. Configure DHCP on the pfSense VLANs

  • Go to Services > DHCP Server > LAN
  • Under “DNS Servers”, set: DNS Server: 192.212.5.2
  • Now, all clients getting IPs from pfSense will also use Pi-hole as DNS.

4. (Optional) Block DNS Leaks

To prevent clients from bypassing Pi-hole (e.g., hardcoded DNS like 8.8.8.8):

  • Go to Firewall > NAT > Port Forward
  • Create rules to redirect all port 53 (DNS) traffic to Pi-hole IP.

Example:

  • Interface: LAN
  • Protocol: TCP/UDP
  • Destination Port: 53
  • Redirect target IP: 192.212.5.2 (Pi-hole)
  • Redirect Port: 53

A Few Useful Reminders

  • pfSense is the gateway of each VLAN → so one IP per VLAN
  • Each client's DNS in each VLAN must point to the Pi-hole
  • pfSense can redirect DNS requests to the Pi-hole with a NAT rule (port 53) if necessary

IP / pfSense / Proxmox Configuration

Element | IP scheme | pfSense rule / notes
pfSense | .5.1 (VM-101) | Interfaces: LAN -> .5.X, LAN_VLAN10 -> .10.X, LAN_VLAN20 -> .20.X, LAN_VLAN30 -> .30.X, LAN_VLAN40 -> .40.X
PiHole | .5.2 (VM-102) | NAT: LAN *** address:53 -> 192.212.5.2:53
Raspberry | .5.3 (VM-103) | Pi-hole and Gatus
HomeAssistant | .10.105 | NAT redirect for the old Home Assistant address: LAN .30.105:1883 -> .10.105:8123
FrigateOllama | .10.115 |
DockerServices | .10.125 |
Kubuntu | .10.135 |
DockerDevbox | .10.200 |
WebServer | .20.250 |
synology | .40.111 | Proxmox /etc/network/interfaces: vmbr2.40 stanza (see below)
qt21101l | .5.101 |
px30_evb | .5.102 |
Octoprint | .5.110 |
Doorbell | .5.150 |
dome01 | .5.151 |
dome02 | .5.152 |
ipcam_dome | .5.160 |
ipcam_0001 | .5.161 |
ipcam_0002 | .5.162 |

nano /etc/network/interfaces

auto vmbr2.10
iface vmbr2.10 inet static
    address 192.212.10.245/24
    vlan-raw-device vmbr2
auto vmbr2.20
iface vmbr2.20 inet static
    address 192.212.20.245/24
    vlan-raw-device vmbr2
auto vmbr2.30
iface vmbr2.30 inet static
    address 192.212.30.245/24
    vlan-raw-device vmbr2
auto vmbr2.40
iface vmbr2.40 inet static
    address 192.212.40.245/24
    vlan-raw-device vmbr2
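
On a recent Proxmox install (ifupdown2), these VLAN interfaces can usually be applied without a reboot; a quick sketch to apply and verify:

ifreload -a                       # apply /etc/network/interfaces changes
ip -br addr show | grep vmbr2     # check the vmbr2.X addresses are up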

Install Frigate

Installation:

https://www.hacf.fr/installation-frigate-proxmox/

https://community-scripts.github.io/ProxmoxVE/scripts?id=frigate

sudo apt -y install nfs-common
sudo apt -y install cifs-utils
sudo mkdir /Ftp

 sudo nano /etc/fstab
 //192.212.5.111/40-Ftp                       /Ftp                  cifs rw,credentials=/root/.sharelogin,nobrl,_netdev,uid=1000,gid=1000 0 0

sudo ln -s /Ftp/frigate /media/
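
Before rebooting, you can check the new fstab entry right away (assumes the share and mount point defined above):

sudo mount -a      # mount everything listed in /etc/fstab
df -h /Ftp         # confirm the CIFS share is mounted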

In Home Assistant:

https://github.com/blakeblackshear/frigate-hass-integration

https://github.com/blakeblackshear/frigate-hass-addons

low space

On my server (running Docker), the root filesystem sometimes ends up with 0% free space:

df -h
Filesystem                    Size  Used Avail Use% Mounted on
tmpfs                          86M   11M   75M  13% /run
/dev/sda2                      20G   20G     0 100% /
tmpfs                         476M     0  476M   0% /dev/shm
tmpfs                         5,0M     0  5,0M   0% /run/lock
192.212.40.6:/6-40-SystemSvg  227G   32G  184G  15% /SystemSvg
192.212.40.6:/9-VideoClub     1,8T  774G  967G  45% /VideoClub
tmpfs                         146M  8,0K  146M   1% /run/user/1000

docker clean non essential stuff

docker system prune -a
docker volume rm $(docker volume ls -qf dangling=true)
docker system prune --all --volumes --force
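
Before and after pruning, docker system df shows where Docker's space actually goes (images, containers, volumes, build cache):

docker system df
docker system df -v    # per-image / per-volume detail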

empty trash

rm -rf ~/.local/share/Trash/*

or

sudo apt install trash-cli
trash-empty

system clean sweep

sudo apt-get autoremove
sudo apt-get clean
sudo apt-get autoclean

find big stuff in file system

sudo du -h --max-depth=1 | sort -h
0       ./dev
0       ./proc
0       ./sys
4,0K    ./cdrom
4,0K    ./media
4,0K    ./mnt
4,0K    ./srv
4,0K    ./VideoClub
16K     ./lost+found
16K     ./opt
52K     ./root
60K     ./home
68K     ./tmp
1,3M    ./run
6,7M    ./etc
428M    ./boot
823M    ./SystemSvg
1,7G    ./snap
4,7G    ./var
9,9G    ./usr
20G     .

limit logs in containers

https://forums.docker.com/t/some-way-to-clean-up-identify-contents-of-var-lib-docker-overlay/30604/53

services:
  <service_name>:
    logging:
      options:
        max-size: "10m"
        max-file: "5"

https://forums.docker.com/t/some-way-to-clean-up-identify-contents-of-var-lib-docker-overlay/30604/52

/etc/docker/daemon.json

{
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}

Don't forget the "," if there are already other parameters in daemon.json (see the combined example below).
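
For example, combining the NVIDIA runtime settings from earlier with these log options would give a merged daemon.json roughly like this (a sketch, not copied from the host):

{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime",
      "runtimeArgs": []
    }
  },
  "log-opts": {
    "max-size": "10m",
    "max-file": "5"
  }
}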