https://paperless-ng.readthedocs.io/en/latest/setup.html#install-paperless-from-docker-hub
docker-compose.yml and docker-compose.env
Sqlite.SqliteException (0x80004005): SQLite Error 5: ‘database is locked’.
I’m running jellyfin in docker and got this error because my config file was mounted over samba / cifs. The solution was to add nobrl
to the mount options.
volumes:
  jellyfin-data:
    driver_opts:
      type: "cifs"
      device: "//192.168.1.69/whatever/Jellyfin"
      o: "addr=192.168.19.10,rw,uid=0,username=phanton,password=8517,nobrl"
NFS by default will downgrade any files created with root permissions to the nobody:nogroup user:group.
This is a security feature ("root squash") that prevents root privileges from being shared unless specifically requested.
You may want to enable the "no_root_squash" option in the NFS server's /etc/exports file.
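For reference, a hypothetical /etc/exports entry with root squash disabled (the path and subnet here are examples, not taken from my setup):

```
/mnt/Magneto/9-VideoClub 192.168.1.0/24(rw,sync,no_subtree_check,no_root_squash)
```

After editing, run `exportfs -ra` on the server to reload the export table.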
https://forum.proxmox.com/threads/mount-nfs-shares-in-a-host.78761/
For the mediacenter LXC, which runs unprivileged,
I mounted the videoclub NFS share in PVE,
then added to /etc/pve/lxc/105.conf:
mp0: /mnt/pve/videoclub,mp=/usr/VideoClub
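The same bind mount can also be added from the PVE shell instead of editing the config file by hand; this should be equivalent, assuming container ID 105:

```
pct set 105 -mp0 /mnt/pve/videoclub,mp=/usr/VideoClub
```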
https://docs.docker.com/engine/install/ubuntu/
sudo apt-get update
sudo apt-get -y install ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get -y install docker-ce docker-ce-cli containerd.io
sudo apt -y install nfs-common
sudo apt -y install cifs-utils
sudo mkdir /SystemSvg
sudo mkdir /VideoClub
sudo nano /home/david/.sharelogin
username=[username]
password=[password]
sudo nano /etc/fstab
//192.168.1.111/9-VideoClub /VideoClub cifs rw,credentials=/home/david/.sharelogin,uid=1000,gid=1000 0 0
//192.168.1.111/6-SystemSvg/VM_112 /SystemSvg cifs rw,credentials=/home/david/.sharelogin,nobrl,uid=1000,gid=1000 0 0
sudo mount -a
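As a quick sanity check that both shares actually mounted (optional):

```
mount -t cifs                   # list active CIFS mounts and their options
df -h /VideoClub /SystemSvg     # confirm size/usage comes from the share
```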
mkdir /SystemSvg/sickchill
mkdir /SystemSvg/sickchill/config
sudo docker kill sickchill
sudo docker rm sickchill
sudo docker run -d --name=sickchill -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -p 8081:8081 -v /SystemSvg/sickchill/config:/config -v /VideoClub/00-Tmp:/downloads -v /VideoClub/30-Series:/tv -v /VideoClub/40-Anime:/anime --restart unless-stopped lscr.io/linuxserver/sickchill
mkdir /SystemSvg/transmission
mkdir /SystemSvg/transmission/config
sudo docker kill transmission
sudo docker rm transmission
sudo docker run -d --name=transmission -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -e TRANSMISSION_WEB_HOME=/combustion-release/ `#optional` -p 9091:9091 -p 51413:51413 -p 51413:51413/udp -v /SystemSvg/transmission/config:/config -v /VideoClub/00-Tmp/transmission/downloads:/downloads -v /VideoClub/00-Tmp/transmission/script:/script -v /VideoClub/00-Tmp/transmission/watch:/watch --restart unless-stopped lscr.io/linuxserver/transmission
mkdir /SystemSvg/filebot
mkdir /SystemSvg/filebot/data
sudo docker kill filebot
sudo docker rm filebot
sudo docker run -d --name=filebot -p 5452:5452 -v /SystemSvg/filebot/data:/data -v /VideoClub:/videoclub --restart unless-stopped maliciamrg/filebot-node-479
mkdir /SystemSvg/nzbget
mkdir /SystemSvg/nzbget/config
sudo docker kill nzbget
sudo docker rm nzbget
sudo docker run -d --name=nzbget -e PUID=1000 -e PGID=1000 -e TZ=Europe/London -p 6789:6789 -v /SystemSvg/nzbget/config:/config -v /VideoClub/00-Tmp/nzbget:/downloads --restart unless-stopped lscr.io/linuxserver/nzbget
mkdir /SystemSvg/jellyfin
mkdir /SystemSvg/jellyfin/config
mkdir /SystemSvg/jellyfin/cache
sudo docker kill jellyfin
sudo docker rm jellyfin
sudo docker run -d --name jellyfin --user 1000:1000 --net=host --volume /SystemSvg/jellyfin/config:/config --volume /SystemSvg/jellyfin/cache:/cache --mount type=bind,source=/VideoClub/10-Film,target=/media/10-Film --mount type=bind,source=/VideoClub/20-Film_Vf,target=/media/20-Film_Vf --mount type=bind,source=/VideoClub/30-Series,target=/media/30-Series --mount type=bind,source=/VideoClub/40-Anime,target=/media/40-Anime --restart=unless-stopped jellyfin/jellyfin
sudo docker ps -a
sudo docker exec -it filebot /bin/bash
sudo docker run -d -p 9001:9001 --name portainer_agent --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/docker/volumes:/var/lib/docker/volumes portainer/agent:2.6.3
sudo docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data cr.portainer.io/portainer/portainer-ce:2.9.3
Inbound
traffic
++ +---------+
|| |ConfigMap|
|| +--+------+
|| |
|| | CIDR range to provision
|| v
|| +--+----------+
|| |MetalLB | +-------+
|| |Load balancer| |Ingress|
|| +-+-----------+ +---+---+
|| | |
|| | External IP assigned |Rules described in spec
|| | to service |
|| v v
|| +--+--------------------+ +---+------------------+
|| | | | Ingress Controller |
|---->+ ingress-nginx service +----->+ (NGINX pod) |
+---->| +----->+ |
+-----------------------+ +----------------------+
||
VV
+-----------------+
| Backend service |
| (app-lb) |
| |
+-----------------+
||
VV
+--------------------+
| Backend pod |
| (httpbin) |
| |
+--------------------+
After endlessly fiddling with the VM and Kubernetes settings, I bricked my dashboard and the CPU sits at a constant 60% at idle.
I decided to recreate an Ubuntu VM from my template and try minikube instead of microk8s.
The pods I want:
I do a full clone of my Ubuntu template.
sudo nano /etc/netplan/00-installer-config.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens18:
      dhcp4: no
      addresses: [192.168.1.20/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]
I attach the disk from my previous microk8s clone to my new minikube VM so I can copy its contents across.
nano /etc/pve/qemu-server/105.conf
scsi1: cyclops:vm-100-disk-0,size=32G
scsi2: cyclops:vm-105-disk-0,size=32G
I look for my two disks (100 and 105):
ls -lh /dev/disk/by-id/
ata-QEMU_DVD-ROM_QM00001 -> ../../sr0
ata-QEMU_DVD-ROM_QM00003 -> ../../sr1
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0 -> ../../sda
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part1 -> ../../sda1
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-part2 -> ../../sda2
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1 -> ../../sdc
scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1 -> ../../sdc1
scsi-0QEMU_QEMU_HARDDISK_drive-scsi2 -> ../../sdb
scsi2 (disk 105) has no partition, so I create one:
sudo fdisk /dev/sdb
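fdisk is interactive; the keystroke sequence I would expect here (accepting the defaults for a single partition spanning the whole disk) is roughly:

```
n        # new partition
p        # primary
1        # partition number
         # (Enter) accept default first sector
         # (Enter) accept default last sector
w        # write the table and exit
```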
Format the partition:
sudo mkfs.ext4 /dev/sdb1
Create the mount directories:
sudo mkdir /usr/kubedata
sudo mkdir /usr/old_kubedata
I add my new disk (105) to the automatic mounts:
sudo nano /etc/fstab
/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi2-part1 /usr/kubedata ext4 defaults 0 0
sudo mount -a
and I mount the old disk (100) manually:
sudo mount /dev/sdc1 /usr/old_kubedata/
Copy the contents of the old disk (100) to the new disk (105):
sudo cp -r /usr/old_kubedata/* /usr/kubedata/
Unmount the old disk (100):
sudo umount /usr/old_kubedata/
sudo rm /usr/old_kubedata/ -R
I detach the old disk (100) from my new VM (105):
nano /etc/pve/qemu-server/105.conf
and delete the line:
scsi1: cyclops:vm-100-disk-0,size=32G
I install Docker [1]
sudo apt-get update
sudo apt-get install \
ca-certificates \
curl \
gnupg \
lsb-release
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
I install minikube [2]
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
sudo usermod -aG docker $USER && newgrp docker
minikube start
minikube kubectl -- get po -A
nano ./.bashrc
alias kubectl="minikube kubectl --"
Change the default editor:
sudo nano /etc/environment
KUBE_EDITOR="/usr/bin/nano"
Install Helm (source [3]):
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm version
helm repo add stable https://charts.helm.sh/stable
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install stable prometheus-community/kube-prometheus-stack
kubectl edit svc stable-kube-prometheus-sta-prometheus
Change the type from ClusterIP to LoadBalancer/NodePort
kubectl edit svc stable-grafana
Change the type from ClusterIP to LoadBalancer/NodePort
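Instead of editing the service interactively, the type can also be switched with a one-liner (same effect as the kubectl edit above):

```
kubectl patch svc stable-grafana -p '{"spec":{"type":"LoadBalancer"}}'
```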
UserName: admin
Password: prom-operator
Otherwise, retrieve the Grafana password with:
kubectl get secret --namespace default grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
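The jsonpath pipe above just base64-decodes the secret value; here is that decoding step in isolation (the encoded string is simply "prom-operator" pre-encoded by me, not pulled from a cluster):

```shell
# Decode a base64-encoded secret value, as the kubectl pipe does
encoded="cHJvbS1vcGVyYXRvcg=="
printf '%s' "$encoded" | base64 --decode ; echo
# prints: prom-operator
```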
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sickchillserver
  namespace: default
  labels:
    app: sickchill
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sickchill
  template:
    metadata:
      labels:
        run: sickchillserver
        app: sickchill
    spec:
      containers:
      - name: sickchillserver
        image: lscr.io/linuxserver/sickchill
        env:
        - name: "PUID"
          value: "1000"
        - name: "PGID"
          value: "1000"
        ports:
        - containerPort: 8081
          name: tr-http
        volumeMounts:
        - mountPath: /config
          name: tr-config
        - mountPath: /downloads
          name: tr-videoclub
          subPath: 00-Tmp/sickchill/downloads
        - mountPath: /tv
          name: tr-videoclub
          subPath: 30-Series
        - mountPath: /anime
          name: tr-videoclub
          subPath: 40-Anime
      volumes:
      - name: tr-videoclub
        nfs:
          server: 192.168.1.40
          path: /mnt/Magneto/9-VideoClub
      - name: tr-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/sickchillserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: sickchill-svc
spec:
  selector:
    app: sickchill
  ports:
  - name: "http"
    port: 8081
    targetPort: 8081
  type: NodePort
apiVersion: apps/v1
kind: Deployment
metadata:
  name: transmissionserver
  namespace: default
  labels:
    app: transmission
spec:
  replicas: 1
  selector:
    matchLabels:
      app: transmission
  template:
    metadata:
      labels:
        run: transmissionserver
        app: transmission
    spec:
      containers:
      - name: transmissionserver
        image: lscr.io/linuxserver/transmission
        env:
        - name: "PUID"
          value: "1000"
        - name: "PGID"
          value: "1000"
        ports:
        - containerPort: 9091
          name: tr-http
        - containerPort: 51413
          name: tr-https
        volumeMounts:
        - mountPath: /config
          name: tr-config
        - mountPath: /downloads-sickchill
          name: tr-media-sickchill
        - mountPath: /script
          name: tr-script
        - mountPath: /watch
          name: tr-watch
      volumes:
      - name: tr-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/transmissionserver/config
      - name: tr-media-sickchill
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/sickchill/downloads
      - name: tr-script
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/transmission/script
      - name: tr-watch
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/transmission/watch
---
apiVersion: v1
kind: Service
metadata:
  name: transmission
spec:
  selector:
    app: transmission
  ports:
  - name: "http"
    port: 9091
    targetPort: 9091
  - name: "https"
    port: 51413
    targetPort: 51413
  type: NodePort
apiVersion: apps/v1
kind: Deployment
metadata:
  name: embyserver
  namespace: default
  labels:
    app: emby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emby
  template:
    metadata:
      labels:
        run: embyserver
        app: emby
    spec:
      containers:
      - name: embyserver
        image: emby/embyserver:latest
        env:
        - name: "UID"
          value: "1000"
        - name: "GID"
          value: "100"
        - name: "GIDLIST"
          value: "100"
        ports:
        - containerPort: 8096
          name: emby-http
        - containerPort: 8920
          name: emby-https
        volumeMounts:
        - mountPath: /config
          name: emby-config
        - mountPath: /mnt/videoclub
          name: emby-media
      volumes:
      - name: emby-media
        nfs:
          server: 192.168.1.40
          path: /mnt/Magneto/9-VideoClub
      - name: emby-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/embyserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: emby
spec:
  selector:
    app: emby
  ports:
  - name: "http"
    port: 8096
    targetPort: 8096
  - name: "https"
    port: 8920
    targetPort: 8920
  type: NodePort
apiVersion: apps/v1
kind: Deployment
metadata:
  name: filebot-node
  namespace: default
  labels:
    app: filebot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: filebot
  template:
    metadata:
      labels:
        run: filebot-node
        app: filebot
    spec:
      containers:
      - name: filebot-node
        image: maliciamrg/filebot-node-479
        ports:
        - containerPort: 5452
          name: filebot-http
        volumeMounts:
        - mountPath: /data
          name: filebot-data
        - mountPath: /videoclub
          name: filebot-media
      volumes:
      - name: filebot-data
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/filebot-node/data
      - name: filebot-media
        nfs:
          server: 192.168.1.40
          path: /mnt/Magneto/9-VideoClub
---
apiVersion: v1
kind: Service
metadata:
  name: filebot
spec:
  selector:
    app: filebot
  ports:
  - name: "http"
    port: 5452
    targetPort: 5452
  type: NodePort
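Each manifest above goes in its own file; assuming file names like the ones below (my naming, not from the original setup), they are applied with:

```
kubectl apply -f sickchill.yaml
kubectl apply -f transmission.yaml
kubectl apply -f emby.yaml
kubectl apply -f filebot.yaml
```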
david@legion2:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default alertmanager-stable-kube-prometheus-sta-alertmanager-0 2/2 Running 2 (4h21m ago) 14h 172.17.0.2 minikube <none> <none>
default embyserver-56689875b4-wmxww 1/1 Running 0 53m 172.17.0.12 minikube <none> <none>
default filebot-node-7786dfbf67-fh7s8 1/1 Running 0 47m 172.17.0.13 minikube <none> <none>
default prometheus-stable-kube-prometheus-sta-prometheus-0 2/2 Running 2 (4h21m ago) 14h 172.17.0.7 minikube <none> <none>
default sickchillserver-7494d84848-cwjkm 1/1 Running 0 4h15m 172.17.0.8 minikube <none> <none>
default stable-grafana-5dcdf4bbc6-q5shg 3/3 Running 3 (4h21m ago) 14h 172.17.0.3 minikube <none> <none>
default stable-kube-prometheus-sta-operator-5fd44cc9bf-nmgdq 1/1 Running 1 (4h21m ago) 14h 172.17.0.6 minikube <none> <none>
default stable-kube-state-metrics-647c4868d9-f9vrb 1/1 Running 2 (4h19m ago) 14h 172.17.0.5 minikube <none> <none>
default stable-prometheus-node-exporter-j6w5f 1/1 Running 1 (4h21m ago) 14h 192.168.49.2 minikube <none> <none>
default transmissionserver-7d5d8c49db-cxktx 1/1 Running 0 62m 172.17.0.11 minikube <none> <none>
ingress-nginx ingress-nginx-admission-create--1-nzdhc 0/1 Completed 0 3h51m 172.17.0.10 minikube <none> <none>
ingress-nginx ingress-nginx-admission-patch--1-mxxmc 0/1 Completed 1 3h51m 172.17.0.9 minikube <none> <none>
ingress-nginx ingress-nginx-controller-5f66978484-w8cqj 1/1 Running 0 3h51m 172.17.0.9 minikube <none> <none>
kube-system coredns-78fcd69978-cq2hn 1/1 Running 1 (4h21m ago) 15h 172.17.0.4 minikube <none> <none>
kube-system etcd-minikube 1/1 Running 1 (4h21m ago) 15h 192.168.49.2 minikube <none> <none>
kube-system kube-apiserver-minikube 1/1 Running 1 (4h21m ago) 15h 192.168.49.2 minikube <none> <none>
kube-system kube-controller-manager-minikube 1/1 Running 1 (4h21m ago) 15h 192.168.49.2 minikube <none> <none>
kube-system kube-ingress-dns-minikube 1/1 Running 0 3h44m 192.168.49.2 minikube <none> <none>
kube-system kube-proxy-d8m7r 1/1 Running 1 (4h21m ago) 15h 192.168.49.2 minikube <none> <none>
kube-system kube-scheduler-minikube 1/1 Running 1 (4h21m ago) 15h 192.168.49.2 minikube <none> <none>
kube-system storage-provisioner 1/1 Running 4 (4h19m ago) 15h 192.168.49.2 minikube <none> <none>
metallb-system controller-66bc445b99-wvdnq 1/1 Running 0 3h44m 172.17.0.10 minikube <none> <none>
metallb-system speaker-g49dw 1/1 Running 0 3h44m 192.168.49.2 minikube <none> <none>
david@legion2:~$ kubectl get svc -A -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
default alertmanager-operated ClusterIP None <none> 9093/TCP,9094/TCP,9094/UDP 14h app.kubernetes.io/name=alertmanager
default emby LoadBalancer 10.101.254.121 192.168.1.102 8096:30524/TCP,8920:30171/TCP 55m app=emby
default filebot LoadBalancer 10.106.51.20 192.168.1.103 5452:31628/TCP 48m app=filebot
default kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15h <none>
default prometheus-operated ClusterIP None <none> 9090/TCP 14h app.kubernetes.io/name=prometheus
default sickchill-svc LoadBalancer 10.107.60.50 192.168.1.100 8081:32026/TCP 4h16m app=sickchill
default stable-grafana LoadBalancer 10.102.236.29 192.168.1.104 80:31801/TCP 15h app.kubernetes.io/instance=stable,app.kubernetes.io/name=grafana
default stable-kube-prometheus-sta-alertmanager ClusterIP 10.105.89.179 <none> 9093/TCP 15h alertmanager=stable-kube-prometheus-sta-alertmanager,app.kubernetes.io/name=alertmanager
default stable-kube-prometheus-sta-operator ClusterIP 10.99.183.242 <none> 443/TCP 15h app=kube-prometheus-stack-operator,release=stable
default stable-kube-prometheus-sta-prometheus NodePort 10.110.38.166 <none> 9090:32749/TCP 15h app.kubernetes.io/name=prometheus,prometheus=stable-kube-prometheus-sta-prometheus
default stable-kube-state-metrics ClusterIP 10.104.176.119 <none> 8080/TCP 15h app.kubernetes.io/instance=stable,app.kubernetes.io/name=kube-state-metrics
default stable-prometheus-node-exporter ClusterIP 10.106.253.56 <none> 9100/TCP 15h app=prometheus-node-exporter,release=stable
default transmission LoadBalancer 10.104.43.182 192.168.1.101 9091:31067/TCP,51413:31880/TCP 64m app=transmission
ingress-nginx ingress-nginx-controller NodePort 10.107.183.72 <none> 80:31269/TCP,443:30779/TCP 3h52m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.97.189.150 <none> 443/TCP 3h52m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
kube-system kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 15h k8s-app=kube-dns
kube-system stable-kube-prometheus-sta-coredns ClusterIP None <none> 9153/TCP 15h k8s-app=kube-dns
kube-system stable-kube-prometheus-sta-kube-controller-manager ClusterIP None <none> 10257/TCP 15h component=kube-controller-manager
kube-system stable-kube-prometheus-sta-kube-etcd ClusterIP None <none> 2379/TCP 15h component=etcd
kube-system stable-kube-prometheus-sta-kube-proxy ClusterIP None <none> 10249/TCP 15h k8s-app=kube-proxy
kube-system stable-kube-prometheus-sta-kube-scheduler ClusterIP None <none> 10251/TCP 15h component=kube-scheduler
kube-system stable-kube-prometheus-sta-kubelet ClusterIP None <none> 10250/TCP,10255/TCP,4194/TCP 14h <none>
“do it simple”
After uninstalling the previous versions of Prometheus and Grafana,
I install the addon:
microk8s enable prometheus
In the Grafana pod, I reset the admin password:
grafana-cli admin reset-admin-password admin
After logging in:
kubectl get pod -n kubernetes-dashboard | grep Evicted | awk '{print $1}' | xargs kubectl delete pod -n kubernetes-dashboard
kubectl get pods --all-namespaces | grep Evicted | awk '{print $2," -n ",$1}' | xargs kubectl delete pod
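The awk in that last command swaps the namespace and pod-name columns so xargs can feed them to kubectl delete in the right order; a stand-alone demo on a fake line of `kubectl get pods --all-namespaces` output:

```shell
# $1 = namespace, $2 = pod name in the all-namespaces listing
line="default mypod-abc 0/1 Evicted 0 1h"
printf '%s\n' "$line" | awk '{print $2," -n ",$1}'
# prints: mypod-abc  -n  default
```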