heimdall.daisy-street.local

Service

Switch the heimdall service from NodePort to ClusterIP.
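
For the Ingress to be the only entry point, heimdall-svc can be redeclared with type ClusterIP; a minimal sketch, reusing the selector and port of the existing service:

```yaml
# heimdall-svc exposed only inside the cluster; the Ingress routes to it
apiVersion: v1
kind: Service
metadata:
  name: heimdall-svc
spec:
  type: ClusterIP
  selector:
    app: heimdall
  ports:
    - name: http
      port: 80
      targetPort: 80
```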

Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name:  heimdall-svc-ingress
  namespace: default
spec:
  ingressClassName: public
  rules:
  - host: heimdall.daisy-street.local
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: heimdall-svc
            port:
              number: 80

Hosts

   192.168.1.26     heimdall.daisy-street.local    

Result

Reference:

Adding the pihole-node pod

Deployment

The docker-compose definition, with the filesystem prepared:

version: "3"

# More info at https://github.com/pi-hole/docker-pi-hole/ and https://docs.pi-hole.net/
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "67:67/udp"
      - "80:80/tcp"
    environment:
      TZ: 'America/Chicago'
      # WEBPASSWORD: 'set a secure password here or it will be random'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole/:/etc/pihole/'
      - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
    # Recommended but not required (DHCP needs NET_ADMIN)
    #   https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped

Translated into a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: piholeserver 
  namespace: default
  labels:
    app: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        run: piholeserver 
        app: pihole
    spec:
      containers:
      - name: piholeserver 
        image: pihole/pihole:latest
        env:
          - name: "DNS1"
            value: "9.9.9.9"
          - name: "DNS2"
            value: "149.112.112.112"
        ports:
        - protocol: TCP
          containerPort: 53
          name: pihole-http53t
        - protocol: UDP
          containerPort: 53
          name: pihole-http53u
        - containerPort: 67
          name: pihole-http67
        - containerPort: 80
          name: pihole-http
        volumeMounts:
        - mountPath: /etc/pihole/
          name: pihole-config
        - mountPath: /etc/dnsmasq.d/
          name: pihole-dnsmasq
      volumes:
      - name: pihole-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/piholeserver/pihole
      - name: pihole-dnsmasq
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/piholeserver/dnsmasq.d
---		  
apiVersion: v1
kind: Service
metadata:
  name: pihole-svc
spec:
  selector:
    app: pihole
  ports:
    - name: "http53u"
      protocol: UDP
      port: 53
      targetPort: 53
    - name: "http53t"
      protocol: TCP
      port: 53
      targetPort: 53
    - name: "http67"
      port: 67
      targetPort: 67
    - name: "http"
      port: 80
      targetPort: 80
      nodePort: 30499
  type: NodePort
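
One difference from the compose file: cap_add: NET_ADMIN is not carried over. If Pi-hole's DHCP server (port 67) is actually used, the capability could be granted through a securityContext on the container, for example:

```yaml
        # under the piholeserver container: grant the capability
        # recommended by the pi-hole image for DHCP
        securityContext:
          capabilities:
            add: ["NET_ADMIN"]
```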

Then retrieve the exposed port:

kubectl get all --all-namespaces | grep pihole

Result: the dashboard is accessible at https://<master-ip>:30499
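
The NodePort can also be extracted from that output programmatically; a sketch against a sample line (the IP and ports are illustrative):

```shell
# Sample line in the format printed by `kubectl get svc`; substitute real output.
line='default  service/pihole-svc  NodePort  10.152.183.99  <none>  53:31053/UDP,80:30499/TCP  5m'
# Pull out the NodePort mapped to container port 80.
nodeport=$(printf '%s\n' "$line" | grep -oE '80:[0-9]+' | cut -d: -f2)
echo "$nodeport"
```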

The admin password is in the pod's log,

or a password can be set from the command line inside the pod:

sudo pihole -a -p

Adding the heimdall-node pod

Deployment

The docker command, with the filesystem prepared:

docker run -d \
  --name=heimdall \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 80:80 \
  -p 443:443 \
  -v </path/to/appdata/config>:/config \
  --restart unless-stopped \
  lscr.io/linuxserver/heimdall

Translated into a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: heimdallserver 
  namespace: default
  labels:
    app: heimdall
spec:
  replicas: 1
  selector:
    matchLabels:
      app: heimdall
  template:
    metadata:
      labels:
        run: heimdallserver 
        app: heimdall
    spec:
      containers:
      - name: heimdallserver 
        image: lscr.io/linuxserver/heimdall
        env:
          - name: "PUID"
            value: "1000"
          - name: "PGID"
            value: "1000"
        ports:
        - containerPort: 80
          name: heimdall-http
        - containerPort: 443
          name: heimdall-https
        volumeMounts:
        - mountPath: /config
          name: heimdall-config
      volumes:
      - name: heimdall-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/heimdallserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: heimdall-svc
spec:
  selector:
    app: heimdall
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 32501
    - name: https
      port: 443
      targetPort: 443
  type: NodePort

Then retrieve the exposed port:

kubectl get all --all-namespaces | grep heimdall

Result: the dashboard is accessible at https://<master-ip>:32501

Adding the Filebot-node pod

Deployment

The docker command, with the filesystem prepared:

docker run --rm -it \
     -v /Videoclub:/videoclub \
     -v /usr/kubedata/filebot-node/data:/data \
     -p 5452:5452 \
     maliciamrg/filebot-node-479

Translated into a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: filebot-node 
  namespace: default
  labels:
    app: filebot
spec:
  replicas: 1
  selector:
    matchLabels:
      app: filebot
  template:
    metadata:
      labels:
        run: filebot-node 
        app: filebot
    spec:
      containers:
      - name: filebot-node 
        image: maliciamrg/filebot-node-479
        ports:
        - containerPort: 5452
          name: filebot-http
        volumeMounts:
        - mountPath: /data
          name: filebot-data
        - mountPath: /videoclub
          name: filebot-media
      volumes:
      - name: filebot-data
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/filebot-node/data
      - name: filebot-media
        hostPath:
          type: Directory
          path: /Videoclub
---
apiVersion: v1
kind: Service
metadata:
  name: filebot
spec:
  selector:
    app: filebot
  ports:
    - name: "http"
      port: 5452
      targetPort: 5452
  type: NodePort
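
As written, this Service lets Kubernetes pick a random NodePort in the 30000–32767 range. To pin it to a fixed value instead (e.g. the 32580 reported in this section), add a nodePort to the port entry:

```yaml
  ports:
    - name: "http"
      port: 5452
      targetPort: 5452
      nodePort: 32580
```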

Then retrieve the exposed port:

kubectl get all --all-namespaces | grep filebot

Result: the dashboard is accessible at https://<master-ip>:32580

filebot-node version 4.7.9, without a license

Deploy the docker image

Deploy and run the filebot-node image in Docker:

https://hub.docker.com/r/rednoah/filebot

docker run --rm -it -v $PWD:/volume1 -v data:/data -p 5452:5452 rednoah/filebot:node &

then retrieve the container ID:

docker container ls

Modify the image

Copy the file filebot_4.7.9_amd64.deb into the container:

docker cp filebot_4.7.9_amd64.deb c35b578723a3:/tmp

Enter the container:

docker exec -it c35b578723a3 bash

Install filebot:

sudo dpkg -i /tmp/filebot_4.7.9_amd64.deb

then edit app.js and remove the “--apply” flag:

sudo apt update
sudo apt install nano
nano /opt/filebot-node/server/app.js
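
The same edit can be done non-interactively with sed; a sketch on a stand-in file, since the exact line containing “--apply” in app.js is an assumption:

```shell
# Stand-in for /opt/filebot-node/server/app.js inside the container;
# the real line content may differ.
printf '%s\n' "args.push('--apply', 'artwork');" > app.js.sample
# Drop the --apply argument without opening an editor.
sed -i "s/'--apply', //" app.js.sample
cat app.js.sample
```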

Commit the image:

docker commit c35b578723a3 maliciamrg/filebot-node-479

Save the image:

docker save -o filebot-node-479.tar maliciamrg/filebot-node-479

or

docker login
docker image push maliciamrg/filebot-node-479

Clean up docker

Stop the container:

docker container kill c35b578723a3

and remove the image:

docker image rm maliciamrg/filebot-node-479

Adding the Transmission pod

Deployment

The docker command, with the filesystem prepared:

docker run -d \
  --name=transmission \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -e TRANSMISSION_WEB_HOME=/combustion-release/ `#optional` \
  -e USER=username `#optional` \
  -e PASS=password `#optional` \
  -e WHITELIST=iplist `#optional` \
  -e HOST_WHITELIST=dnsname list `#optional` \
  -p 9091:9091 \
  -p 51413:51413 \
  -p 51413:51413/udp \
  -v <path to data>:/config \
  -v <path to downloads>:/downloads \
  -v <path to watch folder>:/watch \
  --restart unless-stopped \
  lscr.io/linuxserver/transmission

Translated into a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: transmissionserver 
  namespace: default
  labels:
    app: transmission
spec:
  replicas: 1
  selector:
    matchLabels:
      app: transmission
  template:
    metadata:
      labels:
        run: transmissionserver 
        app: transmission
    spec:
      containers:
      - name: transmissionserver 
        image: lscr.io/linuxserver/transmission
        env:
          - name: "PUID"
            value: "1000"
          - name: "PGID"
            value: "1000" 
        ports:
        - containerPort: 9091
          name: tr-http
        - containerPort: 51413
          name: tr-peer
        volumeMounts:
        - mountPath: /config
          name: tr-config
        - mountPath: /downloads
          name: tr-media
        - mountPath: /watch
          name: tr-watch
      volumes:
      - name: tr-watch
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/transmission/watch
      - name: tr-media
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/transmission/downloads
      - name: tr-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/transmissionserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: transmission
spec:
  selector:
    app: transmission
  ports:
    - name: "http"
      port: 9091
      targetPort: 9091
    - name: "peer"
      port: 51413
      targetPort: 51413
  type: NodePort
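
Note that the docker run also publishes 51413/udp, which the translation above does not declare; sketches of the extra entries (one under the container's ports, one under the Service's spec.ports):

```yaml
# under the container's ports:
        - containerPort: 51413
          protocol: UDP
          name: tr-peer-udp
# under the Service's spec.ports:
    - name: "peer-udp"
      protocol: UDP
      port: 51413
      targetPort: 51413
```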

Then retrieve the exposed port:

kubectl get all --all-namespaces | grep transmission

Result: the dashboard is accessible at https://<master-ip>:30312

Kubernetes and Emby Server

Preparation

Allocate 32 GB from the Proxmox ZFS pool to the Ubuntu VM that hosts Kubernetes

In TrueNAS, expose /mnt/Magneto/9-VideoClub over SMB

Pass the graphics card through to Ubuntu

Filesystem setup

Install cifs-utils and edit fstab to mount the TrueNAS share automatically:

sudo apt install cifs-utils
sudo nano /etc/fstab

//192.168.1.46/9-VideoClub /Videoclub cifs uid=0,credentials=/home/david/.smb,iocharset=utf8,noperm 0 0
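
The credentials=/home/david/.smb option expects a small key=value file; a sketch (username and password are placeholders, and the demo writes to $HOME/.smb-demo rather than the real path):

```shell
# Credentials file in the format expected by mount.cifs;
# replace the placeholder values with the real share credentials.
credfile="$HOME/.smb-demo"
cat > "$credfile" <<'EOF'
username=david
password=changeme
EOF
chmod 600 "$credfile"   # keep the credentials private
```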

Create a partition on the 32 GB disk provided by Proxmox using fdisk, format it as ext4, and mount it at /usr/kubedata:

sudo nano /etc/fstab

/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1-part1 /usr/kubedata ext4 defaults    0 0

Deployment

The docker command, with the filesystem prepared:

sudo docker run -d \
    --name embyserver \
    --volume /usr/kubedata/embyserver/config:/config \
    --volume /Videoclub:/mnt/videoclub \
    --net=host \
    --device /dev/dri:/dev/dri \
    --publish 8096:8096 \
    --publish 8920:8920 \
    --env UID=1000 \
    --env GID=100 \
    --env GIDLIST=100 \
    emby/embyserver:latest

Translated into a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: embyserver 
  namespace: default
  labels:
    app: emby
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emby
  template:
    metadata:
      labels:
        run: embyserver 
        app: emby
    spec:
      containers:
      - name: embyserver 
        image: emby/embyserver:latest
        env:
          - name: "UID"
            value: "1000"
          - name: "GID"
            value: "100" 
          - name: "GIDLIST"
            value: "100" 
        ports:
        - containerPort: 8096
          name: emby-http
        - containerPort: 8920
          name: emby-https
        volumeMounts:
        - mountPath: /config
          name: emby-config
        - mountPath: /mnt/videoclub
          name: emby-media
      volumes:
      - name: emby-media
        hostPath:
          type: Directory
          path: /Videoclub
      - name: emby-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/embyserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: emby
spec:
  selector:
    app: emby
  ports:
    - name: "http"
      port: 8096
      targetPort: 8096
    - name: "https"
      port: 8920
      targetPort: 8920
  type: NodePort
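
The --device /dev/dri flag from the docker run has no counterpart in the Deployment above, so hardware transcoding would not work as-is; a common workaround is a hostPath mount of the render devices (a sketch; the container may additionally need elevated privileges to open them):

```yaml
        # under the embyserver container:
        volumeMounts:
        - mountPath: /dev/dri
          name: dri
      # under the pod spec's volumes:
      volumes:
      - name: dri
        hostPath:
          path: /dev/dri
```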

Then retrieve the exposed port:

kubectl get all --all-namespaces | grep emby

Result: the dashboard is accessible at https://<master-ip>:30647

Expose the K8s Dashboard

https://stackoverflow.com/questions/48286170/how-to-access-canonical-kubernetes-dashboard-externally

To make the Kubernetes dashboard accessible from outside the machine, you can change the service's exposure type from ClusterIP to NodePort.

kubectl -n kube-system edit service kubernetes-dashboard

Replace “type: ClusterIP” with “type: NodePort”.

Running the following command retrieves the exposed port:

kubectl -n kube-system get service kubernetes-dashboard
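
The NodePort can be peeled out of the PORT(S) column with plain shell parameter expansion; a sketch on a sample value:

```shell
# Sample PORT(S) value for the dashboard service; substitute real output.
ports='443:31834/TCP'
nodeport=${ports#*:}      # strip everything up to the first ':'
nodeport=${nodeport%%/*}  # strip the protocol suffix
echo "$nodeport"
```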

Result: the dashboard is accessible at https://<master-ip>:31834

Installing the Docker images


install emby

microk8s kubectl create deployment embyserver --image=emby/embyserver:latest
microk8s kubectl expose deployment embyserver --type=NodePort --port=8096
microk8s kubectl port-forward -n default service/embyserver 8096:8096 --address 192.168.1.26 &

install sickchill

microk8s kubectl create deployment sickchill --image=sickchill/sickchill
microk8s kubectl expose deployment sickchill --type=NodePort --port=8081

microk8s kubectl port-forward -n default service/sickchill 8081:8081 --address 192.168.1.26 &

install transmission

microk8s kubectl create deployment transmission --image=linuxserver/transmission
microk8s kubectl expose deployment transmission --type=NodePort --port=9091
microk8s kubectl port-forward -n default service/transmission 9091:9091 --address 192.168.1.26 &

Then edit the file /config/settings.json.
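
The keys usually touched there are the RPC access ones, which correspond to the USER/PASS/WHITELIST variables of the docker image; an illustrative fragment (values are examples):

```json
{
  "rpc-authentication-required": true,
  "rpc-username": "username",
  "rpc-password": "password",
  "rpc-whitelist-enabled": true,
  "rpc-whitelist": "127.0.0.1,192.168.1.*"
}
```

Transmission rewrites this file on shutdown, so stop the pod before editing it, or the changes may be overwritten.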

Ubuntu, Docker, Kubernetes

Install Ubuntu on Proxmox

Install Docker

curl https://releases.rancher.com/install-docker/20.10.sh | sh

Update Ubuntu

To avoid errors during the install, update Ubuntu before starting the installation procedure:

sudo apt update
sudo apt upgrade
sudo reboot

Install Kubernetes

https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#2-deploying-microk8s

sudo snap install microk8s --classic
sudo ufw allow in on cni0 && sudo ufw allow out on cni0
sudo ufw default allow routed
microk8s enable dns dashboard storage
microk8s kubectl get all --all-namespaces
microk8s kubectl port-forward -n kube-system service/kubernetes-dashboard 10443:443 --address 0.0.0.0 &

Create an alias so that “microk8s kubectl” commands can be run directly as “kubectl”:

sudo snap alias microk8s.kubectl kubectl
david@legion2:~$ microk8s kubectl get all --all-namespaces
NAMESPACE     NAME                                             READY   STATUS    RESTARTS       AGE
kube-system   pod/coredns-7f9c69c78c-7ljk2                     1/1     Running   1 (6h2m ago)   6h36m
kube-system   pod/calico-kube-controllers-6b654d96bd-ngxnq     1/1     Running   1 (6h2m ago)   14h
kube-system   pod/calico-node-tb2cz                            1/1     Running   1 (6h2m ago)   14h
kube-system   pod/metrics-server-85df567dd8-gfjvk              1/1     Running   0              5h57m
kube-system   pod/kubernetes-dashboard-59699458b-66gng         1/1     Running   0              5h53m
kube-system   pod/dashboard-metrics-scraper-58d4977855-lg8qw   1/1     Running   0              5h53m
kube-system   pod/hostpath-provisioner-5c65fbdb4f-nvclh        1/1     Running   0              5h53m
default       pod/embyserver-56d8c5b5bc-4xtj9                  1/1     Running   0              13m

NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP                  14h
kube-system   service/kube-dns                    ClusterIP   10.152.183.10    <none>        53/UDP,53/TCP,9153/TCP   6h36m
kube-system   service/metrics-server              ClusterIP   10.152.183.220   <none>        443/TCP                  5h57m
kube-system   service/kubernetes-dashboard        ClusterIP   10.152.183.11    <none>        443/TCP                  5h54m
kube-system   service/dashboard-metrics-scraper   ClusterIP   10.152.183.66    <none>        8000/TCP                 5h54m
default       service/embyserver                  NodePort    10.152.183.74    <none>        8096:30829/TCP           9m48s

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   1         1         1       1            1           kubernetes.io/os=linux   14h

NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/coredns                     1/1     1            1           6h36m
kube-system   deployment.apps/calico-kube-controllers     1/1     1            1           14h
kube-system   deployment.apps/metrics-server              1/1     1            1           5h57m
kube-system   deployment.apps/kubernetes-dashboard        1/1     1            1           5h54m
kube-system   deployment.apps/dashboard-metrics-scraper   1/1     1            1           5h54m
kube-system   deployment.apps/hostpath-provisioner        1/1     1            1           5h54m
default       deployment.apps/embyserver                  1/1     1            1           13m

NAMESPACE     NAME                                                   DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-69d7f794d9     0         0         0       14h
kube-system   replicaset.apps/coredns-7f9c69c78c                     1         1         1       6h36m
kube-system   replicaset.apps/calico-kube-controllers-6b654d96bd     1         1         1       14h
kube-system   replicaset.apps/metrics-server-85df567dd8              1         1         1       5h57m
kube-system   replicaset.apps/kubernetes-dashboard-59699458b         1         1         1       5h53m
kube-system   replicaset.apps/dashboard-metrics-scraper-58d4977855   1         1         1       5h53m
kube-system   replicaset.apps/hostpath-provisioner-5c65fbdb4f        1         1         1       5h53m
default       replicaset.apps/embyserver-56d8c5b5bc                  1         1         1       13m