Change the editor to nano

Linux

On Linux (Ubuntu, for example), typically the default command-line EDITOR is Vim. If so, no further action is needed to use the kubectl edit command. If you want to use a different editor, create an environment variable named KUBE_EDITOR with the value set to the path of your preferred text editor.
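For example, to point kubectl edit at nano for the current shell session only (the resource name is just an illustration), before making it permanent in /etc/environment as shown below:

export KUBE_EDITOR="/usr/bin/nano"
kubectl edit service/grafana-web-service   # example resource; opens in nano instead of vim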

JAVA_HOME

Edit the /etc/environment file with a text editor like vim or nano (requires root or sudo).

Add JAVA_HOME on a new line, pointing directly to the JDK folder.

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"
JAVA_HOME=/usr/lib/jvm/adoptopenjdk-11-hotspot-amd64

source /etc/environment

echo $JAVA_HOME

Note
The changes will disappear if we close the current session or open a new terminal, because a new shell does not read /etc/environment. Restart Ubuntu or log in again and the changes in /etc/environment will apply automatically.

whereis nano
nano: /usr/bin/nano /usr/share/nano /usr/share/man/man1/nano.1.gz

sudo nano /etc/environment
KUBE_EDITOR="/usr/bin/nano"

Monitoring

Install Helm

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

kubectl delete namespace monitoring
kubectl delete crd alertmanagerconfigs.monitoring.coreos.com
kubectl delete crd alertmanagers.monitoring.coreos.com
kubectl delete crd podmonitors.monitoring.coreos.com
kubectl delete crd probes.monitoring.coreos.com
kubectl delete crd prometheuses.monitoring.coreos.com
kubectl delete crd prometheusrules.monitoring.coreos.com
kubectl delete crd servicemonitors.monitoring.coreos.com
kubectl delete crd thanosrulers.monitoring.coreos.com

kubectl create namespace monitoring
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack -n monitoring --debug --set prometheusOperator.admissionWebhooks.enabled=false --set prometheusOperator.tls.enabled=false
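Once the chart is installed, a quick sanity check that the release is deployed and the stack's pods all come up:

helm list -n monitoring
kubectl get pods -n monitoring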

Grafana

Retrieve the user and password for Grafana

sudo kubectl get secret --namespace monitoring prometheus-grafana -o yaml
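The values in that secret are base64-encoded; assuming the Grafana chart's usual admin-user / admin-password keys, they can be decoded directly:

kubectl get secret -n monitoring prometheus-grafana -o jsonpath="{.data.admin-user}" | base64 --decode; echo
kubectl get secret -n monitoring prometheus-grafana -o jsonpath="{.data.admin-password}" | base64 --decode; echo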

Add a LoadBalancer service for access from the LAN

apiVersion: v1
kind: Service
metadata:
  name: grafana-web-service
  namespace: monitoring  
spec:
  selector:
    app: kube-prometheus-stack-grafana
  ports:
    - name: web
      protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
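Apply the manifest and read the EXTERNAL-IP assigned to the service (the filename is an example):

kubectl apply -f grafana-web-service.yaml
kubectl get svc -n monitoring grafana-web-service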

If you are using the Prometheus Operator, the default user/pass is:

user: admin
pass: prom-operator

Error

I got the following error:

Error: INSTALLATION FAILED: Kubernetes cluster unreachable: Get "http://localhost:8080/version?timeout=32s": dial tcp [::1]:8080: connect: connection refused

To fix it, the user needs the kube config in their home directory. To do so:

kubectl config view --raw > ~/.kube/config
chmod go-r ~/.kube/config
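On MicroK8s specifically, the same kubeconfig can be produced with the built-in command (either way, the resulting file looks like the example below):

microk8s config > ~/.kube/config
chmod go-r ~/.kube/config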

This creates a config file like this:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ---certif---
    server: https://---ip---:---port---
  name: microk8s-cluster
contexts:
- context:
    cluster: microk8s-cluster
    user: ---nomuser---
  name: microk8s
current-context: microk8s
kind: Config
preferences: {}
users:
- name: ---nomuser---
  user:
    token: ---token---

Source:

https://stash.run/docs/v2021.6.18/guides/latest/monitoring/prometheus_operator/

PiHole in a VM

Following the DHCP configuration problems in Kubernetes, I removed the Pi-hole pod and installed Pi-hole on an Ubuntu VM in Proxmox.

Immediate result, with a 100% success rate.

Pi-hole for DHCP

To use Pi-hole's DHCP server, port 67 is required (plus 547 for IPv6):

apiVersion: v1
kind: Service
metadata:
  name: pihole-dns-dhcp-service
  namespace: pihole-ns  
spec:
  selector:
    app: pihole
  ports:
    - name: dhcp
      protocol: UDP
      port: 67
      targetPort: 67
    - name: dhcpv6
      protocol: UDP
      port: 547
      targetPort: 547
    - name: dns
      protocol: UDP
      port: 53
      targetPort: 53
  type: LoadBalancer
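A quick check that the DNS and DHCP ports are actually exposed on the load balancer:

kubectl get svc -n pihole-ns pihole-dns-dhcp-service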

https://github.com/MoJo2600/pihole-kubernetes/issues/18

So that the Pi-hole pod works on the LAN rather than only on the internal Kubernetes network:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: piholeserver 
  namespace: pihole-ns
  labels:
    app: pihole
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pihole
  template:
    metadata:
      labels:
        run: piholeserver 
        app: pihole
    spec:
      # hostNetwork lets Pi-hole bind directly to the node's LAN interfaces
      hostNetwork: true
      containers:
      - name: piholeserver
        image: pihole/pihole:latest
        securityContext:
          privileged: true
        env:
          - name: "DNS1"
            value: "9.9.9.9"
          - name: "DNS2"
            value: "149.112.112.112"
        volumeMounts:
        - mountPath: /etc/pihole/
          name: pihole-config
        - mountPath: /etc/dnsmasq.d/
          name: pihole-dnsmasq
      volumes:
      - name: pihole-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/piholeserver/pihole
      - name: pihole-dnsmasq
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/piholeserver/dnsmasq.d
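With hostNetwork enabled, the pod IP reported by Kubernetes is the node's LAN IP, which can be verified with:

kubectl get pods -n pihole-ns -o wide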

Persistent Volume or NFS

smb

//192.168.1.40/9-VideoClub /Videoclub cifs uid=0,credentials=/home/david/.smb,iocharset=utf8,noperm 0 0
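The credentials file referenced above (/home/david/.smb) keeps the SMB login out of fstab; its content is simply (placeholder values):

username=smbuser
password=smbpassword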

nfs

mount -t nfs 192.168.1.40:/mnt/Magneto/9-VideoClub /Videoclub

Install the NFS client

sudo apt install nfs-common
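nfs-common also provides showmount, handy for listing what the NAS exports before mounting:

showmount -e 192.168.1.40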

Distribute function-specific volumes

Mount the functional PersistentVolume

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-9
  labels:
    pv: pv-9   # matched by the PVC selector below
spec:
  capacity:
    storage: 1000Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /mnt/Magneto/9-VideoClub
    server: 192.168.1.40

Claim the functional volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-9
spec:
  storageClassName: slow   # must match the PV's class for the claim to bind
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1000Gi
  selector:
    matchLabels:
      pv: pv-9
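After applying both manifests, the claim should end up Bound to pv-9 (the filenames are examples):

kubectl apply -f pv-9.yaml -f pvc-9.yaml
kubectl get pv pv-9
kubectl get pvc pvc-9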

Then mount the PVC in each pod deployment, using subPaths.

Example:


apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: sickchillserver
      image: lscr.io/linuxserver/sickchill
      volumeMounts:
        - mountPath: /config
          name: tr-config
        - mountPath: /downloads
          name: tr-videoclub
          subPath: 00-Tmp/sickchill/downloads
        - mountPath: /tv
          name: tr-videoclub
          subPath: 30-Series
        - mountPath: /anime
          name: tr-videoclub
          subPath: 40-Anime
  volumes:
    - name: tr-videoclub
      persistentVolumeClaim:
        claimName: pvc-9
    - name: tr-config
      hostPath:
        path: /usr/kubedata/sickchillserver/config
        type: DirectoryOrCreate

Alternative method

To bypass declaring a PersistentVolume, the NFS directory can be declared directly in the Deployment/Pod:



apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: sickchillserver
      image: lscr.io/linuxserver/sickchill
      volumeMounts:
        - mountPath: /config
          name: tr-config
        - mountPath: /downloads
          name: tr-videoclub
          subPath: 00-Tmp/sickchill/downloads
        - mountPath: /tv
          name: tr-videoclub
          subPath: 30-Series
        - mountPath: /anime
          name: tr-videoclub
          subPath: 40-Anime
  volumes:
    - name: tr-videoclub
      nfs:
        server: 192.168.1.40
        path: /mnt/Magneto/9-VideoClub
    - name: tr-config
      hostPath:
        path: /usr/kubedata/sickchillserver/config
        type: DirectoryOrCreate

Network organization

I split my 192.168.1.* range into 3 groups.

Later on I will split it into 2 subnets (technical / functional and other).

Technical: 1-99

Gateway Orange (DHCP disabled): 1
TV Orange: 2
Proxmox: 10
Kubernetes: 20
TrueNas: 40
Pi-hole DNS/DHCP (UDP 53/67/547): 50

Functional: 100-199

Kubernetes service IP range: 100-149
- Kubernetes Dashboard: 100

Other/PC/IoT: 200-254

sudo nano /etc/netplan/*.yaml
sudo netplan apply
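As a minimal sketch of what the netplan file can contain for a static address (the interface name eth0 is an assumption; the addresses follow the plan above):

network:
  version: 2
  ethernets:
    eth0:
      dhcp4: false
      addresses:
        - 192.168.1.20/24       # Kubernetes host in the technical range
      gateway4: 192.168.1.1     # Orange gateway
      nameservers:
        addresses:
          - 192.168.1.50        # Pi-hole DNS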

RSA key

Add the public key to each server for SSH connections:


cat id_rsa_ubuntu.pub >> ~/.ssh/authorized_keys
sudo systemctl restart ssh
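If the key pair does not exist yet, it can be generated and pushed in one step with ssh-copy-id, which appends to authorized_keys and sets the permissions (user and host below are examples):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_ubuntu
ssh-copy-id -i ~/.ssh/id_rsa_ubuntu.pub david@192.168.1.20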

Adding the Sickchill pod

Deployment

The docker command, with the filesystem already prepared:

docker run -d \
  --name=sickchill \
  -e PUID=1000 \
  -e PGID=1000 \
  -e TZ=Europe/London \
  -p 8081:8081 \
  -v /path/to/data:/config \
  -v /path/to/data:/downloads \
  -v /path/to/data:/tv \
  --restart unless-stopped \
  lscr.io/linuxserver/sickchill

Translation into a Kubernetes Deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sickchillserver 
  namespace: default
  labels:
    app: sickchill
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sickchill
  template:
    metadata:
      labels:
        run: sickchillserver 
        app: sickchill
    spec:
      containers:
      - name: sickchillserver 
        image: lscr.io/linuxserver/sickchill
        env:
          - name: "PUID"
            value: "1000"
          - name: "PGID"
            value: "1000" 
        ports:
        - containerPort: 8081
          name: tr-http
        volumeMounts:
        - mountPath: /config
          name: tr-config
        - mountPath: /downloads
          name: tr-downloads
        - mountPath: /tv
          name: tr-tv
        - mountPath: /anime
          name: tr-anime
      volumes:
      - name: tr-anime
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/40-Anime
      - name: tr-tv
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/30-Series
      - name: tr-downloads
        hostPath:
          type: DirectoryOrCreate
          path: /Videoclub/00-Tmp/sickchill/downloads
      - name: tr-config
        hostPath:
          type: DirectoryOrCreate
          path: /usr/kubedata/sickchillserver/config
---
apiVersion: v1
kind: Service
metadata:
  name: sickchill-svc
spec:
  selector:
    app: sickchill
  ports:
    - name: "http"
      port: 8081
      targetPort: 8081
  type: NodePort

Then retrieve the exposed port:

kubectl get all --all-namespaces | grep sickchill

Result: the SickChill web UI is accessible at https://<master-ip>:30610
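The NodePort can also be read directly from the service object (service name from the manifest above):

kubectl get svc sickchill-svc -o jsonpath='{.spec.ports[0].nodePort}'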