
By Tommy Gingras

Last update 2022-08-22

Tags: Tool, Kubernetes

Kubernetes

Environment

  • Host CPU: Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
  • Linux: Rocky Linux 9
  • Kubernetes: 1.24.4
  • 2 VMs managed with virsh, using virtio network interfaces
    • Master
      • 2 Cores
      • 4GB Memory
      • 50GB Hard Drive (No SSD)
      • 1 Gbps Network
    • Node
      • 4 Cores
      • 4GB Memory
      • 100GB Hard Drive (No SSD)
      • 1 Gbps Network

Installation Notes

I followed the official documentation and searched the internet for the bugs I ran into.


Prerequisites

Source: https://kubernetes.io/fr/docs/setup/production-environment/tools/kubeadm/install-kubeadm/

All Nodes & Master

sudo dnf install -y git tmux nmap-ncat

## SWAP
sudo swapoff -a
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

## SELinux
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

### sysctl
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system
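
To confirm the modules are loaded and the sysctl parameters took effect, a quick check (not part of the official steps):

lsmod | grep -e overlay -e br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward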

On the Master

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10252/tcp
sudo firewall-cmd --permanent --add-port=9099/tcp
sudo firewall-cmd --permanent --add-port=10256/tcp
sudo firewall-cmd --reload
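
To double-check that the rules were applied:

sudo firewall-cmd --list-ports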

sudo hostnamectl set-hostname k8s-master

On the Node(s)

sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=30000-32767/tcp
sudo firewall-cmd --permanent --add-port=9099/tcp
sudo firewall-cmd --permanent --add-port=10256/tcp
sudo firewall-cmd --reload

sudo hostnamectl set-hostname k8s-node-01

DNS

Don't forget to update the /etc/hosts file or your DNS to fit your configuration.

For example (do the same on each host):

vi /etc/hosts
192.168.2.46    k8s-master
192.168.2.47    k8s-node-01
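
A quick sanity check that the names resolve on each host (the hostnames are the ones set above):

getent hosts k8s-master k8s-node-01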

Install docker and containerd (All Nodes and Master)

sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf -y install docker-ce docker-ce-cli containerd.io docker-compose-plugin
sudo systemctl --now enable docker
sudo systemctl enable containerd

Set up containerd:

sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml

vi /etc/containerd/config.toml

Look for the section `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]`
and set `SystemdCgroup = true`.
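
If you prefer a non-interactive edit, the same change can be done with sed (this assumes the default generated config, where the flag is `SystemdCgroup = false`):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml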

sudo systemctl restart containerd

Install Kubernetes (All Nodes and Master)

# Kubeadm & kubectl
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

sudo systemctl enable --now kubelet
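
If you want to reproduce this exact environment (Kubernetes 1.24.4), the packages can be pinned instead of taking the latest release; the version strings below are an assumption based on the environment section above:

sudo yum install -y kubelet-1.24.4 kubeadm-1.24.4 kubectl-1.24.4 --disableexcludes=kubernetes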

Initialize the cluster (Master Only)

Replace 192.168.2.46 with your advertise IP. The pod network CIDR is determined by your network add-on.

kubeadm init \
    --pod-network-cidr=10.244.0.0/16 \
    --apiserver-advertise-address=192.168.2.46 \
    --control-plane-endpoint=k8s-master

It should take 2-3 minutes to bring up your cluster. Copy the join command for future use, and execute the commands printed on your screen to configure access to your cluster.
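
The access-configuration commands that kubeadm prints look like this (run them on the master as the user who will manage the cluster):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config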

Example of a join command:

kubeadm join 192.168.2.46:6443 --token 1notey.2v575oery8hhgia2 \
	--discovery-token-ca-cert-hash sha256:92addacc41319e3d9be6e2fd4c6813cb459dac5ad3cc63ae5815820ac021bc74
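
The join command runs on each worker node. Back on the master, you can confirm the node registered with:

kubectl get nodes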

Network Using Calico

Please check the official documentation for all details.

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

At this point you should see something like:

[root@k8s-master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                       READY   STATUS              RESTARTS   AGE
default       helloworld                                 0/1     ContainerCreating   0          7s
kube-system   calico-kube-controllers-5b97f5d8cf-288k6   1/1     Running             0          5m6s
kube-system   calico-node-g2txx                          0/1     Running             0          5m6s
kube-system   calico-node-rwl4d                          0/1     Running             0          5m6s
kube-system   coredns-6d4b75cb6d-5wzbr                   1/1     Running             0          6m6s
kube-system   coredns-6d4b75cb6d-qpdj9                   1/1     Running             0          6m5s
kube-system   etcd-k8s-master                            1/1     Running             47         6m31s
kube-system   kube-apiserver-k8s-master                  1/1     Running             45         6m18s
kube-system   kube-controller-manager-k8s-master         1/1     Running             30         6m30s
kube-system   kube-proxy-6qztz                           1/1     Running             0          5m21s
kube-system   kube-proxy-g2vvh                           1/1     Running             0          6m6s
kube-system   kube-scheduler-k8s-master                  1/1     Running             47         6m31s

Deploy example app

kubectl run helloworld --image=k8s.gcr.io/echoserver:1.4 --port=8080
kubectl expose pod helloworld --type=NodePort

Get the exposed port:

[root@k8s-master ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
helloworld   NodePort    10.111.116.80   <none>        8080:31676/TCP   11s
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          9m57s

Use your external node IP to access the service.
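
For example, with the node IP from the /etc/hosts example above and the NodePort shown by kubectl get svc:

curl http://192.168.2.47:31676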


Stress Test using Fibonacci

Source: https://github.com/errm/fib

kubectl run fib --image=errm/fib --port=9292
kubectl expose pod fib --type=NodePort
kubectl get svc
kubectl logs fib

curl http://192.168.2.47:32246/20

After hitting this URL (the NodePort 32246 here comes from the kubectl get svc output), check the Grafana CPU graph to see the load.


Monitoring Using Prometheus and Grafana

I used this operator: https://github.com/prometheus-operator/kube-prometheus

Deployment
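
The manifests referenced below live in the kube-prometheus repository, so clone it first and run the commands from its root:

git clone https://github.com/prometheus-operator/kube-prometheus.git
cd kube-prometheus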

kubectl apply --server-side -f manifests/setup
until kubectl get servicemonitors --all-namespaces ; do date; sleep 1; echo ""; done
kubectl apply -f manifests/

Dashboards

Expose the port to access the dashboards. Source: https://github.com/prometheus-operator/kube-prometheus/blob/main/docs/access-ui.md

kubectl --namespace monitoring port-forward svc/grafana 3000 --address 0.0.0.0
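
The same approach works for the Prometheus UI; the service name below is the one shipped by kube-prometheus (an assumption here, check kubectl get svc -n monitoring):

kubectl --namespace monitoring port-forward svc/prometheus-k8s 9090 --address 0.0.0.0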

Stress test with artillery

A good article with more details: https://blog.appsignal.com/2021/11/10/a-guide-to-load-testing-nodejs-apis-with-artillery.html

npm i -g artillery@latest

Create a config file like the one below:

config:
  target: "http://192.168.2.47:31676/"
  phases:
    - duration: 60
      arrivalRate: 5
      name: Warm up
    - duration: 120
      arrivalRate: 5
      rampTo: 1000
      name: Ramp up load
    - duration: 600
      arrivalRate: 1000
      name: Sustained load

scenarios:
  - name: "Dummy Test"
    flow:
      - get:
          url: "/"

Launch the script:

artillery run test.yml

Conclusion

This is the first part of my notes from playing with and learning Kubernetes.