Creating Kubernetes cluster with kubeadm on Power (RHEL or CentOS)


Note: These instructions assume CentOS as the underlying OS on the servers.

Step 1: Prepare Kubernetes servers (create VMs and update the OS)

The minimal requirements for the servers used in the cluster are:

  • 2 GiB or more of RAM per machine; any less leaves little room for your apps.
  • At least 2 CPUs on the machine that you use as a control-plane node.
  • Full network connectivity among all machines in the cluster (private or public).

Log in to all servers and update the OS:
sudo yum -y update && sudo systemctl reboot
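
kubeadm also requires every node in the cluster to have a unique hostname. If the VMs were cloned from a common image, setting one per node is a sensible extra preparation step (the names below are only examples):

sudo hostnamectl set-hostname k8s-master    # on the control-plane node
sudo hostnamectl set-hostname k8s-worker-1  # on each worker, with a unique name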

Step 2: Install kubelet, kubeadm and kubectl

(To be done on all servers)

  • Add Kubernetes repository for CentOS
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.29/rpm/repodata/repomd.xml.key
exclude=kubelet kubeadm kubectl cri-tools kubernetes-cni
EOF
  • Install the required packages

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

  • Confirm installation by checking the version of kubectl

kubectl version --client
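
The companion binaries can be checked the same way; all three should report the same v1.29.x minor version as the repository configured above:

kubeadm version
kubelet --version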

Step 3: Disable SELinux and Swap

(To be done on all servers)

  • If SELinux is in enforcing mode, set it to permissive mode (the commands below switch it at runtime and persist the change across reboots).
sudo setenforce 0
sudo sed -i 's/^SELINUX=.*/SELINUX=permissive/g' /etc/selinux/config
  • Turn off swap.
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
sudo swapoff -a
  • Configure sysctl
sudo modprobe overlay
sudo modprobe br_netfilter
sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system
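
Before moving on, it may help to verify that the settings took effect; expected results are shown as comments:

getenforce                  # should print Permissive
swapon --show               # should print nothing (swap is off)
sysctl net.ipv4.ip_forward  # should print net.ipv4.ip_forward = 1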

Step 4: Install container runtime (containerd)

(To be done on all servers)

Installing Containerd runtime

  • Configure persistent loading of modules
sudo tee /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
  • Install required packages
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
  • Add Docker repo
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
  • Install containerd
sudo yum install -y containerd.io
  • Configure containerd and start service
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml
  • Restart and enable containerd
sudo systemctl restart containerd
sudo systemctl enable containerd
  • Use the systemd cgroup driver. To enable it for runc, set the following in /etc/containerd/config.toml:
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
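
Rather than editing the file by hand, the default config generated above (which contains SystemdCgroup = false) can be patched in place; this one-liner is only a sketch and assumes the stock config.toml:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml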

If you apply this change, make sure to restart containerd again:

sudo systemctl restart containerd

Step 5: Configure Firewall

(To be done on all servers)

  • Master node ports:
sudo firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
  • Worker node ports:
sudo firewall-cmd --add-port={10250,30000-32767,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
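
To confirm the rules are active after the reload:

sudo firewall-cmd --list-ports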

Step 6: Initialize your control-plane node

(To be done on master node)

  • Log in to the server to be used as the master node and make sure that the br_netfilter module is loaded:
$ lsmod | grep br_netfilter
br_netfilter           22256  0 
bridge                151336  2 br_netfilter,ebtable_broute
  • Enable kubelet service.
sudo systemctl enable kubelet
  • Create the cluster (an optional pod-network flag is shown after this list)
sudo kubeadm init
  • Configure kubectl
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
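
As referenced above, kubeadm init can also be given an explicit pod network CIDR. 192.168.0.0/16 is the default used by the Calico manifest installed in the next step, so passing it keeps the two in agreement; adjust it if that range clashes with your infrastructure:

sudo kubeadm init --pod-network-cidr=192.168.0.0/16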

Step 7: Install network plugin (Calico)

(To be done on master node)

  • Install Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
  • Confirm that all the pods are running:
kubectl get pods --all-namespaces
  • Confirm that the master node is ready:
kubectl get nodes -o wide
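
The Calico pods can take a minute or two to pull and start. The manifest above installs them into kube-system, so they can be watched until they are Running (the label and namespace assume the manifest-based install):

kubectl get pods -n kube-system -l k8s-app=calico-node -w

Note that the URL above tracks the latest Calico release; to pin a specific version, fetch a tagged manifest instead, e.g. https://raw.githubusercontent.com/projectcalico/calico/v3.27.2/manifests/calico.yaml (the version shown is only an example).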

Step 8: Add worker nodes

(To be done on worker nodes)

  • The join command printed at the end of kubeadm init is used to add a worker node to the cluster. Below is an example (your endpoint, token, and hash will differ):
kubeadm join 9.115.99.4:6443 --token 9t6qvh.yg820g6bwyio2t4c \
        --discovery-token-ca-cert-hash sha256:f3d6c8f73418838400e5c2fb8db13af009d33f013b17f92ec31d25badb7ca97f
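
If the join command was not saved, it can be regenerated on the master node at any time:

sudo kubeadm token create --print-join-command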

Step 9: Deploy an application on the cluster (optional, for testing only)

Example with Nginx

nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello-world
  labels:
    app: nginx-hello-world
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-hello-world
  template:
    metadata:
      labels:
        app: nginx-hello-world
    spec:
      containers:
        - name: nginx
          image: nginx
      terminationGracePeriodSeconds: 5
      serviceAccountName: default

nginx-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-hello-world
  name: nginx-hello-world
spec:
  ports:
  - name: http
    port: 8080
    nodePort: 31704
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-hello-world
  type: NodePort
Apply the manifests and test the service:

kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml
curl http://<EXTERNAL-IP>:<NODE-PORT> -k
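
With the Service above, the NodePort is fixed at 31704; the node IP can be read from the cluster:

kubectl get nodes -o wide          # INTERNAL-IP / EXTERNAL-IP columns
kubectl get svc nginx-hello-world  # should show 8080:31704/TCP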

Step 10: Install Kubernetes Dashboard (Optional)

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml -O kubernetes-dashboard-deployment.yml
kubectl apply -f kubernetes-dashboard-deployment.yml
  • Get an admin token to log in to the Dashboard
kubectl create serviceaccount cluster-admin-dashboard-sa
kubectl create clusterrolebinding cluster-admin-dashboard-sa \
  --clusterrole=cluster-admin \
  --serviceaccount=default:cluster-admin-dashboard-sa
kubectl get secret | grep cluster-admin-dashboard-sa
kubectl describe secret <name_of_above_secret>
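
Note: on Kubernetes 1.24 and later (including the v1.29 packages used here), a token Secret is no longer created automatically for a new service account, so the grep above may return nothing. A short-lived token can be requested directly instead:

kubectl create token cluster-admin-dashboard-sa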
