Thursday, September 16, 2021

Installation of a Kubernetes cluster using "kubeadm" - Final version [16 Sept 2021]

The following steps were tested on 16 Sept 2021.

Environment:

Master node: CentOS 7
Worker node: CentOS 7

Resource Links:

https://computingforgeeks.com/install-kubernetes-cluster-on-centos-with-kubeadm/
https://www.tecmint.com/install-kubernetes-cluster-on-centos-7/
https://www.inmotionhosting.com/support/edu/software/install-kubernetes-on-centos/

Getting external traffic into Kubernetes – ClusterIp, NodePort, LoadBalancer, and Ingress
https://www.ovh.com/blog/getting-external-traffic-into-kubernetes-clusterip-nodeport-loadbalancer-and-ingress/

Exposing an External IP Address to Access an Application in a Cluster:
https://kubernetes.io/docs/tutorials/stateless-application/expose-external-ip-address/

kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
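On a bare-metal or OpenStack setup without a cloud load balancer, a LoadBalancer service may stay "pending". A minimal alternative sketch, assuming the hello-world deployment from the tutorial above already exists, using a NodePort service instead:

kubectl expose deployment hello-world --type=NodePort --name=my-service   # reachable on every node's IP
kubectl get service my-service    # shows the assigned node port (30000-32767 range)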

You may need the following to set a password for the default "centos" user:

sudo passwd centos

On master server

  1. Update the OS and hostname
sudo yum -y update
sudo hostnamectl set-hostname master-node

Add the below two lines to the end of the /etc/hosts file:

192.168.42.77 master-node
192.168.42.53 worker-node-1

sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
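A quick check (not in the original notes) that SELinux is now permissive and will stay disabled after the next boot:

getenforce                              # should print "Permissive"
grep SELINUX= /etc/sysconfig/selinux    # should now include SELINUX=disabled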

sudo yum install -y yum-utils device-mapper-persistent-data lvm2


  2. Install and configure firewalld

sudo yum install firewalld -y
sudo systemctl start firewalld
sudo systemctl status firewalld
sudo firewall-cmd --state

output: running

  3. Set the following firewall rules on ports

sudo firewall-cmd --add-port={6443,2379-2380,10250,10251,10252,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload
sudo modprobe br_netfilter
echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables
	

Skip the above echo command if the file already contains 1.
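Optionally verify that the rules were actually added and the bridge setting took effect (a quick check, not part of the original steps):

sudo firewall-cmd --list-ports                       # should list 6443/tcp, 2379-2380/tcp, 10250/tcp, etc.
cat /proc/sys/net/bridge/bridge-nf-call-iptables     # should print 1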

  4. Add Kubernetes repository

sudo tee /etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

  5. Install Kubeadm
sudo yum install kubeadm -y
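This also pulls in kubelet and kubectl as dependencies. An optional sanity check of what got installed:

kubeadm version
kubectl version --client
kubelet --version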

  6. Install Docker
  • Create the daemon file manually. This is needed because of a conflict in the University's OpenStack environment.
sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],

"default-address-pools": [{"base":"172.80.0.0/16","size":24}]

}
EOF

  • Now run this command. It will add the official Docker repository, download the latest version of Docker, and install it:
curl -fsSL https://get.docker.com/ | sh
  • After installation has completed, start the Docker daemon:
sudo systemctl start docker
  • Verify that it’s running:
sudo systemctl status docker
  • Enable and start both services (kubelet and docker)

sudo systemctl enable kubelet
sudo systemctl start kubelet
sudo systemctl enable docker

ERROR: if Docker is unable to start, check the /etc/docker/daemon.json file for syntax errors (see the checks below).
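One way to track down such a failure (a sketch using tools normally present on CentOS 7): validate the JSON syntax and, once Docker is up, confirm it picked up the systemd cgroup driver that kubelet expects.

python -m json.tool /etc/docker/daemon.json    # prints the file back if the JSON is valid, an error otherwise
sudo docker info | grep -i cgroup              # should report "Cgroup Driver: systemd"
sudo journalctl -u docker --no-pager | tail    # recent Docker daemon logs if it still fails to start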


  7. Disable swap using the following command:
sudo swapoff -a
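swapoff -a only disables swap until the next reboot. A commonly used extra step (not in the original notes) to keep it off permanently is to comment out the swap entry in /etc/fstab:

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab    # comment out any swap line so it stays off after reboot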

  8. Configure the sysctl.

sudo modprobe overlay
sudo modprobe br_netfilter

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

  • Log in to the server to be used as the master and make sure that the br_netfilter module is loaded:
lsmod | grep br_netfilter
Output:
br_netfilter 22256 0
bridge 151336 2 br_netfilter,ebtable_broute
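You can also confirm that the sysctl values were applied (optional check):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# each of the three should report "= 1"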

  9. Then initialize the Kubernetes master
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Output:
At the end of the output you should see something like the text shown below.

  • Configure the home directory

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

  10. Now check the cluster with the below command on the master node and install the pod network:
kubectl get nodes
Output:
NAME          STATUS     ROLES                  AGE     VERSION
master-node   NotReady   control-plane,master   7m24s   v1.22.1

  • Installing the pod network (the apply command is sketched after the output below):
output:
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
We're using the 'flannel' virtual network.
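The apply command itself is not captured above. Judging from the output (flannel, pod CIDR 10.244.0.0/16), it was the standard flannel manifest, roughly the following (this URL was the commonly used one in 2021; check the flannel project for the current location):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl get nodes    # after a minute or two the master should change from NotReady to Ready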


To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.42.77:6443 --token ghgvji.0540qyz0piyez8vl --discovery-token-ca-cert-hash sha256:feabb5bd4613acacdd90dd539d288b3483090817ec8bb0a9997f97876c8bb94f
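The bootstrap token in the join command is only valid for 24 hours. If it expires, or the command is lost, a fresh one can be printed on the master at any time:

sudo kubeadm token create --print-join-command    # prints a complete, ready-to-use "kubeadm join ..." command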

On Worker Node

  1. Update the OS and hostname
sudo yum -y update
sudo hostnamectl set-hostname worker-node-1
  • Add the below line to the end of the /etc/hosts file:

192.168.42.53 worker-node-1

sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

  2. Install and configure firewalld (same as on the master node)

sudo yum install firewalld -y
sudo systemctl start firewalld
sudo systemctl status firewalld
sudo firewall-cmd --state

output: running

  3. Enable the following ports:

sudo firewall-cmd --add-port={10250,30000-32767,5473,179}/tcp --permanent
sudo firewall-cmd --add-port={4789,8285,8472}/udp --permanent
sudo firewall-cmd --reload

sudo modprobe br_netfilter
echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables

  4. Add Kubernetes repository (same as on the master node)

sudo tee /etc/yum.repos.d/kubernetes.repo<<EOF
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

  5. Install Kubeadm (same as on the master node)
sudo yum install kubeadm -y

  6. Install Docker (same as on the master node)
  • Create the daemon file manually. This is needed because of a conflict in the University's OpenStack environment.
sudo mkdir -p /etc/docker

sudo tee /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],

"default-address-pools": [{"base":"172.80.0.0/16","size":24}]

}
EOF

  • Now run this command. It will add the official Docker repository, download the latest version of Docker, and install it:
curl -fsSL https://get.docker.com/ | sh
  • After installation has completed, start the Docker daemon:
sudo systemctl start docker
ERROR: if Docker is unable to start, check the /etc/docker/daemon.json file for syntax errors (same checks as on the master node).
  • Verify that it’s running:
sudo systemctl status docker
  • Enable and start both services (kubelet and docker)

sudo systemctl enable kubelet
sudo systemctl start kubelet
sudo systemctl enable docker

  7. Disable swap using the following command (same as on the master node):
sudo swapoff -a

  8. Configure the sysctl.

sudo modprobe overlay
sudo modprobe br_netfilter

sudo tee /etc/sysctl.d/kubernetes.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sudo sysctl --system

  • On the worker node, make sure that the br_netfilter module is loaded:
lsmod | grep br_netfilter
Output:
br_netfilter 22256 0
bridge 151336 2 br_netfilter,ebtable_broute

  9. By this time your master node should be ready.
You should now have the join command (similar to the one below) from the master node's output:
kubeadm join 192.168.42.77:6443 --token ghgvji.0540qyz0piyez8vl \
--discovery-token-ca-cert-hash sha256:feabb5bd4613acacdd90dd539d288b3483090817ec8bb0a9997f97876c8bb94f

  10. Now join the cluster as a worker node:




The following command should be copied from the output on your master node (your IP, token, and hash will differ):
sudo kubeadm join 192.168.42.123:6443 --token cwb5su.r67yrbkomcp9pb6z --discovery-token-ca-cert-hash sha256:86b262257585aceab65b8dc12aa199acb4057e649414a90ae21a50fe75dec17a

Output:
You should see output confirming that this node has joined the cluster.
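To confirm the join worked, go back to the master node and list the nodes; the worker should appear and become Ready once its flannel pod is running. Optionally label it so the ROLES column shows something other than <none> (the label value here is just an example):

kubectl get nodes
kubectl label node worker-node-1 node-role.kubernetes.io/worker=worker    # optional cosmetic role label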




Reset kubeadm on Master node

sudo kubeadm reset

sudo rm -rf $HOME/.kube/

or

sudo rm -rf $HOME/.kube/config

sudo rm -rf /var/lib/etcd
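kubeadm reset does not remove everything. If a later kubeadm init or join misbehaves, the leftovers commonly cleaned up as well are the CNI configuration and the iptables rules created by kube-proxy/flannel (use with care; this is a sketch, not part of the original notes):

sudo rm -rf /etc/cni/net.d                                        # old flannel/CNI configuration
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -X   # flush rules left by kube-proxy and flannel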
