Thursday, July 6, 2023

Docker and Kubernetes setup on CentOS 9 in VirtualBox

A home lab simplifies Docker and Kubernetes installation on CentOS VMs in VirtualBox. This walkthrough uses the setup below, which needs to be added to /etc/hosts on all the nodes:

192.168.56.3 master

192.168.56.4 node01

192.168.56.5 node02

You can modify the network configuration in /etc/NetworkManager/system-connections/enp0s3.nmconnection (NetworkManager is the standard way to manage networking from CentOS 9 onwards).
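As a minimal sketch of a static-IP keyfile (this assumes enp0s3 is the adapter attached to the 192.168.56.0/24 host-only network; adjust the interface name and address per node):

[connection]
id=enp0s3
type=ethernet
interface-name=enp0s3

[ipv4]
method=manual
address1=192.168.56.3/24

After editing, reload and reactivate the connection:

nmcli connection reload
nmcli connection up enp0s3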

 

1. Install CentOS 9 from the ISO on a VirtualBox VM (2 GB RAM and 2 processors) and give it a hostname such as master. The VM should be reachable by the other VMs and have internet access, and optionally be reachable from the host as well (for accessing it using PuTTY).
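For example, to set the hostname on the first VM:

hostnamectl set-hostname master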


2. Install Docker and start its service:


yum install -y yum-utils git make

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo

yum install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y

systemctl start docker && systemctl enable docker
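Optionally, verify the installation works before moving on:

docker run --rm hello-world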


3. Switch off swap in the master VM and comment out the swap entry in /etc/fstab to make the change permanent across reboots:


swapoff -a
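To comment out the fstab entry in one step (a convenience sketch; review /etc/fstab afterwards to confirm only the swap line was touched):

sed -i '/\sswap\s/ s/^/#/' /etc/fstab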


4. Disable firewalld or add exceptions for the control-plane ports as shown below:


firewall-cmd --permanent --add-port=6443/tcp

firewall-cmd --permanent --add-port=2379/tcp

firewall-cmd --permanent --add-port=2380/tcp

firewall-cmd --permanent --add-port=10250/tcp

firewall-cmd --permanent --add-port=10259/tcp

firewall-cmd --permanent --add-port=10257/tcp
systemctl reload firewalld.service
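You can confirm the openings afterwards with:

firewall-cmd --list-ports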

5. Download Go from https://go.dev/doc/install, extract it, and add it to the PATH environment variable:


wget https://go.dev/dl/go1.20.5.linux-amd64.tar.gz
tar xvf go1.20.5.linux-amd64.tar.gz
mv go /usr/local/

export PATH=$PATH:/usr/local/go/bin
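To make the PATH change persist across logins and verify the toolchain (one common approach; adjust to your shell setup):

echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile.d/go.sh
go version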

6. Clone and build cri-dockerd (the CRI adapter that lets Kubernetes use Docker Engine) following the steps at https://github.com/Mirantis/cri-dockerd (the Go toolchain from step 5 is required for the build):


git clone https://github.com/Mirantis/cri-dockerd.git
cd cri-dockerd
make cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
install packaging/systemd/* /etc/systemd/system

sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service

systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
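Confirm the socket is active before continuing:

systemctl is-active cri-docker.socket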

7. Install kubeadm, kubelet, and kubectl:


cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

sudo setenforce 0

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

 

sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

 

sudo systemctl enable --now kubelet
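Optionally confirm the installed versions:

kubeadm version
kubectl version --client

Note that kubelet will now restart every few seconds in a crash loop; that is expected, since it is waiting for kubeadm to tell it what to do.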

 

8. Initialize the Kubernetes cluster (this step only on the master):


kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --pod-network-cidr=192.168.0.0/24

Pay attention to the range 192.168.0.0/24: it should be dedicated to Kubernetes-internal pod communication and must not be the same network your VMs use for normal access (192.168.56.0/24 here).

If you omit the --cri-socket option, kubeadm reports an error that multiple container runtimes were found and that you need to select one of them.

The output will look something like this:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

9. Run kubectl get nodes; at this point it fails with the output below, because kubectl has not yet been configured:


E0706 11:31:19.632061   38975 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused

E0706 11:31:19.632304   38975 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused

E0706 11:31:19.634063   38975 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused

E0706 11:31:19.636012   38975 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused

E0706 11:31:19.638900   38975 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused

The connection to the server localhost:8080 was refused - did you specify the right host or port?


10. Run the commands below to configure kubectl; the master then shows up in the NotReady status (this step only on the master):


mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

11. Go to https://kubernetes.io/docs/concepts/cluster-administration/addons/ and choose Calico as the pod network add-on, which ultimately leads to the Calico quickstart at https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart (this step only on the master).

12. Download and run the command below to add the custom resource definitions (this step only on the master):


wget --no-check-certificate https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/tigera-operator.yaml

kubectl create -f tigera-operator.yaml
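You can watch the operator come up with (it runs in the tigera-operator namespace):

kubectl get pods -n tigera-operator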


13. Download https://raw.githubusercontent.com/projectcalico/calico/v3.26.1/manifests/custom-resources.yaml and change the cidr value to the pod network range passed to kubeadm in step 8 with --pod-network-cidr (this step only on the master).


Content of the file custom-resources.yaml after the modification:
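Reconstructed here from the upstream v3.26.1 manifest (only the cidr line changes from its default of 192.168.0.0/16; verify against your downloaded copy):

apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    ipPools:
    - blockSize: 26
      cidr: 192.168.0.0/24
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()

---

apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}

Then apply it:

kubectl create -f custom-resources.yaml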




14. Wait about 3 minutes, then run kubectl get pods -A and kubectl get nodes; the control-plane node becomes Ready and all the pods reach the Running state.






15. Follow the same steps on the worker nodes, except that the kubeadm command is join instead of init, as below, using the token and CA cert hash printed by your own kubeadm init in step 8 (this step only on the worker nodes):


kubeadm join 192.168.56.3:6443 --token 4hs3dt.60onfv5nk4lxlz2f  --discovery-token-ca-cert-hash sha256:db3cbc9937a0c9c17ccf0bcf6842326f1d27cecd4caceb8fe006fe19d16435df --cri-socket unix:///var/run/cri-dockerd.sock

Wait around 5 minutes to see all the pods in the Running state and node node01 in the Ready state.

Now you can test the cluster by creating an nginx pod:

On the master node, execute: kubectl run nginx -o yaml --image=nginx
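Then confirm the pod was scheduled onto one of the workers:

kubectl get pods -o wide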