How to deploy a k8s cluster using Vagrant?
The previous post showed you how to launch a k8s 1.21.4 cluster on Ubuntu 20.04 using Vagrant in a few minutes; this post walks through what actually happens behind the scenes.
Tool chain:
- Windows 10
- VirtualBox 6.0: hypervisor for the VMs
- Packer 1.6.1: automated Vagrant box builder
- Vagrant 2.2.10: deploys the multi-node VM cluster on Windows 10
The workflow looks like this (a command sketch follows the list):
- build an Ubuntu 20.04 Vagrant box that is k8s-1.21.4-ready (docker, kubeadm, kubectl and kubelet installed) and upload it to Vagrant Cloud
- prepare a Vagrantfile and deploy the k8s cluster with Vagrant
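At a high level the two steps boil down to roughly the commands below. This is only a sketch: the Packer template name `ubuntu20-k8s.json` and the box version `1.0.0` are placeholders, not files from this post, and publishing to Vagrant Cloud can also be done through the web UI.

```
# 1) build the box from a Packer template and publish it to Vagrant Cloud
#    (template file name and version number are placeholders)
packer build ubuntu20-k8s.json
vagrant cloud publish dreamcloud/ubuntu20-k8s 1.0.0 virtualbox ubuntu20-k8s.box

# 2) deploy the cluster from the directory that holds the Vagrantfile
vagrant up
```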
Let’s go through the details step by step.
build ubuntu k8s ready box using Packer
Packer comes from the same company as Vagrant, HashiCorp (https://www.hashicorp.com/); it is an open-source tool for creating identical machine images for multiple platforms from a single source configuration.
I have already built and uploaded an Ubuntu k8s-ready Vagrant box here:
https://app.vagrantup.com/dreamcloud/boxes/ubuntu20-k8s
Starting from this box, with the k8s packages already in place, saves a lot of time when customizing the k8s cluster later.
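If you only want to consume the prebuilt box, it can be pulled straight from Vagrant Cloud; for example:

```
# download the prebuilt box for the VirtualBox provider
vagrant box add dreamcloud/ubuntu20-k8s --provider virtualbox

# confirm it is available locally
vagrant box list
```

A Vagrantfile that sets `config.vm.box = "dreamcloud/ubuntu20-k8s"` will also download the box automatically on the first `vagrant up`, so this step is optional.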
The main work during the box build is installing the expected k8s packages:
```
# update/upgrade system
apt-get update && apt-get upgrade -y

# install docker
apt-get install -y docker.io
systemctl enable docker
systemctl start docker
usermod -aG docker vagrant

# turn off swap
sudo swapoff -a
sudo sed -i '/swap/d' /etc/fstab

# install k8s
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' >> /etc/apt/sources.list.d/kubernetes.list
curl -sS https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-get update && apt-get install -y -q kubelet=1.21.4-00 kubeadm=1.21.4-00 kubectl=1.21.4-00
apt-mark hold kubelet kubeadm kubectl
```
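A quick sanity check inside a machine built from the box (for example after `vagrant ssh`) can confirm the versions and the apt hold; something along these lines:

```
# versions baked into the box
docker --version
kubeadm version -o short
kubectl version --client --short
kubelet --version

# kubelet, kubeadm and kubectl should show up as held,
# so a later apt-get upgrade will not move them off 1.21.4
apt-mark showhold
```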
build k8s cluster using Vagrant
It is time to build the k8s cluster on top of the box above; this goes very fast since the k8s packages are already installed on every node.
Vagrant uses a single configuration file, the Vagrantfile, to deploy the VM cluster; the main tasks in the Vagrantfile are two inline provisioning scripts:
- masterScript
- nodeScript
masterScript
```
$masterScript = <<SCRIPT
# update k8s-master /etc/hosts
sudo sed -i '/k8s/d' /etc/hosts
sudo sed -i "1i192.168.22.23 k8s-master" /etc/hosts
sudo sed -i "2i192.168.22.24 k8s-node1" /etc/hosts
sudo sed -i "2i192.168.22.25 k8s-node2" /etc/hosts

sudo kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.22.23 --ignore-preflight-errors=NumCPU \
  | tee /tmp/kubeadm.log

# allow normal user to run kubectl
if [ -d $HOME/.kube ]; then
  rm -r $HOME/.kube
fi
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# install calico network addon
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# allow run on master
kubectl taint nodes --all node-role.kubernetes.io/master-

# cli completion
sudo apt-get install bash-completion -y
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc
echo "export do='--dry-run=client -o yaml'" >> ~/.bashrc
SCRIPT
```
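After the master provisioning finishes, a quick check on the master shows whether the control plane and the Calico add-on came up; for example:

```
# the master node should show up, and turn Ready once Calico is running
kubectl get nodes

# control-plane and calico pods live in kube-system
kubectl get pods -n kube-system
```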
The kubeadm init run above prints a join token, which is captured in a temp file (/tmp/kubeadm.log) and later pulled over to the worker nodes; that is good enough for a dev environment setup, since the whole k8s cluster comes up in a few minutes in my case.
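For reference, what the nodeScript below parses out of /tmp/kubeadm.log is the standard join hint that kubeadm init prints at the end; it looks roughly like this (the token and hash values here are placeholders, not real output from this setup):

```
# tail of /tmp/kubeadm.log (values are placeholders)
kubeadm join 192.168.22.23:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234...cdef
```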
nodeScript
```
$nodeScript = <<SCRIPT
# update k8s-node /etc/hosts
sudo sed -i '/k8s/d' /etc/hosts
sudo sed -i "1i192.168.22.23 k8s-master" /etc/hosts
sudo sed -i "2i192.168.22.24 k8s-node1" /etc/hosts
sudo sed -i "2i192.168.22.25 k8s-node2" /etc/hosts

# add private key
curl -Lo $HOME/.ssh/vagrant https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant
chmod 0600 $HOME/.ssh/vagrant

# join cluster
scp -q -o "StrictHostKeyChecking no" -i $HOME/.ssh/vagrant k8s-master:/tmp/kubeadm.log /tmp/kubeadm.log
token=$(cat /tmp/kubeadm.log | grep "kubeadm join" | head -1 | awk -Ftoken '{print $2}' | awk '{print $1}')
certhash=$(cat /tmp/kubeadm.log | grep discovery-token-ca-cert-hash | tail -1 | awk '{print $2}')
sudo kubeadm join k8s-master:6443 --token $token \
  --discovery-token-ca-cert-hash $certhash

# allow normal user to run kubectl
if [ -d $HOME/.kube ]; then
  rm -r $HOME/.kube
fi
mkdir -p $HOME/.kube
scp -q -o "StrictHostKeyChecking no" -i $HOME/.ssh/vagrant k8s-master:$HOME/.kube/config $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# cli completion
sudo apt-get install bash-completion -y
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -F __start_kubectl k' >> ~/.bashrc
echo "export do='--dry-run=client -o yaml'" >> ~/.bashrc
SCRIPT
```
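Once both workers have joined, the cluster can be verified from any of the three nodes, since each one got a copy of the kubeconfig; for example:

```
# all three nodes should eventually report Ready
kubectl get nodes -o wide

# pods can land on the workers and, because the taint was removed, on the master too
kubectl get pods -A -o wide
```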