In this lab you will bootstrap two Kubernetes worker nodes. The following components will be installed: runc, containerd, kubelet, and kube-proxy.
The commands in this lab must be run on each worker instance: node01 and node02.
Login to each worker instance using the vagrant ssh terminal.
You can run the commands on both nodes in parallel with tmux.
Install the OS dependencies:
sudo dnf install -y socat conntrack ipset
The socat binary enables support for the kubectl port-forward command.
By default, the kubelet will fail to start if swap is enabled. It is recommended that swap be disabled to ensure Kubernetes can provide proper resource allocation and quality of service.
Verify if swap is enabled:
swapon --show
If the output is empty, swap is not enabled. If swap is enabled, run the following commands to disable it immediately (the empty /etc/systemd/zram-generator.conf prevents zram-based swap from being recreated on Fedora-family systems):
sudo touch /etc/systemd/zram-generator.conf
sudo swapoff -a
To ensure swap remains off after reboot, consult your Linux distro's documentation.
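On many distributions, swap is re-enabled at boot by an entry in /etc/fstab, and a common approach is to comment that entry out. The sketch below demonstrates this on a sample copy so you can see the effect first; on a real node you would run the sed against /etc/fstab with sudo.

```shell
# Sketch: comment out fstab swap entries so swap stays off after reboot.
# Demonstrated on a sample file; run against /etc/fstab with sudo on a real node.
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 /    ext4 defaults 0 1
UUID=ef56-7890 none swap sw       0 0
EOF
# Prefix any uncommented line mentioning swap with '#'.
sed -i '/\bswap\b/ s/^[^#]/#&/' /tmp/fstab.demo
grep swap /tmp/fstab.demo
```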
Create the installation directories:
sudo mkdir -p \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes/pki \
/var/run/kubernetes
Install the worker binaries:
{
cd ~/downloads
mkdir -p containerd
tar -xvf crictl-v1.31.1-linux-amd64.tar.gz
tar -xvf containerd-1.7.23-linux-amd64.tar.gz -C containerd
mv runc.amd64 runc
chmod +x crictl kubectl kube-proxy kubelet runc
sudo cp crictl kubectl kube-proxy kubelet runc /usr/local/bin/
sudo cp containerd/bin/* /bin/
cd ~
}
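As an optional sanity check after the copy, you can confirm each binary is resolvable on the PATH (a sketch; adjust the list if you installed a different set of binaries):

```shell
# Report whether each installed binary is resolvable on the PATH.
for bin in crictl kubectl kube-proxy kubelet runc containerd; do
  if command -v "$bin" >/dev/null 2>&1; then
    echo "$bin: found"
  else
    echo "$bin: MISSING"
  fi
done
```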
Set the CIDR ranges used within the cluster:
export POD_CIDR=10.244.0.0/16
export SERVICE_CIDR=10.96.0.0/16
Compute the cluster DNS address, which is conventionally .10 in the service CIDR range:
export CLUSTER_DNS=$(echo $SERVICE_CIDR | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s.10", $1, $2, $3) }')
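With the default SERVICE_CIDR above, the awk expression splits the CIDR on dots, keeps the first three octets, and appends .10:

```shell
# Derive the cluster DNS IP from the service CIDR (first three octets + ".10").
SERVICE_CIDR=10.96.0.0/16
CLUSTER_DNS=$(echo $SERVICE_CIDR | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s.10", $1, $2, $3) }')
echo $CLUSTER_DNS   # 10.96.0.10
```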
Install the containerd configuration files:
{
sudo mkdir -p /etc/containerd/
sudo cp configs/containerd-config.toml /etc/containerd/config.toml
sudo cp configs/containerd.service /etc/systemd/system/
}
Create the kubelet-config.yaml configuration file:
{
envsubst < templates/kubelet-config.yaml.template \
| sudo tee /var/lib/kubelet/kubelet-config.yaml
envsubst < templates/kubelet.service.template \
| sudo tee /etc/systemd/system/kubelet.service
sudo cp ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubelet.kubeconfig
sudo cp ${HOSTNAME}.key ${HOSTNAME}.crt /var/lib/kubernetes/pki/
sudo cp ${HOSTNAME}.kubeconfig /var/lib/kubelet
sudo cp ca.crt /var/lib/kubernetes/pki/
}
Create the kube-proxy-config.yaml configuration file:
{
envsubst < templates/kube-proxy-config.yaml.template \
| sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
sudo cp kube-proxy.crt kube-proxy.key /var/lib/kubernetes/pki/
sudo cp kube-proxy.kubeconfig /var/lib/kube-proxy
sudo cp configs/kube-proxy.service /etc/systemd/system/
}
Secure the key and configuration files so that only root can read them:
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
sudo chown root:root /var/lib/kubelet/*
sudo chmod 600 /var/lib/kubelet/*
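Mode 600 leaves each file readable and writable by root only. You can confirm the effect with stat; the sketch below uses a throwaway file rather than the real keys:

```shell
# Demonstrate the effect of chmod 600 on a throwaway file.
touch /tmp/demo.key
chmod 600 /tmp/demo.key
stat -c '%a' /tmp/demo.key   # prints the octal mode: 600
```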
Start the worker services:
{
sudo systemctl daemon-reload
sudo systemctl enable containerd kubelet kube-proxy
sudo systemctl start containerd kubelet kube-proxy
}
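To confirm the services came up, you can query systemd for each one. This is a sketch that degrades gracefully, printing "unavailable" where systemd is not running (for example, inside a plain container):

```shell
# Report the active state of each worker service via systemd.
for svc in containerd kubelet kube-proxy; do
  state=$(systemctl is-active "$svc" 2>/dev/null || true)
  echo "$svc: ${state:-unavailable}"
done
```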
Now return to the controlplane01 node.
List the registered Kubernetes nodes from the controlplane node:
kubectl get nodes --kubeconfig admin.kubeconfig
Output will be similar to:
NAME STATUS ROLES AGE VERSION
node01 NotReady <none> 93s v1.28.4
node02 NotReady <none> 93s v1.28.4
The nodes are not ready because we have not yet installed pod networking; this comes later.
On the node01 and node02 nodes, run the following, selecting option 4 when prompted:
./cert_verify.sh
Next: Configuring kubectl for Remote Access
Prev: Bootstrapping the Kubernetes Control Plane