In this lab you will bootstrap the Kubernetes control plane across 3 compute instances and configure it for high availability. You will also create an external load balancer that exposes the Kubernetes API Servers to remote clients. The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
Note that in a production-ready cluster it is recommended to have an odd number of controlplane nodes, since leader election and quorum for multi-node services like etcd work best that way: a three-node etcd cluster, for example, has a quorum of two and can tolerate the loss of one node. See the lecture on this (KodeKloud, Udemy).
If you examine the command line arguments passed to the various control plane components, you should recognise many of the files that were created in earlier sections of this course, such as certificates, keys, kubeconfigs, the encryption configuration etc.
The commands in this lab up as far as the RBAC configuration must be run on each controller instance: controlplane01, controlplane02 and controlplane03.
You can perform this step with tmux.
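If you use tmux, one convenient (and entirely optional) way to do this is to open one pane per controlplane node and synchronise input to all of them:
tmux setw synchronize-panes on
# run the lab commands in all panes at once, then switch it back off:
tmux setw synchronize-panes off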
Reference: https://kubernetes.io/releases/download/#binaries
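The binaries should already be present in ~/downloads from an earlier step of this course. If they are missing, they can be fetched from the official download location; the version and architecture below are placeholders only and should match the release this lab targets:
KUBE_VERSION=v1.29.0   # example only
ARCH=amd64             # use arm64 on Apple Silicon
cd ~/downloads
for bin in kube-apiserver kube-controller-manager kube-scheduler kubectl
do
  curl -LO "https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/${ARCH}/${bin}"
done
cd ~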
Install the Kubernetes binaries:
{
cd ~/downloads
chmod +x kube-apiserver \
kube-controller-manager \
kube-scheduler \
kubectl
sudo cp kube-apiserver \
kube-controller-manager \
kube-scheduler \
kubectl /usr/local/bin/
cd ~
}
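You can quickly confirm that the binaries are on the PATH and executable:
kube-apiserver --version
kube-controller-manager --version
kube-scheduler --version
kubectl version --client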
{
sudo mkdir -p /var/lib/kubernetes/pki
sudo cp ca.crt ca.key /var/lib/kubernetes/pki
for c in kube-apiserver service-account apiserver-kubelet-client etcd-server kube-scheduler kube-controller-manager
do
sudo cp "$c.crt" "$c.key" /var/lib/kubernetes/pki/
done
sudo chown root:root /var/lib/kubernetes/pki/*
sudo chmod 600 /var/lib/kubernetes/pki/*
}
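Optionally, verify that all certificates and keys landed in the PKI directory with root-only permissions:
sudo ls -l /var/lib/kubernetes/pki/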
The instance internal IP address will be used to advertise the API Server to members of the cluster. The load balancer IP address will be used as the external endpoint to the API servers.
Retrieve these internal IP addresses:
export LOADBALANCER=$(dig +short loadbalancer)
IP addresses of the three controlplane nodes, where the etcd servers are:
export CONTROL01=$(dig +short controlplane01)
export CONTROL02=$(dig +short controlplane02)
export CONTROL03=$(dig +short controlplane03)
CIDR ranges used within the cluster
export POD_CIDR=10.244.0.0/16
export SERVICE_CIDR=10.96.0.0/16
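Before rendering the unit files, confirm that each variable resolved to an address; an empty value here would produce a broken service file:
echo "LOADBALANCER=${LOADBALANCER}"
echo "CONTROL01=${CONTROL01} CONTROL02=${CONTROL02} CONTROL03=${CONTROL03}"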
Create the kube-apiserver.service systemd unit file:
envsubst < templates/kube-apiserver.service.template \
| sudo tee /etc/systemd/system/kube-apiserver.service
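Since envsubst fills in the exported variables, it is worth a quick check that nothing was left unexpanded in the rendered unit file (the exact flags depend on the template):
grep -E '\$\{?[A-Z_]+' /etc/systemd/system/kube-apiserver.service || echo "all variables substituted"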
Move the kube-controller-manager kubeconfig into place:
sudo cp kube-controller-manager.kubeconfig /var/lib/kubernetes/
Create the kube-controller-manager.service systemd unit file:
envsubst < templates/kube-controller-manager.service.template \
| sudo tee /etc/systemd/system/kube-controller-manager.service
Move the kube-scheduler kubeconfig into place:
sudo cp kube-scheduler.kubeconfig /var/lib/kubernetes/
Create the kube-scheduler.yaml configuration file:
sudo mkdir -p /etc/kubernetes/config/
sudo cp configs/kube-scheduler.yaml /etc/kubernetes/config/
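You can inspect the scheduler configuration that was just copied; it is typically a small KubeSchedulerConfiguration that mainly points the scheduler at its kubeconfig:
cat /etc/kubernetes/config/kube-scheduler.yaml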
Create the kube-scheduler.service systemd unit file:
envsubst < templates/kube-scheduler.service.template \
| sudo tee /etc/systemd/system/kube-scheduler.service
sudo chmod 600 /var/lib/kubernetes/*.kubeconfig
On the controlplane01, controlplane02 and controlplane03 nodes, run the following, selecting option 3:
./cert_verify.sh
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
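Check that all three services came up cleanly; if one of them is flapping, its journal usually points at a bad certificate path or an unexpanded variable in the unit file:
sudo systemctl status kube-apiserver kube-controller-manager kube-scheduler --no-pager
sudo journalctl -u kube-apiserver --no-pager | tail -n 20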
Allow up to 10 seconds for the Kubernetes API Server to fully initialize.
After running the above commands on all three controlplane nodes, run the following on controlplane01:
kubectl get componentstatuses --kubeconfig admin.kubeconfig
It will give you a deprecation warning here, but that's ok.
Output
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy ok
Remember to run the above commands on each controller node: controlplane01, controlplane02 and controlplane03.
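Since componentstatuses is deprecated, you can also query the API server's own readiness endpoint, which includes an etcd health check:
kubectl get --raw='/readyz?verbose' --kubeconfig admin.kubeconfig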
In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and executing commands in pods.
This tutorial sets the Kubelet --authorization-mode flag to Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization.
Run the below on the controlplane01 node.
Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:
kubectl apply -f configs/kube-apiserver-to-kubelet.yaml \
--kubeconfig admin.kubeconfig
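Confirm that the ClusterRole and its accompanying binding were created:
kubectl get clusterrole system:kube-apiserver-to-kubelet --kubeconfig admin.kubeconfig
kubectl get clusterrolebindings --kubeconfig admin.kubeconfig | grep kube-apiserver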
In this section you will provision an external load balancer to front the Kubernetes API Servers. The loadbalancer instance's address will serve as the external endpoint to the API servers.
An NLB operates at layer 4 (TCP), meaning it passes the traffic straight through to the backend servers unfettered and does not interfere with the TLS process, leaving this to the Kube API servers.
Log in to the loadbalancer instance using vagrant ssh (or multipass shell on Apple Silicon).
sudo dnf install -y haproxy
Read the IP addresses of the controlplane nodes and this host into shell variables:
CONTROL01=$(dig +short controlplane01)
CONTROL02=$(dig +short controlplane02)
CONTROL03=$(dig +short controlplane03)
LOADBALANCER=$(dig +short loadbalancer)
Create an HAProxy configuration that listens on the API server port on this host and distributes requests evenly to the three controlplane nodes. We configure it to operate as a layer 4 load balancer (using mode tcp), which means it forwards any traffic directly to the backends without doing anything like SSL offloading.
cat <<EOF | sudo tee /etc/haproxy/haproxy.cfg
frontend kubernetes
    bind 0.0.0.0:6443
    option tcplog
    mode tcp
    default_backend kubernetes-controlplane-nodes

backend kubernetes-controlplane-nodes
    mode tcp
    balance roundrobin
    option tcp-check
    server controlplane01 ${CONTROL01}:6443 check fall 3 rise 2
    server controlplane02 ${CONTROL02}:6443 check fall 3 rise 2
    server controlplane03 ${CONTROL03}:6443 check fall 3 rise 2
EOF
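Before enabling the service, it is worth asking HAProxy to validate the configuration file:
sudo haproxy -c -f /etc/haproxy/haproxy.cfg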
sudo systemctl enable haproxy
sudo systemctl start haproxy
Make an HTTPS request for the Kubernetes version info:
curl -k https://${LOADBALANCER}:6443/version
This should output some details about the version and build information of the API server.
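The exact values depend on the release you installed, but the response is a small JSON document along these lines (illustrative only):
{
  "major": "1",
  "minor": "29",
  "gitVersion": "v1.29.0",
  "platform": "linux/amd64",
  ...
}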
Next: Bootstrapping the Kubernetes Worker Nodes
Prev: Bootstrapping the etcd Cluster