In this lab you will generate a kubeconfig file for the kubectl command-line utility based on the admin user credentials.
Run the commands in this lab from the same directory used to generate the admin client certificates.
Each kubeconfig requires a Kubernetes API server to connect to. To support high availability, the IP address assigned to the external load balancer fronting the Kubernetes API servers will be used.
On controlplane01:
Get the IP address of the kube-apiserver load balancer:
LOADBALANCER=$(dig +short loadbalancer)
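Optionally, confirm the variable was set. This assumes the loadbalancer hostname is resolvable from controlplane01, as configured in the earlier labs:

echo ${LOADBALANCER}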
Generate a kubeconfig file suitable for authenticating as the admin user:
{
  # Define the cluster entry, embedding the CA certificate so the
  # kubeconfig is self-contained
  kubectl config set-cluster kubernetes-the-hard-way \
    --certificate-authority=ca.crt \
    --embed-certs=true \
    --server=https://${LOADBALANCER}:6443

  # Define the admin user with its client certificate and key
  kubectl config set-credentials admin \
    --client-certificate=admin.crt \
    --client-key=admin.key

  # Create a context tying the cluster and user together
  kubectl config set-context kubernetes-the-hard-way \
    --cluster=kubernetes-the-hard-way \
    --user=admin

  # Switch to the new context
  kubectl config use-context kubernetes-the-hard-way
}
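Optionally, verify the result. These are standard kubectl commands that print the available contexts and the active configuration from the generated kubeconfig:

kubectl config get-contexts
kubectl config view --minify

The kubernetes-the-hard-way context should be listed as current, with the admin user and the load balancer address as the server.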
See the kubectl config reference documentation for more detail on these commands.
Check the health of the remote Kubernetes cluster:
kubectl get componentstatuses
The output will be similar to the following. It may or may not list both etcd instances; this is fine if you verified the correct installation of etcd in lab 7.
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy ok
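Because ComponentStatus is deprecated, the API server's health endpoints are an alternative way to check cluster health. The following is an optional extra check using the standard /readyz endpoint (available in Kubernetes 1.16 and later):

kubectl get --raw='/readyz?verbose'

Each internal check should be listed with an ok status when the API server is healthy.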
List the nodes in the remote Kubernetes cluster:
kubectl get nodes
Output:
NAME STATUS ROLES AGE VERSION
node01 NotReady <none> 118s v1.28.4
node02 NotReady <none> 118s v1.28.4
The nodes report NotReady because we have not yet installed pod networking; that comes in a later lab.
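If you want to see the reason directly, describing a node shows its conditions (substitute either node name):

kubectl describe node node01

In the Conditions section, the Ready condition will typically report that the container runtime network is not ready because no CNI plugin has been initialized.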
Next: Deploy Pod Networking
Prev: Bootstrapping the Kubernetes Worker Nodes