Throughout this workshop, we will use Play with Kubernetes (PWK) as our hosted lab environment. For this DockerCon18 workshop only, a temporarily provisioned space has been made available at https://workshop.play-with-k8s.com. If you would like to use a different Kubernetes cluster (such as your own lab cluster, Docker for Desktop, or Minikube), you can skip lab-1 (this lab).
1.1 Visit https://workshop.play-with-k8s.com.
Once you start the session, you will have your own lab environment.
Now add one instance by clicking the ADD NEW INSTANCE button on the left. When you create your first instance, it will be named node1. Each instance has Docker Community Edition (CE) and kubeadm preinstalled.
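(Optional) If you want to confirm what is preinstalled, both tools can report their versions; the exact versions depend on the PWK image:
# Show the Docker and kubeadm versions preinstalled on the instance
docker version --format '{{.Server.Version}}'
kubeadm version -o short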
Warning: Please do not follow the on-screen instructions as they are. We will follow similar, but slightly different, instructions described below.
We will use node1 as the master node for our cluster. While we will create a multi-node cluster in this lab, creating a multi-master cluster is out of scope for this workshop.
Before we start bootstrapping the cluster, first update the DNS settings on the node. This step is needed to get external network connectivity from within the Kubernetes cluster we are about to set up. Let's use one of Google's public name servers:
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
(Any public name server will work.)
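(Optional) A quick sanity check that the node can now resolve external names, assuming ping is available in the PWK image:
# Confirm the name server entry was appended
cat /etc/resolv.conf
# Resolve and reach an external host (any well-known hostname will do)
ping -c 2 get.docker.com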
Next, bootstrap the Kubernetes cluster by initializing the master node (node1):
kubeadm init --apiserver-advertise-address $(hostname -i)
Sample output from initialization:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run (as a regular user):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
http://kubernetes.io/docs/admin/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join --token 0c6e9e.607906dbdcacbf64 192.168.0.8:6443 --discovery-token-ca-cert-hash sha256:b8116ec1b224d82983b10353498d222f6f2e8fcbdf5d1075b4eece0f37df5896
Waiting for api server to startup.........
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset "kube-proxy" configured
No resources found
As part of the initialization, kubeadm has written the needed config files, set up RBAC, and deployed the Kubernetes control plane components (kube-apiserver, kube-dns, kube-proxy, etcd, and so on). Control plane components are deployed as Docker containers. kubectl is also configured for the root user's account.
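(Optional) Because the control plane runs as Docker containers on node1, you can inspect it directly with docker; the exact container names vary with the Kubernetes version, so treat this as a rough check:
# List control plane containers started by kubeadm
docker ps --format '{{.Names}}' | grep -E 'apiserver|controller-manager|scheduler|etcd'
# Config files and static pod manifests written by kubeadm
ls /etc/kubernetes /etc/kubernetes/manifests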
Please copy and save the kubeadm join command from the previous output for later use. This command will be used to join the other nodes to your cluster. It should look like the one below (do not use this example output):
kubeadm join --token 0c6e9e.607906dbdcacbf64 192.168.0.8:6443 --discovery-token-ca-cert-hash sha256:b8116ec1b224d82983b10353498d222f6f2e8fcbdf5d1075b4eece0f37df5896
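Tip: if you lose this output, newer kubeadm versions can generate a fresh token and print the matching join command for you (availability depends on the kubeadm version on the PWK image):
# Run on the master to create a new token and print a join command
kubeadm token create --print-join-command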
Check the status of the nodes and then the pods. To check the status of the nodes:
kubectl get nodes
Output of the previous command:
[node1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 NotReady master 1h v1.10.2
To check the status of the pods:
kubectl get pods --all-namespaces
Output from the previous command:
[node1 ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-node1 1/1 Running 0 1h
kube-system kube-apiserver-node1 1/1 Running 0 59m
kube-system kube-controller-manager-node1 1/1 Running 0 1h
kube-system kube-dns-545bc4bfd4-nnbwn 0/3 Pending 0 1h
kube-system kube-proxy-pxq27 1/1 Running 0 1h
kube-system kube-scheduler-node1 1/1 Running 0 1h
We can see that the master node is in the NotReady state. Next, we need to install a network plugin so that our pods and nodes can communicate with each other. Also, kube-dns will not start up until a network is installed. The general recommendation is to install a Container Network Interface (CNI)-based network driver. For this workshop, we will use Weave Net.
kubectl apply -n kube-system -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"
Output from previous command:
[node1 ~]$ kubectl apply -n kube-system -f \
> "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
role "weave-net" created
rolebinding "weave-net" created
daemonset "weave-net" created
Re-check the status of the nodes; we will see that the master is now in the Ready state.
[node1 ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
node1 Ready master 1h v1.10.2
Check the status of the pods next:
[node1 ~]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-node1 1/1 Running 0 1h
kube-system kube-apiserver-node1 1/1 Running 0 1h
kube-system kube-controller-manager-node1 1/1 Running 0 1h
kube-system kube-dns-545bc4bfd4-nnbwn 3/3 Running 0 1h
kube-system kube-proxy-pxq27 1/1 Running 0 1h
kube-system kube-scheduler-node1 1/1 Running 0 1h
kube-system weave-net-wq5t5 2/2 Running 0 2m
We can see that all the pods are now in the Running state.
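(Optional) Since Weave Net runs as a DaemonSet, you can also confirm that one weave-net pod is scheduled per node (currently just node1):
# DESIRED/READY should match the number of nodes in the cluster
kubectl -n kube-system get daemonset weave-net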
We will build a 3-node cluster.
Click the ADD NEW INSTANCE button on the left to add two more instances (node2 and node3).
As before, on each of the new instances, first update the DNS settings:
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
Now we can have the new nodes join the Kubernetes cluster by executing the kubeadm join command you previously copied and saved. Run that command on each of the two new nodes:
kubeadm join --token 0c6e9e.607906dbdcacbf64 192.168.0.8:6443 --discovery-token-ca-cert-hash sha256:b8116ec1b224d82983b10353498d222f6f2e8fcbdf5d1075b4eece0f37df5896
Output from previous command:
[node2 ~]$ kubeadm join --token 0c6e9e.607906dbdcacbf64 192.168.0.8:6443 --discovery-token-ca-cert-hash sha256:b8116ec1b224d82983b10353498d222f6f2e8fcbdf5d1075b4eece0f37df5896
Initializing machine ID from random generator.
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Skipping pre-flight checks
[discovery] Trying to connect to API Server "192.168.0.8:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.0.8:6443"
[discovery] Requesting info from "https://192.168.0.8:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.0.8:6443"
[discovery] Successfully established connection with API Server "192.168.0.8:6443"
[bootstrap] Detected server version: v1.8.13
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)
Node join complete:
* Certificate signing request sent to master and response
received.
* Kubelet informed of new secure connection details.
Run 'kubectl get nodes' on the master to see this machine join.
Go back to the master node (node1) terminal and check the status of the nodes:
watch kubectl get nodes
Initially, the new nodes will be in the NotReady state and will eventually become Ready.
Output from previous command:
Every 2.0s: kubectl get nodes Mon May 21 03:27:34 2018
NAME STATUS ROLES AGE VERSION
node1 Ready master 2h v1.10.2
node2 Ready <none> 22m v1.10.2
node3 Ready <none> 55s v1.10.2
We now have a 3-node Kubernetes cluster ready for an Istio deployment.
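(Optional) As a final sanity check, you can schedule a small test workload and confirm that pods land on the worker nodes; the deployment name and image below are just examples:
# On this kubectl version, kubectl run creates a Deployment
kubectl run nginx-test --image=nginx --replicas=2
# The pods should come up Running on node2/node3 (the master is tainted by default)
kubectl get pods -o wide
# Clean up the test deployment
kubectl delete deployment nginx-test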
To sum up or catch up ;) run the following on the master node (node1):
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
kubeadm init --apiserver-advertise-address $(hostname -i)
kubectl apply -n kube-system -f \
"https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 |tr -d '\n')"
Add slave nodes (please use the join command output by your own kubeadm init):
kubeadm join --token 0c6e9e.607906dbdcacbf64 192.168.0.8:6443 --discovery-token-ca-cert-hash sha256:b8116ec1b224d82983b10353498d222f6f2e8fcbdf5d1075b4eece0f37df5896
Check node status:
watch kubectl get nodes