- A **node** is a machine, physical or virtual, on which Kubernetes is installed.
- A node is a worker machine; this is where containers will be launched by Kubernetes.
- The **master** is another node with Kubernetes installed on it, configured as a master.
- The master watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.
- **API Server**
    - The API server acts as the front end for Kubernetes.
    - Users, management devices, and command-line interfaces all talk to the API server to interact with the Kubernetes cluster.
- **ETCD service**
    - ETCD is a distributed, reliable key-value store used by Kubernetes to store all the data used to manage the cluster.
    - Think of it this way: when you have multiple nodes and multiple masters in your cluster, etcd stores information about all the nodes in the cluster in a distributed manner.
    - ETCD is responsible for implementing locks within the cluster to ensure there are no conflicts between the masters.
- **Scheduler service**
    - The scheduler is responsible for distributing work, i.e. containers, across multiple nodes.
    - It looks for newly created containers and assigns them to nodes.
- **Container runtime**
    - The container runtime is the underlying software that is used to run containers.
    - In this case it happens to be Docker; other runtimes are rkt (Rocket) or CRI-O.
- **Controllers**
    - The controllers are the brains behind orchestration.
    - They are responsible for noticing and responding when nodes, containers, or endpoints go down.
    - The controllers make decisions to bring up new containers in such cases.
- **kubelet**
    - kubelet is the agent that runs on each node in the cluster.
    - The agent is responsible for making sure that the containers are running on the nodes as expected.
- **Master nodes have:**
    - API Server
    - Controllers
    - Schedulers
    - ETCD
- **Worker nodes have:**
    - kubelet
    - container runtime
- `kubectl run` — used to deploy an application on the cluster.
- `kubectl cluster-info` — used to view information about the cluster.
- `kubectl get pods` — lists the pods in the cluster (to list all the nodes that are part of the cluster, use `kubectl get nodes`).
- `kubectl get pods -o wide` — shows two extra fields, the pod IP and the node on which it is running.
- `kubectl describe pods` — shows a detailed report of the pods.
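As a sketch, `kubectl run` creates a single POD in recent kubectl versions (older versions created a Deployment); roughly the same POD can be declared in a manifest and created with `kubectl apply -f pod.yaml`. The name `nginx` and the image below are illustrative assumptions, not from these notes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx        # illustrative pod name
spec:
  containers:
    - name: nginx    # container name
      image: nginx   # container image to run
```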
- **minikube**: a utility with which you can only set up a single-node Kubernetes cluster.
- **kubeadm**: a tool that helps us set up a multi-node cluster, with master and workers on separate machines.
- **kubelet**: the component that runs on all of the machines in your cluster and does things like starting pods and containers.
- **kubectl**: the command-line util to talk to your cluster.
Steps to install kubeadm (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/)
- First, you must have multiple systems or virtual machines created for configuring a cluster. Once the systems are created, designate one as master and the others as worker nodes.
- The next step is to install a container runtime on the hosts. We will be using Docker, so we must install Docker on all the nodes.
- Then install the kubeadm tool on all the nodes. The kubeadm tool helps us bootstrap the Kubernetes solution by installing and configuring all the required components on the right nodes.
- Have a unique HOSTNAME for each node, including the master:
    - edit the hostname in `/etc/hostname`
    - also edit it in `/etc/hosts`
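- For example, each node's `/etc/hosts` might contain entries like the following (the hostnames and the `.3`/`.4` addresses are illustrative; `192.168.56.2` is the master address used later in these notes):

  ```
  192.168.56.2   kubemaster
  192.168.56.3   kubenode01
  192.168.56.4   kubenode02
  ```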
- Add a static-IP adapter, e.g. a Host-only adapter.
- Now assign an IP:
    - edit the file `/etc/network/interfaces`
    - add the following stanza, using the name of the adapter that was created (e.g. `enp0s8`) and an address within the adapter's range:

      ```
      auto enp0s8
      iface enp0s8 inet static
          address 192.168.56.2
          netmask 255.255.255.0
      ```
- Disable swap:
    - execute `swapoff -a`
    - go to `/etc/fstab` and comment out the swap line.
- The next step is to initialize the master server. During this process all the required components are installed and configured on the master server. That way we can start the cluster-level configuration from the master server. (https://kubernetes.io/docs/setup/production-environment/container-runtimes/)
    - run `sudo apt-get update` on all worker nodes and the master node.
    - Install packages to allow apt to use a repository over HTTPS:
        - `sudo apt-get update && sudo apt-get install -y apt-transport-https ca-certificates curl software-properties-common`
    - Add Docker's GPG key:
        - `curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key --keyring /etc/apt/trusted.gpg.d/docker.gpg add -`
    - Add the Docker apt repository:
        - `sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"`
    - run `sudo apt-get update` on all nodes.
    - Install the container runtime:
        - `sudo apt-get update && sudo apt-get install -y containerd.io=1.2.13-2 docker-ce=5:19.03.11~3-0~ubuntu-$(lsb_release -cs) docker-ce-cli=5:19.03.11~3-0~ubuntu-$(lsb_release -cs)`
You will install these packages on all of your machines:
-
kubeadm
: thecommand to bootstrap the cluster
. -
kubelet
: thecomponent that runs on all of the machines in your cluster and does things like starting pods and containers
. -
kubectl
: thecommand line util to talk to your cluster
.
    - run `sudo apt-get update && sudo apt-get install -y apt-transport-https curl` on all nodes.
    - run `curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -` on all nodes.
    - add the Kubernetes apt repository:

      ```
      cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
      deb https://apt.kubernetes.io/ kubernetes-xenial main
      EOF
      ```

    - run `sudo apt-get update` on all nodes.
    - run `sudo apt-get install -y kubelet kubeadm kubectl` on all nodes to install kubelet, kubeadm, and kubectl.
- After this, before initializing the cluster, one has to create a Pod network. You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other.
    - Take care that your Pod network must not overlap with any of the host networks: you are likely to see problems if there is any overlap. (If you find a collision between your network plugin's preferred Pod network and some of your host networks, you should think of a suitable CIDR block to use instead, then use that during `kubeadm init` with `--pod-network-cidr`, and as a replacement in your network plugin's YAML.)
- `kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=<static-ip-address-mkubemaster> --ignore-preflight-errors=all` has to be run on the master node.
- Use any network plugin, like Calico or flannel:
    - flannel: `kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml` (flannel's default Pod network is 10.244.0.0/16, matching the `--pod-network-cidr` above)
    - Calico: `kubectl --kubeconfig=/etc/kubernetes/admin.conf create -f https://docs.projectcalico.org/v3.14/manifests/calico.yaml`
- If you want to be able to run kubectl commands as a non-root user, then as a non-root user perform these:

  ```
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  ```
- To get the token to join the cluster: `kubeadm token create --print-join-command`
    - run the command printed by the above command on each worker, with the `--ignore-preflight-errors=all` option.
- Once the master is initialized, and before joining the worker nodes to the master, we must ensure that the network prerequisites are met. Normal network connectivity between the systems is NOT sufficient for this. Kubernetes requires a special network between the master and worker nodes, called a POD network.
- The last step is to join the worker nodes to the master node.
- A POD is a single instance of an application.
- A POD is the smallest object that you can create in Kubernetes.
- Kubernetes does not deploy containers directly on the worker nodes. The containers are encapsulated into a Kubernetes object known as a POD.
- PODs usually have a one-to-one relationship with the containers running your application.
- To scale up you create new PODs, and **to scale down you delete PODs**.
- You do not add additional containers to an existing POD to scale your application.
- But sometimes you might have a scenario where you have a helper container that does some kind of supporting task for your web application, such as processing user-entered data or processing a file uploaded by the user, and you want these helper containers to live alongside your application container.
- In that case, you CAN have both of these containers as part of the same POD, so that when a new application container is created, the helper is also created, and when it dies the helper also dies, since they are part of the same POD.
- The two containers can also communicate with each other directly by referring to each other as 'localhost', since they share the same network namespace. Plus they can easily share the same storage space as well.
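As a sketch, a multi-container POD like the one described above could be declared in a manifest such as the following; the names, images, and the shared volume are illustrative assumptions, not from these notes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod                  # illustrative pod name
  labels:
    app: myapp
spec:
  containers:
    - name: myapp-container       # the main application container
      image: nginx
      volumeMounts:
        - name: shared-data       # storage shared by both containers
          mountPath: /data
    - name: helper-container      # the helper living alongside the app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
  volumes:
    - name: shared-data
      emptyDir: {}                # ephemeral volume, lives as long as the POD
```

Creating this with `kubectl apply -f pod.yaml` starts both containers together, and deleting the POD deletes both, since they share the POD's lifecycle, network namespace, and the `shared-data` volume.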