
K8s vs Docker Swarm

This project demonstrates the differences between Docker Swarm and Kubernetes (K8s), with examples.


Introduction

This repository contains resources and information comparing Kubernetes (K8s) and Docker Swarm, two popular container orchestration platforms. The goal of this project is to provide insights into their features, differences, use cases, and considerations to help users make informed decisions when choosing between them.

Kubernetes (K8s)

Kubernetes is an open-source container orchestration platform that was initially designed by Google to manage its containers. Kubernetes has a more complex cluster structure than Docker Swarm: it follows a control plane (master) and worker node architecture, with workloads organized further into pods, namespaces, and ConfigMaps.

Docker Swarm

Docker Swarm is an open-source container orchestration platform built and maintained by Docker. Under the hood, Docker Swarm converts multiple Docker instances into a single virtual host. A Docker Swarm cluster generally contains three items:

  • Nodes
  • Services and tasks
  • Load balancers

Nodes are individual instances of the Docker Engine that control your cluster and manage the containers used to run your services and tasks. Docker Swarm clusters also include load balancing to route requests across nodes.

Differences

  • Architecture: Kubernetes follows a master-worker architecture with a separate control plane, whereas Docker Swarm uses a simpler architecture with manager and worker nodes.
  • Complexity: Kubernetes is more complex to set up and manage compared to Docker Swarm, requiring a steeper learning curve.
  • Scaling: Kubernetes provides more advanced scaling capabilities and is better suited for larger, more complex deployments.
  • Community and Ecosystem: Kubernetes has a larger and more mature community with a vast ecosystem of third-party tools and resources compared to Docker Swarm.

Use Cases

  • Kubernetes (K8s): Ideal for large-scale, complex deployments requiring advanced orchestration features, high availability, and scalability.
  • Docker Swarm: Suitable for smaller projects or organizations looking for a simpler, integrated container orchestration solution with less complexity and overhead.

Setup Kubernetes Cluster Using Kubeadm


Prerequisites

  • Minimum of two Ubuntu nodes (one master and one worker node)

  • Nodes should have a minimum of 2 CPUs and 2 GB RAM.

  • Open the required ports in the security group of the nodes. The standard kubeadm ports are 6443 (API server), 2379–2380 (etcd), 10250 (kubelet API), 10259 (kube-scheduler), and 10257 (kube-controller-manager) on the control plane node, and 10250 plus the NodePort range 30000–32767 on the worker nodes.

Installation Steps:

  • Launch two Ubuntu instances with 2 CPUs and 2 GB RAM.

  • Create a script file covering the following steps:

    Step 1: Enable iptables Bridged Traffic on all the Nodes
    Step 2: Disable swap on all the Nodes
    Step 3: Install CRI-O Runtime on all the nodes
    Step 4: Install Kubeadm & Kubelet & Kubectl on all Nodes

  • Create the script with vim kube.sh and add the commands for these steps; a sketch is given after this list.

  • Make the script executable and run it on both nodes: chmod +x kube.sh && ./kube.sh
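
A minimal sketch of what kube.sh could contain, assuming Ubuntu nodes and the v1.28 Kubernetes package line from pkgs.k8s.io; the CRI-O apt repository setup is left as a comment because its URL changes between releases, so follow the CRI-O install docs for your OS and version:

    #!/bin/bash
    set -euo pipefail

    # Step 1: Enable iptables bridged traffic
    printf 'overlay\nbr_netfilter\n' | sudo tee /etc/modules-load.d/k8s.conf
    sudo modprobe overlay
    sudo modprobe br_netfilter
    printf '%s\n' \
      'net.bridge.bridge-nf-call-iptables  = 1' \
      'net.bridge.bridge-nf-call-ip6tables = 1' \
      'net.ipv4.ip_forward                 = 1' | sudo tee /etc/sysctl.d/k8s.conf
    sudo sysctl --system

    # Step 2: Disable swap (kubelet will not start with swap enabled)
    sudo swapoff -a
    sudo sed -i '/ swap / s/^/#/' /etc/fstab

    # Step 3: Install CRI-O runtime
    # (add the CRI-O apt repository for your OS/version first -- see the CRI-O docs)
    sudo apt-get update
    sudo apt-get install -y cri-o
    sudo systemctl enable --now crio

    # Step 4: Install kubeadm, kubelet, and kubectl
    sudo apt-get install -y apt-transport-https ca-certificates curl gpg
    sudo mkdir -p /etc/apt/keyrings
    curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key |
      sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
    echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' |
      sudo tee /etc/apt/sources.list.d/kubernetes.list
    sudo apt-get update
    sudo apt-get install -y kubelet kubeadm kubectl
    sudo apt-mark hold kubelet kubeadm kubectl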

Initialize Kubeadm On Master Node To Setup Control Plane

  • Initialize the master node control plane configuration using the kubeadm command below. Replace PrivateIp with your node's actual private IP.

    sudo kubeadm init --apiserver-advertise-address="PrivateIp"  --apiserver-cert-extra-sans="PrivateIp"  --pod-network-cidr=192.168.0.0/16  --ignore-preflight-errors Swap
    


  • Use the following commands from the output to create the kubeconfig on the master node so that you can use kubectl to interact with the cluster API.
    To start using your cluster, run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

    Alternatively, if you are the root user, you can run:

    export KUBECONFIG=/etc/kubernetes/admin.conf
    
  • You can get the cluster info using the following command:

    kubectl cluster-info


  • By default, apps won’t get scheduled on the master node. If you want to use the master node for scheduling apps, remove the control-plane taint from it:

    kubectl taint nodes --all node-role.kubernetes.io/control-plane-
    

Kubernetes Cluster Important Configurations

Kubeadm keeps the cluster's core configuration under /etc/kubernetes on the control plane node: admin.conf (the admin kubeconfig), the manifests/ directory (static Pod definitions for the API server, controller manager, scheduler, and etcd), and pki/ (cluster certificates). The kubelet's own configuration lives at /var/lib/kubelet/config.yaml.

Join Worker Nodes To Kubernetes Master Node

  • We have set up cri-o, kubelet, and kubeadm utilities on the worker nodes as well.

  • Copy the script file to the worker node and run it there.

  • Execute the following command in the master node to recreate the token with the join command.

    kubeadm token create --print-join-command
    
  • Run the printed join command on the worker node; it performs the TLS bootstrapping for the node.

    sudo kubeadm join 172.31.43.208:6443 --token j4eice.33vgvgyf5cxw4u8i \
      --discovery-token-ca-cert-hash sha256:37f94469b58bcc8f26a4aa44441fb17196a585b37288f85e22475b00c36f1c61
    


  • Now execute the following kubectl command from the master node to check whether the worker node has been added:

    kubectl get nodes

Install Calico Network Plugin for Pod Networking

  • Kubeadm does not configure any network plugin. You need to install a network plugin of your choice for Kubernetes pod networking and network policy.

  • I am using the Calico network plugin for this setup.

  • Execute the following commands to install the Calico network plugin operator on the cluster.

    kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
    
  • Kubeadm doesn’t install the Metrics Server component during its initialization. We have to install it separately.

  • To verify this, run the kubectl top nodes command; you will see a “Metrics API not available” error.

  • To install the Metrics Server, explore the metrics server manifest file repository; one common install command is sketched below.
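A minimal sketch, assuming you want the latest release manifest from the kubernetes-sigs/metrics-server project (not necessarily the exact manifest the original linked to):

    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

On kubeadm clusters the kubelet serves a self-signed certificate, so you typically also need to add the --kubelet-insecure-tls flag to the metrics-server container args before kubectl top nodes reports data.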

Deploy IPSR Application

  • Now that we have all the components to make the cluster and applications work, let’s deploy a sample IPSR application and see if we can access it over a NodePort.

  • Create a deployment manifest with vim deployment.yml. It deploys the pods in the default namespace; a sketch is given below.
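A minimal sketch of what deployment.yml could contain, assuming the application image is safuvanh/ipsr (the image used in the Docker Swarm section below); the resource name, labels, and replica count are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: ipsr-app                 # hypothetical name
      labels:
        app: ipsr
    spec:
      replicas: 2                    # assumed replica count
      selector:
        matchLabels:
          app: ipsr
      template:
        metadata:
          labels:
            app: ipsr
        spec:
          containers:
            - name: ipsr
              image: safuvanh/ipsr   # image taken from the Swarm section
              ports:
                - containerPort: 80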

  • Execute the following commands to create and view the deployment:

    kubectl create -f deployment.yml
    kubectl get deployments
    
  • Expose the deployment on NodePort 32000.

  • Create a service manifest for the IPSR app with vim service.yml; a sketch is given below.
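A minimal sketch of what service.yml could contain; NodePort 32000 comes from the text above, while the name and selector are illustrative and must match the Deployment sketch:

    apiVersion: v1
    kind: Service
    metadata:
      name: ipsr-app-service   # hypothetical name
    spec:
      type: NodePort
      selector:
        app: ipsr              # must match the Deployment's pod labels
      ports:
        - port: 80
          targetPort: 80
          nodePort: 32000      # the NodePort stated above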

  • Add this port number to the node security group.

  • Execute the following commands to create and view the service:

    kubectl create -f service.yml
    kubectl get services
    


  • Check the pod status using the following command:

    kubectl get pods


  • Once the deployment is up, you should be able to access the IPSR home page on the allocated NodePort. Copy the worker node's public IP and access it on the allocated port.


  • Now, if I delete the running pods, new pods are created automatically: the Deployment controller maintains the desired replica count. A quick check is sketched below.
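For example, with a hypothetical pod name:

    kubectl delete pod ipsr-app-6d4cf56db6-abcde   # hypothetical pod name
    kubectl get pods                               # a replacement pod appears shortly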

Add Kubeadm Config to Workstation

  • If you prefer to connect the Kubeadm cluster using kubectl from your workstation, you can merge the kubeadm admin.conf with your existing kubeconfig file.
  • Follow the steps given below for the configuration.
    Step 1: Copy the contents of admin.conf from the control plane node and save it in a file named kubeadm-config.yaml on your workstation.
    Step 2: Take a backup of the existing kubeconfig: cp ~/.kube/config ~/.kube/config.bak
    Step 3: Merge the default config with kubeadm-config.yaml by exporting both to the KUBECONFIG variable: export KUBECONFIG=~/.kube/config:/path/to/kubeadm-config.yaml
    Step 4: Merge the configs into a single file: kubectl config view --flatten > ~/.kube/merged_config.yaml
    Step 5: Replace the old config with the new one: mv ~/.kube/merged_config.yaml ~/.kube/config
    Step 6: List all the contexts: kubectl config get-contexts -o name
    Step 7: Set the current context to the kubeadm cluster: kubectl config use-context <cluster-name-here>
  • Now, you should be able to connect to the Kubeadm cluster from your local workstation kubectl utility.
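
Collected as a single shell session, assuming the copied config was saved as ~/kubeadm-config.yaml (the context name placeholder is kept from the steps above):

    cp ~/.kube/config ~/.kube/config.bak
    export KUBECONFIG=~/.kube/config:~/kubeadm-config.yaml
    kubectl config view --flatten > ~/.kube/merged_config.yaml
    mv ~/.kube/merged_config.yaml ~/.kube/config
    kubectl config get-contexts -o name
    kubectl config use-context <cluster-name-here>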

Destroy Created Services

  • kubectl delete pods --all
    kubectl delete services --all
    kubectl delete deployments --all
    kubectl delete nodes --all
    kubeadm reset

Docker Swarm

Step-by-Step Implementation: Deploying an Example Web App Using Docker Swarm

Prerequisites

  • Minimum of two Ubuntu nodes (one leader/manager and one worker node)
  • Add port 2377 to the security group of the nodes

Steps:

  • Install Docker Engine on all nodes:

    sudo apt-get update -y
    sudo apt install docker.io -y
    
  • Now, open the “Swarm Manager” node and run the command:

    sudo docker swarm init
    
  • This creates an empty swarm.


  • After initializing the swarm on the “swarm-manager” node, a join command with a token is generated for adding other nodes to the swarm as workers. Copy and run that command on the rest of the servers; its shape is shown below.

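The join command printed by docker swarm init has this shape (the token and IP are placeholders):

    sudo docker swarm join --token SWMTKN-1-<token> <manager-private-ip>:2377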

  • To list all the nodes in the swarm, run this command on the manager:

    docker node ls


  • Now, on the manager node we will create a service:

    sudo docker service create --name ipsr-app-service --replicas 2 --publish 8080:80 safuvanh/ipsr
    


  • Now the service is running across the nodes. To check it, copy the IP address of any of the nodes followed by port 8080, as <public ip of any instance>:8080. (Swarm's ingress routing mesh makes the published port reachable on every node.)


  • Now, if I delete a running container, a new container is created automatically: Swarm restores the service’s replica count. A quick check is sketched below.
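For example, with a hypothetical container ID:

    sudo docker rm -f 3f2a1b9c8d7e              # hypothetical container ID
    sudo docker service ps ipsr-app-service     # Swarm schedules a replacement task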

  • If you want to remove any node from the swarm, run sudo docker swarm leave on that node. After removing one of the workers, its status in docker node ls shows as Down.

  • To delete the created Docker service, run this command:

    sudo docker service rm <service-name>
    


Conclusion

Choosing between Kubernetes and Docker Swarm depends on various factors such as project requirements, team expertise, scalability needs, and complexity tolerance. Kubernetes offers advanced features and scalability, making it suitable for large-scale deployments with complex requirements. On the other hand, Docker Swarm provides simplicity and ease of use, making it a preferred choice for smaller projects or teams with limited DevOps expertise. Evaluating these platforms based on your specific use case and considering factors like community support and integration capabilities will help you make the right decision for your container orchestration needs.
