
# Rebalancing Kubernetes clusters with the Descheduler

## Getting started

You need to create a Linode token to access the API:

```bash
linode-cli profile token-create
export LINODE_TOKEN=<insert the token here>

# Create the cluster
terraform -chdir=01-clusters init
terraform -chdir=01-clusters apply -auto-approve

# Cleanup
terraform -chdir=01-clusters destroy -auto-approve
```

Make sure that kubectl is configured with the cluster's kubeconfig file:

```bash
export KUBECONFIG="${PWD}/kubeconfig"
```

## Installing the descheduler

Deploy podinfo:

```bash
kubectl apply -f 03-demo/01-stress.yaml
```

Deploy the descheduler with:

```bash
kubectl apply -f 03-demo/02-descheduler.yaml
```
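The manifest above is not reproduced in this README. For orientation: the descheduler is driven by a `DeschedulerPolicy` that enables individual strategies. A minimal v1alpha1 sketch might look like the following (the strategy shown is a placeholder; the actual policy lives in `03-demo/02-descheduler.yaml`):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Placeholder strategy: evict pods that no longer tolerate
  # their node's taints. The demo enables other strategies
  # (duplicates, low utilization) in the later steps.
  RemovePodsViolatingNodeTaints:
    enabled: true
```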

## Dashboard

```bash
kubectl proxy --www=./dashboard &
open "http://localhost:8001/static/"
```

## Restart

```bash
kubectl apply -f 03-demo/03-restart.yaml
```

## Duplicates

First, cordon one of the nodes:

```bash
kubectl get nodes
kubectl cordon <node name>
```

Then evict the pods:

```bash
kubectl drain <node name>
```

Scale the deployment to 3 replicas:

```bash
kubectl scale --replicas=3 deployment/app
```

Confirm that the three replicas are running on only two nodes:

```bash
kubectl get pods -o wide
```

Apply the duplicate policy:

```bash
kubectl apply -f 03-demo/04-duplicates.yaml
```
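The repo's `04-duplicates.yaml` is not shown here; in the v1alpha1 policy format, the `RemoveDuplicates` strategy is enabled roughly like this (a sketch, not the exact manifest):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Evict pods so that no node runs more than one replica
  # owned by the same ReplicaSet, StatefulSet, or Job.
  RemoveDuplicates:
    enabled: true
```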

Finally, uncordon the node and observe the descheduler deleting the pod:

```bash
kubectl uncordon <node name>
```

## Low utilization

Apply the low utilization policy:

```bash
kubectl apply -f 03-demo/05-usage.yaml
```
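Again, `05-usage.yaml` is not reproduced here; a `LowNodeUtilization` strategy in the v1alpha1 format is configured roughly as follows (the threshold percentages are illustrative placeholders):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  LowNodeUtilization:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        # Nodes below ALL of these values are considered
        # underutilized and become eviction targets...
        thresholds:
          cpu: 20
          memory: 20
          pods: 20
        # ...and pods are evicted from nodes above ANY of
        # these values, so the scheduler can rebalance them.
        targetThresholds:
          cpu: 50
          memory: 50
          pods: 50
```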

Cordon one of the nodes:

```bash
kubectl get nodes
kubectl cordon <node name>
```

Then evict the pods:

```bash
kubectl drain <node name>
```

Scale the deployment to 12 replicas:

```bash
kubectl scale --replicas=12 deployment/app
```

Confirm that the twelve replicas are running on only two nodes:

```bash
kubectl get pods -o wide
```

Finally, uncordon the node and observe the descheduler rebalancing the pods:

```bash
kubectl uncordon <node name>
```