Which component are you using?:
cluster-autoscaler

Component version:
v1.29.0

What k8s version are you using (kubectl version)?:

What environment is this in?:
In EKS/AWS, launched with args like this:
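As an illustrative sketch of the kind of invocation used (the flag names are real cluster-autoscaler flags, but every value here is a hypothetical placeholder, not the exact config):

```
# Illustrative sketch only; values are hypothetical placeholders.
./cluster-autoscaler \
  --cloud-provider=aws \
  --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/my-cluster \
  --scale-down-enabled=true \
  --scale-down-unneeded-time=10m \
  --scale-down-utilization-threshold=0.5   # 0.5 is the documented default
```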
What did you expect to happen?:
When nodes are empty (meaning no pods from a Deployment are running on them), scale-down should happen.
What happened instead?:
Something prevents nodes from scaling down; see this spurious log on one of the candidate nodes:

```
unremovable: memory requested (0% of allocatable) is above the scale-down utilization threshold
```
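To surface these messages yourself, assuming the autoscaler runs as a Deployment named cluster-autoscaler in kube-system (adjust for your install):

```
kubectl -n kube-system logs deploy/cluster-autoscaler | grep unremovable
```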
How to reproduce it (as minimally and precisely as possible):
Nothing more to add; the config above should be sufficient.
Anything else we need to know?:
Setting scale-down-utilization-threshold to 0.01 seems to work, but it's a bit counter-intuitive. What we actually want is for the cluster-autoscaler not to care about resource utilization at all and simply scale down empty nodes. I wonder why such a complex heuristic is needed?
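For reference, a minimal sketch of that workaround, using the real --scale-down-utilization-threshold flag (the rest of the command line is elided):

```
# Near-zero threshold: per the workaround above, this makes empty nodes
# eligible for scale-down despite the spurious "0% ... above threshold" log.
./cluster-autoscaler \
  ... \
  --scale-down-utilization-threshold=0.01
```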