BestEffort pods are using swap #343
Comments
/cc
This is because KEP 2400 was never supported, as best I can tell.
Yeah, it's more of a feature request for KEP 2400. I was hoping someone in the cri-dockerd project could explore implementing this?
PRs are welcome, and a couple of the regular contributors have done other KEP enablement work and might be interested in picking this up (but I also can't speak for their interest or priorities).
I think this is a known issue; support for Docker is not a requirement for adding new features.
What happened?
I already opened a ticket on the kube repo, which led me here.
I was testing swap support and ran into unexpected behavior. The documentation specifies that only pods in the Burstable QoS class can use the host's swap memory. However, I created two deployments, each with one ubuntu replica: one in the Burstable class and one in the BestEffort class. In each pod I ran stress --vm 1 --vm-bytes 6G --vm-hang 0 to observe memory consumption. The host has 4 GB of RAM and 5 GB of swap. In both cases, the pod started using swap after exceeding the available RAM. Shouldn't the BestEffort pod have been restarted when it reached the limit of the host's RAM? Note that the kubelet is configured with swapBehavior=LimitedSwap. I attached two pictures showing the host's normal consumption, and the consumption after running the stress command inside the pod.
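For context on what LimitedSwap is supposed to do: per KEP-2400 and the Kubernetes node-swap documentation, a Burstable container's swap allowance is proportional to its memory request relative to node memory, while BestEffort (and Guaranteed) containers should get no swap at all. A rough sketch of that accounting (the function name is mine, not a Kubernetes API):

```python
def limited_swap_limit(qos_class: str, memory_request: int,
                       node_memory: int, node_swap: int) -> int:
    """Sketch of the LimitedSwap limit described in KEP-2400:
    (containerMemoryRequest / nodeTotalMemory) * totalPodsSwapAvailable,
    and zero for any QoS class other than Burstable."""
    if qos_class != "Burstable":
        # BestEffort/Guaranteed containers should end up with no swap
        return 0
    return memory_request * node_swap // node_memory

GiB = 1 << 30
# On the 4 GB RAM / 5 GB swap host described above:
print(limited_swap_limit("Burstable", 1 * GiB, 4 * GiB, 5 * GiB))   # 1342177280 (1.25 GiB)
print(limited_swap_limit("BestEffort", 1 * GiB, 4 * GiB, 5 * GiB))  # 0
```

Under this scheme a BestEffort container's cgroup memory.swap.max should end up at 0, not max, which is exactly the discrepancy reported here.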
What did you expect to happen?
I expected the BestEffort pod to be killed once it consumed more RAM than the host had available.
How can we reproduce it (as minimally and precisely as possible)?
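For reference, the QoS class follows from the resource spec: a pod with no requests or limits at all is BestEffort, while one with requests below its limits (or only requests) is Burstable. A minimal BestEffort deployment along the lines described above might look like this (names are illustrative, not taken from the report):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: besteffort-swap-test   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: besteffort-swap-test
  template:
    metadata:
      labels:
        app: besteffort-swap-test
    spec:
      containers:
      - name: ubuntu
        image: ubuntu
        command: ["sleep", "infinity"]
        # no resources: stanza at all -> QoS class BestEffort
```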
1. Deploy a BestEffort pod (no resource requests or limits) and confirm its QoS class with kubectl get pod <pod-name> --output=yaml.
2. Exec into the pod and install stress: apt update && apt install -y stress. Then run stress --vm 1 --vm-bytes 6G --vm-hang 0.
3. Find the node the pod is scheduled on with kubectl get po -o wide, then ssh to that node and run htop.
4. You should now see that the deployed BestEffort pod is consuming swap memory, which according to the docs it shouldn't.

I also checked the pod's cgroup file memory.swap.max; it is set to max. From what I understand, even though swapBehavior is set to LimitedSwap in the kubelet, cri-dockerd may be setting the cgroup's memory.swap.max to max.
Anything else we need to know?
I am using cgroup v2.
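(For anyone reproducing this: a quick way to confirm a node is on cgroup v2 is to check the filesystem type mounted at /sys/fs/cgroup; cgroup2fs indicates the unified hierarchy.)

```shell
# Prints "cgroup2fs" on a cgroup v2 (unified hierarchy) node,
# and typically "tmpfs" on a legacy cgroup v1 setup.
stat -fc %T /sys/fs/cgroup
```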
Here is my kubelet config.
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)
calico:
version: 3.27.2