
conntrack hashsize alteration fails on large CPU counts #10

Open
rask opened this issue Apr 15, 2020 · 2 comments

rask commented Apr 15, 2020

See kubernetes/kubernetes#58610

When the CPU count is large (e.g. 12 on my host), kube-proxy may need to increase the conntrack hashsize when it starts during Kubernetes boot. The problem in LXC setups seems to be that the /sys/.../conntrack/hashsize file cannot be written at all inside the container, so kube-proxy fails whenever the value needs to be raised.

My fix was to limit the system to 4 CPU cores, after which kube-proxy no longer wanted to change the hashsize value.
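If I read kube-proxy's conntrack sizing correctly (per the linked kubernetes/kubernetes#58610), it computes a conntrack-max of roughly maxPerCore × CPU count (default 32768 per core) and then tries to grow the kernel hash table to conntrack-max / 4 when that exceeds the current hashsize. A sketch of that arithmetic, with the per-core default and the /4 ratio as assumptions and the core count fixed at 12 as in my case:

```shell
# Assumptions: kube-proxy's default of 32768 conntrack entries per core,
# and that it wants hashsize = conntrack-max / 4. Both taken from the
# kubernetes/kubernetes#58610 discussion, not verified against current code.
CORES=12                          # core count of the host in question
MAX=$((32768 * CORES))            # desired nf_conntrack_max
WANT_HASHSIZE=$((MAX / 4))        # hash table size kube-proxy would request
echo "cores=$CORES conntrack-max=$MAX wanted-hashsize=$WANT_HASHSIZE"
```

With 12 cores the wanted hashsize (98304) exceeds the common kernel default of 65536, which is why kube-proxy attempts the write; with 4 cores it stays below it.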

Maybe add a note about this to the guide?

corneliusweig commented Apr 18, 2020

Oh wow, nice find. Could you send a PR with the steps to reduce the CPU count? (Please also `--signoff` your git commits.)


FWIW, have you also tried the other suggested workaround of passing `--masquerade-all --conntrack-max=0 --conntrack-max-per-core=0` to kube-proxy? If that does not work, have you tried this:
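For reference, those flags can also be set via a KubeProxyConfiguration if you bootstrap with kubeadm. A sketch, assuming the v1alpha1 config API where `--masquerade-all` maps to `iptables.masqueradeAll` and the conntrack flags map to the `conntrack` section (field names unverified against your kube-proxy version):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
iptables:
  masqueradeAll: true
conntrack:
  # 0 means "leave the kernel values alone", so kube-proxy
  # never tries to write the read-only hashsize file.
  maxPerCore: 0
  min: 0
```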

On the host you should be able to write to hashsize, like (see http://blog.michali.net/2017/08/09/ipv6-support-for-docker-in-docker/)

```
echo "262144" > /sys/module/nf_conntrack/parameters/hashsize
```

In order to not forget this, you should be able to put this into a lxc.hook.pre-start hook (see https://stgraber.org/2013/12/23/lxc-1-0-some-more-advanced-container-usage/).
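A minimal sketch of such a hook script; the path, the hashsize value, and the writability guard are illustrative assumptions, not tested in an LXC setup:

```shell
#!/bin/sh
# Hypothetical lxc.hook.pre-start script: bump the host's conntrack hash
# table before the container starts. Must run as root on the host, and
# only has an effect if the nf_conntrack module is loaded.
HASHSIZE=262144
HASHSIZE_FILE=/sys/module/nf_conntrack/parameters/hashsize

if [ -w "$HASHSIZE_FILE" ]; then
    # Write the new size; the kernel resizes the table on the fly.
    echo "$HASHSIZE" > "$HASHSIZE_FILE"
fi
```

You would then point the container config at it with something like `lxc.hook.pre-start = /usr/local/bin/bump-conntrack-hashsize.sh` (path is an example).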

rask commented Apr 20, 2020

Will try those workarounds, thanks! I'll try and get a PR for you soon.
