
Document the way the Kubernetes scheduler mode impacts Karpenter #1228

Open
stevehipwell opened this issue May 2, 2024 · 3 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

@stevehipwell

Description

What problem are you trying to solve?
I have the following hypothesis.

With the kube-scheduler's default scoring strategy of LeastAllocated, Karpenter is statistically more likely to disrupt pods and/or require consolidation than if the scheduler were running in MostAllocated mode.

I'd like to see this tested and documented. I suspect that in some cases, especially where eviction is disabled for some pods, this would provide a significant price reduction.
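For context, the two strategies score nodes in opposite directions, which is why LeastAllocated spreads pods (leaving Karpenter more to consolidate) while MostAllocated bin-packs them. A minimal sketch of the NodeResourcesFit scoring math for a single resource with weight 1 (the real plugin combines weighted scores across several resources):

```python
def least_allocated_score(requested: float, capacity: float) -> float:
    """Default kube-scheduler strategy: emptier nodes score higher, so pods spread out."""
    return (capacity - requested) / capacity * 100


def most_allocated_score(requested: float, capacity: float) -> float:
    """Bin-packing strategy: fuller nodes score higher, so pods pack together."""
    return requested / capacity * 100


# Two candidate nodes: node A is 20% utilised, node B is 80% utilised.
# LeastAllocated prefers the emptier node A; MostAllocated prefers the fuller node B.
print(least_allocated_score(2, 10), least_allocated_score(8, 10))  # 80.0 20.0
print(most_allocated_score(2, 10), most_allocated_score(8, 10))    # 20.0 80.0
```

Under LeastAllocated the next pod lands on the emptiest node, keeping many nodes partially full; under MostAllocated it lands on the fullest node that fits, leaving emptier nodes free to be consolidated away.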

How important is this feature to you?
I think having this officially documented would help users understand how to get the most out of Karpenter and would add weight to requests to cloud providers to support customising the scheduler.

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
@stevehipwell stevehipwell added the kind/feature label May 2, 2024
@stevehipwell
Author

CC @jonathan-innis (this is what I spoke to you at KubeCon EU about)

@jonathan-innis jonathan-innis added the needs-triage label (and added, then removed, the needs-sig label) May 3, 2024
@jonathan-innis
Member

I'd like to see this tested and documented. I suspect that in some cases, especially where eviction on some pods is disabled, that this would provide a significant price reduction

Agreed. This sounds like a good feature to explore. Obviously we could give a recommendation to those who have direct control of their SchedulerConfiguration and the control plane, but platforms like EKS should also explore this alternative to see whether they get better performance with Karpenter from this setting.

I suspect that you are right about the performance, at least that we would be able to act more aggressively with consolidation since we'd have "less cluster churn" disrupting our current consolidation decision-making.
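For those who do control the SchedulerConfiguration, switching strategies is a small change to the NodeResourcesFit plugin args. A sketch using the `kubescheduler.config.k8s.io/v1` API (the resource names and weights here are illustrative, not a recommendation):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: NodeResourcesFit
        args:
          scoringStrategy:
            # Default is LeastAllocated; MostAllocated bin-packs pods.
            type: MostAllocated
            resources:
              - name: cpu
                weight: 1
              - name: memory
                weight: 1
```

This file is passed to kube-scheduler via its `--config` flag, which is exactly the knob that managed control planes like EKS don't currently expose.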

@jonathan-innis
Member

/triage accepted

@k8s-ci-robot k8s-ci-robot added the triage/accepted label and removed the needs-triage label May 13, 2024