Running executor pods with different nodeselectors #2329
What question do you want to ask?

I have a requirement where I want to give users the flexibility to choose the number of spot and on-demand executors. Is there any way I can achieve this?

Comments
Hey, do you mean mixing executors between spot and on-demand nodes? For example, 40% on spot and 60% on on-demand?
Yes @jacobsalway.
The properties for executors in Spark on Kubernetes apply to all executors, so I think the answer to your question on different node selectors for different executors is that you can't. However, I think this could be done at the node provisioning and/or scheduling level; one approach that comes to mind is sketched below.
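For example, at the scheduling level, a soft (preferred) node affinity toward spot capacity lets executors land on spot nodes when they are available and fall back to on-demand otherwise. A minimal sketch against the v1beta2 SparkApplication spec, assuming your nodes carry a capacity-type label such as the `karpenter.sh/capacity-type` label Karpenter applies (substitute whatever label your provisioner or cloud uses):

```yaml
apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: mixed-capacity-demo            # hypothetical name
spec:
  # ... type, image, mainApplicationFile, driver, etc. ...
  executor:
    instances: 10
    affinity:
      nodeAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100                # soft preference: try spot first
            preference:
              matchExpressions:
                - key: karpenter.sh/capacity-type   # example label; yours may differ
                  operator: In
                  values: ["spot"]
```

Note that this only biases placement toward spot; it cannot express a fixed split such as 40% spot / 60% on-demand, which is the gap the question above is about.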
Thanks for replying, @jacobsalway. We can do that at the scheduler level, but our use case is more like creating Spark as a service, where users can specify these properties. I am willing to submit a PR for this; in my opinion it would help others as well. Let me know your thoughts.
Could you go into more detail on how this feature would look? Is it something akin to EMR instance fleets?
I was thinking more in the direction of making the executor spec an array rather than a single executor. It would help extend other functionality as well.
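To make the idea concrete, here is a purely hypothetical sketch of what an executor array/groups shape could look like; none of these fields exist in the current SparkApplication CRD, and it is only meant to illustrate the proposal:

```yaml
# Hypothetical only: the current CRD has a single `executor` object, not a list
spec:
  executorGroups:
    - name: on-demand
      instances: 6
      nodeSelector:
        karpenter.sh/capacity-type: on-demand   # example label
    - name: spot
      instances: 4
      nodeSelector:
        karpenter.sh/capacity-type: spot
```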
Obviously we welcome all PRs and will happily review, but I think you might find some difficulty in trying to implement this. Spark on Kubernetes doesn't support any concept of executor groups/fleets, so even if the …
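For context, the node-selector settings Spark itself exposes are also single, global properties: `spark.kubernetes.node.selector.[labelKey]` for all pods and, in newer releases, `spark.kubernetes.executor.node.selector.[labelKey]` for executors only, so there is no executor-group concept to map the proposal onto. A small sketch of passing one through `sparkConf` (the label and value are examples):

```yaml
spec:
  sparkConf:
    # One shared selector: every executor pod gets the same node selector
    "spark.kubernetes.executor.node.selector.karpenter.sh/capacity-type": "spot"
```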
Will check and get back to you on this, @jacobsalway.