Currently, the dask cluster is built from a fixed autoscaler and worker blueprints. We need to specify 2 parameters:
the CPU and RAM capacity of each worker
the maximum number of workers in the dask autoscaler
Also, we need to decide how the configuration is shared between these 2 parameters.
We can have heavy workers with a lower maximum number of dask workers
--- OR ---
light workers with a higher maximum number of dask workers
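To compare the two options, a rough sketch: both profiles can be sized so that the cluster's total capacity at full scale-out is the same. All names and numbers below are illustrative placeholders, not the project's actual values.

```python
# Two hypothetical worker profiles with equal total capacity at full scale-out.
# The CPU/RAM figures and worker counts are illustrative assumptions.
HEAVY_WORKERS = {"worker_cpu": 8, "worker_ram_gb": 32, "max_workers": 5}
LIGHT_WORKERS = {"worker_cpu": 2, "worker_ram_gb": 8, "max_workers": 20}

def total_capacity(profile):
    """Total (CPU cores, RAM in GB) the cluster reaches at its worker maximum."""
    return (profile["worker_cpu"] * profile["max_workers"],
            profile["worker_ram_gb"] * profile["max_workers"])

print(total_capacity(HEAVY_WORKERS))  # (40, 160)
print(total_capacity(LIGHT_WORKERS))  # (40, 160)
```

The trade-off is then mostly about granularity: light workers scale in finer steps, while heavy workers suit tasks with large per-task memory needs.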
We are also facing a potentially major problem in this task: the dask autoscaler does not seem to obey the maximum-workers rule. In a few compute-intensive tasks, I have observed it exceed the limit, sometimes by far more than a few workers. If this issue cannot be fixed at all, we might even need to stop using the dask autoscaler completely, or implement our own patched auto-scaling logic.
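If we do end up patching the auto-scaling logic ourselves, the core of the fix is a hard clamp on whatever worker count the scaling heuristic recommends before it is acted on. A minimal sketch (the function name and bounds are hypothetical, not an existing dask API):

```python
def clamp_scale_request(requested: int, minimum: int, maximum: int) -> int:
    """Enforce configured bounds on a recommended worker count.

    A guard like this would sit between the scaling heuristic and the
    actual scale-up call, since the stock autoscaler has been observed
    exceeding its configured maximum.
    """
    return max(minimum, min(requested, maximum))

# Example: the heuristic asks for 30 workers while the cap is 20.
print(clamp_scale_request(30, minimum=1, maximum=20))  # 20
```

Note that dask's own adaptive deployment interface already accepts bounds (`cluster.adapt(minimum=..., maximum=...)`), so before patching anything it is worth verifying those bounds are actually being passed through in our setup.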