LocalCluster does not respect memory_limit keyword when it is large #7155
from dask.distributed import Client

# Ask for a 3 GB memory limit per worker (a LocalCluster is created implicitly).
client = Client(memory_limit="3 GB")

# Report the memory_limit each worker actually ended up with.
client.run(lambda dask_worker: dask_worker.memory_limit)
It seems to respect the keyword when it's lower than the available memory, but not when it's greater. Granted, I don't have 1.2 TB of memory on my laptop, but maybe it makes sense to allow the user to over-subscribe.
Thanks @mrocklin -- I'm able to reproduce. This behavior comes from distributed/worker_memory.py (line 404 at 3a23650), where we cap the value at the total system memory.
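As an illustration only, here is a hypothetical sketch (not the actual worker_memory.py code) of what such a cap amounts to: parse the requested limit and clamp it to the memory detected on the host.

import psutil
from dask.utils import parse_bytes

def capped_memory_limit(requested):
    # Hypothetical sketch of the cap described above, not distributed's code:
    # clamp the requested per-worker limit to the host's physical memory.
    requested_bytes = parse_bytes(requested) if isinstance(requested, str) else int(requested)
    return min(requested_bytes, psutil.virtual_memory().total)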
It may also be that this is correct behavior. It was surprising (reasonably so, I think). Whether we want to let users do dumb things is subjective. I think we do want to let them opt in to being dumb, but I don't have a strong opinion here.
I'm not sure whether we should let users over-subscribe. This may lead to bad behavior with, for example, the active memory manager; I suspect @crusaderky will have insight here. Regardless, if we keep the current behavior, it would be good to emit a warning (or something similar) letting the user know they've requested more memory than is available and that we're capping at the system memory. That way, there will at least be some visibility into what's happening.
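A minimal sketch of the kind of warning being suggested, assuming a hypothetical helper (warn_if_capped is not part of distributed's API):

import warnings

import psutil
from dask.utils import format_bytes, parse_bytes

def warn_if_capped(requested):
    # Warn when the requested memory_limit exceeds physical memory, then cap it.
    requested_bytes = parse_bytes(requested) if isinstance(requested, str) else int(requested)
    system_total = psutil.virtual_memory().total
    if requested_bytes > system_total:
        warnings.warn(
            f"Requested memory_limit ({format_bytes(requested_bytes)}) exceeds system "
            f"memory ({format_bytes(system_total)}); capping at system memory."
        )
        return system_total
    return requested_bytes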
I consider this expected behavior. Is there any sane use case for allowing larger values? From a UX point of view, we should raise a warning when this happens so that the user knows what's going on. This also relates roughly to #6895, which discusses making the
To clarify: if you have 4 workers, the current cap will still let you set each worker's limit to the whole memory of your host. This is potentially desirable, as the workload may, for whatever reason, be very unbalanced. Beyond that, I cannot think of any sensible use case. AMM ReduceReplicas does not take memory_limit into consideration.
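For example (numbers are illustrative and assume a 16 GB host), the per-worker cap still allows the cluster as a whole to oversubscribe physical memory by up to 4x:

from dask.distributed import Client

# Illustrative only: memory_limit is applied per worker, so on a 16 GB host
# four workers at 16 GB each can collectively promise 64 GB if the workload
# happens to concentrate on a single worker.
client = Client(n_workers=4, memory_limit="16 GB")
print(client.run(lambda dask_worker: dask_worker.memory_limit))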
Sounds like a fine outcome to me.