
distributed.diskutils INFO showing in notebook cell #6649

Open
ncclementi opened this issue Jun 29, 2022 · 4 comments

@ncclementi
Member
When I'm in a JupyterLab session, this sometimes happens when starting a client with a LocalCluster. I don't have a reproducer since it only happens occasionally, but the following logs appear in red when I run:

from dask.distributed import Client

client = Client(n_workers=4)
2022-06-29 07:53:44,649 - distributed.diskutils - INFO - Found stale lock file and directory '/Users/ncclementi/Documents/git/my_forks/dask-tutorial/dask-worker-space/worker-2tl8qil1', purging
2022-06-29 07:53:44,650 - distributed.diskutils - INFO - Found stale lock file and directory '/Users/ncclementi/Documents/git/my_forks/dask-tutorial/dask-worker-space/worker-bl6_3xyz', purging
2022-06-29 07:53:44,650 - distributed.diskutils - INFO - Found stale lock file and directory '/Users/ncclementi/Documents/git/my_forks/dask-tutorial/dask-worker-space/worker-xwo8clv2', purging
2022-06-29 07:53:44,650 - distributed.diskutils - INFO - Found stale lock file and directory '/Users/ncclementi/Documents/git/my_forks/dask-tutorial/dask-worker-space/worker-e7vgwheh', purging

I can usually "fix" this by deleting the existing dask-worker-space directory.
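
For reference, a minimal sketch of that workaround, assuming the default dask-worker-space location in the current working directory:

import shutil

# Remove any leftover worker-space directory from a previous (killed) cluster
# before starting a new one; ignore_errors avoids failing if it does not exist.
shutil.rmtree("dask-worker-space", ignore_errors=True)

from dask.distributed import Client
client = Client(n_workers=4)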
Environment:

  • Dask version: 2022.6.1
  • Python version: 3.9
  • Operating System: MacOS
  • Install method (conda, pip, source): conda
@fjetter
Member

fjetter commented Jun 29, 2022

This should only happen if your previous process running the cluster died unexpectedly. If a previous cluster died without giving the workers time to close gracefully (e.g. a SIGKILL), distributed notices the "stale" locks of earlier worker instances on startup and cleans them up.

@ncclementi
Member Author

ncclementi commented Jun 29, 2022

This should only happen if your previous process running the cluster died unexpectedly

Does restarting a notebook kernel or an interrupt qualify as "unexpected"?

distributed notices the "stale" locks of earlier worker instances on startup and cleans them up.

This makes sense, but do we need the red warning in notebooks? I feel this puts users off because it's red, and it can be hard for beginners to tell whether something is actually wrong.

@ncclementi
Member Author

ncclementi commented Jun 29, 2022

It's fairly common to restart your kernel when working on a notebook, and this happens when doing so.

[screenshot: purging_bug]

@fjetter
Member

fjetter commented Jun 30, 2022

I'm open to lowering this log to debug level
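
In the meantime, one way to hide these messages from the notebook is to raise that logger's level with the standard logging module (a sketch using plain Python logging, not a dedicated distributed setting):

import logging

# Only show WARNING and above from distributed.diskutils, so the
# INFO-level "Found stale lock file ... purging" lines are suppressed.
logging.getLogger("distributed.diskutils").setLevel(logging.WARNING)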
