I tried to launch a local cluster on jupyterlab using the python API, and received an error saying that the loop was already running:

traceback

versions

As far as I can tell, the reason for this is that ipykernel runs an event loop in the background (to support top-level await calls), which causes the loop started by tornado.ioloop.IOLoop.run_sync to error.

Getting this to work is an absolute rabbit hole, though: nesting loops is – intentionally, it seems – not supported by asyncio, which is what tornado.ioloop wraps starting with tornado>=6, so this will likely need a separate thread for the new event loop. dask / distributed might already have a mechanism to execute async code synchronously, even in jupyterlab, which could be reused here?
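For illustration, here's a minimal sketch of that separate-thread idea (the `_BackgroundLoop` name is made up for this example and is not distributed's existing machinery): run a private event loop on a daemon thread and dispatch coroutines to it with `asyncio.run_coroutine_threadsafe`, which works even while ipykernel's own loop is running.

```python
import asyncio
import threading


class _BackgroundLoop:
    """Hypothetical helper: owns an event loop running on a dedicated thread."""

    def __init__(self):
        self._loop = asyncio.new_event_loop()
        self._thread = threading.Thread(target=self._loop.run_forever, daemon=True)
        self._thread.start()

    def run_sync(self, coro, timeout=None):
        # Dispatch the coroutine to the background loop and block until done.
        # This avoids IOLoop.run_sync / asyncio.run, which refuse to start
        # when the current thread already has a running event loop.
        future = asyncio.run_coroutine_threadsafe(coro, self._loop)
        return future.result(timeout)


runner = _BackgroundLoop()
print(runner.run_sync(asyncio.sleep(0.1, result="cluster would start here")))
```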
Yeah, wrapping async code in IPython/Jupyter is a minefield. The code here is well overdue for an overhaul, so I'm not surprised things like this come up.
I spent a bunch of time thinking about this in kr8s (which is why we don't see these issues in dask-kubernetes). I built out a bunch of thread/event-loop dispatching machinery over there that uses anyio to handle things, and it works pretty nicely. Maybe we should pull that into a separate library for easier reuse.
There's also asyncer which tries to solve a lot of the same problems but with a slightly different use case in mind.
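For context, the underlying anyio pattern is a "blocking portal": it runs an event loop on a worker thread and lets synchronous code submit coroutines to it. A rough sketch (the `get_cluster_info` coroutine is a made-up placeholder, and this is not kr8s's actual API, just the anyio primitive it builds on):

```python
import anyio
from anyio.from_thread import start_blocking_portal


async def get_cluster_info():
    # Stand-in for real async work (e.g. talking to a scheduler).
    await anyio.sleep(0.1)
    return {"workers": 4}


# The portal starts an event loop on a worker thread; synchronous code can
# then submit coroutines to it, sidestepping "this event loop is already
# running" errors inside Jupyter.
with start_blocking_portal() as portal:
    print(portal.call(get_cluster_info))
```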