🐛 [BUG]: Cowbird is not backward compatible with existing Jupyter users #425
Comments
This looks like the volume mounted as […]. Just a wild guess: the order in which the volumes are created could be the source of the root owner. Since there is a step for Jupyter persistence volume creation, it might not play nice with a docker-compose configuration that would auto-create volume mount locations (as root) if they do not exist. The creation is performed by this hook: birdhouse-deploy/birdhouse/components/jupyterhub/jupyterhub_config.py.template Lines 147 to 152 in 13645f3
https://github.com/bird-house/birdhouse-deploy/blob/master/birdhouse/components/jupyterhub/jupyterhub_config.py.template#L173 Note that care should be taken with overrides if they play with similar properties.
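As a rough illustration of the kind of pre-spawn hook described above, here is a minimal sketch. All names, paths, and uid/gid values are assumptions for the example, not the actual template code; the real hook is at the jupyterhub_config.py.template lines linked above.

```python
import os

# Assumed values for illustration only; the real template derives them from the deployment config.
JUPYTERHUB_USER_DATA_DIR = "/data/jupyterhub_user_data"
NB_UID, NB_GID = 1000, 1000  # uid/gid used inside the single-user notebook container (assumed)

def create_dir_hook(spawner):
    """Create the user's persistence dir on the host before Docker mounts it.

    If the directory is missing when the container starts, Docker creates the
    mount point itself, owned by root, and the notebook user cannot write to it.
    """
    user_dir = os.path.join(JUPYTERHUB_USER_DATA_DIR, spawner.user.name)
    if not os.path.exists(user_dir):
        os.makedirs(user_dir, exist_ok=True)
        os.chown(user_dir, NB_UID, NB_GID)

# In jupyterhub_config.py, the hook is registered on the config object:
# c.Spawner.pre_spawn_hook = create_dir_hook
```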
This is the same issue as #392
birdhouse-deploy/birdhouse/env.local.example Lines 390 to 395 in 67c6ca1
birdhouse-deploy/birdhouse/components/jupyterhub/jupyterhub_config.py.template Lines 126 to 129 in 67c6ca1
I don't know why the DockerSpawner decides to create them in that order, but that's how it's done consistently.
I am happy it is consistent; the worst kinds of problems are the intermittent ones. But I think the sequence is appropriate.
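For context on the volume-creation order being discussed, this is roughly how DockerSpawner maps host directories into the user container. The paths are illustrative, not the exact birdhouse-deploy configuration; any host path listed here that does not exist yet is created by Docker itself, as root, when the container starts.

```python
# Snippet in the style of jupyterhub_config.py; c = get_config() is provided
# by JupyterHub when it loads the config file. Paths below are assumptions.
c = get_config()  # noqa: F821 (injected by JupyterHub)

c.DockerSpawner.volumes = {
    # Per-user writable workspace; {username} is expanded by DockerSpawner.
    "/data/user_workspaces/{username}": "/notebook_dir/writable-workspace",
    # Public share mount seen in the spawn error quoted below (mode is illustrative).
    "/data/user_workspaces/public/wps_outputs": {
        "bind": "/notebook_dir/public/wps_outputs",
        "mode": "ro",
    },
}
```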
This is a reasonable hint, but it should not happen since the […]
No, the error happens only when that dir does not exist yet. If I manually create it before spawning the Jupyter server (which is my documented work-around), the error is gone and we can spawn the Jupyter server successfully.
No, the JupyterHub persistence data-volume is for the session tokens only. User data are not in a data-volume but in a direct volume-mount from disk.
Isn't this just because the webhook action that creates the directory is only triggered when the user is created: birdhouse-deploy/birdhouse/components/cowbird/config/magpie/config.yml.template Lines 35 to 36 in 13645f3
And the user is already created, so the webhook isn't triggered (see: https://pavics-magpie.readthedocs.io/en/latest/configuration.html#webhook-user-create).
This code was added to consider the situation where the user already exists, and no webhook would be triggered. birdhouse-deploy/birdhouse/components/jupyterhub/jupyterhub_config.py.template Lines 151 to 155 in 67c6ca1
I'm not sure why it doesn't resolve the same way as when the directory is manually created. Could it be that […]?
Does adding a […]? birdhouse-deploy/birdhouse/components/jupyterhub/jupyterhub_config.py.template Lines 161 to 163 in 67c6ca1
This code (mkdir + chown) was there already before Cowbird was added to the stack, and I can confirm it works fine on […]. Below is the old code with the existing mkdir + chown: birdhouse-deploy/birdhouse/config/jupyterhub/jupyterhub_config.py.template Lines 53 to 60 in 775c3b3
Is it possible the Cowbird volume-mount […]?
Or maybe adding a symlink instead? See this comment: birdhouse-deploy/birdhouse/components/jupyterhub/jupyterhub_config.py.template Lines 119 to 120 in 67c6ca1
Oh interesting. How does this hook know whether to create a new dir or a symlink to an existing […]?
The Magpie webhook registered to occur on […]
Yes, that should solve the problem (when old users were created before Cowbird was enabled).
We can solve the issue of having read-only volumes mounted on top of each other by changing the location of one or the other. birdhouse-deploy/birdhouse/env.local.example Line 390 in 67c6ca1
to: […], or similar. I also think it would be a good idea to move this code out of env.local.example and into an optional component.
Yes, or […]. Same idea: both sharing solutions have their own public folder so they do not step on each other's feet.
Yes! At the beginning, I thought about using this as a live example of how […]
Should it be creating the dir or the symlink? See comment in code birdhouse-deploy/birdhouse/components/jupyterhub/jupyterhub_config.py.template Lines 119 to 120 in 67c6ca1
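On the dir-versus-symlink question just above, here is a hedged sketch of the logic under discussion; it is entirely illustrative, not the actual code at the linked lines. The idea: if Cowbird already manages a workspace for the user, point the notebook location at it with a symlink, otherwise fall back to creating a plain directory.

```python
import os

def ensure_workspace(notebook_path: str, cowbird_workspace: str) -> None:
    """Illustrative only: decide between a symlink and a plain directory."""
    if os.path.islink(notebook_path) or os.path.isdir(notebook_path):
        return  # already set up, nothing to do
    if os.path.isdir(cowbird_workspace):
        # Cowbird already created the workspace: expose it through a symlink.
        os.symlink(cowbird_workspace, notebook_path)
    else:
        # No Cowbird-managed workspace yet: create an empty directory instead.
        os.makedirs(notebook_path, exist_ok=True)
```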
Summary
Activating Cowbird with existing Jupyter users has many road blocks. This is in contrast with the usual "just enable the new component in env.local and it should play nice with all existing components" message we are trying to convey in the stack. A migration guide for systems with existing Jupyter users would have been helpful.
Below are the various problems I have faced so far and any work-around I was able to find. I will add more to this list as I try out Cowbird.
Details
For each existing Jupyter user, /data/user_workspaces/$USER has to be manually created
Otherwise this error appears in docker logs jupyterhub:
[E 2024-01-16 15:30:36.478 JupyterHub user:884] Unhandled error starting lvu's server: The user lvu's workspace doesn't exist in the workspace directory, but should have been created by Cowbird already.
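A minimal sketch of this manual work-around as a one-off script. It assumes existing usernames can be listed from /data/jupyterhub_user_data and that the notebook containers run with uid/gid 1000; both are assumptions of the example, not documented values.

```python
import os

OLD_USER_DATA = "/data/jupyterhub_user_data"  # pre-Cowbird per-user data (assumed layout)
NEW_WORKSPACES = "/data/user_workspaces"      # Cowbird workspace root
NB_UID, NB_GID = 1000, 1000                   # notebook-container uid/gid (assumed)

# Pre-create a workspace for every existing Jupyter user so the spawner no longer
# fails with "workspace doesn't exist ... should have been created by Cowbird already".
for username in os.listdir(OLD_USER_DATA):
    workspace = os.path.join(NEW_WORKSPACES, username)
    if not os.path.isdir(workspace):
        os.makedirs(workspace)
        os.chown(workspace, NB_UID, NB_GID)
```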
Conflict with the existing poor man's public share
If the poor man's public share in birdhouse-deploy/birdhouse/env.local.example Lines 377 to 425 in 13645f3 is enabled, the work-around is to set PUBLIC_WORKSPACE_WPS_OUTPUTS_SUBDIR in env.local to a different value than public. Otherwise this error occurs when spawning a new Jupyterlab server:
Spawn failed: 500 Server Error for http+docker://localhost/v1.43/containers/2239816099ea7b8bf440b76fc0a1d4a43248bb1e5073fc043ef1c1062cdd3cff/start: Internal Server Error ("failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/data/user_workspaces/public/wps_outputs" to rootfs at "/notebook_dir/public/wps_outputs": mkdir /pvcs1/var-lib/docker/overlay2/ec7672b5d034e55d21465dd1e41c0333e0c5db2adb2dcec9f0f2a37bb968fe10/merged/notebook_dir/public/wps_outputs: read-only file system: unknown")
See 🐛 [BUG]: jupyterlab server fails to spawn due to read-only volume mount #392 (comment)
Content of /notebook_dir/writable-workspace for all existing Jupyter users seems to have disappeared
This is because without Cowbird enabled, /notebook_dir/writable-workspace is bound to /data/jupyterhub_user_data/$USER, but with Cowbird enabled it is bound to /data/user_workspaces/$USER, which is a new, empty dir. No work-around found so far.
To Reproduce
Steps to reproduce the behavior:
1. Start from a stack version older than 2.0.0 (before Cowbird was enabled by default).
2. Enable the poor man's public share in env.local by uncommenting this section: birdhouse-deploy/birdhouse/env.local.example Lines 377 to 425 in 13645f3
3. Create Jupyter users and add content under writable-workspace.
4. Upgrade to 2.0.0, where Cowbird is enabled by default.
5. Enable the JupyterHub component in env.local, ex: ./components/jupyterhub
Environment
Concerned Organizations
@fmigneault @ChaamC @Nazim-crim @mishaschwartz @eyvorchuk