via #1240:
We could set the number of CPUs used when building R packages to more than one to get a speed-up. However, I am not sure how we would automatically determine what number to set it to. I think counting the cores from inside a container gives you the number of cores on the host, which you might or might not actually be able to use. When running on k8s it is even trickier, because the resources are limited via the pod config.
My assumption is that running with a configuration of (say) ncpus=16 when you can really only use 2 cores will slow things down even more than running with ncpus=1 does.
Maybe in connection with jupyterhub/binderhub#579 we can set it once we know what limit the user set?
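To make the trade-off concrete, here is a minimal sketch (in Python, since repo2docker itself is Python) of how the effective limit could be detected by reading the cgroup CPU quota, which is what a Kubernetes pod limit ultimately translates to, falling back to os.cpu_count() when no quota is set. The function name, fallback value, and the Rprofile.site path are illustrative assumptions, not existing repo2docker behaviour.

```python
import math
import os


def effective_cpu_limit(default=1):
    """Best-effort guess at how many CPUs this container may actually use.

    Reads the cgroup CPU quota (v2 first, then v1) so that a Kubernetes pod
    limit is respected; falls back to os.cpu_count() when no quota is set.
    Whether these files are visible depends on the container runtime.
    """
    # cgroup v2: /sys/fs/cgroup/cpu.max contains "max <period>" or "<quota> <period>"
    try:
        quota, period = open("/sys/fs/cgroup/cpu.max").read().split()
        if quota != "max":
            return max(default, math.floor(int(quota) / int(period)))
    except (OSError, ValueError):
        pass

    # cgroup v1: a quota of -1 means "no limit"
    try:
        quota = int(open("/sys/fs/cgroup/cpu/cpu.cfs_quota_us").read())
        period = int(open("/sys/fs/cgroup/cpu/cpu.cfs_period_us").read())
        if quota > 0:
            return max(default, math.floor(quota / period))
    except (OSError, ValueError):
        pass

    return os.cpu_count() or default


if __name__ == "__main__":
    ncpus = effective_cpu_limit()
    # R honours options(Ncpus = ...) for install.packages(), so one way to
    # apply the detected value build-wide would be to append it to the site
    # profile (the path below assumes a Debian/Ubuntu-style R install):
    print(f'echo "options(Ncpus = {ncpus})" >> /usr/lib/R/etc/Rprofile.site')
```

Whether reading the cgroup quota like this is robust enough across container runtimes and hosts is exactly the open question in this thread.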
mmmm, I don't know that much about how to check the available CPU capacity and request something different depending on what's returned. This feels like it'd be pretty tricky to debug or maintain, no?
related: jupyterhub/binderhub#412