WIP: Forward device requests #7124
Conversation
Signed-off-by: Yoanis Gil <[email protected]>
This PR is pretty much WIP, as I have only tested one use case; it also depends on docker/docker-py#2471.
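For illustration, the kind of service definition this PR is meant to enable looks roughly like the sketch below (it follows the device_requests example discussed later in this thread and is not authoritative syntax):

    services:
      gpu:
        image: nvidia/cuda:9.0-base
        command: nvidia-smi
        device_requests:        # forwarded to docker-py (docker/docker-py#2471) as a device request
          - capabilities:
              - gpu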
Signed-off-by: Yoanis Gil <[email protected]>
@rumpl @ndeloof @ulyssessouza I'm happy to make this PR more compliant (tests, documentation, etc.) as long as you think it is heading in the right direction.
Signed-off-by: Yoanis Gil <[email protected]>
Signed-off-by: Yoanis Gil <[email protected]>
@@ -597,6 +599,21 @@
}
}
}
},
"device_requests": {
The leading schema definition is https://github.com/docker/cli/tree/master/cli/compose/schema/data (see #7047).
I'm currently working on getting this schema duplication issue resolved.
That actually raises a very good point. Maybe the GPU allocation request should be based on generic_resources? Just saying, because of this: https://github.com/docker/cli/blob/master/cli/compose/loader/full-example.yml
But then that would require a translation to a DeviceRequest.
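For reference, the generic_resources syntax in that full-example.yml lives under the swarm-oriented deploy section and looks roughly like this (values are illustrative):

    services:
      myservice:
        deploy:
          resources:
            reservations:
              generic_resources:
                - discrete_resource_spec:
                    kind: "gpu"
                    value: 2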
@@ -151,6 +151,8 @@
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
"device_requests": {"$ref": "#/definitions/device_requests"},
Within schema v3, runtime constraints have been moved into deploy.resources. That sounds like a better place for such a device allocation request.
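Purely to illustrate that suggestion (none of these key names exist in the v3 schema or in this PR; they are hypothetical), a device request nested under deploy.resources might look like:

    services:
      gpu:
        image: nvidia/cuda:9.0-base
        deploy:
          resources:
            reservations:
              device_requests:      # hypothetical key, mirroring this PR's service-level proposal
                - driver: nvidia    # assumed field; not part of this PR
                  capabilities:
                    - gpu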
I can move it, but it seems to me like it's better to wait until you figure out how to reconcile the schema definitions between docker and compose before making any other changes ...
Feedback
I thought a docker-compose file tries to match the docker command line. If I want to set, say, host ipc and pid modes and run privileged, the docker-compose looks like:

    services:
      myservice:
        ipc: "host"
        pid: "host"
        privileged: true
        ...

With newer versions of docker, the command line to use the nvidia runtime is docker run --gpus all ..., so I would assume that the docker-compose would look like:

    services:
      myservice:
        gpus: "all"
        ...

but with this PR (according to the comments in #6691), gpu usage would look like:

    services:
      gpu:
        image: 'nvidia/cuda:9.0-base'
        command: 'nvidia-smi'
        device_requests:
          - capabilities:
              - "gpu"

which seems much more complicated to me.
Is anyone still working on this PR?
@lshamis I believe the motivation behind the
I think it's more of a violation of the law of least surprise to lack this syntax when the plain docker CLI already supports it. Nearly my entire department is still using the 2.x syntax (runtime: nvidia) for GPU access. To give you an idea, here's the sidequest I currently have to embark on to test my ML API, versus what used to simply be docker-compose up.
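For context, the 2.x syntax referred to here is roughly the following sketch (it assumes the nvidia container runtime is installed and registered with the Docker daemon; the runtime key exists in the 2.3+ file format but not in v3):

    version: "2.3"
    services:
      myservice:
        image: nvidia/cuda:9.0-base
        runtime: nvidia    # selects the nvidia runtime for GPU access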
I'm not (mostly because of the lack of feedback and/or interest).
FYI: docker/docker-py#2471 has been merged.
Can someone tell me what is stopping this PR from being merged?
There are conflicting files; it needs to be rebased, I guess.
Guys (@rumpl @ndeloof @ulyssessouza), would it be possible to give the necessary feedback to @yoanisgil so that he can finalize this PR, or so that somebody can take over and finalize it so that it can be merged? ML applications, as well as GPU-based ones in general, cannot be used very well with docker-compose today. The largest hindrance comes in the form of the missing GPU/device support.
Any progress on merging this into master?
Agree with PhotoTeerborChoka. If this is just a "rebase"/conflict issue, would it speed things up if I rebased a fork from @yoanisgil (with his consent)?
@alexandrecharpy please go ahead. At the time I submitted the PR I had the time and will to make this happen, but unfortunately that's no longer the case.
Thanks for taking the time to create this issue/pull request! Unfortunately, Docker Compose V1 has reached end-of-life and we are not accepting any more changes (except for security issues). Please try and reproduce your issue with Compose V2 or rewrite your pull request to be based on the v2 branch and create a new issue or PR with the relevant Compose V2 information.
Add support for Nvidia GPUs (solves #6691)