Running TFJob on GPU only #887
Comments
Sure, you can define it in the template.
Thanks @gaocegege. To add some more context, here is what I was thinking: I will have a cluster with CPU nodes for the Jupyter Notebook, Kubeflow UI, etc., and GPU nodes for running the TFJob. The GPU nodes will be part of a horizontal autoscaling group so that they only come up when I run the TFJob; at other times I should be able to use the cluster with CPUs only. I didn't see any examples of such a deployment and was wondering if this is possible. Also, when you say template, are you referring to the TFJob template?
Yeah, the template is a pod template, so you can define the affinity there. I think your requirements could be met using affinity.
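For example, something along these lines might work (a minimal sketch, not a verified manifest: the `apiVersion` depends on the tf-operator version you run, and the `accelerator` node label, the taint key, and the image are assumptions about the cluster setup):

```yaml
apiVersion: kubeflow.org/v1beta1
kind: TFJob
metadata:
  name: tfjob-gpu-example
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 1
      template:
        spec:
          # nodeSelector (or an affinity block) steers the worker pods onto GPU nodes only.
          nodeSelector:
            accelerator: nvidia-tesla-k80   # placeholder label on the GPU node group
          # Tolerations are only needed if the GPU nodes are tainted to keep
          # ordinary CPU workloads off them (an assumption about the cluster setup).
          tolerations:
          - key: nvidia.com/gpu
            operator: Exists
            effect: NoSchedule
          containers:
          - name: tensorflow
            image: tensorflow/tensorflow:1.12.0-gpu   # placeholder image
            resources:
              limits:
                # Requesting the GPU resource keeps the pod pending until a GPU
                # node is available, which is what triggers the autoscaler.
                nvidia.com/gpu: 1
```

Because the pod both selects GPU nodes and requests `nvidia.com/gpu`, the cluster autoscaler should only bring GPU nodes up while a TFJob is pending, which matches the CPU-only-when-idle setup described above.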
Thanks, I will try it and let you know if I see any issues.
Can I use NodeSelector or NodeAffinity today to schedule a TFJob onto GPU nodes specifically? I don't see it in the TFJob spec.