pod status of tfjob always pending after #693 #727
Comments
We'll continue to support tfjobs: for current tfjobs, it's better to use volcano:0.3, and we'll raise a PR in tf-operator to correct that. A sketch of pinning the scheduler image is shown below.
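For example, pinning the scheduler back to a 0.3 image could look like the following sketch. The namespace, deployment name, container name, and image tag are assumptions and should be checked against the actual installation:

```sh
# Find the scheduler deployment actually running in the cluster
# (namespace "volcano-system" is an assumption)
kubectl -n volcano-system get deployments

# Pin it to a 0.3 image
# (deployment, container, and image names here are illustrative)
kubectl -n volcano-system set image deployment/volcano-scheduler \
  vc-scheduler=volcanosh/vc-scheduler:v0.3
```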
/cc @hzxuzhonghu, please help to raise a PR in kubeflow/tf-operator for that.
Not only tf-operator; other operators such as pytorch-operator and mpi-operator all have the same problem.
We plan to release 0.4 next week in order to support Kubeflow. By then, I can file a PR in Kubeflow.
That was fixed in
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
We use Volcano to schedule tfjobs, but all pods stay in Pending status. After some research, we found that in #693 Volcano dropped support for the scheduling.v1alpha1 and scheduling.v1alpha2 APIs,
but the PodGroup created by tf-operator is still the scheduling.v1alpha1 version,
so is Volcano no longer compatible with tfjobs?
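One way to confirm the mismatch (a minimal sketch, assuming kubectl access to the cluster; the tfjob and namespace names are placeholders) is to compare the PodGroup versions the API server serves with the apiVersion on the PodGroup that tf-operator created:

```sh
# Which PodGroup API groups/versions does the cluster actually serve?
kubectl api-resources | grep -i podgroup
kubectl get crd | grep -i podgroup

# What apiVersion does the PodGroup created for a tfjob carry?
# (the PodGroup name usually matches the tfjob name; both are placeholders here)
kubectl get podgroup my-tfjob -n my-namespace -o jsonpath='{.apiVersion}'
```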
What you expected to happen:
Volcano can schedule tfjobs, mpijobs, etc.
Environment: