What happened:
In a cluster where the JobSet API was installed at a version without coordinator support and later upgraded to a version with coordinator support, submitting a JobSet YAML with the coordinator field set results in a stored JobSet whose coordinator field is nil.
What you expected to happen:
The coordinator field is persisted (not nil) on the stored JobSet.
How to reproduce it (as minimally and precisely as possible):
This is reproducible in a kind cluster with the following JobSet (js.yaml):
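The contents of js.yaml were not captured in the extracted text; a minimal sketch that exercises the coordinator field might look like this (the metadata name js matches the commands below; the replicatedJob name, image, and command are assumptions):

apiVersion: jobset.x-k8s.io/v1alpha2
kind: JobSet
metadata:
  name: js
spec:
  coordinator:
    replicatedJob: workers   # assumed name; must match a replicatedJob below
  replicatedJobs:
    - name: workers
      replicas: 1
      template:
        spec:
          completions: 1
          parallelism: 1
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: main
                  image: busybox
                  command: ["sleep", "10"]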
kind create cluster
Install a version without coordinator support: kubectl apply --server-side -f https://github.com/kubernetes-sigs/jobset/releases/download/v0.4.0/manifests.yaml
Expect the apply to fail: kubectl apply -f js.yaml
Error from server (BadRequest): error when creating "js.yaml": JobSet in version "v1alpha2" cannot be handled as a JobSet: strict decoding error: unknown field "spec.coordinator"
Upgrade to a version with coordinator support: kubectl apply --server-side -f https://github.com/kubernetes-sigs/jobset/releases/download/v0.7.0/manifests.yaml
Apply again, which now succeeds: kubectl apply -f js.yaml
Observe that the coordinator field is nil: kubectl get jobsets js -o yaml | grep coordinator
Anything else we need to know?:
If I change the field in the spec to coordinatorz, the apply fails: strict decoding error: unknown field "spec.coordinatoz"
If I change the coordinator.replicatedJob field to def-not-exists, the apply succeeds, even though the validation webhook should reject a reference to a nonexistent replicatedJob. So I think the coordinator field is most likely being set to nil in the CRD conversion webhook, before the validation webhook runs.
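One way to inspect how the CRD is configured for conversion (a diagnostic sketch; jobsets.jobset.x-k8s.io is the CRD name implied by the jobset.x-k8s.io API group):

kubectl get crd jobsets.jobset.x-k8s.io -o jsonpath='{.spec.conversion.strategy}'

If this prints Webhook, create requests pass through the controller's conversion service before validation, which would be consistent with the field being dropped there.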
Environment:
Kubernetes version (use kubectl version):
Client Version: v1.31.1
Kustomize Version: v5.4.2
Server Version: v1.31.1
JobSet version (use git describe --tags --dirty --always): v0.4.0 and v0.7.0
Cloud provider or hardware configuration: kind
Install tools: kind
Others:
After some more testing, the issue is only reproducible in the kind cluster when I submit the JobSet before the v0.4.0 manager pod is deleted by the deployment rollout, which explains the behavior (the request is still being handled by the old pod's webhook, which does not know about the coordinator field). I originally saw this in a GKE cluster and used kind as a minimal repro; I will keep investigating in the GKE cluster.
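To avoid that race when reproducing, waiting for the new manager to finish rolling out before applying js.yaml should help (a sketch, assuming the deployment and namespace names used by the release manifests):

kubectl -n jobset-system rollout status deployment/jobset-controller-manager --timeout=120s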