Update apis to allow for compatibility with k8s 1.22 #1972
Conversation
Thank you for opening this pull request! 🙌
The CRD for the flyteworkflows is incomplete, atm. A schema should be added in the near future. See flyteorg#1273 Signed-off-by: ferdinand.szekeresch <[email protected]>
Signed-off-by: Sören Brunk <[email protected]>
@EngHabu @kumare3 Looking at the failing test, it seems we also need to update FlytePropeller to be able to deal with the new CRD API version.
@EngHabu I haven't looked at the code yet (my Go knowledge is still somewhat limited), but AFAIK propeller dynamically creates the CRD. Since 1.22 removes `apiextensions.k8s.io/v1beta1`, I initially thought we'd have to change propeller to create `apiextensions.k8s.io/v1` CRDs. The only difference in our case is that a schema is required in `v1`.
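For context, a hedged sketch of what the `v1beta1` → `v1` CRD change looks like. In `apiextensions.k8s.io/v1`, every served version must declare a structural schema, which `v1beta1` allowed you to omit. The group and resource names below mirror Flyte's CRD but are written from memory, so treat them as illustrative:

```yaml
# Minimal CustomResourceDefinition under apiextensions.k8s.io/v1.
# Group/kind names are illustrative, not copied from the actual manifest.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: flyteworkflows.flyte.lyft.com
spec:
  group: flyte.lyft.com
  scope: Namespaced
  names:
    kind: FlyteWorkflow
    plural: flyteworkflows
    singular: flyteworkflow
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:                 # required in v1; was optional in v1beta1
        openAPIV3Schema:
          type: object
```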
The error in the test is caused by this line when we're trying to update the custom resource status:

```go
// TODO we will need to call updatestatus when it is supported. But to preserve metadata like (label/finalizer) we will need to use update

// update the GetExecutionStatus block of the FlyteWorkflow resource. UpdateStatus will not
// allow changes to the Spec of the resource, which is ideal for ensuring
// nothing other than resource status has been updated.
p.wfStore.Update(ctx, mutatedWf, workflowstore.PriorityClassCritical)
```
@sbrunk I'm interested in helping out where I can. FlyteAdmin actually creates the FlyteWorkflow CRD to start a workflow execution, which is then detected by FlytePropeller using its k8s watch. Then, as you indicated, FlytePropeller updates the CRD as needed in the line you mentioned. I suspect the CRD update may require changes in both FlyteAdmin and FlytePropeller. Hopefully it's as easy as updating the API version.
@hamersaw thank you! You should be able to repro this locally. I'm not sure where things go wrong... Updating the API version may require regenerating the clientset in propeller using (maybe) some updated version of the k8s code-gen repo? But yeah, I would start with a simple repro and look at the FlyteWorkflow CRD instance Admin creates after launching an execution...
@fsz285 @sbrunk I think you missed the `x-kubernetes-preserve-unknown-fields` field.
cc @EngHabu this, of course, should be a temporary workaround. I think we really should flesh out a fully defined schema for the FlyteWorkflow CRD moving forward. Additionally, although it seems to work with the 0.20 `k8s.io/api` dependencies in a majority of our repos (I suspect we are using fairly stable APIs), we should probably update those as well. Thoughts?
Required to prevent pruning of fields by the API server until we have a schema for workflows. Signed-off-by: Sören Brunk <[email protected]>
@hamersaw yeah that totally makes sense. Thanks for looking into it. I just added `x-kubernetes-preserve-unknown-fields: true` to the CRD.
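For reference, a sketch of where that escape hatch sits in a `v1` CRD. The field names are standard Kubernetes; the surrounding structure is abbreviated and the version name is assumed:

```yaml
# Fragment of a CRD's spec.versions entry; everything around it is elided.
versions:
  - name: v1alpha1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        # Without this, the v1 API server prunes any field not declared
        # in the schema. Set it until a full schema for workflows exists.
        x-kubernetes-preserve-unknown-fields: true
```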
Thank you for all the work!
Congrats on merging your first pull request! 🎉
@EngHabu This change does break the Helm chart for everyone who is on Kubernetes 1.18 or earlier (Ingress `v1beta1`)... I know those are old versions, but a good number of folks still use them, and EKS supports them until March 2022.
If we really need this (March 2022 is only one month away, though), it is possible to use Helm's built-in `.Capabilities` object to define the Ingress differently depending on the available API/k8s version.
I have an attempt at that here: #2114
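A sketch of that approach. `.Capabilities.APIVersions.Has` is a real Helm feature; the resource name, service name, and paths below are placeholders rather than the chart's actual values:

```yaml
# templates/ingress.yaml -- pick the Ingress API based on what the cluster serves.
{{- if .Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress" }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-service        # placeholder service name
                port:
                  number: 80
{{- else }}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: {{ .Release.Name }}-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: my-service  # placeholder service name
              servicePort: 80
{{- end }}
```

Note that `.Capabilities.APIVersions` reflects what the target cluster reports, so `helm template` runs without a cluster fall back to a built-in default set unless you pass `--api-versions`.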