Update workload form to work with pod manifest #10208

Draft · wants to merge 2 commits into base: master
Conversation

@rak-phillip (Member) commented Jan 2, 2024

Summary

This updates the pod workload spec to match the expected manifest described by `kubectl explain workloads.pod.spec`. Previously, pods were treated as a Workload Resource[^1], but workload resources are meant for dealing with Deployments, Jobs, ReplicaSets, etc. It's more accurate to treat Pods as a workload, as described in the Kubernetes documentation on Pods[^2].

We also create a new method, `updateWorkloadManifest()`, that can properly parse a manifest for both Workloads (Pods) and Workload Resources (Jobs, Deployments, ReplicaSets, etc.).
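A minimal sketch of the idea behind that method (the name is taken from this PR, but the signature and structure here are assumptions for illustration, not the actual implementation):

```js
// Sketch only. `POD` and `WORKLOAD_TYPES` are the same type constants used in
// the diffs below; `value` is the resource being edited and `podTemplateSpec`
// is the pod template produced by the form.
function updateWorkloadManifest(type, value, podTemplateSpec) {
  if (type === POD) {
    // Pods carry the container spec directly on `spec`.
    value.spec = podTemplateSpec;
  } else if (type === WORKLOAD_TYPES.CRON_JOB) {
    // CronJobs nest it under `spec.jobTemplate.spec.template.spec`.
    value.spec.jobTemplate.spec.template.spec = podTemplateSpec;
  } else {
    // Deployments, Jobs, ReplicaSets, etc. nest it under `spec.template.spec`.
    value.spec.template.spec = podTemplateSpec;
  }

  return value;
}
```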

Fixes #10171

Occurred changes and/or fixed issues

  • Identified and updated all instances of the pod spec to generate a manifest that more closely resembles what's described by `kubectl explain workloads.pod.spec`
  • Updated the `saveWorkload()` method so that it now accounts for Pods in addition to the existing workloads

Technical notes summary

I noted some difficulties associated with this change in #10171 (comment). In summary, the manifest for a Pod differs from those for Deployments, ReplicaSets, StatefulSets, etc. The tightly coupled nature of CruResource and the YAML editor makes it difficult to simply write a parser that consumes the YAML in the shape we expect for a pod, so we have to take the more involved approach of updating `spec` everywhere we might be dealing with a pod to make sure it adheres to the correct shape.
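For reference, the shape difference in question, written as abridged plain objects (standard Kubernetes manifest structure, nothing specific to this form):

```js
// A Pod keeps the pod template fields directly under `spec`:
const pod = {
  apiVersion: 'v1',
  kind:       'Pod',
  metadata:   { name: 'example' },
  spec:       { containers: [{ name: 'web', image: 'nginx' }] }
};

// A Deployment (similarly Jobs, ReplicaSets, StatefulSets, ...) wraps the same
// fields in `spec.template`, alongside controller-level fields:
const deployment = {
  apiVersion: 'apps/v1',
  kind:       'Deployment',
  metadata:   { name: 'example' },
  spec:       {
    replicas: 1,
    selector: { matchLabels: { app: 'example' } },
    template: {
      metadata: { labels: { app: 'example' } },
      spec:     { containers: [{ name: 'web', image: 'nginx' }] }
    }
  }
};
```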

With the complexity of this form, there might be edge cases that have been missed.

Areas or cases that should be tested

Creating, reading, and updating existing workload forms. This includes:

  • CronJobs
  • DaemonSets
  • Deployments
  • Jobs
  • StatefulSets
  • Pods

Areas which could experience regressions

Screenshot/Video

Before

(screenshot)

After

(screenshot)

Footnotes

[^1]: https://kubernetes.io/docs/concepts/workloads/controllers/

[^2]: https://kubernetes.io/docs/concepts/workloads/pods/

This updates the pod workload spec to match the expected manifest described by `kubectl explain workloads.pod.spec`. Previously, pods were treated as a Workload Resource [^1], but workload resources are meant for dealing with Deployments, Jobs, ReplicaSets, etc... It's more accurate to treat Pods as a workload, as described in [^2]

[^1]: https://kubernetes.io/docs/concepts/workloads/controllers/
[^2]: https://kubernetes.io/docs/concepts/workloads/pods/

Signed-off-by: Phillip Rak <[email protected]>
This creates a new method, `updateWorkloadManifest()`, that can properly parse a manifest for both Workloads (Pods) and Workload Resources (Jobs, Deployments, ReplicaSets, etc...). 

Signed-off-by: Phillip Rak <[email protected]>
@github-actions github-actions bot added this to the v2.8.next1 milestone Jan 2, 2024
@mantis-toboggan-md mantis-toboggan-md self-requested a review January 2, 2024 22:48
Comment on lines -197 to -202
if ((this.mode === _EDIT || this.mode === _VIEW || this.realMode === _CLONE) && this.value.type === 'pod') {
  const podSpec = { ...this.value.spec };
  const metadata = { ...this.value.metadata };

  this.$set(this.value.spec, 'template', { spec: podSpec, metadata });
}
rak-phillip (Member, Author):
I don't believe this is required anymore; it appears to have existed only so that we could translate a valid pod spec into an invalid one.
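Concretely, a standalone reproduction of what the removed block did to a Pod (values are made up for illustration):

```js
const value = {
  type:     'pod',
  metadata: { name: 'example' },
  spec:     { containers: [{ name: 'web', image: 'nginx' }] }
};

const podSpec  = { ...value.spec };
const metadata = { ...value.metadata };

// Plain-object equivalent of `this.$set(this.value.spec, 'template', ...)`:
value.spec.template = { spec: podSpec, metadata };

// `value.spec` now contains a `template` key, which `pod.spec` does not define,
// so the manifest is no longer a valid Pod spec.
```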

  const spec = this.value.spec;
- let podTemplateSpec = type === WORKLOAD_TYPES.CRON_JOB ? spec.jobTemplate.spec.template.spec : spec?.template?.spec;
+ let podTemplateSpec = type === WORKLOAD_TYPES.CRON_JOB ? spec.jobTemplate.spec.template.spec : this.value.type === POD ? spec : spec?.template?.spec;
rak-phillip (Member, Author):
I might consider rewriting this so that it's much clearer that we have separate conditionals to deal with Pods, CronJobs, and the rest.
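One possible shape for that rewrite (a sketch only, equivalent to the ternary above):

```js
let podTemplateSpec;

if (type === WORKLOAD_TYPES.CRON_JOB) {
  podTemplateSpec = spec.jobTemplate.spec.template.spec;
} else if (this.value.type === POD) {
  podTemplateSpec = spec;
} else {
  podTemplateSpec = spec?.template?.spec;
}
```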

Member:
I don't find this terribly unreadable. It looks like it renders lines 202-204 redundant, though.

Comment on lines 757 to 759
this.type !== WORKLOAD_TYPES.JOB &&
this.type !== WORKLOAD_TYPES.CRON_JOB &&
(this.mode === _CREATE || this.realMode === _CLONE)
rak-phillip (Member, Author):
This might be another case where we want to add `this.type !== POD`, as sketched below.
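For reference, the condition from that block with the suggested clause added (illustration only):

```js
this.type !== WORKLOAD_TYPES.JOB &&
this.type !== WORKLOAD_TYPES.CRON_JOB &&
this.type !== POD &&
(this.mode === _CREATE || this.realMode === _CLONE)
```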

@rak-phillip rak-phillip changed the title Bugfix/10171 yaml pod spec Update workload form to work with pod manifest Jan 3, 2024
@richard-cox (Member):
Just a note: I don't think there'll be conflicts or changes, but there's some work in a pending PR around the area of saving workloads. I think this PR might make a line or two of that redundant.

},
set(neu) {
if (this.isCronJob) {
if (this.isPod) {
this.spec = { ...this.spec, ...neu };
Member:
I think this would prevent deleting any fields from `podTemplateSpec`. Not sure how much it matters, but it seems like an unintended effect?
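To illustrate the concern, a minimal sketch of the merge behaviour (values are made up):

```js
const current = { containers: [], affinity: { nodeAffinity: {} } };
const neu     = { containers: [] }; // the form no longer contains `affinity`

// Spreading both objects merges them, so a key the user removed survives:
const merged = { ...current, ...neu };
console.log(merged.affinity); // { nodeAffinity: {} }

// Assigning the new value outright would drop it:
const replaced = { ...neu };
console.log(replaced.affinity); // undefined
```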

Comment on lines +806 to 814
  if (spec?.affinity && Object.keys(spec?.affinity).length === 0) {
    delete spec.affinity;
  }

  // Removing `affinity` fixes the issue with setting the `imagePullSecrets`
  // However, this field should not be set. Therefore this is explicitly removed.
- if (template?.spec?.imagePullSecrets && template?.spec?.imagePullSecrets.length === 0) {
-   delete template.spec.imagePullSecrets;
+ if (spec?.imagePullSecrets && spec?.imagePullSecrets.length === 0) {
+   delete spec.imagePullSecrets;
  }
Member:
When I create a pod in the UI, then go to 'Edit Config' and save without making changes OR with only a valid change (e.g. changing the image), I get an error:
(screenshot: Screen Shot 2024-01-10 at 4 25 51 PM)

I traced it back to this block of code. When a pod is created in the UI and affinity isn't configured, it's still saved with `affinity: {}`. We remove that on edit to ensure pods created outside of the UI don't have this field erroneously added, which, to me, really sounds like it shouldn't work for pods created in the UI, but it does in master. I'm guessing that's because the whole pod edit process is jacked up anyway...?

@rak-phillip rak-phillip marked this pull request as draft January 22, 2024 17:13
@rak-phillip rak-phillip modified the milestones: v2.9.0, v2.10.0 Jul 1, 2024
@nwmac nwmac modified the milestones: v2.10.0, v2.11.0 Oct 25, 2024
@nwmac nwmac modified the milestones: v2.12.0, v2.11.0 Nov 1, 2024
Development

Successfully merging this pull request may close these issues.

When creating a pod, switching between config view and yaml view breaks the pod spec
4 participants