
Unknown fields in logs #141

Open
kannon92 opened this issue May 12, 2023 · 12 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments

@kannon92
Contributor

kannon92 commented May 12, 2023

As I am looking at the integration logs, I see the following warning:

2023-05-12T19:07:27Z  INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[0].template.metadata.creationTimestamp"
2023-05-12T19:07:27Z  INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[0].template.spec.template.metadata.creationTimestamp"
2023-05-12T19:07:27Z  INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[1].template.metadata.creationTimestamp"
2023-05-12T19:07:27Z  INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[1].template.spec.template.metadata.creationTimestamp"

I don't think it's causing any issues, but I wanted to raise awareness of it. I'm not sure of the solution, or whether it's necessary to solve at all. It also appears in the logs of the integration tests in CI.

@kannon92 kannon92 changed the title Missing fields in logs Unknown fields in logs May 12, 2023
@ahg-g
Contributor

ahg-g commented May 19, 2023

Any idea which file is producing those logs?

@kannon92
Contributor Author

No, and Google wasn't much help in figuring out where it was coming from. I'm still new to writing controllers, so I'm not sure where the responsibility of controller-runtime ends and jobset_controller's begins.

I didn't see any examples of us using this field in the controller.

@tenzen-y
Member

tenzen-y commented Jun 1, 2023

@kannon92 @ahg-g That error is caused by a controller-gen bug: kubernetes-sigs/controller-tools#402. The bug was fixed in controller-gen v0.11.4.
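Editor's note: for readers following along, here is a hypothetical sketch (not taken from this repo) of the kind of generated CRD schema that can trigger the warning. If the embedded template's `metadata` is emitted as a bare object with no declared properties, the apiserver treats any field submitted under it, such as `creationTimestamp`, as unknown:

```yaml
# Hypothetical CRD schema excerpt, for illustration only.
template:
  type: object
  properties:
    metadata:
      type: object   # no properties declared, so any submitted field
                     # (e.g. creationTimestamp) is "unknown" to the apiserver
```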

@tenzen-y
Member

tenzen-y commented Jun 1, 2023

It seems that we are already using v0.11.4, so we probably shouldn't be hitting this error.

jobset/Makefile, line 187 (at 321a9d6):

CONTROLLER_TOOLS_VERSION ?= v0.11.4

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  annotations:
    controller-gen.kubebuilder.io/version: v0.11.4
  name: jobsets.jobset.x-k8s.io

@kannon92
Contributor Author

kannon92 commented Jun 1, 2023

@tenzen-y
Member

tenzen-y commented Jun 1, 2023

Which PR? Maybe it is fixed by rebasing the PR.

@tenzen-y
Member

tenzen-y commented Jun 1, 2023

Actually, we couldn't see the error on the latest PR (#171).

It still exists in #171 ...

@vsoch
Contributor

vsoch commented Jun 6, 2023

I just hit this locally:

2023-06-06T18:32:26Z    INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[0].template.metadata.creationTimestamp"
2023-06-06T18:32:26Z    INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[0].template.spec.template.metadata.creationTimestamp"
2023-06-06T18:32:26Z    INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[1].template.metadata.creationTimestamp"
2023-06-06T18:32:26Z    INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[1].template.spec.template.metadata.creationTimestamp"
2023-06-06T18:32:26Z    INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[2].template.metadata.creationTimestamp"
2023-06-06T18:32:26Z    INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[2].template.spec.template.metadata.creationTimestamp"
2023-06-06T18:32:26Z    INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[3].template.metadata.creationTimestamp"
2023-06-06T18:32:26Z    INFO    KubeAPIWarningLogger    unknown field "spec.replicatedJobs[3].template.spec.template.metadata.creationTimestamp"

I'm running a 4-node minikube cluster, and I tried installing from both the main branch and the latest release. I reproduced it on kind as well, to see if it was related to the cluster type; it appears in both.

In my case, I don't see other errors in the logs, and I can see the spec for my JobSet, but there are absolutely no events and nothing is created. I'm hoping it's related to this issue (although it might not be; it's hard to tell!). I'm going to debug a little more and will open a separate issue if something else seems to be going on. In the meantime, if anyone has tips for debugging JobSets that don't show up beyond the spec in get/describe (no pods, events, etc.), please let me know!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2024
@danielvegamyhre
Contributor

Just checked, and this issue is still occurring. It doesn't seem to affect anything, though. That's strange, since we have bumped our major dependencies multiple times in the time since this bug was filed. I can't prioritize this right now, but we can keep the issue open.

/label lifecycle/frozen

@k8s-ci-robot
Contributor

@danielvegamyhre: The label(s) /label lifecycle/frozen cannot be applied. These labels are supported: api-review, tide/merge-method-merge, tide/merge-method-rebase, tide/merge-method-squash, team/katacoda, refactor. Is this label configured under labels -> additional_labels or labels -> restricted_labels in plugin.yaml?

In response to this:

Just checked, and this issue is still occurring. It doesn't seem to affect anything, though. That's strange, since we have bumped our major dependencies multiple times in the time since this bug was filed. I can't prioritize this right now, but we can keep the issue open.

/label lifecycle/frozen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@danielvegamyhre
Contributor

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 26, 2024