Unknown fields in logs #141
Any idea which file is producing those logs?
No, and Google wasn't much help in figuring out where it was coming from. I'm still new to writing controllers, so I'm not sure where the responsibility of controller-runtime ends and jobset_controller's begins. I didn't see any examples of us using this field in the controller.
@kannon92 @ahg-g That error is caused by a controller-gen bug, kubernetes-sigs/controller-tools#402, which was fixed in controller-gen v0.11.4.
It seems we already use v0.11.4, so we probably shouldn't hit this error. (Line 187 in 321a9d6)
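One quick way to confirm which controller-gen version a build pins is to grep the project's Makefile. This is a minimal sketch, not taken from the repo itself: the variable name `CONTROLLER_TOOLS_VERSION` follows the common kubebuilder Makefile convention, and the sample file below stands in for the repository's real Makefile.

```shell
# Create a stand-in Makefile fragment; in the real repo you would grep
# the project's own Makefile instead (the referenced Line 187).
cat > /tmp/Makefile.sample <<'EOF'
CONTROLLER_TOOLS_VERSION ?= v0.11.4
EOF

# Print the pinned controller-gen (controller-tools) version.
grep CONTROLLER_TOOLS_VERSION /tmp/Makefile.sample
```

If the printed version is older than v0.11.4, bumping it and regenerating the CRDs would be the first thing to try.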
I see it in these logs from our integration tests (https://storage.googleapis.com/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_jobset/171/pull-jobset-test-integration-main/1664000334931955712/build-log.txt)
Which PR? Maybe it is fixed by rebasing the PR.
I just hit this locally:
I'm running a 4-node minikube cluster, and I tried installing from both the main branch and the latest release. I reproduced it on kind too, and switched between the two to check whether it was environment-related; it appears in both. In my case I don't see any other errors in the logs, and I can see the spec for my JobSet, but there are no events at all and nothing is created. I'm hoping it's related to this issue (although it might not be; it's hard to tell!). I'm going to debug a little more and will open a separate issue if something else seems to be going on. In the meantime, if anyone has tips for debugging JobSets that don't show up beyond the spec in get/describe (no pods, events, etc.), please let me know!
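Not an authoritative answer, but a starting checklist for this kind of "spec exists, nothing created" situation. The namespace `jobset-system`, the deployment name `jobset-controller-manager`, and the JobSet name `my-jobset` below are placeholders assuming a default install, not details confirmed in this thread.

```shell
# Hedged debugging checklist; resource names below are placeholders.
# Skip gracefully when no cluster is reachable from this shell.
command -v kubectl >/dev/null 2>&1 || { echo "kubectl not found; skipping"; exit 0; }

# 1. Status and events recorded on the JobSet object itself.
kubectl describe jobset my-jobset

# 2. Recent cluster events, in case they were recorded elsewhere.
kubectl get events --sort-by=.metadata.creationTimestamp

# 3. Controller logs: reconcile errors usually surface here.
kubectl logs -n jobset-system deployment/jobset-controller-manager --tail=100

# 4. Confirm the CRD is installed (a missing or stale CRD can leave
#    the spec visible while reconciliation silently does nothing).
kubectl get crd jobsets.jobset.x-k8s.io
```

If the controller logs show the reconciler never picking up the object, a webhook or RBAC problem is the next thing to rule out.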
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to its standard rules.
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Just checked, and this issue is still occurring. It doesn't seem to affect anything, though, which is strange given that we have bumped the versions of our major dependencies multiple times since this issue was created. I can't prioritize this right now, but we can keep the issue open.
/label lifecycle/frozen
/lifecycle frozen
As I was looking at the integration logs, I saw the following warning:
I don't think it's causing any issues, but I wanted to raise awareness of it. I'm not sure of the solution, or whether it's necessary to solve. It also appears in the logs of the integration tests in CI.