Missing events on status config map #2495
Comments
If I revert 6601bf0 I see all the events coming through.
@vivekbagade Can you take a look?
This reverts commit 6601bf0. See kubernetes#2495
@MaciekPytel Sure
Hi @enxebre. The PR you mentioned aggregates events for scalability reasons: we don't want CA to send a lot of similar events, which can happen when there are many unschedulable pods. We'll fix this by having a different config for the logrecorder. Expect the fix some time next week.
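For reference, client-go's event machinery supports this kind of aggregation through correlator options. The sketch below is a minimal, hypothetical illustration of tuning those options; the helper name and the threshold values are assumptions for illustration, not the actual Cluster Autoscaler configuration.

```go
package eventsketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/record"
)

// newAggregatingRecorder shows how client-go's event correlator can be tuned:
// once more than MaxEvents similar events are seen within MaxIntervalInSeconds,
// further events are folded into a single aggregated event instead of being
// emitted individually.
func newAggregatingRecorder() record.EventRecorder {
	broadcaster := record.NewBroadcasterWithCorrelatorOptions(record.CorrelatorOptions{
		MaxEvents:            10,  // illustrative threshold, not a CA default
		MaxIntervalInSeconds: 600, // illustrative window, not a CA default
	})
	return broadcaster.NewRecorder(scheme.Scheme, corev1.EventSource{Component: "cluster-autoscaler"})
}
```

Giving the status-ConfigMap logrecorder its own, less aggressive set of options along these lines is roughly what "a different config for the logrecorder" would amount to.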
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@vivekbagade any updates on this issue? It seems that since 1.16, scale-up failures are also not logged in the configmap properly. Reverting this change seems to make things work as normal, though. /remove-lifecycle rotten
@vivekbagade @MaciekPytel gentle ping on this issue. It seems that the events are restricted to one per nodegroup.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
@vivekbagade and @marwanad i am looking into making some progress on this, do you remember what the logrecorder fix referenced earlier was referring to? i have some time to create a patch here, but i'm a little confused about that statement. alternatively we could revert the change. curious to hear any thoughts if folks remember this ;)
/reopen
@elmiko: Reopened this issue. In response to this: /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
/remove-lifecycle rotten
i'm still interested in seeing the ability to get the duplicated messages. i know adding a flag is not an ideal solution, but i would like to propose a solution where the user has the ability to disable de-duplication of messages.
we talked about this issue at the sig meeting today, i am going to propose a patch with a flag to enable de-duplication of messages.
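As a rough sketch of what such a toggle could look like (the flag name, default, and description here are assumptions made up for illustration, not the actual patch):

```go
package flagsketch

import "flag"

// recordDuplicatedEvents is a hypothetical flag: when set, the status-ConfigMap
// log recorder would skip de-duplication and write every event, restoring the
// pre-aggregation behaviour described in this issue.
var recordDuplicatedEvents = flag.Bool(
	"record-duplicated-events",
	false,
	"If true, write all similar events to the status ConfigMap instead of aggregating them",
)
```

Defaulting the toggle to off would keep the scalability benefit of aggregation unless a user explicitly opts into the noisier behaviour.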
Even though "Scale-up: group" events (https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/core/scale_up.go#L715) should have been triggered as well for the status configMap, as the log shows, I can always only see/receive one single "Scale-up: setting" event (https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/core/scale_up.go#L701). Not sure what I'm missing.
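To make the comparison concrete, here is a simplified, hypothetical sketch of the two emissions being contrasted. The function, its parameters, the event reason, and the exact message formats beyond the quoted "Scale-up: setting"/"Scale-up: group" prefixes are assumptions for illustration, not the real scale_up.go code.

```go
package eventsketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/tools/record"
)

// recordScaleUpEvents sketches the two emissions the report contrasts: a
// "Scale-up: setting ..." event when the resize is requested and a
// "Scale-up: group ..." event once the new size is applied. Because the two
// messages for the same group are very similar, an aggregating correlator can
// collapse the second into the first, which matches the entries missing from
// the status ConfigMap.
func recordScaleUpEvents(recorder record.EventRecorder, statusObj runtime.Object,
	groupName string, newSize int) {
	// Recorded when the scale-up is initiated; this is the event the report
	// always sees.
	recorder.Eventf(statusObj, corev1.EventTypeNormal, "ScaledUpGroup",
		"Scale-up: setting group %s size to %d", groupName, newSize)

	// ... the cloud-provider resize would happen here in the real code ...

	// Recorded after the resize; this is the event the report says never shows
	// up on the status ConfigMap.
	recorder.Eventf(statusObj, corev1.EventTypeNormal, "ScaledUpGroup",
		"Scale-up: group %s size set to %d", groupName, newSize)
}
```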