
dispatch: Fix initial alerts not honoring group_wait #3167

Open
wants to merge 1 commit into main from alex/inhibit_race_condition

Conversation


@alxric commented Dec 8, 2022

At initial startup of Alertmanager, old alerts are sent to the receivers immediately, because the start time of those alerts can be several days in the past (and in any case much older than the group_wait duration).

This is problematic for alerts that are supposed to be inhibited. If the old inhibited alert gets processed before the alert that is supposed to inhibit it, it will get sent to the receiver and cause unwanted noise.

One approach to combat this is to always wait at least the group_wait duration for a new alert group, even if the alert is very old. This should make things a bit more stable, as it gives every alert a fighting chance to arrive before we send out notifications.

We control this behavior by adding a new config option to routes: WaitOnStartup.

By default it is set to false to preserve the current behavior; if set to true, we no longer send out notifications immediately on startup.

This addresses the issue described in #2229.
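
To make the proposed timing change concrete, here is a minimal, self-contained sketch (not the actual diff; `nextFlushDelay` and its parameters are illustrative, and `waitOnStartup` stands in for the new WaitOnStartup route option; the time comparison mirrors the dispatcher check quoted further down in this thread):

```go
package main

import (
	"fmt"
	"time"
)

// nextFlushDelay illustrates how long a brand-new aggregation group waits
// before its first flush. Today, an alert whose start time is already older
// than group_wait is flushed immediately; with waitOnStartup enabled, the
// group always waits the full group_wait, giving inhibiting alerts a chance
// to arrive first.
func nextFlushDelay(startsAt time.Time, groupWait time.Duration, waitOnStartup bool, now time.Time) time.Duration {
	if !waitOnStartup && startsAt.Add(groupWait).Before(now) {
		return 0 // current behavior: notify right away
	}
	return groupWait // proposed behavior: always honor group_wait for a new group
}

func main() {
	now := time.Now()
	oldAlert := now.Add(-48 * time.Hour) // an alert that has been firing for two days

	fmt.Println(nextFlushDelay(oldAlert, 30*time.Second, false, now)) // 0s  (immediate flush)
	fmt.Println(nextFlushDelay(oldAlert, 30*time.Second, true, now))  // 30s (wait for group_wait)
}
```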

@alxric force-pushed the alex/inhibit_race_condition branch 11 times, most recently from 5a0f088 to c225982 on December 9, 2022 at 16:22
@alxric force-pushed the alex/inhibit_race_condition branch from c225982 to 5e71cc0 on December 9, 2022 at 16:45
@matthiasr

I am wondering if this needs to be configurable at all? Every option adds mental overhead for users and maintainers. Under what circumstances would I not want to set this?

@MichaHoffmann

> I am wondering if this needs to be configurable at all? Every option adds mental overhead for users and maintainers. Under what circumstances would I not want to set this?

IIRC, it was requested to be configurable so as not to change the default behaviour, but I cannot locate the thread anymore!

@grobinson-grafana
Contributor

I'm not 100% convinced this is the correct fix. I think there are situations where this fix does not work. For example, when the inhibiting rule is evaluated group_wait seconds after the rule it was meant to inhibit. This can happen when group_wait is short and the inhibiting rule is in a different group in Prometheus (as different groups have their evaluations offset).

That said, I do think that the original code:

```go
if !ag.hasFlushed && alert.StartsAt.Add(ag.opts.GroupWait).Before(time.Now()) {
```

should be deleted, although for other reasons.
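
To make that failure mode concrete, here is a small sketch with hypothetical numbers (a 15s group_wait and a 30s evaluation offset between the two Prometheus rule groups): the inhibited group's first flush still happens before the inhibiting alert reaches Alertmanager, even if the group waits the full group_wait on startup.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical timeline for the scenario described above.
	groupWait := 15 * time.Second          // short group_wait on the route
	inhibitedArrives := 0 * time.Second    // inhibited alert arrives at t=0 (startup)
	inhibitingArrives := 30 * time.Second  // inhibiting rule group evaluates 30s later

	firstFlush := inhibitedArrives + groupWait // even with the proposed wait, flush at t=15s

	if firstFlush < inhibitingArrives {
		fmt.Printf("flush at %v, inhibiting alert only at %v: the notification still goes out\n",
			firstFlush, inhibitingArrives)
	}
}
```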

@mknapphrt

Any progress on this? We're running with a patch right now that just delays the start of the dispatcher because we were getting lots of false alarms for alerts that should be inhibited when we reloaded configs.
