
Add leader election to the backend #847

Open · wants to merge 2 commits into main
Conversation

@mbarnes (Collaborator) commented Nov 15, 2024

What this PR does

Adds leader election to the backend deployment using the k8s.io/client-go/tools/leaderelection module, so the backend can be horizontally scaled while still restricting active polling to one pod at a time.

Jira: No Jira, added by request
Link to demo recording:
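For reviewers unfamiliar with the module, here is a minimal sketch of how client-go leader election is typically wired up. The lease name, namespace, timing values, and the startPolling hook are illustrative assumptions, not code from this PR:

package main

import (
    "context"
    "os"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/leaderelection"
    "k8s.io/client-go/tools/leaderelection/resourcelock"
)

// runWithLeaderElection blocks, invoking startPolling only while this pod
// holds the lease.
func runWithLeaderElection(ctx context.Context, startPolling func(context.Context)) error {
    config, err := rest.InClusterConfig()
    if err != nil {
        return err
    }
    client, err := kubernetes.NewForConfig(config)
    if err != nil {
        return err
    }
    identity, err := os.Hostname() // pod name as the lease holder identity
    if err != nil {
        return err
    }

    lock := &resourcelock.LeaseLock{
        LeaseMeta: metav1.ObjectMeta{
            Name:      "backend-leader", // assumed lease name
            Namespace: "aro-hcp",        // assumed namespace
        },
        Client:     client.CoordinationV1(),
        LockConfig: resourcelock.ResourceLockConfig{Identity: identity},
    }

    leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
        Lock:            lock,
        ReleaseOnCancel: true, // release the lease on clean shutdown
        LeaseDuration:   15 * time.Second,
        RenewDeadline:   10 * time.Second,
        RetryPeriod:     2 * time.Second,
        // WatchDog: leaderelection.NewLeaderHealthzAdaptor(...) would tie
        // lease renewal into a /healthz endpoint (deferred; see notes below).
        Callbacks: leaderelection.LeaderCallbacks{
            OnStartedLeading: func(ctx context.Context) {
                startPolling(ctx) // ctx is canceled if the lease is lost
            },
            OnStoppedLeading: func() {
                // No longer the leader; active polling must stop.
            },
        },
    })
    return nil
}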

Special notes for your reviewer

  • The leader election module emits unstructured log messages through klog, so I had to catch them and adapt them to the standard library structured logger, which required some futzing. Please enlighten me if there's a better way to do this. (One possible bridge is sketched just after this list.)

  • I went ahead and bumped the backend replica count to 2 just to prove it works in dev environments.

  • We should add a /healthz endpoint to the backend at some point. The leader election module offers healthz integration for when it fails to renew the leader lease, but that's a pull request for another day.
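On the klog point above, a minimal sketch of one possible bridge, assuming the go-logr/logr slog adapter is acceptable; this is not necessarily the approach the PR takes:

package main

import (
    "log/slog"
    "os"

    "github.com/go-logr/logr"
    "k8s.io/klog/v2"
)

func init() {
    // Route klog output (including the leaderelection module's messages)
    // through log/slog via the logr bridge, so everything is structured.
    klog.SetLogger(logr.FromSlogHandler(slog.NewJSONHandler(os.Stderr, nil)))
}

func main() { /* ... start the backend ... */ }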

backend/main.go (outdated)

operationsScanner.Join()
go func() {
    sig := <-signalChannel
Contributor:
Please just use signal.NotifyContext on this thing? Didn't we already talk about this?

Collaborator (author):

I want to log the signal received along with a timestamp so I can observe how much longer it takes the process to shut down. I don't see any big advantage to signal.NotifyContext; it's a convenience wrapper that does the same thing I'm doing.
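For reference, the pattern being defended looks roughly like this; the logger setup and the surrounding shutdown wiring are assumed for illustration, not copied from the PR:

package main

import (
    "context"
    "log/slog"
    "os"
    "os/signal"
    "syscall"
)

func main() {
    logger := slog.Default()

    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()

    signalChannel := make(chan os.Signal, 1)
    signal.Notify(signalChannel, syscall.SIGINT, syscall.SIGTERM)

    go func() {
        sig := <-signalChannel // blocks until SIGINT or SIGTERM arrives
        logger.Info("Caught signal", "signal", sig.String())
        cancel() // propagate shutdown through the context
    }()

    <-ctx.Done() // stand-in for the real shutdown path
}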

Contributor:

Is signalChannel just for your logger? Then the following is simpler:

ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()

go func() {
    <-ctx.Done()
    logger.Info("Caught interrupt signal")
}()

And the only difference is you can't tell whether you got SIGINT or SIGTERM (why do you care? in production, the kubelet only ever sends SIGTERM or SIGKILL (ref), so this seems like a distinction without much meaning). The code is much less Rube-Goldberg. IDK. I respect that you think logging the specific signal is important, but in a decade of writing k8s-native applications it has never been useful or required.

Contributor:

As far as the "big advantage" goes, I did want to mention: using a more complex cascade of handlers and goroutines when it's not necessary is a downside for sure. Making the simple choice every time you can helps keep this maintainable ten years down the line.

Matthew Barnes added 2 commits November 20, 2024 04:45

  • Allows the backend deployment to be scaled up but still have only one instance polling at a time.
  • Because we can, now that aro-hcp-backend uses leader election.
@stevekuznetsov (Contributor):
@mbarnes I think you meant to push an update but may have forgotten :)
