Add kubernetes_leaderelection provider #24913
Conversation
Signed-off-by: chrismark <[email protected]>
Pinging @elastic/agent (Team:Agent)
Pinging @elastic/integrations (Team:Integrations)
Signed-off-by: chrismark <[email protected]>
💚 Build Succeeded
🧪 Tests succeeded
Signed-off-by: chrismark <[email protected]>
@blakerouse could you have a look at this one when you have the time, please? I mostly want to get feedback from the Agent team's point of view.
deploy/kubernetes/elastic-agent-standalone/elastic-agent-standalone-role.yaml (outdated, resolved)
...elastic-agent/pkg/composable/providers/kubernetesleaderelection/kubernetes_leaderelection.go (resolved)
type Config struct {
	KubeConfig string `config:"kube_config"`
	// Name of the leaderelection lease
	LeaderLease string `config:"leader_lease"`
So by default, when running Elastic Agent, this is off. Is that the behavior we want? Enabling it would require a specific setting under the providers top-level key, which is not something Fleet could use at the moment.
I think we should have this enabled by default, by giving it a default value.
Agree, I will add it.
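For reference, a minimal sketch of what that default could look like. The helper name and the default lease name below are assumptions for illustration, not the PR's final implementation:

// defaultConfig returns a Config that makes leader election usable out of the
// box, so no explicit `providers` setting is required in the policy.
// The default lease name is assumed for illustration.
func defaultConfig() *Config {
	return &Config{
		LeaderLease: "elastic-agent-cluster-leader",
	}
}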
id = "elastic-agent-leader-" + agentInfo.AgentID() | ||
} | ||
|
||
ns, err := kubernetes.InClusterNamespace() |
Should there also be a configuration value for this? One might want to run it in a different namespace?
The convention here is that we create the lease object in the same namespace as the one Agent is running in. We also add a Role so that Agent only has access to leases in that same namespace, see #24913 (comment).
So I think we should keep it as is and not expose this as an option to the user.
+1
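To make the convention concrete, here is an illustrative sketch of building the lease lock with client-go so it always lives in the Agent's own namespace. This is not the PR's actual code; the helper name, parameters, and usage are assumptions:

package kubernetesleaderelection

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

// newLeaseLock builds a LeaseLock in the given in-cluster namespace, so the
// Role shipped with the manifest only needs access to leases in the namespace
// where Agent runs. Helper name and parameters are illustrative.
func newLeaseLock(client kubernetes.Interface, leaseName, identity, inClusterNS string) *resourcelock.LeaseLock {
	return &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      leaseName,   // e.g. the configured leader_lease
			Namespace: inClusterNS, // result of kubernetes.InClusterNamespace()
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: identity, // "elastic-agent-leader-" + agentInfo.AgentID()
		},
	}
}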
...elastic-agent/pkg/composable/providers/kubernetesleaderelection/kubernetes_leaderelection.go (resolved)
)

func init() {
	composable.Providers.AddDynamicProvider("kubernetes_leaderelection", DynamicProviderBuilder)
I question whether this should be a dynamic provider. I think it should be a context provider. That way leader election can affect all the other vars, even dynamic ones discovered by the main kubernetes provider.
👍🏼 ok I can change it
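A rough sketch of what that change could look like, assuming a ContextProviderBuilder analogous to the existing DynamicProviderBuilder (names are illustrative and not verified against the final commit):

func init() {
	// Registering as a context provider means the resulting
	// `kubernetes_leaderelection.leader` variable can gate any input,
	// including the dynamic ones rendered by the main kubernetes provider.
	composable.Providers.AddContextProvider("kubernetes_leaderelection", ContextProviderBuilder)
}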
Signed-off-by: chrismark <[email protected]>
@jsoriano @blakerouse thanks for your review/comments. I think I have covered your comments, so you can consider it ready for another review round.
Looks great!
id = "elastic-agent-leader-" + agentInfo.AgentID() | ||
} | ||
|
||
ns, err := kubernetes.InClusterNamespace() |
+1
👍
(cherry picked from commit 2494ae5)
What does this PR do?
Adds support for a leaderelection provider in Agent, based on #24267.
After this one is merged I plan to tune the standalone manifest in a follow-up PR so as to remove the Deployment section and only deploy as a DaemonSet with leaderelection "on".
Why is it important?
To support cluster-wide metrics collection by only one Agent at a time among a group of multiple Agents (i.e. a DaemonSet).
As a second step we can change the standalone manifest, completely remove the Deployment of Agent, and go on with only the DaemonSet, leveraging the leaderelection feature.
How to test this PR locally
./elastic-agent -e -d "*" -c /elastic-agent.yml inspect output -o default
Related issues