
Add kubernetes_leaderelection provider #24913

Merged (11 commits, Apr 8, 2021)

Conversation

ChrsMark (Member) commented on Apr 2, 2021

What does this PR do?

Adds support for the leaderelection provider in Agent, based on #24267.

After this one is merged, I plan to tune the standalone manifest in a follow-up PR so as to remove the Deployment section and only deploy a DaemonSet with leader election enabled.

Why is it important?

To support cluster-wide metrics collection by only one Agent at a time among a group of multiple Agents (i.e. a DaemonSet).

As a second step we can change the standalone manifest, completely remove the Agent Deployment, and go on with only the DaemonSet leveraging the leader election feature.
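
For background, leader election on Kubernetes is typically built on a coordination Lease that all candidates try to acquire and renew; only the current holder acts as leader. The following is a minimal sketch of that mechanism using client-go; the lease name, namespace, identity, and callback bodies are illustrative assumptions, not this PR's exact implementation.

package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the Agent runs inside the cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// All Agents in the DaemonSet compete for the same Lease object; only the
	// holder of the lock is the leader at any given time.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "elastic-agent-cluster-leader", Namespace: "kube-system"}, // illustrative values
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("HOSTNAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			// A provider built on this would flip kubernetes_leaderelection.leader
			// here, enabling or disabling inputs guarded by the leader condition.
			OnStartedLeading: func(ctx context.Context) { /* leader = true */ },
			OnStoppedLeading: func() { /* leader = false */ },
		},
	})
}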

How to test this PR locally

  1. Enable the provider and an input based on the leader condition:
providers:
  kubernetes_leaderelection:
    kube_config: /Users/chrismark/.kube/config
    leader_lease: elastic-agent
  
inputs:
  - type: logfile
    condition: ${kubernetes_leaderelection.leader} == true
    streams:
      - paths: /var/log/leaderlog/another.log
  2. Run the inspect command to verify the output: ./elastic-agent -e -d "*" -c /elastic-agent.yml inspect output -o default
  3. Verify the conditional input is enabled:
filebeat:
  inputs:
  - index: logs-generic-default
    paths: /var/log/leaderlog/another.log
    processors:
    - add_fields:
        fields:
          dataset: generic
          namespace: default
          type: logs
        target: data_stream
    - add_fields:
        fields:
          dataset: generic
        target: event
    - add_fields:
        fields:
          id: 32806c0a-4f04-499c-9427-e6e24e5f6035
          snapshot: false
          version: 8.0.0
        target: elastic_agent
    type: log
  4. (Optional) Perform the same test from multiple Agents in running mode to verify that only one of them acquires the lock and becomes the leader.

Related issues

@ChrsMark added the Team:Integrations, v7.13.0, and Team:Elastic-Agent labels on Apr 2, 2021
@ChrsMark self-assigned this on Apr 2, 2021

@elasticmachine (Collaborator): Pinging @elastic/agent (Team:Agent)

@botelastic bot added the needs_team label on Apr 2, 2021

@elasticmachine (Collaborator): Pinging @elastic/integrations (Team:Integrations)

@botelastic bot removed the needs_team label on Apr 2, 2021

ChrsMark added 3 commits on April 2, 2021, 14:45 (each signed off by chrismark)

@elasticmachine (Collaborator) commented on Apr 2, 2021

💚 Build Succeeded

Build stats

  • Build Cause: Started by user Chris Mark

  • Start Time: 2021-04-07T14:12:19.936+0000

  • Duration: 79 min 26 sec

  • Commit: 8f84f23

Test stats 🧪

Failed: 0, Passed: 6612, Skipped: 16, Total: 6628

Trends 🧪 (build times and tests charts)

💚 Flaky test report

Tests succeeded.

@ChrsMark (Member Author) commented on Apr 6, 2021

@blakerouse could you have a look at this one, please, when you have the time? I mostly want to get feedback from the Agent team's point of view.

type Config struct {
	KubeConfig string `config:"kube_config"`
	// Name of the leaderelection lease
	LeaderLease string `config:"leader_lease"`

Contributor:

So when running Elastic Agent with defaults this is off. Is that the behavior we want? Since it requires a specific setting under the providers top-level key, it is not something that could be used by Fleet at the moment.

I think we should have this enabled by default, by giving it a default value.

ChrsMark (Member Author):

Agree, I will add it.
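
A minimal sketch of what such a default could look like; the helper name and the default lease value below are illustrative assumptions, not the final implementation.

// defaultConfig returns provider defaults so leader election works even when
// nothing is set under the providers key. (Hypothetical helper; the actual
// default lease name may differ.)
func defaultConfig() *Config {
	return &Config{
		LeaderLease: "elastic-agent-cluster-leader",
	}
}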

id = "elastic-agent-leader-" + agentInfo.AgentID()
}

ns, err := kubernetes.InClusterNamespace()

Contributor:

Should there also be a configuration value for this? Might want to run it on a different namespace?

ChrsMark (Member Author):

The convention here is that we create the lease object in the same namespace as the one where the Agent is running. We also add a Role so that the Agent has access only to leases in that namespace, see #24913 (comment).

So I think we should keep it as is and not expose this as an option to the user.

Contributor:

+1
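
For reference, the usual in-cluster convention is to read the namespace from the mounted service account; the sketch below illustrates the idea and is not necessarily the exact behavior of kubernetes.InClusterNamespace (package name is illustrative).

package kubernetesleaderelection

import (
	"os"
	"strings"
)

// inClusterNamespace returns the namespace the pod runs in, read from the
// service account mount, so the lease is created alongside the Agent.
func inClusterNamespace() (string, error) {
	b, err := os.ReadFile("/var/run/secrets/kubernetes.io/serviceaccount/namespace")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}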

)

func init() {
	composable.Providers.AddDynamicProvider("kubernetes_leaderelection", DynamicProviderBuilder)

Contributor:

I question whether this should be a dynamic provider. I think this should be a context provider. That way leader election can affect all the other vars, even dynamic ones discovered by the main kubernetes provider.

ChrsMark (Member Author):

👍🏼 OK, I can change it.
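
A sketch of what the switch might look like, assuming the composable registry exposes an AddContextProvider counterpart to AddDynamicProvider; the builder name is illustrative.

func init() {
	// As a context provider, the single kubernetes_leaderelection.leader value
	// is visible to conditions on every input, including dynamic ones coming
	// from the main kubernetes provider.
	composable.Providers.AddContextProvider("kubernetes_leaderelection", ContextProviderBuilder)
}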

@ChrsMark (Member Author) commented on Apr 7, 2021

@jsoriano @blakerouse thanks for your review/comments. I think I have covered your comments, so you can consider it ready for another review round.

@ChrsMark requested review from blakerouse and jsoriano on April 7, 2021, 11:11

@blakerouse (Contributor) left a review:

Looks great!

id = "elastic-agent-leader-" + agentInfo.AgentID()
}

ns, err := kubernetes.InClusterNamespace()

Contributor:

+1

@jsoriano (Member) left a review:

👍

Labels
Team:Elastic-Agent, Team:Integrations, v7.13.0
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Agent] Implement leader election on k8s for Elastic Agent
4 participants