
Refactor kubernetes autodiscover to enable different resource based discovery #14738

Merged (12 commits) on Dec 3, 2019

Conversation

@vjsamuel (Contributor) commented Nov 25, 2019

This PR moves code around so that each resource has its own interface implementation, and autodiscover relies on a resource parameter to determine which watcher to spin up and how to create hints. We also add scope as a parameter that defines whether Beats is running at the node level or at the cluster level.
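
To make the per-resource design concrete, below is a minimal Go sketch of the idea: a single Eventer abstraction with one implementation per resource, and the provider choosing which one to start from the configured resource. The Eventer name matches the interface quoted later in this review; everything else (newEventer, the placeholder eventer types, the Start/Stop methods) is illustrative, not the actual implementation in this PR.

package kubernetes

import "fmt"

// Eventer abstracts a resource-specific watcher: each resource (pod, service,
// node) gets its own implementation that knows how to watch that resource and
// how to build hints for it.
type Eventer interface {
    Start() error
    Stop()
}

type podEventer struct{}     // placeholder for the real pod eventer
type serviceEventer struct{} // placeholder for the real service eventer
type nodeEventer struct{}    // placeholder for the real node eventer

func (*podEventer) Start() error     { return nil }
func (*podEventer) Stop()            {}
func (*serviceEventer) Start() error { return nil }
func (*serviceEventer) Stop()        {}
func (*nodeEventer) Start() error    { return nil }
func (*nodeEventer) Stop()           {}

// newEventer sketches the dispatch: the provider picks which watcher to spin
// up based on the configured resource.
func newEventer(resource string) (Eventer, error) {
    switch resource {
    case "pod":
        return &podEventer{}, nil
    case "service":
        return &serviceEventer{}, nil
    case "node":
        return &nodeEventer{}, nil
    default:
        return nil, fmt.Errorf("unsupported autodiscover resource: %s", resource)
    }
}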

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      kube_config: ${HOME}/.kube/config
      resource: service
      hints.enabled: true

output.console.pretty: true

The above configuration would enable discovering service objects across the cluster.

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      kube_config: ${HOME}/.kube/config
      resource: pod
      scope: cluster
      hints.enabled: true

The above configuration would enable pod discovery across the cluster.

closes #14044 and #10578

@elasticmachine (Collaborator) commented
Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?


watcher kubernetes.Watcher
}

func NewPodEventer(uuid uuid.UUID, cfg *common.Config, client k8s.Interface, publish func(event bus.Event)) (Eventer, error) {


exported function NewPodEventer should have comment or be unexported

"github.com/elastic/beats/libbeat/logp"
)

func init() {
autodiscover.Registry.AddProvider("kubernetes", AutodiscoverBuilder)
}

type Eventer interface {


exported type Eventer should have comment or be unexported
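
Both lint findings above are golint's rule that every exported identifier needs a doc comment starting with its own name. A tiny self-contained illustration of the kind of comment it expects; the signatures below are simplified stand-ins for the real ones quoted in this review:

package kubernetes

// Eventer defines the interface that resource-specific autodiscover eventers
// implement; a comment like this one, starting with the exported name,
// satisfies the golint check.
type Eventer interface {
    Start() error
    Stop()
}

// NewPodEventer likewise needs a comment starting with its name. The
// signature and body here are placeholders, not the real constructor.
func NewPodEventer() (Eventer, error) {
    return nil, nil
}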

@@ -15,6 +15,23 @@
// specific language governing permissions and limitations
// under the License.

// Licensed to Elasticsearch B.V. under one or more contributor
Contributor:

for some reason this was duplicated 🤔

@exekias (Contributor) commented Nov 25, 2019

so far this is looking good to me 👍 thank you for working on it!

return ""
}

func (no *node) isNodeReady(node *kubernetes.Node) bool {


receiver name no should be consistent with previous receiver name n for node
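
The receiver-name warning is another golint consistency rule: all methods on a type should use the same receiver identifier. A minimal stand-alone illustration, with the type and methods as placeholders:

package kubernetes

// node stands in for the node eventer type in this sketch.
type node struct {
    name string
}

// Name uses the receiver identifier n.
func (n *node) Name() string { return n.name }

// isNodeReady must use the same receiver identifier n (not no) to satisfy the
// golint consistency check quoted above; the readiness logic here is a stub.
func (n *node) isNodeReady() bool { return n.name != "" }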

return ""
}

func (no *node) isNodeReady(node *kubernetes.Node) bool {

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

receiver name no should be consistent with previous receiver name n for node

@vjsamuel vjsamuel force-pushed the refactor_k8s_autodiscover branch from 473bf23 to 64465ae Compare November 25, 2019 23:28
@vjsamuel vjsamuel marked this pull request as ready for review November 25, 2019 23:29
// Needed when resource is a pod
Host string `config:"host"`
// Scope can be either host or cluster.
Scope string `config:"scope"`
Contributor:

This would also solve #10578 🎉

Contributor:

can happen in a different PR but I think we should extend this setting to add_kubernetes_metadata processor. I'm unsure about the other (resource), probably not worth it until we see the need.

Contributor (Author):

will create follow up PR

Host string `config:"host"`
// Scope can be either host or cluster.
Scope string `config:"scope"`
Resource string `config:"resource"`
Contributor:

both scope and resource are new, could you update the docs?

Contributor (Author):

updated docs.

logger *logp.Logger
publish func(bus.Event)
watcher kubernetes.Watcher
}
Contributor:

pod.go is fairly well tested, could you add some tests for service and node too?

Contributor (Author):

added test cases for node and service.

},
}

objType = "service"
Contributor:

should the watcher refuse to start if scope != cluster? We should at least explain this in the docs if not

Contributor (Author):

updated the docs for the same. the watcher needs some refactor so that we can address some of these concerns and also allow it to do additional enrichment

@vjsamuel vjsamuel force-pushed the refactor_k8s_autodiscover branch 2 times, most recently from bfe75a5 to 3e03b63 Compare November 27, 2019 21:11
@exekias (Contributor) commented Nov 28, 2019

jenkins test this please

@exekias (Contributor) left a comment:

This is looking great! I left a few minor comments, thank you for adding docs and tests!!

// Check if host is being defined and change it to node instead.
if c.Node == "" && c.Host != "" {
c.Node = c.Host
logp.L().Warnf("`host` will be deprecated in 8.0. use `node` instead")
Contributor:

❤️ Could you use cfgwarn.Deprecate here? Also, it would be nice to update our manifests to avoid this warning for new users: https://github.com/elastic/beats/tree/master/deploy/kubernetes
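
For reference, a minimal sketch of what the suggested change might look like, assuming libbeat's cfgwarn package; the wrapper function and the exact message wording are illustrative:

package kubernetes

import "github.com/elastic/beats/libbeat/common/cfgwarn"

// warnDeprecatedHost sketches replacing the plain Warnf call with libbeat's
// standard deprecation helper (this wrapper itself is hypothetical).
func warnDeprecatedHost() {
    cfgwarn.Deprecate("8.0.0", "`host` is deprecated in the kubernetes autodiscover provider, use `node` instead")
}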

Contributor:

We should make this change also in add_kubernetes_metadata

Contributor (Author):

will add this as part of follow up PR to add scope setting

logp.L().Warnf("`host` will be deprecated in 8.0. use `node` instead")
}

// Check if resource is either node or pod. If yes then default the scope to "host" if not provided.
Contributor:

s/host/node/?


default:
if c.Scope == "node" {
logp.L().Warnf("can not set scope to `host` when using resource %s. resetting scope to `cluster`", c.Resource)
Contributor:

s/host/node/?
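
The s/host/node/ comments all point at the same inconsistency: the configured scope values are node and cluster, so the comment and warning text should say node. A rough, self-contained sketch of the defaulting logic the snippet above implements, with the corrected wording; the Config fields mirror the quoted config, the rest is illustrative:

package kubernetes

import "github.com/elastic/beats/libbeat/logp"

// Config holds only the fields relevant to this sketch.
type Config struct {
    Resource string `config:"resource"`
    Scope    string `config:"scope"`
}

// Validate defaults and sanity-checks the resource/scope combination: only
// pod and node can be discovered at node scope, everything else is forced to
// cluster scope with a warning (wording uses `node`, not `host`).
func (c *Config) Validate() error {
    if c.Resource == "" {
        c.Resource = "pod"
    }
    switch c.Resource {
    case "pod", "node":
        if c.Scope == "" {
            c.Scope = "node"
        }
    default:
        if c.Scope == "node" {
            logp.L().Warnf("can not set scope to `node` when using resource %s, resetting scope to `cluster`", c.Resource)
        }
        c.Scope = "cluster"
    }
    return nil
}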

@@ -53,6 +53,22 @@ func TestConfigWithCustomBuilders(t *testing.T) {
assert.NotNil(t, err)
}

func TestConfigWithIncorrectScope(t *testing.T) {
cfg := common.MapStr{
"scope": "host",
Contributor:

s/host/node/


// Ensure that host is set correctly whenever the scope is set to "host". Make sure that host is empty
// when cluster scope is enforced.
if config.Scope == "host" {
Contributor:

both description and here, should host be node?

Contributor:

btw I understand that setting node scope here means that we only watch for events in our node. right?

Contributor (Author):

correct. it should be node. it will pin the discovery to that given node.

`resource`:: (Optional) Select the resource to do discovery on. Currently supported
Kubernetes resources are `pod`, `service` and `node`. If not configured `resource`
defaults to `pod`.
`scope`:: (Optional) Specify at what level autodiscovery needs to be done at. `scope` can
Contributor:

Suggested change
`scope`:: (Optional) Specify at what level autodiscovery needs to be done at. `scope` can
`scope`:: (Optional) Specify at what level autodiscover needs to be done at. `scope` can

Comment on lines 197 to 198
the specified node. `cluster` scope allows cluster wide discovery. `pod` and `node` are
only resources that can be discovered at `node` scope.
Contributor:

only pod and node can be discovered at node scope.

Comment on lines 63 to 72
config.Host = kubernetes.DiscoverKubernetesNode(config.Host, kubernetes.IsInCluster(config.KubeConfig), client)
} else {
config.Host = ""
}

logger.Debugf("Initializing a new Kubernetes watcher using host: %v", config.Host)

watcher, err := kubernetes.NewWatcher(client, &kubernetes.Node{}, kubernetes.WatchOptions{
SyncTimeout: config.SyncPeriod,
Node: config.Host,
Contributor:

I think all these should use config.Node, right? same happens in other eventers. Maybe we can rename Host to HostDeprecated to avoid its usage?
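
A small, self-contained sketch of the rename suggested here: keep reading the deprecated host setting, but store it under a name that discourages new usage and copy it onto Node during validation, so the eventers only ever pass config.Node into kubernetes.WatchOptions. The HostDeprecated name comes from the comment above; the rest is illustrative:

package kubernetes

// Config shows only the fields relevant to this sketch.
type Config struct {
    Node           string `config:"node"`
    HostDeprecated string `config:"host"`
    Scope          string `config:"scope"`
    Resource       string `config:"resource"`
}

// Validate maps the deprecated `host` setting onto Node so the rest of the
// code only ever reads c.Node.
func (c *Config) Validate() error {
    if c.Node == "" && c.HostDeprecated != "" {
        c.Node = c.HostDeprecated
    }
    return nil
}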

@exekias (Contributor) commented Nov 28, 2019

could you please update the PR description? also in_cluster was removed when we migrated to client-go. You can drop it in the examples

@vjsamuel vjsamuel force-pushed the refactor_k8s_autodiscover branch 2 times, most recently from abecaa4 to ec50931 Compare December 2, 2019 06:08
@exekias (Contributor) left a comment:

Could you please add a changelog?

@ChrsMark (Member) left a comment:

Thanks for addressing the last comments!

@ChrsMark (Member) commented:
Manual Testing

We should check manually combinations for the following configuration options:

resource:: (Optional) Select the resource to do discovery on. Currently supported
Kubernetes resources are pod, service and node. If not configured resource
defaults to pod.
scope:: (Optional) Specify at what level autodiscover needs to be done at. scope can
either take node or cluster as values. node scope allows discovery of resources in
the specified node. cluster scope allows cluster wide discovery. Only pod and node resources can be discovered at node scope.

Here are two examples mentioned in the description of the PR:

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      kube_config: ${HOME}/.kube/config
      resource: service
      hints.enabled: true

output.console.pretty: true

The above configuration would enable discovering service objects across the cluster.

metricbeat.autodiscover:
  providers:
    - type: kubernetes
      kube_config: ${HOME}/.kube/config
      resource: pod
      scope: cluster
      hints.enabled: true

The above configuration would enable pod discovery across the cluster.

A mix of combinations to test:

  • pod, node (check that a new pod can be discovered within the node)

  • pod, cluster (check that a new pod can be discovered along the cluster)

  • node, cluster (check that a new node can be discovered along the cluster)

  • node, node (check that the node can be discovered within the same node)

  • service, cluster (check that new services can be discovered in cluster scope)

  • service, node (check how this unsupported combination warns the user)

@exekias and @vjsamuel feel free to add/mention anything I might have missed

@exekias exekias added the needs_backport PR is waiting to be backported to other branches. label Jan 10, 2020
@exekias (Contributor) commented Jan 10, 2020

Thank you for contributing @vjsamuel!!

@exekias exekias removed the needs_backport PR is waiting to be backported to other branches. label Jan 10, 2020
@andresrc andresrc assigned odacremolbap and unassigned exekias Jan 14, 2020
@odacremolbap (Contributor) commented:

I couldn't get the node resource to work, probably because of this but I can't tell for certain.

All other resources worked

@nerddelphi commented:

I guess this change broke autodiscover in Filebeat, as I stated here: #16464

@odacremolbap can you check it? Thank you!

Labels
autodiscovery, enhancement, needs testing notes, Team:Integrations, test-plan, test-plan-ok, v7.6.0
Development

Successfully merging this pull request may close these issues.

Make kubernetes autodiscover and metadata work for other resources
8 participants