Refactor kubernetes autodiscover to enable different resource based discovery #14738
Conversation
Since this is a community submitted pull request, a Jenkins build has not been kicked off automatically. Can an Elastic organization member please verify the contents of this patch and then kick off a build manually?
watcher kubernetes.Watcher
}

func NewPodEventer(uuid uuid.UUID, cfg *common.Config, client k8s.Interface, publish func(event bus.Event)) (Eventer, error) {
exported function NewPodEventer should have comment or be unexported
"github.com/elastic/beats/libbeat/logp" | ||
) | ||
|
||
func init() { | ||
autodiscover.Registry.AddProvider("kubernetes", AutodiscoverBuilder) | ||
} | ||
|
||
type Eventer interface { |
exported type Eventer should have comment or be unexported
(force-pushed from e55a7a1 to 9023690)
@@ -15,6 +15,23 @@
// specific language governing permissions and limitations
// under the License.

// Licensed to Elasticsearch B.V. under one or more contributor
for some reason this was duplicated 🤔
so far this is looking good to me 👍 thank you for working on it!
return "" | ||
} | ||
|
||
func (no *node) isNodeReady(node *kubernetes.Node) bool { |
receiver name no should be consistent with previous receiver name n for node
return "" | ||
} | ||
|
||
func (no *node) isNodeReady(node *kubernetes.Node) bool { |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
receiver name no should be consistent with previous receiver name n for node
(force-pushed from 473bf23 to 64465ae)
// Needed when resource is a pod
Host string `config:"host"`
// Scope can be either host or cluster.
Scope string `config:"scope"`
This would also solve #10578 🎉
can happen in a different PR, but I think we should extend this setting to the `add_kubernetes_metadata` processor. I'm unsure about the other one (`resource`), probably not worth it until we see the need.
will create follow up PR
Host string `config:"host"`
// Scope can be either host or cluster.
Scope string `config:"scope"`
Resource string `config:"resource"`
both scope and resource are new, could you update the docs?
updated docs.
logger *logp.Logger
publish func(bus.Event)
watcher kubernetes.Watcher
}
pod.go is fairly well tested, could you add some tests for service and node too?
added test cases for node and service.
},
}

objType = "service"
should the watcher refuse to start if scope != `cluster`? We should at least explain this in the docs if not
Updated the docs for this. The watcher needs some refactoring so that we can address some of these concerns and also allow it to do additional enrichment.
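For reference, this is roughly what refusing to start could look like — a sketch only, assuming plain string `resource`/`scope` values as in the config above; the PR itself resets the scope and documents the behaviour instead:

```go
package kubernetes

import "fmt"

// validateServiceScope rejects node-scoped discovery of cluster-wide
// resources such as services. Hypothetical helper, for illustration only.
func validateServiceScope(resource, scope string) error {
    if resource == "service" && scope != "cluster" {
        return fmt.Errorf("resource %q can only be discovered at `cluster` scope, got %q", resource, scope)
    }
    return nil
}
```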
(force-pushed from bfe75a5 to 3e03b63)
jenkins test this please
This is looking great! I left a few minor comments, thank you for adding docs and tests!!
// Check if host is being defined and change it to node instead.
if c.Node == "" && c.Host != "" {
    c.Node = c.Host
    logp.L().Warnf("`host` will be deprecated in 8.0. use `node` instead")
❤️ Could you use `cfgwarn.Deprecate` here? Also, it would be nice to update our manifests to avoid this warning for new users: https://github.com/elastic/beats/tree/master/deploy/kubernetes
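A minimal sketch of the suggested change, assuming the `Config` fields from the quoted snippet and an 8.0.0 removal target; `cfgwarn.Deprecate` emits a proper deprecation warning instead of a plain `Warnf`:

```go
package kubernetes

import "github.com/elastic/beats/libbeat/common/cfgwarn"

// applyHostDeprecation maps the deprecated `host` setting onto `node` and
// logs a deprecation warning. The helper name and version are assumptions.
func applyHostDeprecation(c *Config) {
    if c.Node == "" && c.Host != "" {
        c.Node = c.Host
        cfgwarn.Deprecate("8.0.0", "`host` will be removed, use `node` instead")
    }
}
```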
We should make this change also in add_kubernetes_metadata
will add this as part of follow up PR to add scope setting
    logp.L().Warnf("`host` will be deprecated in 8.0. use `node` instead")
}

// Check if resource is either node or pod. If yes then default the scope to "host" if not provided.
s/host/node/?
default:
    if c.Scope == "node" {
        logp.L().Warnf("can not set scope to `host` when using resource %s. resetting scope to `cluster`", c.Resource)
s/host/node/?
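Both `s/host/node/` comments refer to the same defaulting block; with the literals corrected it would read roughly as follows (a sketch built from the quoted snippet, surrounding `Config` method and fields assumed):

```go
// Default the scope to "node" for node/pod resources when not set, and fall
// back to "cluster" for resources that only exist cluster-wide.
switch c.Resource {
case "pod", "node":
    if c.Scope == "" {
        c.Scope = "node"
    }
default:
    if c.Scope == "node" {
        logp.L().Warnf("can not set scope to `node` when using resource %s, resetting scope to `cluster`", c.Resource)
        c.Scope = "cluster"
    }
}
```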
@@ -53,6 +53,22 @@ func TestConfigWithCustomBuilders(t *testing.T) {
    assert.NotNil(t, err)
}

func TestConfigWithIncorrectScope(t *testing.T) {
    cfg := common.MapStr{
        "scope": "host",
s/host/node/
// Ensure that host is set correctly whenever the scope is set to "host". Make sure that host is empty
// when cluster scope is enforced.
if config.Scope == "host" {
both in the description and here, should `host` be `node`?
btw I understand that setting node scope here means that we only watch for events in our node. right?
correct. it should be node. it will pin the discovery to that given node.
`resource`:: (Optional) Select the resource to do discovery on. Currently supported
Kubernetes resources are `pod`, `service` and `node`. If not configured `resource`
defaults to `pod`.
`scope`:: (Optional) Specify at what level autodiscovery needs to be done at. `scope` can
Suggested change: s/autodiscovery/autodiscover/
the specified node. `cluster` scope allows cluster wide discovery. `pod` and `node` are
only resources that can be discovered at `node` scope.
only `pod` and `node` can be discovered at `node` scope.
    config.Host = kubernetes.DiscoverKubernetesNode(config.Host, kubernetes.IsInCluster(config.KubeConfig), client)
} else {
    config.Host = ""
}

logger.Debugf("Initializing a new Kubernetes watcher using host: %v", config.Host)

watcher, err := kubernetes.NewWatcher(client, &kubernetes.Node{}, kubernetes.WatchOptions{
    SyncTimeout: config.SyncPeriod,
    Node:        config.Host,
I think all these should use `config.Node`, right? The same happens in other eventers. Maybe we can rename `Host` to `HostDeprecated` to avoid its usage?
Could you please update the PR description as well?
(force-pushed from abecaa4 to ec50931)
Could you please add a changelog?
Thanks for addressing the last comments!
Refactor kubernetes autodiscover to enable different resource based discovery (elastic#14738) (cherry picked from commit 1a029e5)
Manual Testing: we should manually check combinations of the new `resource` and `scope` configuration options, including the two examples mentioned in the description of the PR and a mix of other combinations. @exekias and @vjsamuel feel free to add/mention anything I might have missed.
Thank you for contributing @vjsamuel!!
I couldn't get the node resource to work, probably because of this, but I can't tell for certain. All other resources worked.
I guess this change broke autodiscover in Filebeat, as I state in #16464. @odacremolbap can you check it? Thank you!
This PR moves code around so that each resource has its own interface implementation, and autodiscover relies on a `resource` parameter to determine which watcher to spin up and how to create hints. We also add `scope` as a parameter to define whether Beats is running at a `node` level or at a `cluster` level. For example, a configuration with `resource: service` and `scope: cluster` would enable discovering service objects across the cluster, while `resource: pod` with `scope: cluster` would allow doing pod discovery across the cluster.

closes #14044 and #10578