
Autodiscover appenders should concatenate processor arrays #11184

Closed · walfie opened this issue Mar 11, 2019 · 3 comments

Labels: containers (Related to containers use case), enhancement, Team:Integrations (Label for the Integrations team)

Comments


walfie commented Mar 11, 2019


(Text below copied from forum thread)

I'm trying to use autodiscover, where I have some processors defined in the templates config, as well as some processors defined in the appenders section under certain conditions, like so:

filebeat.autodiscover:
  providers:
  - type: kubernetes
    templates:
    - config:
      - type: docker
        containers.ids:
        - "${data.kubernetes.container.id}"
        fields_under_root: true

        processors:
        - rename:
            fields:
            - from: kubernetes.container
              to: blah
            ignore_missing: true
            fail_on_error: false

    appenders:
    - type: config
      condition.equals.kubernetes.labels.k8s-app: filebeat
      config:
        processors:
        - decode_json_fields:
            fields: ["message"]
            target: ""
            overwrite_keys: true
        - drop_fields:
            fields: ["timestamp"]

What I would expect to happen is for the two processors arrays to be concatenated.

However, I keep getting the error "Error creating runner from config: each processor needs to have exactly one action, but found 2 actions", which confused me because the processors looked correct to me.

I think I was able to figure out the issue, though: when Filebeat attempts to combine the appenders config with the templates config, it appears to flatten the array representations and merge the individual items that share the same array index.

i.e., I imagine it was attempting to combine my rename processor with my decode_json_fields processor into one object, like:

- rename:
    fields:
    - from: kubernetes.container
      to: blah
    ignore_missing: true
    fail_on_error: false
  decode_json_fields: # note this is on the same level as `rename`
    fields: ["message"]
    target: ""
    overwrite_keys: true

which is very much an invalid processor.
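
To make the suspected behavior concrete, here's a minimal standalone sketch that reproduces the index-wise merge with go-ucfg (the configuration library libbeat builds on) directly, rather than going through Filebeat's actual merging code:

package main

import (
    "fmt"

    "github.com/elastic/go-ucfg/yaml"
)

func main() {
    // The template's processors array, reduced to a minimal example.
    template, err := yaml.NewConfig([]byte(`
processors:
- rename:
    ignore_missing: true
`))
    if err != nil {
        panic(err)
    }

    // The appender's processors array.
    appender, err := yaml.NewConfig([]byte(`
processors:
- decode_json_fields:
    fields: ["message"]
`))
    if err != nil {
        panic(err)
    }

    // With default merge options, arrays merge element by element:
    // processors[0] of both configs collapse into one object holding
    // both `rename` and `decode_json_fields`.
    if err := template.Merge(appender); err != nil {
        panic(err)
    }

    var out map[string]interface{}
    if err := template.Unpack(&out); err != nil {
        panic(err)
    }
    fmt.Printf("%#v\n", out)
}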

I was able to work around it by changing my appenders definition to use numbered syntax for arrays, with numbers that don't overlap with the ones defined above:

    appenders:
    - type: config
      condition.equals.kubernetes.labels.k8s-app: filebeat
      config:
        processors.1.decode_json_fields:
          fields: ["message"]
          target: ""
          overwrite_keys: true
        processors.2.drop_fields:
          fields: ["timestamp"]

Now clearly this isn't an optimal solution, as I would expect the appender to just concatenate the arrays. Has anyone else encountered this issue? I'm encountering it on 7.0.0-beta1 but I imagine it's present in 6.x as well.


Here's a specific location in the code where this can be tested: https://github.com/elastic/beats/blob/7.x/libbeat/autodiscover/appenders/config/config_test.go

I was able to confirm it locally by adding a test with an array field containing objects in both the eventConfig and config fields, and saw that the objects at each array index were being merged.
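
A hypothetical standalone test in that spirit (sketched against go-ucfg directly, not the actual table-driven cases in config_test.go) could look like:

package config

import (
    "testing"

    ucfg "github.com/elastic/go-ucfg"
)

// Hypothetical sketch: asserts that merging two configs that both contain
// a `processors` array collapses the entries at index 0 into one object,
// i.e. the behavior reported in this issue.
func TestProcessorsAreMergedByIndex(t *testing.T) {
    template, err := ucfg.NewFrom(map[string]interface{}{
        "processors": []interface{}{
            map[string]interface{}{"rename": map[string]interface{}{"ignore_missing": true}},
        },
    })
    if err != nil {
        t.Fatal(err)
    }

    appender, err := ucfg.NewFrom(map[string]interface{}{
        "processors": []interface{}{
            map[string]interface{}{"decode_json_fields": map[string]interface{}{"target": ""}},
        },
    })
    if err != nil {
        t.Fatal(err)
    }

    if err := template.Merge(appender); err != nil {
        t.Fatal(err)
    }

    var out struct {
        Processors []map[string]interface{} `config:"processors"`
    }
    if err := template.Unpack(&out); err != nil {
        t.Fatal(err)
    }

    // Index-wise merging yields one entry with two actions instead of two
    // entries with one action each.
    if len(out.Processors) != 1 || len(out.Processors[0]) != 2 {
        t.Fatalf("unexpected merge result: %#v", out.Processors)
    }
}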

ruflin added the containers and Team:Integrations labels on Mar 12, 2019

exekias commented Apr 4, 2019

Thank you for your feedback. I think this request makes sense, though I have some doubts about how we can implement it in a way that the result is what most users expect. As of today, appenders override existing configs.

As a workaround for your specific case, this could work:

Move your rename processor out of the config template, and put it as a global processor, like this:

processors:
  - rename:
      when.has_fields: ['kubernetes.container']
      fields:
        - from: kubernetes.container
          to: blah
      ignore_missing: true
      fail_on_error: false

This way the appender does not clash with any existing processor, but you still get the renaming.


walfie commented Apr 4, 2019

Thanks, I'll try that the next time I have a chance.

As for possible implementations, I did a brief check of the code and noticed that the underlying go-ucfg library has a way to specify how configs should be merged (replace, append, or prepend): https://github.com/elastic/go-ucfg/blob/1273e33847d004780f826af2de52e327c0322f5c/opts.go#L141-L154

I wonder if it would be reasonable to allow specifying the merge method in the appenders config, and have the default be what it is currently (replace). As a possible example:

     appenders:
     - type: config
       condition.equals.kubernetes.labels.k8s-app: filebeat
+      merge: append
       config:
         processors:
         - decode_json_fields:
             fields: ["message"]
             target: ""
             overwrite_keys: true
         - drop_fields:
             fields: ["timestamp"]

This way, users know exactly what to expect. It's just a thought though, as I don't know much about the underlying implementation.
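
For what it's worth, a rough sketch of what that `merge: append` setting could translate to at the go-ucfg level, assuming FieldAppendValues behaves as the linked opts.go suggests:

package main

import (
    "fmt"

    ucfg "github.com/elastic/go-ucfg"
    "github.com/elastic/go-ucfg/yaml"
)

func main() {
    template, err := yaml.NewConfig([]byte("processors:\n- rename:\n    ignore_missing: true\n"))
    if err != nil {
        panic(err)
    }
    appender, err := yaml.NewConfig([]byte("processors:\n- decode_json_fields:\n    fields: [\"message\"]\n"))
    if err != nil {
        panic(err)
    }

    // FieldAppendValues("processors") switches the `processors` field from
    // the default index-wise merge to appending, so the two arrays are
    // concatenated instead of collapsed. A `merge: append` appender setting
    // could plausibly map to an option like this.
    if err := template.Merge(appender, ucfg.FieldAppendValues("processors")); err != nil {
        panic(err)
    }

    var out map[string]interface{}
    if err := template.Unpack(&out); err != nil {
        panic(err)
    }
    fmt.Printf("%#v\n", out) // two separate processor entries
}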

exekias removed their assignment on Apr 29, 2019

exekias commented Mar 19, 2020

I think this should be fixed by #16450, which will be released in 7.7. Please reopen if you still see the issue.
