json.overwrite_keys causes filebeat to panic #6381

Closed
vbohata opened this issue Feb 14, 2018 · 9 comments · Fixed by #24741
Labels
bug, Filebeat, Team:Integrations

Comments


vbohata commented Feb 14, 2018

I am using one instance of Filebeat to store Docker logs in a local log file, one JSON object per line. Another Filebeat reads this file and sends the logs to Elasticsearch. This second Filebeat does not work if I enable json.overwrite_keys: true.

My config:

name: "myname"
path:
  home: /opt/filebeat
  data: "/app/volumes/data/logforwarder"

filebeat.prospectors:
  - type: log
    paths:
      - "/app/volumes/logs/docker-json.log*"
    json:
      keys_under_root: true
      add_error_key: true
      message_key: message
      overwrite_keys: true
    close_inactive: 1440h

fields:
  machine_group: "mg1"

setup.template.enabled: false

output.elasticsearch:
  hosts: ["eshost:9210"]
  index: "mg1-dockercontainer-%{+yyyy.MM}"
  protocol: https
  ssl:
    certificate_authorities: ["/app/volumes/config/certs/elasticsearch-ca.crt"]
    enabled: true
    verification_mode: full
    supported_protocols: [ "TLSv1.2" ]
  username: sl
  password: sl

Filebeat log:

log/harvester.go:216    Harvester started for file: /app/volumes/logs/docker-json.log
panic: assignment to entry in nil map

goroutine 73 [running]:
github.com/elastic/beats/libbeat/common.MapStr.Update(0x0, 0xc4204af0b0)
        /go/src/github.com/elastic/beats/libbeat/common/mapstr.go:43 +0xd7
github.com/elastic/beats/libbeat/common/jsontransform.WriteJSONKeys(0xc420053bb8, 0xc4204af080, 0x16ca001)
        /go/src/github.com/elastic/beats/libbeat/common/jsontransform/jsonhelper.go:50 +0xa9b
github.com/elastic/beats/filebeat/harvester/reader.MergeJSONFields(0xc4204af2c0, 0xc4204af080, 0xc420053ce0, 0x0, 0x0, 0x10101, 0xc420017260, 0x21, 0xfd65)
        /go/src/github.com/elastic/beats/filebeat/harvester/reader/json.go:120 +0x1c3
github.com/elastic/beats/filebeat/prospector/log.(*Harvester).Run(0xc42009b900, 0x0, 0x0)
        /go/src/github.com/elastic/beats/filebeat/prospector/log/harvester.go:287 +0x98a
github.com/elastic/beats/filebeat/harvester.(*Registry).Start.func1(0xc42005ccc0, 0x1f3c240, 0xc42009b900)
        /go/src/github.com/elastic/beats/filebeat/harvester/registry.go:81 +0x95
created by github.com/elastic/beats/filebeat/harvester.(*Registry).Start
        /go/src/github.com/elastic/beats/filebeat/harvester/registry.go:74 +0x148

Example entry from /app/volumes/logs/docker-json.log:
{"@timestamp":"2018-02-14T08:58:13.484Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.2.1"},"stream":"stdout","message":"[2018-02-14T09:58:13,483][INFO ][logstash.agent ] Pipelines running {:count=\u003e2, :pipelines=\u003e[".monitoring-logstash", "beatsmain"]}","prospector":{"type":"docker"},"docker":{"container":{"id":"48425cf7a58af112ff950ddcebe06968aed2d589bd9729422e98a4c9f7bd2a56","name":"els","image":"localhost/elk/logstash:latest","labels":{"com":{"docker":{"compose":{"version":"1.18.0","config-hash":"0993dddd4bf7d23bb20d04392b9fcb853da11d00a4b5d615ecf31af2be02f26b","container-number":"1","oneoff":"False","project":"elspo","service":"logstash"}}}}}},"beat":{"version":"6.2.1","name":"myname","hostname":"7a0006d02784"},"source":"/var/lib/docker/containers/48425cf7a58af112ff950ddcebe06968aed2d589bd9729422e98a4c9f7bd2a56/48425cf7a58af112ff950ddcebe06968aed2d589bd9729422e98a4c9f7bd2a56-json.log","offset":21400}

Affected filebeat versions: at least 6.1.x, 6.2.x

ruflin added the Filebeat label Feb 26, 2018

ewgRa commented Mar 8, 2018

@vbohata I tried to reproduce your problem.

Here is a reproduction with a potential fix: https://github.com/ewgRa/beats/commit/679397f1488acc6eeb2b92376e64d41a7913e994

The problem is that this only fixes the panic; the overwrite_keys functionality still does not appear to work, and #4554 looks like the reason.

In json.go, MergeJSONFields creates an event (event := &beat.Event{), but only event.Timestamp is returned, so the event.Meta it populates is never used. The overwritten event.Meta never reaches the new pipeline processor, and in the makeEvent function in event.go, in.Meta is probably always nil.

So I think the problem is deeper and the functionality related to overwrite_keys is not working. @urso can you confirm?


ewgRa commented Mar 8, 2018

OK, this looks like the solution and confirms I was right: https://github.com/ewgRa/beats/commit/b5a60f3b8a983195b75b3b429a3641d14377ea04, but I'm not sure it is the best approach, 100% clean, or that it doesn't break something else.


ewgRa commented Mar 8, 2018

Opened PR #6509. I'm not sure it is 100% clean and doesn't break anything, but it can be used as a base for a better fix.


Zenexer commented Sep 14, 2018


andrewkroh commented Dec 7, 2018

I hit the same problem while trying to use Filebeat to ingest the JSON output from another Beat.

panic: assignment to entry in nil map

goroutine 57 [running]:
github.com/elastic/beats/libbeat/common.MapStr.Update(0x0, 0xc0003f6750)
	/Users/akroh/go/src/github.com/elastic/beats/libbeat/common/mapstr.go:60 +0xd3
github.com/elastic/beats/libbeat/common/jsontransform.WriteJSONKeys(0xc0002edbb8, 0xc0003f66c0, 0x4d06301)
	/Users/akroh/go/src/github.com/elastic/beats/libbeat/common/jsontransform/jsonhelper.go:67 +0x9f0
github.com/elastic/beats/libbeat/reader/readjson.MergeJSONFields(0xc0003f7230, 0xc0003f66c0, 0xc0002edd10, 0xc000399508, 0x7, 0x101, 0xc000364190, 0x47, 0x2babcf)
	/Users/akroh/go/src/github.com/elastic/beats/libbeat/reader/readjson/json.go:148 +0x1f6
github.com/elastic/beats/filebeat/input/log.(*Harvester).Run(0xc0001e2000, 0x0, 0x0)
	/Users/akroh/go/src/github.com/elastic/beats/filebeat/input/log/harvester.go:330 +0xabb
github.com/elastic/beats/filebeat/harvester.(*Registry).Start.func1(0xc00032c440, 0x4e38cc0, 0xc0001e2000)
	/Users/akroh/go/src/github.com/elastic/beats/filebeat/harvester/registry.go:96 +0x73
created by github.com/elastic/beats/filebeat/harvester.(*Registry).Start
	/Users/akroh/go/src/github.com/elastic/beats/filebeat/harvester/registry.go:90 +0x12b
My config:

filebeat.inputs:
  - paths: ['winlogbeat.log.json*']
    json.message_key: message
    json.keys_under_root: true
    json.overwrite_keys: true

output.elasticsearch:
  hosts: ['http://localhost:9200']

I also get a panic using:

filebeat.inputs:
  - paths: ['winlogbeat.log.json*']
    processors:
      - decode_json_fields:
          fields: [message]
          target: ""
          overwrite_keys: true

Tested with version 375b5cc.

Patching with this seems to help:

$ git diff
diff --git a/libbeat/common/jsontransform/jsonhelper.go b/libbeat/common/jsontransform/jsonhelper.go
index 98a1da5ef..9180f96fd 100644
--- a/libbeat/common/jsontransform/jsonhelper.go
+++ b/libbeat/common/jsontransform/jsonhelper.go
@@ -59,11 +59,17 @@ func WriteJSONKeys(event *beat.Event, keys map[string]interface{}, overwriteKeys
                case "@metadata":
                        switch m := v.(type) {
                        case map[string]string:
+                               if event.Meta == nil {
+                                       event.Meta = common.MapStr{}
+                               }
                                for meta, value := range m {
                                        event.Meta[meta] = value
                                }
 
                        case map[string]interface{}:
+                               if event.Meta == nil {
+                                       event.Meta = common.MapStr{}
+                               }
                                event.Meta.Update(common.MapStr(m))
 
                        default:


arcenik commented Sep 11, 2019

I have the same problem with the latest version of Filebeat when reading output from Auditbeat.

The versions:

  • filebeat version 7.3.1 (amd64), libbeat 7.3.1 [a4be71b built 2019-08-19 19:30:50 +0000 UTC]
  • auditbeat version 7.3.1 (amd64), libbeat 7.3.1 [a4be71b built 2019-08-19 19:28:55 +0000 UTC]

The stack trace is quite similar:

goroutine 142 [running]:
github.com/elastic/beats/libbeat/common.MapStr.Update(0x0, 0xc00027ad80)
        /go/src/github.com/elastic/beats/libbeat/common/mapstr.go:60 +0xd3
github.com/elastic/beats/libbeat/common/jsontransform.WriteJSONKeys(0xc00046bc60, 0xc00027ad50, 0x1df3801)
        /go/src/github.com/elastic/beats/libbeat/common/jsontransform/jsonhelper.go:67 +0x923
github.com/elastic/beats/libbeat/reader/readjson.MergeJSONFields(0xc00027af90, 0xc00027ad50, 0xc00046bdb8, 0x0, 0x0, 0x10101, 0xc0002576c0, 0x1c, 0x0)
        /go/src/github.com/elastic/beats/libbeat/reader/readjson/json.go:148 +0x1f2
github.com/elastic/beats/filebeat/input/log.(*Harvester).Run(0xc000142280, 0x0, 0x0)
        /go/src/github.com/elastic/beats/filebeat/input/log/harvester.go:327 +0x1068
github.com/elastic/beats/filebeat/harvester.(*Registry).Start.func1(0xc0004f0e00, 0x2022d20, 0xc000142280)
        /go/src/github.com/elastic/beats/filebeat/harvester/registry.go:96 +0x73
created by github.com/elastic/beats/filebeat/harvester.(*Registry).Start
        /go/src/github.com/elastic/beats/filebeat/harvester/registry.go:90 +0x128

jsoriano commented

This problem is also reported in this Discuss topic: https://discuss.elastic.co/t/importing-metricbeat-events-from-json-file-to-elastic-using-filebeat/212457

There they are trying to use Filebeat to read output from Metricbeat, but it seems to happen with any JSON object that contains a non-empty @metadata field. I can reproduce the issue with a log line like this one:

{"@metadata":{"foo": "bar"}}
  

And these json settings:

  json.keys_under_root: true
  json.overwrite_keys: true
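
For reference, a minimal sketch of a full config that should reproduce this, assuming a log input pointed at a file containing the line above; the path and the console output are illustrative, not taken from the original report:

filebeat.inputs:
  - type: log
    paths:
      - "/tmp/metadata-repro.json"   # hypothetical file containing {"@metadata":{"foo": "bar"}}
    json.keys_under_root: true
    json.overwrite_keys: true

# Console output is enough to hit the panic; it happens in the JSON reader, before any output.
output.console:
  pretty: true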


magicbuffer commented Jan 17, 2020

I am also affected by this. Very same scenario as @arcenik.

Any ideas on possible workarounds? Maybe using some processors on both beats ends?

EDIT:
The following configuration almost works around this issue without needing a Logstash instance.

  1. Disable json.overwrite_keys.
  2. If the events are already enriched with host metadata (or any other processors) on the Auditbeat side, disable add_host_metadata on Filebeat, or add a condition to apply it selectively (see the sketch below).
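
A minimal sketch of the Filebeat side of that workaround, assuming Auditbeat already enriches its own events; the path and output host are illustrative, not from the original comment:

filebeat.inputs:
  - type: log
    paths:
      - "/var/log/auditbeat/auditbeat.json*"   # hypothetical path to the Auditbeat JSON output
    json.keys_under_root: true
    json.add_error_key: true
    # step 1: json.overwrite_keys is intentionally left disabled

# step 2: no add_host_metadata processor here; Auditbeat already added host metadata

output.elasticsearch:
  hosts: ["localhost:9200"]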

The only annoying thing is that the original event timestamp gets overwritten by Filebeat. I tried adding a copy_fields processor on Auditbeat to duplicate the timestamp and a timestamp processor on Filebeat to restore it, but without luck: copy_fields does not seem to support copying the @timestamp field.

I'll post an update if I find a way to get the timestamp right. Also any suggestions are welcome :)

blakerouse self-assigned this Jan 17, 2020
andresrc added the Team:Integrations label and removed the Team:Beats label Mar 6, 2020

mch commented Sep 30, 2020

I am also seeing this issue when using Functionbeat to export logs from CloudWatch that were previously processed by Filebeat. The only workaround I have found so far is to decode_json_fields into a target like "original", and then use drop_fields and copy_fields to flatten it out and simulate overwrite_keys (see the sketch below). Unfortunately this doesn't work for the @timestamp field, and there is no timestamp processor in Functionbeat.
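
A minimal sketch of that processor chain, assuming the original event arrives as JSON in the message field; the specific copied field (original.host.name) is purely illustrative:

processors:
  - decode_json_fields:
      fields: ["message"]
      target: "original"              # decode into a sub-object instead of the event root
  - drop_fields:                      # drop the conflicting top-level field first so copy_fields can set it
      fields: ["host.name"]
      ignore_missing: true
  - copy_fields:
      fields:
        - from: "original.host.name"  # hypothetical field being "overwritten" by hand
          to: "host.name"
      fail_on_error: false
      ignore_missing: true
  - drop_fields:
      fields: ["original", "message"]
      ignore_missing: true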
