
Filebeat throws Unexpected file stat rror: stat /dev/stdin: bad file descriptor #1029

Closed
fiunchinho opened this issue Feb 24, 2016 · 7 comments

Comments

@fiunchinho

In filebeat 1.1, when using a prospector for stdin, it works fine but filebeat throws errors:

2016/02/24 10:16:23.836129 beat.go:221: DBG  Initializing output plugins
2016/02/24 10:16:23.836389 geolite.go:24: INFO GeoIP disabled: No paths were set under shipper.geoip.paths
2016/02/24 10:16:23.836944 logstash.go:105: INFO Max Retries set to: 3
2016/02/24 10:16:23.837177 sync.go:63: DBG  connect
2016/02/24 10:16:23.841384 outputs.go:135: INFO Activated logstash as output plugin.
2016/02/24 10:16:23.841906 publish.go:235: DBG  Create output worker
2016/02/24 10:16:23.842258 publish.go:277: DBG  No output is defined to store the topology. The server fields might not be filled.
2016/02/24 10:16:23.842584 publish.go:291: INFO Publisher name: d32e2e1dd5c5
2016/02/24 10:16:23.842994 async.go:78: INFO Flush Interval set to: 1s
2016/02/24 10:16:23.843314 async.go:84: INFO Max Bulk Size set to: 2048
2016/02/24 10:16:23.843604 async.go:92: DBG  create bulk processing worker (interval=1s, bulk size=2048)
2016/02/24 10:16:23.843970 beat.go:238: INFO Init Beat: filebeat; Version: 1.2.0-SNAPSHOT
2016/02/24 10:16:23.844973 beat.go:267: INFO filebeat sucessfully setup. Start running.
2016/02/24 10:16:23.845309 registrar.go:65: INFO Registry file set to: /.filebeat
2016/02/24 10:16:23.845671 spooler.go:41: DBG  Spooler will use the default spool_size of 2048
2016/02/24 10:16:23.845922 spooler.go:47: DBG  Spooler will use the default idle_timeout of 5s
2016/02/24 10:16:23.846237 crawler.go:37: INFO Loading Prospectors: 1
2016/02/24 10:16:23.846519 crawler.go:42: DBG  File Configs: []
2016/02/24 10:16:23.846535 prospector.go:213: INFO Set ignore_older duration to 0
2016/02/24 10:16:23.846541 prospector.go:213: INFO Set scan_frequency duration to 10s
2016/02/24 10:16:23.846545 prospector.go:149: INFO buffer_size set to: 16384
2016/02/24 10:16:23.846548 prospector.go:155: INFO document_type set to: log
2016/02/24 10:16:23.846551 prospector.go:162: INFO input_type set to: stdin
2016/02/24 10:16:23.846554 prospector.go:213: INFO Set backoff duration to 1s
2016/02/24 10:16:23.846557 prospector.go:173: INFO backoff_factor set to: 2
2016/02/24 10:16:23.846560 prospector.go:213: INFO Set max_backoff duration to 10s
2016/02/24 10:16:23.846563 prospector.go:183: INFO force_close_file is disabled
2016/02/24 10:16:23.846566 prospector.go:213: INFO Set close_older duration to 1h0m0s
2016/02/24 10:16:23.846569 prospector.go:194: INFO max_bytes set to: 10485760
2016/02/24 10:16:23.846577 crawler.go:51: INFO Loading Prospectors completed. Number of prospectors: 1
2016/02/24 10:16:23.846582 crawler.go:59: INFO All prospectors are initialised and running with 0 states to persist
2016/02/24 10:16:23.846588 publish.go:88: INFO Start sending events to output
2016/02/24 10:16:23.846628 spooler.go:75: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2016/02/24 10:16:23.846633 registrar.go:82: INFO Starting Registrar
2016/02/24 10:16:23.846637 prospector.go:86: INFO Starting prospector of type: stdin
2016/02/24 10:16:23.846639 prospector.go:94: INFO Run prospector
2016/02/24 10:16:23.846649 log.go:86: INFO Harvester started for file: -
2016/02/24 10:16:23.846699 reader.go:87: INFO Reached end of file: /dev/stdin
2016/02/24 10:16:23.846704 log.go:125: INFO Read line error: EOF
2016/02/24 10:16:23.846708 log.go:62: DBG  Stopping harvester for file: -
2016/02/24 10:16:23.846712 log.go:68: DBG  Stopping harvester, closing file: -
2016/02/24 10:16:24.847074 prospector.go:94: INFO Run prospector
2016/02/24 10:16:24.847640 log.go:82: ERR Stop Harvesting. Unexpected file stat rror: stat /dev/stdin: bad file descriptor
2016/02/24 10:16:24.848025 log.go:62: DBG  Stopping harvester for file: -
2016/02/24 10:16:24.848317 log.go:68: DBG  Stopping harvester, closing file: -
2016/02/24 10:16:29.853239 prospector.go:94: INFO Run prospector
2016/02/24 10:16:29.853869 log.go:82: ERR Stop Harvesting. Unexpected file stat rror: stat /dev/stdin: bad file descriptor
2016/02/24 10:16:29.854189 log.go:62: DBG  Stopping harvester for file: -
2016/02/24 10:16:29.854394 log.go:68: DBG  Stopping harvester, closing file: -

The prospector configuration is just:

filebeat:
  prospectors:
    -
      input_type: stdin

The output is Logstash, but I guess that's not relevant; Logstash receives everything fine.

@tsg added the bug and Filebeat labels on Feb 24, 2016
@ruflin
Member

ruflin commented Feb 24, 2016

Which OS are you using?

@fiunchinho
Author

The agent is running inside a Docker container based on busybox. The binary is the linux-386 one.

@ruflin
Member

ruflin commented Feb 29, 2016

What data are you feeding into stdin? Do you have any reproduction steps?
Does "works fine" mean all the data is shipped as expected?

@chrono

chrono commented Mar 22, 2016

This happens when filebeat hits the end of input on stdin (pipe closed, end of file). I was expecting filebeat to exit cleanly once it hits the end of stdin.

My intended use case for filebeat is to run something like s3cat | custom_transformation | filebeat (sketched below), together with the upcoming JSON support (#1143), in an AWS Lambda responding to a file upload event. I was looking for something more lightweight than Logstash to send data to Elasticsearch and found filebeat.

Logstash startup takes ~30s there and only does ~1k events per second, even without grok filters.

@ruflin
Member

ruflin commented Mar 22, 2016

@chrono This is very similar to the request here: https://github.com/elastic/filebeat/issues/134

@fiunchinho If the error above is because stdin was closed, I think the error is somehow correct?

@fiunchinho
Author

Every time a chunk of data is received through stdin, that error appears in the logs. Maybe the log message just needs to be improved?

@ph
Contributor

ph commented Dec 7, 2017

This message isn't an issue anymore if you look at the latest filebeat output log:

2017/12/07 21:36:44.740776 beat.go:455: INFO Home path: [/Users/ph/go/src/github.com/elastic/beats/filebeat] Config path: [/Users/ph/go/src/github.com/elastic/beats/filebeat] Data path: [/Users/ph/go/src/github.com/elastic/beats/filebeat/data] Logs path: [/Users/ph/go/src/github.com/elastic/beats/filebeat/logs]
2017/12/07 21:36:44.740792 metrics.go:23: INFO Metrics logging every 30s
2017/12/07 21:36:44.740843 beat.go:462: INFO Beat UUID: 8869e206-ec96-4d9b-b047-d5c9580915f9
2017/12/07 21:36:44.740858 beat.go:211: INFO Setup Beat: filebeat; Version: 7.0.0-alpha1
2017/12/07 21:36:44.741022 client.go:123: INFO Elasticsearch url: http://localhost:9200
2017/12/07 21:36:44.741213 module.go:76: INFO Beat name: sashimi
2017/12/07 21:36:44.741474 beat.go:284: INFO filebeat start running.
2017/12/07 21:36:44.741529 registrar.go:88: INFO Registry file set to: /Users/ph/go/src/github.com/elastic/beats/filebeat/data/registry
2017/12/07 21:36:44.741563 registrar.go:108: INFO Loading registrar data from /Users/ph/go/src/github.com/elastic/beats/filebeat/data/registry
2017/12/07 21:36:44.741724 registrar.go:119: INFO States Loaded from registrar: 9
2017/12/07 21:36:44.741743 crawler.go:48: INFO Loading Prospectors: 1
2017/12/07 21:36:44.741790 registrar.go:150: INFO Starting Registrar
2017/12/07 21:36:44.741922 prospector.go:87: INFO Starting prospector of type: stdin; ID: 11136643476161899408
2017/12/07 21:36:44.742038 harvester.go:215: INFO Harvester started for file: -
2017/12/07 21:36:44.742040 crawler.go:82: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017/12/07 21:36:44.742066 reload.go:127: INFO Config reloader started
2017/12/07 21:36:44.742202 reload.go:219: INFO Loading of config files completed.
2017/12/07 21:36:44.753423 harvester.go:238: INFO End of file reached: . Closing because close_eof is enabled.
2017/12/07 21:36:44.754485 client.go:651: INFO Connected to Elasticsearch version 6.0.0
2017/12/07 21:36:44.755957 load.go:73: INFO Template already exists and will not be overwritten.

@ph closed this as completed on Dec 7, 2017