
kafka output fails on master #1215

Closed
lcanfield opened this issue Mar 23, 2016 · 12 comments

@lcanfield

Currently (master branch), the kafka output fails with:

2016-03-21T21:04:13Z ERR Invalid kafka configuration: kafka: invalid configuration (Producer.Timeout must be > 0)
2016-03-21T21:04:13Z ERR failed to initialize kafka plugin as output: kafka: invalid configuration (Producer.Timeout must be > 0)
2016-03-21T21:04:13Z CRIT Start error: error Initialising publisher: kafka: invalid configuration (Producer.Timeout must be > 0)

This began with merges on 3/1 for the 1.1.2 release and seems to be related to 5ef56c0

I am testing with filebeat on CentOS 7, alternately compiling from source or using the nightly RPM.
My configuration (whitespace and comments stripped):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/messages
      input_type: log
      document_type: log
output:
  kafka:
    hosts: ["IPADDRESS:PORT"]
    topic: "MYTOPIC"
shipper:
  tags: ["filebeat","MYTAG"]
logging:
  to_files: true
  to_syslog: false
  files:
    path: /tmp/mybeat
    name: filebeat.log
    keepfiles: 7
  level: info

Here is an example output:

# /usr/bin/filebeat -c /etc/filebeat/filebeat.yml -e -d '*'
2016/03/23 17:58:16.252735 beat.go:223: DBG  Initializing output plugins
2016/03/23 17:58:16.252758 geolite.go:24: INFO GeoIP disabled: No paths were set under shipper.geoip.paths
2016/03/23 17:58:16.252770 kafka.go:54: DBG  initialize kafka output
2016/03/23 17:58:16.252800 kafka.go:183: ERR Invalid kafka configuration: kafka: invalid configuration (Producer.Timeout must be > 0)
2016/03/23 17:58:16.252809 outputs.go:83: ERR failed to initialize kafka plugin as output: kafka: invalid configuration (Producer.Timeout must be > 0)
Start error: fails to load the config: error initializing publisher: kafka: invalid configuration (Producer.Timeout must be > 0)


2016/03/23 17:58:16.252822 beat.go:128: CRIT Start error: fails to load the config: error initializing publisher: kafka: invalid configuration (Producer.Timeout must be > 0)

2016/03/23 17:58:16.252826 beat.go:316: INFO Start exiting beat
2016/03/23 17:58:16.252839 beat.go:291: INFO Stopping Beat
2016/03/23 17:58:16.252845 beat.go:299: INFO Cleaning up filebeat before shutting down.
2016/03/23 17:58:16.252849 beat.go:141: INFO Exit beat completed

If I add this configuration:

    timeout: 30s

then I can force a different response:

2016/03/23 18:02:30.635787 kafka.go:54: DBG  initialize kafka output
2016/03/23 18:02:30.635802 outputs.go:83: ERR failed to initialize kafka plugin as output: TODO - implement me
Start error: fails to load the config: error initializing publisher: TODO - implement me


2016/03/23 18:02:30.635814 beat.go:128: CRIT Start error: fails to load the config: error initializing publisher: TODO - implement me

2016/03/23 18:02:30.635818 beat.go:316: INFO Start exiting beat
@urso

urso commented Mar 23, 2016

Thanks for reporting. Will have a look.

@urso

urso commented Mar 23, 2016

Can you try adding broker_timeout: 10s, just for testing? At the moment, timeout must be an int and is interpreted in seconds, so use timeout: 30. Once #1212 is merged, parsing of timeout will be improved (along with the error messages).

@lcanfield
Author

broker_timeout: 10s fixes the issue.
timeout: 30 yields the same initial problem: invalid configuration (Producer.Timeout must be > 0).
Thanks @urso !

@anefassa

anefassa commented Mar 24, 2016

I built from master and got my Kafka output set up, and it seems to be working fine. I can see the messages in the Kafka topic. Here is my output config in filebeat.yml:

  ### Kafka output
  kafka:
    hosts: ["host1:9092","host2:9092","host3:9092","host4:9092","host5:9092"]
    topic: raw_metrics
    client_id: beats_sender
    worker: 3
    max_retries: -1
    bulk_max_size: 2048
    timeout: 30
    broker_timeout: 10s
    keep_alive: 0
    compression: none
    max_message_bytes: 1000000
    required_acks: 1
    flush_interval: 10

@andrewkroh
Member

Fixed by #1226. If the issue persists, please re-open.

@zhanght1

zhanght1 commented Apr 1, 2016

I built filebeat-1.2.0-x86_64 for Linux with the following filebeat.yml configuration:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/test.log
      input_type: log
output:
  kafka:
    hosts: ["localhost:9092"]
    topic: "test"
    use_type: true
    client_id: "beats"
    worker: 1
    max_retries: 0
  console:
    pretty: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

But I got this error: Error Initialising publisher: No outputs are defined. Please define one under the output section.

@tsg
Contributor

tsg commented Apr 1, 2016

@zhanght1 Kafka output is not in 1.2.0. We'll release a 5.0.0-alpha1, most likely next week, which will contain it.

@zhanght1

zhanght1 commented Apr 1, 2016

5.0.0-alpha1? filebeat 5.0.0-alpha1?

@tsg
Contributor

tsg commented Apr 1, 2016

Yes, we're synchronizing versions across the Elastic Stack; the next major version for Elasticsearch, Logstash, Kibana, and Beats will be 5.0.

@zhanght1

zhanght1 commented Apr 6, 2016

We got the 5.0.0-alpha1 version, but we still hit one problem.

[root@peklnelh01 filebeat]# ./filebeat -e -c filebeat.yml
{
  "@timestamp": "2016-04-06T09:26:02.537Z",
  "beat": {
    "hostname": "peklnelh01",
    "name": "peklnelh01"
  },
  "input_type": "log",
  "message": "2222222222222",
  "offset": 0,
  "source": "/var/log/test.log",
  "type": "log"
}
2016/04/06 09:28:03.287251 client.go:60: ERR Kafka connect fails with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
2016/04/06 09:30:04.141581 client.go:60: ERR Kafka connect fails with: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)

Here is the filebeat.yml.

filebeat:
  prospectors:
    -
      paths:
        - /var/log/test.log
      input_type: log
output:
  kafka:
    hosts: ["10.99.205.127:9092"]
    topic: "elk"
    use_type: true
    client_id: "beats"
    worker: 1
    max_retries: 0
  console:
    pretty: true
shipper:
logging:
  files:
    rotateeverybytes: 10485760 # = 10MB

BR

@tsg
Contributor

tsg commented Apr 6, 2016

@zhanght1 the error seems to indicate that the Kafka cluster is not reachable from Filebeat. Can you check connectivity with telnet?
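If telnet isn't available on the host, a quick TCP reachability check can be scripted. This is a generic sketch, not part of Filebeat; is_reachable is just a helper name, and the broker address comes from the filebeat.yml above:

```python
import socket

def is_reachable(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: check the Kafka broker from the config above.
# print(is_reachable("10.99.205.127", 9092))
```

If this returns False from the Filebeat host, the problem is network or broker configuration (e.g. Kafka's advertised listeners), not Filebeat itself.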

@tsg
Contributor

tsg commented Apr 6, 2016

Btw, it would be better to open a topic on https://discuss.elastic.co/c/beats rather than add this to a closed issue on GitHub.
