telegraf/plugins/inputs/kafka_consumer
Daniel Nelson 886d8cc840
Drop message batches in kafka output if too large (#4565)
2018-08-17 13:51:21 -07:00

Kafka Consumer Input Plugin

The Kafka consumer plugin polls the specified Kafka topics and parses each message into Telegraf metrics; by default it expects messages in InfluxDB line protocol (configurable via the data_format option below). Because a consumer group is used to talk to the Kafka cluster, multiple instances of Telegraf can read from the same topic in parallel.
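With the default data format, each Kafka message is expected to be an InfluxDB line protocol point: a measurement name with optional tags, a field set, and an optional nanosecond timestamp. For example:

```
cpu,host=server01,region=us-west usage_idle=90.5,usage_user=4.2 1556813561098000000
```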

For older Kafka versions (< 0.8), please use the kafka_consumer_legacy input plugin instead, which uses the old Zookeeper connection method.

Configuration

# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer]]
  ## Topic(s) to consume
  topics = ["telegraf"]
  ## Kafka brokers to connect to
  brokers = ["localhost:9092"]
  ## the name of the consumer group
  consumer_group = "telegraf_metrics_consumers"
  ## Offset (must be either "oldest" or "newest")
  offset = "oldest"

  ## Optional client id
  # client_id = "Telegraf"

  ## Optional TLS Config
  # tls_ca = "/etc/telegraf/ca.pem"
  # tls_cert = "/etc/telegraf/cert.pem"
  # tls_key = "/etc/telegraf/key.pem"
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

  ## Optional SASL Config
  # sasl_username = "kafka"
  # sasl_password = "secret"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

  ## Maximum length of a message to consume, in bytes (default 0/unlimited);
  ## larger messages are dropped
  max_message_len = 1000000
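To illustrate how max_message_len and the default data format interact, the sketch below applies a length check and then parses a single line protocol message. This is a hypothetical, simplified parser for illustration only; the real plugin drops over-length messages and delegates parsing to Telegraf's configured data_format parser.

```go
package main

import (
	"fmt"
	"strings"
)

// maxMessageLen mirrors the plugin's max_message_len option: messages longer
// than this are dropped before parsing. 0 would mean unlimited.
// (Hypothetical constant for this sketch.)
const maxMessageLen = 1000000

// parseLineProtocol extracts the measurement name and raw field values from
// one InfluxDB line protocol message. It is intentionally minimal: it does
// not handle escaping, quoting, or type conversion.
func parseLineProtocol(msg string) (measurement string, fields map[string]string, err error) {
	if maxMessageLen > 0 && len(msg) > maxMessageLen {
		return "", nil, fmt.Errorf("message dropped: %d bytes exceeds max_message_len %d",
			len(msg), maxMessageLen)
	}
	// Line protocol: <measurement>[,<tags>] <fields> [<timestamp>]
	parts := strings.SplitN(msg, " ", 3)
	if len(parts) < 2 {
		return "", nil, fmt.Errorf("invalid line protocol: %q", msg)
	}
	// The measurement is everything before the first comma of the key section.
	measurement = strings.SplitN(parts[0], ",", 2)[0]
	fields = make(map[string]string)
	for _, kv := range strings.Split(parts[1], ",") {
		pair := strings.SplitN(kv, "=", 2)
		if len(pair) == 2 {
			fields[pair[0]] = pair[1]
		}
	}
	return measurement, fields, nil
}

func main() {
	m, f, err := parseLineProtocol("cpu,host=server01 usage_idle=90.5 1556813561098000000")
	if err != nil {
		panic(err)
	}
	fmt.Println(m, f["usage_idle"]) // prints "cpu 90.5"
}
```

In the actual plugin, a dropped message is logged and skipped rather than returned as an error, so one oversized message does not stall the consumer.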

Testing

Running the integration tests requires a running Zookeeper and Kafka instance. See the Makefile for the command that starts the Kafka container.