telegraf/plugins/inputs/kafka_consumer
Cameron Sparr c7834209d2 Major Logging Overhaul
in this commit:

- centralize logging output handler.
- set global Info/Debug/Error log levels based on config file or flags.
- remove per-plugin debug arg handling.
- add an I!, D!, or E! prefix to every log message.
- add configuration option to specify where to send logs.

closes #1786
2016-10-03 17:13:03 +01:00
README.md Remove docker-machine/boot2docker dependencies & references 2016-06-22 17:25:01 +01:00
kafka_consumer.go Major Logging Overhaul 2016-10-03 17:13:03 +01:00
kafka_consumer_integration_test.go Flush based on buffer size rather than time 2016-02-16 22:25:22 -07:00
kafka_consumer_test.go mongodb input: fix version 2.2 panic 2016-09-06 11:58:06 +01:00

README.md

Kafka Consumer Input Plugin

The Kafka consumer plugin polls a specified Kafka topic and adds messages to InfluxDB. Messages are expected in the InfluxDB line protocol format by default (other formats can be selected with the data_format option below). A consumer group is used to talk to the Kafka cluster, so multiple instances of Telegraf can read from the same topic in parallel.
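
For example, a single message on the topic might look like the following line protocol point (an illustrative sample, not taken from this repository):

cpu,host=server01,region=us-west usage_idle=87.3,usage_user=6.1 1475510400000000000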

Configuration

# Read metrics from Kafka topic(s)
[[inputs.kafka_consumer]]
  ## topic(s) to consume
  topics = ["telegraf"]
  ## an array of Zookeeper connection strings
  zookeeper_peers = ["localhost:2181"]
  ## the name of the consumer group
  consumer_group = "telegraf_metrics_consumers"
  ## Maximum number of metrics to buffer between collection intervals
  metric_buffer = 100000
  ## Offset (must be either "oldest" or "newest")
  offset = "oldest"

  ## Data format to consume.
  ## Each data format has its own unique set of configuration options, read
  ## more about them here:
  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
  data_format = "influx"

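Under the hood, the plugin joins a Zookeeper-based consumer group and reads messages from a channel. The sketch below shows roughly how such a consumer group can be driven from Go using the github.com/wvanbergen/kafka/consumergroup and sarama packages; the broker, topic, and group names mirror the sample configuration above and are only illustrative, the real implementation lives in kafka_consumer.go.

package main

import (
    "log"

    "github.com/Shopify/sarama"
    "github.com/wvanbergen/kafka/consumergroup"
)

func main() {
    // Mirror the sample configuration: start from the oldest available offset.
    conf := consumergroup.NewConfig()
    conf.Offsets.Initial = sarama.OffsetOldest

    // Join the consumer group via Zookeeper (consumer_group, topics, zookeeper_peers).
    cg, err := consumergroup.JoinConsumerGroup(
        "telegraf_metrics_consumers",
        []string{"telegraf"},
        []string{"localhost:2181"},
        conf,
    )
    if err != nil {
        log.Fatalf("E! joining consumer group: %v", err)
    }
    defer cg.Close()

    for {
        select {
        case msg := <-cg.Messages():
            // The plugin would hand msg.Value to the configured data_format
            // parser (line protocol by default) and emit the resulting metrics.
            log.Printf("I! offset %d: %s", msg.Offset, string(msg.Value))
            cg.CommitUpto(msg)
        case err := <-cg.Errors():
            log.Printf("E! consumer error: %v", err)
        }
    }
}
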
Testing

Running the integration tests requires a running Zookeeper and Kafka instance. See the Makefile for the Kafka container command.
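
Integration tests in Telegraf are typically skipped in short mode. As a rough sketch (not the actual test in this package), producing a line protocol message to a local broker started via the Makefile might look like this, assuming the broker listens on localhost:9092:

package kafka_consumer

import (
    "testing"

    "github.com/Shopify/sarama"
)

// A sketch of an integration-style test, not the actual test in this package.
// It assumes a local broker is listening on localhost:9092 and is skipped
// when -short is passed.
func TestProduceLineProtocolIntegration(t *testing.T) {
    if testing.Short() {
        t.Skip("Skipping integration test in short mode")
    }

    conf := sarama.NewConfig()
    conf.Producer.Return.Successes = true // required by the sync producer

    producer, err := sarama.NewSyncProducer([]string{"localhost:9092"}, conf)
    if err != nil {
        t.Fatal(err)
    }
    defer producer.Close()

    // Publish a line protocol point to the topic the plugin consumes from.
    _, _, err = producer.SendMessage(&sarama.ProducerMessage{
        Topic: "telegraf",
        Value: sarama.StringEncoder("cpu,host=server01 usage_idle=87.3"),
    })
    if err != nil {
        t.Fatal(err)
    }
}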