Commit Graph

13 Commits

Author SHA1 Message Date
Cameron Sparr a30b1a394f Kafka output: set max_retry=3 & required_acks=-1 as defaults
closes 
2016-04-29 18:51:45 -06:00
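For context, a minimal sketch of what these defaults correspond to in a telegraf.conf Kafka output section; the [[outputs.kafka]] table name and the brokers/topic keys are assumed for illustration, not quoted from this commit:

	[[outputs.kafka]]
	  ## assumed connection settings, for illustration only
	  brokers = ["localhost:9092"]
	  topic = "telegraf"
	  ## default from this commit: -1 waits for all in-sync replicas
	  required_acks = -1
	  ## default from this commit: retry a failed send up to 3 times
	  max_retry = 3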
Cameron Sparr e436b2d720 Cleanup & standardize config file
changes:

- -sample-config will now comment out all but a few default plugins.
- config file parse errors will output the path to the bad conf file.
- clean up 80-char line-length and some other style issues.
- default package conf file will now have all plugins, but commented
  out.

closes 
closes 
2016-04-01 10:59:53 -06:00
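As a hedged illustration of the "all plugins, but commented out" layout described above (the cpu input and its options are just an example, not named in the commit), an entry in the packaged conf file might look like:

	# Read metrics about cpu usage
	# [[inputs.cpu]]
	#   ## Whether to report per-cpu stats or not
	#   percpu = true
	#   ## Whether to report total system cpu stats or not
	#   totalcpu = true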
Florent Ramière 8c3371c4ac Use numerical codes instead of symbolic ones 2016-04-01 10:08:55 -06:00
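A sketch of what the switch to numerical codes might look like in the config, assuming the usual Kafka mappings (compression: none=0, gzip=1, snappy=2; acks: none=0, leader=1, all in-sync replicas=-1):

	## numeric form after this change (previously symbolic strings)
	# compression_codec = 0
	# required_acks = -1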
Florent Ramière 6ff0fc6d83 Add compression/acks/retry conf to Kafka output plugin
The following configuration is now possible

	## CompressionCodec represents the various compression codecs recognized
	## by Kafka in messages.
	##  "none" : No compression
	##  "gzip" : Gzip compression
	##  "snappy" : Snappy compression
	# compression_codec = "none"

	## RequiredAcks is used in Produce Requests to tell the broker how many
	## replica acknowledgements it must see before responding.
	##  "none" : the producer never waits for an acknowledgement from the
	##  broker. This option provides the lowest latency but the weakest
	##  durability guarantees (some data will be lost when a server fails).
	##  "leader" : the producer gets an acknowledgement after the leader
	##  replica has received the data. This option provides better durability,
	##  as the client waits until the server acknowledges the request as
	##  successful (only messages that were written to the now-dead leader but
	##  not yet replicated will be lost).
	##  "leader_and_replicas" : the producer gets an acknowledgement after
	##  all in-sync replicas have received the data. This option provides the
	##  best durability: we guarantee that no messages will be lost as long as
	##  at least one in-sync replica remains.
	# required_acks = "leader_and_replicas"

	## The total number of times to retry sending a message
	# max_retry = "3"
2016-04-01 10:08:55 -06:00
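The snippet above shows the options commented out with their defaults; a minimal uncommented example, assuming the [[outputs.kafka]] table and connection keys from the stock plugin config, could look like:

	[[outputs.kafka]]
	  brokers = ["localhost:9092"]
	  topic = "telegraf"
	  compression_codec = "snappy"
	  required_acks = "leader_and_replicas"
	  max_retry = "3"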
Cameron Sparr 8d2e5f0bda Seems to be a toml parse bug around triple pounds 2016-02-18 14:36:03 -07:00
Cameron Sparr 7def6663bd Root directory cleanup 2016-02-18 13:37:36 -07:00
Cameron Sparr a9c135488e Add Serializer plugins, and 'file' output plugin 2016-02-12 14:13:49 -07:00
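A hedged sketch of how the new 'file' output plus a serializer choice might be configured; the option names (files, data_format) and format values follow the plugin as commonly documented and are assumptions here, not quoted from the commit:

	[[outputs.file]]
	  ## write metrics to stdout and to a file
	  files = ["stdout", "/tmp/metrics.out"]
	  ## serializer selecting the output data format, e.g. "influx" or "graphite"
	  data_format = "influx"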
Cameron Sparr bd9c5b6995 mqtt output: cleanup, implement TLS
Also normalize TLS config and comment strings across all output plugins.
2016-02-04 10:44:37 -07:00
Cameron Sparr c549ab907a Throughout telegraf, use telegraf.Metric rather than client.Point
closes 
2016-01-27 23:47:32 -07:00
Cameron Sparr 9c0d14bb60 Create public models for telegraf metrics, accumulator, plugins
This will basically make the root directory a place for storing the
major telegraf interfaces, which will make telegraf's godoc look quite
a bit nicer, and make it easier for contributors to look up the few
data types that they actually care about.

closes 
2016-01-27 15:42:50 -07:00
Jack Zampolin 0cdf1b07e9 Fix issue 524 2016-01-20 10:57:35 -08:00
Hannu Valtonen c313af1b24 kafka: Add support for using TLS authentication for the kafka output
With the advent of Kafka 0.9.0+ it is possible to set up TLS client
certificate-based authentication to limit access to Kafka.

Four new configuration variables are specified for setting up the
authentication. If they're not set, the behavior stays the same as
before the change.

closes 
2016-01-18 11:17:01 -07:00
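The commit mentions four new configuration variables without naming them here; as a loosely hedged sketch, assuming option names along the lines of Telegraf's SSL settings (ssl_ca, ssl_cert, ssl_key, insecure_skip_verify, all hypothetical for this particular commit), TLS client authentication might be configured as:

	[[outputs.kafka]]
	  brokers = ["kafka.example.com:9093"]
	  topic = "telegraf"
	  ## assumed TLS option names; not listed in this log
	  ssl_ca = "/etc/telegraf/ca.pem"
	  ssl_cert = "/etc/telegraf/client-cert.pem"
	  ssl_key = "/etc/telegraf/client-key.pem"
	  ## optional: skip chain & host verification (assumed option)
	  # insecure_skip_verify = false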
Cameron Sparr 9c5db1057d renaming plugins -> inputs 2016-01-07 15:04:30 -07:00