Compare commits


27 Commits

Author SHA1 Message Date
Cameron Sparr
8b54c73ae4 0.3.0 Removing internal parallelism: twemproxy and rabbitmq 2015-12-19 20:26:18 -07:00
Cameron Sparr
c9ef073fba 0.3.0 Removing internal parallelism: procstat 2015-12-19 16:06:21 -07:00
Cameron Sparr
15f66d7d1b 0.3.0 Removing internal parallelism: postgresql 2015-12-19 15:58:57 -07:00
Cameron Sparr
b0f79f43ec 0.3.0 Removing internal parallelism: httpjson and exec 2015-12-19 15:37:16 -07:00
Cameron Sparr
c584129758 0.3.0 outputs: riemann 2015-12-19 14:55:44 -07:00
Cameron Sparr
d1930c90b5 CHANGELOG update 2015-12-19 14:46:33 -07:00
Cameron Sparr
1e76e36df2 0.3.0 outputs: opentsdb 2015-12-19 14:30:37 -07:00
Cameron Sparr
a73b5257dc 0.3.0 output: librato 2015-12-19 14:19:43 -07:00
Cameron Sparr
c16be04ca7 0.3.0 output: datadog and amon 2015-12-19 14:08:31 -07:00
Cameron Sparr
5513275f2c 0.3.0: mongodb and jolokia 2015-12-19 13:31:22 -07:00
Cameron Sparr
3a7b1688a3 0.3.0: postgresql and phpfpm 2015-12-18 17:09:01 -07:00
Cameron Sparr
35d5c7bae3 0.3.0 HAProxy rebase 2015-12-18 16:39:45 -07:00
Cameron Sparr
60b6693ae3 0.3.0: rethinkdb 2015-12-18 16:39:45 -07:00
Cameron Sparr
c1e1f2ace4 0.3.0: zookeeper and zfs 2015-12-18 16:39:45 -07:00
Cameron Sparr
6698d195d8 backwards compatability for io->diskio change 2015-12-18 16:39:45 -07:00
Cameron Sparr
23b21ca86a 0.3.0: trig and twemproxy 2015-12-18 16:39:45 -07:00
Cameron Sparr
56e14e4731 0.3.0 redis & rabbitmq 2015-12-18 16:39:45 -07:00
Cameron Sparr
7deb339b76 0.3.0: prometheus & puppetagent 2015-12-18 16:39:45 -07:00
Cameron Sparr
0e55c371b7 0.3.0: procstat 2015-12-18 16:39:45 -07:00
Cameron Sparr
f284c8c154 0.3.0: ping, mysql, nginx 2015-12-18 16:39:45 -07:00
Cameron Sparr
e3b314cacb 0.3.0: mailchimp & memcached 2015-12-18 16:39:45 -07:00
Cameron Sparr
9fce094b36 0.3.0: leofs & lustre2 2015-12-18 16:39:45 -07:00
Cameron Sparr
319c363c8e 0.3.0 httpjson 2015-12-18 16:39:45 -07:00
Cameron Sparr
40d84accee 0.3.0: HAProxy 2015-12-18 16:39:45 -07:00
Cameron Sparr
3fc43df84e Breakout JSON flattening into internal package, exec & elasticsearch aggregation 2015-12-18 16:39:45 -07:00
Cameron Sparr
59f804d77a Updating aerospike & apache plugins for 0.3.0 2015-12-18 16:39:45 -07:00
Cameron Sparr
96d5f0d0de Updating system plugins for 0.3.0 2015-12-18 16:39:45 -07:00
307 changed files with 8562 additions and 32710 deletions

.gitignore (vendored)

@@ -1,6 +1,4 @@
tivan
.vagrant
/telegraf
telegraf
.idea
*~
*#

CHANGELOG.md

@@ -1,209 +1,51 @@
## v0.10.5 [unreleased]
## v0.3.0 [unreleased]
### Release Notes
### Features
### Bugfixes
## v0.10.4 [2016-02-24]
### Release Notes
- The pass/drop parameters have been renamed to fielddrop/fieldpass parameters,
to more accurately indicate their purpose (a sketch follows these notes).
- There are also now namedrop/namepass parameters for passing/dropping based
on the metric _name_.
- Experimental windows builds now available.
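For illustration, a minimal sketch of the renamed filter parameters; the `cpu`
input and the field names used here are examples only, not part of the release:

```toml
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  # formerly "pass": glob-match against field names
  fieldpass = ["usage_user", "usage_system"]
  # the new namedrop/namepass parameters match against the metric name
  namedrop = ["cpu_time*"]
```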
### Features
- [#727](https://github.com/influxdata/telegraf/pull/727): riak input, thanks @jcoene!
- [#694](https://github.com/influxdata/telegraf/pull/694): DNS Query input, thanks @mjasion!
- [#724](https://github.com/influxdata/telegraf/pull/724): username matching for procstat input, thanks @zorel!
- [#736](https://github.com/influxdata/telegraf/pull/736): Ignore dummy filesystems from disk plugin. Thanks @PierreF!
- [#737](https://github.com/influxdata/telegraf/pull/737): Support multiple fields for statsd input. Thanks @mattheath!
### Bugfixes
- [#701](https://github.com/influxdata/telegraf/pull/701): output write count shouldn't print in quiet mode.
- [#746](https://github.com/influxdata/telegraf/pull/746): httpjson plugin: Fix HTTP GET parameters.
## v0.10.3 [2016-02-18]
### Release Notes
- Users of the `exec` and `kafka_consumer` (and the new `nats_consumer`
and `mqtt_consumer` plugins) can now specify the incoming data
format that they would like to parse. Currently supports: "json", "influx", and
"graphite"
- Users of message broker and file output plugins can now choose what data format
they would like to output. Currently supports: "influx" and "graphite"
- More info on parsing _incoming_ data formats can be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md)
- More info on serializing _outgoing_ data formats can be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md)
- Telegraf now has an option `flush_buffer_when_full` that will flush the
metric buffer whenever it fills up for each output, rather than dropping
points and only flushing on a set time interval. This will default to `true`
and is in the `[agent]` config section (see the sketch below).
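As a rough sketch of how these options fit together in a config file (only the
relevant options are shown; everything else is omitted):

```toml
[agent]
  # flush an output's metric buffer as soon as it fills up,
  # instead of dropping points and waiting for the next interval
  flush_buffer_when_full = true

[[inputs.exec]]
  # incoming data format to parse: "json", "influx" or "graphite"
  data_format = "json"

[[outputs.file]]
  files = ["stdout"]
  # outgoing data format to serialize: "influx" or "graphite"
  data_format = "graphite"
```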
### Features
- [#652](https://github.com/influxdata/telegraf/pull/652): CouchDB Input Plugin. Thanks @codehate!
- [#655](https://github.com/influxdata/telegraf/pull/655): Support parsing arbitrary data formats. Currently limited to kafka_consumer and exec inputs.
- [#671](https://github.com/influxdata/telegraf/pull/671): Dovecot input plugin. Thanks @mikif70!
- [#680](https://github.com/influxdata/telegraf/pull/680): NATS consumer input plugin. Thanks @netixen!
- [#676](https://github.com/influxdata/telegraf/pull/676): MQTT consumer input plugin.
- [#683](https://github.com/influxdata/telegraf/pull/683): PostGRES input plugin: add pg_stat_bgwriter. Thanks @menardorama!
- [#679](https://github.com/influxdata/telegraf/pull/679): File/stdout output plugin.
- [#679](https://github.com/influxdata/telegraf/pull/679): Support for arbitrary output data formats.
- [#695](https://github.com/influxdata/telegraf/pull/695): raindrops input plugin. Thanks @burdandrei!
- [#650](https://github.com/influxdata/telegraf/pull/650): net_response input plugin. Thanks @titilambert!
- [#699](https://github.com/influxdata/telegraf/pull/699): Flush based on buffer size rather than time.
- [#682](https://github.com/influxdata/telegraf/pull/682): Mesos input plugin. Thanks @tripledes!
### Bugfixes
- [#443](https://github.com/influxdata/telegraf/issues/443): Fix Ping command timeout parameter on Linux.
- [#662](https://github.com/influxdata/telegraf/pull/667): Change `[tags]` to `[global_tags]` to fix multiple-plugin tags bug.
- [#642](https://github.com/influxdata/telegraf/issues/642): Riemann output plugin issues.
- [#394](https://github.com/influxdata/telegraf/issues/394): Support HTTP POST. Thanks @gabelev!
- [#715](https://github.com/influxdata/telegraf/pull/715): Fix influxdb precision config panic. Thanks @netixen!
## v0.10.2 [2016-02-04]
### Release Notes
- Statsd timing measurements are now aggregated into a single measurement with
fields.
- Graphite output now inserts tags into the bucket in alphabetical order.
- Normalized TLS/SSL support for output plugins: MQTT, AMQP, Kafka
- `verify_ssl` config option was removed from Kafka because it was actually
doing the opposite of what it claimed to do (yikes). It's been replaced by
`insecure_skip_verify` (example below).
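A hedged sketch of the normalized TLS options on the Kafka output; the broker
address and certificate paths are placeholders:

```toml
[[outputs.kafka]]
  brokers = ["localhost:9092"]
  topic = "telegraf"
  ssl_ca = "/etc/telegraf/ca.pem"
  ssl_cert = "/etc/telegraf/cert.pem"
  ssl_key = "/etc/telegraf/key.pem"
  # replaces the old, misleadingly named verify_ssl option
  insecure_skip_verify = false
```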
### Features
- [#575](https://github.com/influxdata/telegraf/pull/575): Support for collecting Windows Performance Counters. Thanks @TheFlyingCorpse!
- [#564](https://github.com/influxdata/telegraf/issues/564): features for plugin writing simplification. Internal metric data type.
- [#603](https://github.com/influxdata/telegraf/pull/603): Aggregate statsd timing measurements into fields. Thanks @marcinbunsch!
- [#601](https://github.com/influxdata/telegraf/issues/601): Warn when overwriting cached metrics.
- [#614](https://github.com/influxdata/telegraf/pull/614): PowerDNS input plugin. Thanks @Kasen!
- [#617](https://github.com/influxdata/telegraf/pull/617): exec plugin: parse influx line protocol in addition to JSON.
- [#628](https://github.com/influxdata/telegraf/pull/628): Windows perf counters: pre-vista support
### Bugfixes
- [#595](https://github.com/influxdata/telegraf/issues/595): graphite output should include tags to separate duplicate measurements.
- [#599](https://github.com/influxdata/telegraf/issues/599): datadog plugin tags not working.
- [#600](https://github.com/influxdata/telegraf/issues/600): datadog measurement/field name parsing is wrong.
- [#602](https://github.com/influxdata/telegraf/issues/602): Fix statsd field name templating.
- [#612](https://github.com/influxdata/telegraf/pull/612): Docker input panic fix if stats received are nil.
- [#634](https://github.com/influxdata/telegraf/pull/634): Properly set host headers in httpjson. Thanks @reginaldosousa!
## v0.10.1 [2016-01-27]
### Release Notes
- Telegraf now keeps a fixed-length buffer of metrics per-output. This buffer
defaults to 10,000 metrics, and is adjustable. The buffer is cleared when a
successful write to that output occurs (a config sketch follows these notes).
- The docker plugin has been significantly overhauled to add more metrics
and allow for docker-machine (incl OSX) support.
[See the readme](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/docker/README.md)
for the latest measurements, fields, and tags. There is also now support for
specifying a docker endpoint to get metrics from.
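A minimal sketch of adjusting the per-output buffer; the option name
`metric_buffer_limit` is assumed from later Telegraf documentation:

```toml
[agent]
  # maximum number of metrics buffered per output (default 10,000)
  metric_buffer_limit = 10000
```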
### Features
- [#509](https://github.com/influxdata/telegraf/pull/509): Flatten JSON arrays with indices. Thanks @psilva261!
- [#512](https://github.com/influxdata/telegraf/pull/512): Python 3 build script, add lsof dep to package. Thanks @Ormod!
- [#475](https://github.com/influxdata/telegraf/pull/475): Add response time to httpjson plugin. Thanks @titilambert!
- [#519](https://github.com/influxdata/telegraf/pull/519): Added a sensors input based on lm-sensors. Thanks @md14454!
- [#467](https://github.com/influxdata/telegraf/issues/467): Add option to disable statsd measurement name conversion.
- [#534](https://github.com/influxdata/telegraf/pull/534): NSQ input plugin. Thanks @allingeek!
- [#494](https://github.com/influxdata/telegraf/pull/494): Graphite output plugin. Thanks @titilambert!
- AMQP SSL support. Thanks @ekini!
- [#539](https://github.com/influxdata/telegraf/pull/539): Reload config on SIGHUP. Thanks @titilambert!
- [#522](https://github.com/influxdata/telegraf/pull/522): Phusion passenger input plugin. Thanks @kureikain!
- [#541](https://github.com/influxdata/telegraf/pull/541): Kafka output TLS cert support. Thanks @Ormod!
- [#551](https://github.com/influxdata/telegraf/pull/551): Statsd UDP read packet size now defaults to 1500 bytes, and is configurable.
- [#552](https://github.com/influxdata/telegraf/pull/552): Support for collection interval jittering.
- [#484](https://github.com/influxdata/telegraf/issues/484): Include usage percent with procstat metrics.
- [#553](https://github.com/influxdata/telegraf/pull/553): Amazon CloudWatch output. thanks @skwong2!
- [#503](https://github.com/influxdata/telegraf/pull/503): Support docker endpoint configuration.
- [#563](https://github.com/influxdata/telegraf/pull/563): Docker plugin overhaul.
- [#285](https://github.com/influxdata/telegraf/issues/285): Fixed-size buffer of points.
- [#546](https://github.com/influxdata/telegraf/pull/546): SNMP Input plugin. Thanks @titilambert!
- [#589](https://github.com/influxdata/telegraf/pull/589): Microsoft SQL Server input plugin. Thanks @zensqlmonitor!
- [#573](https://github.com/influxdata/telegraf/pull/573): Github webhooks consumer input. Thanks @jackzampolin!
- [#471](https://github.com/influxdata/telegraf/pull/471): httpjson request headers. Thanks @asosso!
### Bugfixes
- [#506](https://github.com/influxdata/telegraf/pull/506): Ping input doesn't return response time metric when timeout. Thanks @titilambert!
- [#508](https://github.com/influxdata/telegraf/pull/508): Fix prometheus cardinality issue with the `net` plugin
- [#499](https://github.com/influxdata/telegraf/issues/499) & [#502](https://github.com/influxdata/telegraf/issues/502): php fpm unix socket and other fixes, thanks @kureikain!
- [#543](https://github.com/influxdata/telegraf/issues/543): Statsd Packet size sometimes truncated.
- [#440](https://github.com/influxdata/telegraf/issues/440): Don't query filtered devices for disk stats.
- [#463](https://github.com/influxdata/telegraf/issues/463): Docker plugin not working on AWS Linux
- [#568](https://github.com/influxdata/telegraf/issues/568): Multiple output race condition.
- [#585](https://github.com/influxdata/telegraf/pull/585): Log stack trace and continue on Telegraf panic. Thanks @wutaizeng!
## v0.10.0 [2016-01-12]
### Release Notes
- Linux packages have been taken out of `opt`, the binary is now in `/usr/bin`
and configuration files are in `/etc/telegraf`
- **breaking change** `plugins` have been renamed to `inputs`. This was done because
`plugins` is too generic, as there are now also "output plugins", and will likely
be "aggregator plugins" and "filter plugins" in the future. Additionally,
`inputs/` and `outputs/` directories have been placed in the root-level `plugins/`
directory (see the before/after sketch at the end of these notes).
- **breaking change** the `io` plugin has been renamed `diskio`
- **breaking change** plugin measurements aggregated into a single measurement.
- **breaking change** Plugin measurements aggregated into a single measurement.
- **breaking change** `jolokia` plugin: must use global tag/drop/pass parameters
for configuration.
- **breaking change** `twemproxy` plugin: `prefix` option removed.
- **breaking change** `procstat` cpu measurements are now prepended with `cpu_time_`
instead of only `cpu_`
- **breaking change** some command-line flags have been renamed to separate words.
`-configdirectory` -> `-config-directory`, `-filter` -> `-input-filter`,
`-outputfilter` -> `-output-filter`
- `twemproxy` plugin: `prefix` option removed.
- `procstat` cpu measurements are now prepended with `cpu_time_` instead of
only `cpu_`
- The prometheus plugin schema has not been changed (measurements have not been
aggregated).
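To illustrate the renames, a before/after sketch of one plugin section (options
omitted):

```toml
# 0.2.x style
[[plugins.io]]
  # ...

# 0.10.0 style: "plugins" are now "inputs", and "io" is now "diskio"
[[inputs.diskio]]
  # ...
```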
### Packaging change note:
RHEL/CentOS users upgrading from 0.2.x to 0.10.0 will probably have their
configurations overwritten by the upgrade. There is a backup stored at
/etc/telegraf/telegraf.conf.$(date +%s).backup.
### Features
- Plugin measurements aggregated into a single measurement.
- Added ability to specify per-plugin tags
- Added ability to specify per-plugin measurement suffix and prefix.
(`name_prefix` and `name_suffix`)
- Added ability to override base plugin measurement name. (`name_override`)
- Added ability to override base plugin name. (`name_override`)
### Bugfixes
## v0.2.5 [unreleased]
### Features
- [#427](https://github.com/influxdata/telegraf/pull/427): zfs plugin: pool stats added. Thanks @allenpetersen!
- [#428](https://github.com/influxdata/telegraf/pull/428): Amazon Kinesis output. Thanks @jimmystewpot!
- [#449](https://github.com/influxdata/telegraf/pull/449): influxdb plugin, thanks @mark-rushakoff
- [#427](https://github.com/influxdb/telegraf/pull/427): zfs plugin: pool stats added. Thanks @allenpetersen!
- [#428](https://github.com/influxdb/telegraf/pull/428): Amazon Kinesis output. Thanks @jimmystewpot!
- [#449](https://github.com/influxdb/telegraf/pull/449): influxdb plugin, thanks @mark-rushakoff
### Bugfixes
- [#430](https://github.com/influxdata/telegraf/issues/430): Network statistics removed in elasticsearch 2.1. Thanks @jipperinbham!
- [#452](https://github.com/influxdata/telegraf/issues/452): Elasticsearch open file handles error. Thanks @jipperinbham!
- [#430](https://github.com/influxdb/telegraf/issues/430): Network statistics removed in elasticsearch 2.1. Thanks @jipperinbham!
- [#452](https://github.com/influxdb/telegraf/issues/452): Elasticsearch open file handles error. Thanks @jipperinbham!
## v0.2.4 [2015-12-08]
### Features
- [#412](https://github.com/influxdata/telegraf/pull/412): Additional memcached stats. Thanks @mgresser!
- [#410](https://github.com/influxdata/telegraf/pull/410): Additional redis metrics. Thanks @vlaadbrain!
- [#414](https://github.com/influxdata/telegraf/issues/414): Jolokia plugin auth parameters
- [#415](https://github.com/influxdata/telegraf/issues/415): memcached plugin: support unix sockets
- [#418](https://github.com/influxdata/telegraf/pull/418): memcached plugin additional unit tests.
- [#408](https://github.com/influxdata/telegraf/pull/408): MailChimp plugin.
- [#382](https://github.com/influxdata/telegraf/pull/382): Add system wide network protocol stats to `net` plugin.
- [#401](https://github.com/influxdata/telegraf/pull/401): Support pass/drop/tagpass/tagdrop for outputs. Thanks @oldmantaiter!
- [#412](https://github.com/influxdb/telegraf/pull/412): Additional memcached stats. Thanks @mgresser!
- [#410](https://github.com/influxdb/telegraf/pull/410): Additional redis metrics. Thanks @vlaadbrain!
- [#414](https://github.com/influxdb/telegraf/issues/414): Jolokia plugin auth parameters
- [#415](https://github.com/influxdb/telegraf/issues/415): memcached plugin: support unix sockets
- [#418](https://github.com/influxdb/telegraf/pull/418): memcached plugin additional unit tests.
- [#408](https://github.com/influxdb/telegraf/pull/408): MailChimp plugin.
- [#382](https://github.com/influxdb/telegraf/pull/382): Add system wide network protocol stats to `net` plugin.
- [#401](https://github.com/influxdb/telegraf/pull/401): Support pass/drop/tagpass/tagdrop for outputs. Thanks @oldmantaiter!
### Bugfixes
- [#405](https://github.com/influxdata/telegraf/issues/405): Prometheus output cardinality issue
- [#388](https://github.com/influxdata/telegraf/issues/388): Fix collection hangup when cpu times decrement.
- [#405](https://github.com/influxdb/telegraf/issues/405): Prometheus output cardinality issue
- [#388](https://github.com/influxdb/telegraf/issues/388): Fix collection hangup when cpu times decrement.
## v0.2.3 [2015-11-30]
@@ -218,11 +60,11 @@ functional.
same type can be specified, like this:
```
[[inputs.cpu]]
[[plugins.cpu]]
percpu = false
totalcpu = true
[[inputs.cpu]]
[[plugins.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time"]
@@ -232,15 +74,15 @@ same type can be specified, like this:
- Aerospike plugin: tag changed from `host` -> `aerospike_host`
### Features
- [#379](https://github.com/influxdata/telegraf/pull/379): Riemann output, thanks @allenj!
- [#375](https://github.com/influxdata/telegraf/pull/375): kafka_consumer service plugin.
- [#392](https://github.com/influxdata/telegraf/pull/392): Procstat plugin can now accept pgrep -f pattern, thanks @ecarreras!
- [#383](https://github.com/influxdata/telegraf/pull/383): Specify plugins as a list.
- [#354](https://github.com/influxdata/telegraf/pull/354): Add ability to specify multiple metrics in one statsd line. Thanks @MerlinDMC!
- [#379](https://github.com/influxdb/telegraf/pull/379): Riemann output, thanks @allenj!
- [#375](https://github.com/influxdb/telegraf/pull/375): kafka_consumer service plugin.
- [#392](https://github.com/influxdb/telegraf/pull/392): Procstat plugin can now accept pgrep -f pattern, thanks @ecarreras!
- [#383](https://github.com/influxdb/telegraf/pull/383): Specify plugins as a list.
- [#354](https://github.com/influxdb/telegraf/pull/354): Add ability to specify multiple metrics in one statsd line. Thanks @MerlinDMC!
### Bugfixes
- [#371](https://github.com/influxdata/telegraf/issues/371): Kafka consumer plugin not functioning.
- [#389](https://github.com/influxdata/telegraf/issues/389): NaN value panic
- [#371](https://github.com/influxdb/telegraf/issues/371): Kafka consumer plugin not functioning.
- [#389](https://github.com/influxdb/telegraf/issues/389): NaN value panic
## v0.2.2 [2015-11-18]
@@ -249,7 +91,7 @@ same type can be specified, like this:
lists of servers/URLs. 0.2.2 is being released solely to fix that bug
### Bugfixes
- [#377](https://github.com/influxdata/telegraf/pull/377): Fix for duplicate slices in inputs.
- [#377](https://github.com/influxdb/telegraf/pull/377): Fix for duplicate slices in plugins.
## v0.2.1 [2015-11-16]
@@ -266,22 +108,22 @@ changed to just run docker commands in the Makefile. See `make docker-run` and
same type.
### Features
- [#325](https://github.com/influxdata/telegraf/pull/325): NSQ output. Thanks @jrxFive!
- [#318](https://github.com/influxdata/telegraf/pull/318): Prometheus output. Thanks @oldmantaiter!
- [#338](https://github.com/influxdata/telegraf/pull/338): Restart Telegraf on package upgrade. Thanks @linsomniac!
- [#337](https://github.com/influxdata/telegraf/pull/337): Jolokia plugin, thanks @saiello!
- [#350](https://github.com/influxdata/telegraf/pull/350): Amon output.
- [#365](https://github.com/influxdata/telegraf/pull/365): Twemproxy plugin by @codeb2cc
- [#317](https://github.com/influxdata/telegraf/issues/317): ZFS plugin, thanks @cornerot!
- [#364](https://github.com/influxdata/telegraf/pull/364): Support InfluxDB UDP output.
- [#370](https://github.com/influxdata/telegraf/pull/370): Support specifying multiple outputs, as lists.
- [#372](https://github.com/influxdata/telegraf/pull/372): Remove gosigar and update go-dockerclient for FreeBSD support. Thanks @MerlinDMC!
- [#325](https://github.com/influxdb/telegraf/pull/325): NSQ output. Thanks @jrxFive!
- [#318](https://github.com/influxdb/telegraf/pull/318): Prometheus output. Thanks @oldmantaiter!
- [#338](https://github.com/influxdb/telegraf/pull/338): Restart Telegraf on package upgrade. Thanks @linsomniac!
- [#337](https://github.com/influxdb/telegraf/pull/337): Jolokia plugin, thanks @saiello!
- [#350](https://github.com/influxdb/telegraf/pull/350): Amon output.
- [#365](https://github.com/influxdb/telegraf/pull/365): Twemproxy plugin by @codeb2cc
- [#317](https://github.com/influxdb/telegraf/issues/317): ZFS plugin, thanks @cornerot!
- [#364](https://github.com/influxdb/telegraf/pull/364): Support InfluxDB UDP output.
- [#370](https://github.com/influxdb/telegraf/pull/370): Support specifying multiple outputs, as lists.
- [#372](https://github.com/influxdb/telegraf/pull/372): Remove gosigar and update go-dockerclient for FreeBSD support. Thanks @MerlinDMC!
### Bugfixes
- [#331](https://github.com/influxdata/telegraf/pull/331): Dont overwrite host tag in redis plugin.
- [#336](https://github.com/influxdata/telegraf/pull/336): Mongodb plugin should take 2 measurements.
- [#351](https://github.com/influxdata/telegraf/issues/317): Fix continual "CREATE DATABASE" in writes
- [#360](https://github.com/influxdata/telegraf/pull/360): Apply prefix before ShouldPass check. Thanks @sotfo!
- [#331](https://github.com/influxdb/telegraf/pull/331): Dont overwrite host tag in redis plugin.
- [#336](https://github.com/influxdb/telegraf/pull/336): Mongodb plugin should take 2 measurements.
- [#351](https://github.com/influxdb/telegraf/issues/317): Fix continual "CREATE DATABASE" in writes
- [#360](https://github.com/influxdb/telegraf/pull/360): Apply prefix before ShouldPass check. Thanks @sotfo!
## v0.2.0 [2015-10-27]
@@ -302,38 +144,38 @@ be controlled via the `round_interval` and `flush_jitter` config options.
- Telegraf will now retry metric flushes twice
### Features
- [#205](https://github.com/influxdata/telegraf/issues/205): Include per-db redis keyspace info
- [#226](https://github.com/influxdata/telegraf/pull/226): Add timestamps to points in Kafka/AMQP outputs. Thanks @ekini
- [#90](https://github.com/influxdata/telegraf/issues/90): Add Docker labels to tags in docker plugin
- [#223](https://github.com/influxdata/telegraf/pull/223): Add port tag to nginx plugin. Thanks @neezgee!
- [#227](https://github.com/influxdata/telegraf/pull/227): Add command intervals to exec plugin. Thanks @jpalay!
- [#241](https://github.com/influxdata/telegraf/pull/241): MQTT Output. Thanks @shirou!
- [#205](https://github.com/influxdb/telegraf/issues/205): Include per-db redis keyspace info
- [#226](https://github.com/influxdb/telegraf/pull/226): Add timestamps to points in Kafka/AMQP outputs. Thanks @ekini
- [#90](https://github.com/influxdb/telegraf/issues/90): Add Docker labels to tags in docker plugin
- [#223](https://github.com/influxdb/telegraf/pull/223): Add port tag to nginx plugin. Thanks @neezgee!
- [#227](https://github.com/influxdb/telegraf/pull/227): Add command intervals to exec plugin. Thanks @jpalay!
- [#241](https://github.com/influxdb/telegraf/pull/241): MQTT Output. Thanks @shirou!
- Memory plugin: cached and buffered measurements re-added
- Logging: additional logging for each collection interval, track the number
of metrics collected and from how many inputs.
- [#240](https://github.com/influxdata/telegraf/pull/240): procstat plugin, thanks @ranjib!
- [#244](https://github.com/influxdata/telegraf/pull/244): netstat plugin, thanks @shirou!
- [#262](https://github.com/influxdata/telegraf/pull/262): zookeeper plugin, thanks @jrxFive!
- [#237](https://github.com/influxdata/telegraf/pull/237): statsd service plugin, thanks @sparrc
- [#273](https://github.com/influxdata/telegraf/pull/273): puppet agent plugin, thanks @jrxFive!
- [#280](https://github.com/influxdata/telegraf/issues/280): Use InfluxDB client v2.
- [#281](https://github.com/influxdata/telegraf/issues/281): Eliminate need to deep copy Batch Points.
- [#286](https://github.com/influxdata/telegraf/issues/286): bcache plugin, thanks @cornerot!
- [#287](https://github.com/influxdata/telegraf/issues/287): Batch AMQP output, thanks @ekini!
- [#301](https://github.com/influxdata/telegraf/issues/301): Collect on even intervals
- [#298](https://github.com/influxdata/telegraf/pull/298): Support retrying output writes
- [#300](https://github.com/influxdata/telegraf/issues/300): aerospike plugin. Thanks @oldmantaiter!
- [#322](https://github.com/influxdata/telegraf/issues/322): Librato output. Thanks @jipperinbham!
of metrics collected and from how many plugins.
- [#240](https://github.com/influxdb/telegraf/pull/240): procstat plugin, thanks @ranjib!
- [#244](https://github.com/influxdb/telegraf/pull/244): netstat plugin, thanks @shirou!
- [#262](https://github.com/influxdb/telegraf/pull/262): zookeeper plugin, thanks @jrxFive!
- [#237](https://github.com/influxdb/telegraf/pull/237): statsd service plugin, thanks @sparrc
- [#273](https://github.com/influxdb/telegraf/pull/273): puppet agent plugin, thanks @jrxFive!
- [#280](https://github.com/influxdb/telegraf/issues/280): Use InfluxDB client v2.
- [#281](https://github.com/influxdb/telegraf/issues/281): Eliminate need to deep copy Batch Points.
- [#286](https://github.com/influxdb/telegraf/issues/286): bcache plugin, thanks @cornerot!
- [#287](https://github.com/influxdb/telegraf/issues/287): Batch AMQP output, thanks @ekini!
- [#301](https://github.com/influxdb/telegraf/issues/301): Collect on even intervals
- [#298](https://github.com/influxdb/telegraf/pull/298): Support retrying output writes
- [#300](https://github.com/influxdb/telegraf/issues/300): aerospike plugin. Thanks @oldmantaiter!
- [#322](https://github.com/influxdb/telegraf/issues/322): Librato output. Thanks @jipperinbham!
### Bugfixes
- [#228](https://github.com/influxdata/telegraf/pull/228): New version of package will replace old one. Thanks @ekini!
- [#232](https://github.com/influxdata/telegraf/pull/232): Fix bashism run during deb package installation. Thanks @yankcrime!
- [#261](https://github.com/influxdata/telegraf/issues/260): RabbitMQ panics if wrong credentials given. Thanks @ekini!
- [#245](https://github.com/influxdata/telegraf/issues/245): Document Exec plugin example. Thanks @ekini!
- [#264](https://github.com/influxdata/telegraf/issues/264): logrotate config file fixes. Thanks @linsomniac!
- [#290](https://github.com/influxdata/telegraf/issues/290): Fix some plugins sending their values as strings.
- [#289](https://github.com/influxdata/telegraf/issues/289): Fix accumulator panic on nil tags.
- [#302](https://github.com/influxdata/telegraf/issues/302): Fix `[tags]` getting applied, thanks @gotyaoi!
- [#228](https://github.com/influxdb/telegraf/pull/228): New version of package will replace old one. Thanks @ekini!
- [#232](https://github.com/influxdb/telegraf/pull/232): Fix bashism run during deb package installation. Thanks @yankcrime!
- [#261](https://github.com/influxdb/telegraf/issues/260): RabbitMQ panics if wrong credentials given. Thanks @ekini!
- [#245](https://github.com/influxdb/telegraf/issues/245): Document Exec plugin example. Thanks @ekini!
- [#264](https://github.com/influxdb/telegraf/issues/264): logrotate config file fixes. Thanks @linsomniac!
- [#290](https://github.com/influxdb/telegraf/issues/290): Fix some plugins sending their values as strings.
- [#289](https://github.com/influxdb/telegraf/issues/289): Fix accumulator panic on nil tags.
- [#302](https://github.com/influxdb/telegraf/issues/302): Fix `[tags]` getting applied, thanks @gotyaoi!
## v0.1.9 [2015-09-22]
@@ -343,7 +185,7 @@ will still be backwards compatible if only `url` is specified.
- The -test flag will now output two metric collections
- Support for filtering telegraf outputs on the CLI -- Telegraf will now
allow filtering of output sinks on the command-line using the `-outputfilter`
flag, much like how the `-filter` flag works for inputs.
flag, much like how the `-filter` flag works for plugins.
- Support for filtering on config-file creation -- Telegraf now supports
filtering to -sample-config command. You can now run
`telegraf -sample-config -filter cpu -outputfilter influxdb` to get a config
@@ -359,27 +201,27 @@ have been renamed for consistency. Some measurements have also been removed from
re-added in a "verbose" mode if there is demand for it.
### Features
- [#143](https://github.com/influxdata/telegraf/issues/143): InfluxDB clustering support
- [#181](https://github.com/influxdata/telegraf/issues/181): Makefile GOBIN support. Thanks @Vye!
- [#203](https://github.com/influxdata/telegraf/pull/200): AMQP output. Thanks @ekini!
- [#182](https://github.com/influxdata/telegraf/pull/182): OpenTSDB output. Thanks @rplessl!
- [#187](https://github.com/influxdata/telegraf/pull/187): Retry output sink connections on startup.
- [#220](https://github.com/influxdata/telegraf/pull/220): Add port tag to apache plugin. Thanks @neezgee!
- [#217](https://github.com/influxdata/telegraf/pull/217): Add filtering for output sinks
- [#143](https://github.com/influxdb/telegraf/issues/143): InfluxDB clustering support
- [#181](https://github.com/influxdb/telegraf/issues/181): Makefile GOBIN support. Thanks @Vye!
- [#203](https://github.com/influxdb/telegraf/pull/200): AMQP output. Thanks @ekini!
- [#182](https://github.com/influxdb/telegraf/pull/182): OpenTSDB output. Thanks @rplessl!
- [#187](https://github.com/influxdb/telegraf/pull/187): Retry output sink connections on startup.
- [#220](https://github.com/influxdb/telegraf/pull/220): Add port tag to apache plugin. Thanks @neezgee!
- [#217](https://github.com/influxdb/telegraf/pull/217): Add filtering for output sinks
and filtering when specifying a config file.
### Bugfixes
- [#170](https://github.com/influxdata/telegraf/issues/170): Systemd support
- [#175](https://github.com/influxdata/telegraf/issues/175): Set write precision before gathering metrics
- [#178](https://github.com/influxdata/telegraf/issues/178): redis plugin, multiple server thread hang bug
- [#170](https://github.com/influxdb/telegraf/issues/170): Systemd support
- [#175](https://github.com/influxdb/telegraf/issues/175): Set write precision before gathering metrics
- [#178](https://github.com/influxdb/telegraf/issues/178): redis plugin, multiple server thread hang bug
- Fix net plugin on darwin
- [#84](https://github.com/influxdata/telegraf/issues/84): Fix docker plugin on CentOS. Thanks @neezgee!
- [#189](https://github.com/influxdata/telegraf/pull/189): Fix mem_used_perc. Thanks @mced!
- [#192](https://github.com/influxdata/telegraf/issues/192): Increase compatibility of postgresql plugin. Now supports versions 8.1+
- [#203](https://github.com/influxdata/telegraf/issues/203): EL5 rpm support. Thanks @ekini!
- [#206](https://github.com/influxdata/telegraf/issues/206): CPU steal/guest values wrong on linux.
- [#212](https://github.com/influxdata/telegraf/issues/212): Add hashbang to postinstall script. Thanks @ekini!
- [#212](https://github.com/influxdata/telegraf/issues/212): Fix makefile warning. Thanks @ekini!
- [#84](https://github.com/influxdb/telegraf/issues/84): Fix docker plugin on CentOS. Thanks @neezgee!
- [#189](https://github.com/influxdb/telegraf/pull/189): Fix mem_used_perc. Thanks @mced!
- [#192](https://github.com/influxdb/telegraf/issues/192): Increase compatibility of postgresql plugin. Now supports versions 8.1+
- [#203](https://github.com/influxdb/telegraf/issues/203): EL5 rpm support. Thanks @ekini!
- [#206](https://github.com/influxdb/telegraf/issues/206): CPU steal/guest values wrong on linux.
- [#212](https://github.com/influxdb/telegraf/issues/212): Add hashbang to postinstall script. Thanks @ekini!
- [#212](https://github.com/influxdb/telegraf/issues/212): Fix makefile warning. Thanks @ekini!
## v0.1.8 [2015-09-04]
@@ -388,106 +230,106 @@ and filtering when specifying a config file.
- Now using Go 1.5 to build telegraf
### Features
- [#150](https://github.com/influxdata/telegraf/pull/150): Add Host Uptime metric to system plugin
- [#158](https://github.com/influxdata/telegraf/pull/158): Apache Plugin. Thanks @KPACHbIuLLIAnO4
- [#159](https://github.com/influxdata/telegraf/pull/159): Use second precision for InfluxDB writes
- [#165](https://github.com/influxdata/telegraf/pull/165): Add additional metrics to mysql plugin. Thanks @nickscript0
- [#162](https://github.com/influxdata/telegraf/pull/162): Write UTC by default, provide option
- [#166](https://github.com/influxdata/telegraf/pull/166): Upload binaries to S3
- [#169](https://github.com/influxdata/telegraf/pull/169): Ping plugin
- [#150](https://github.com/influxdb/telegraf/pull/150): Add Host Uptime metric to system plugin
- [#158](https://github.com/influxdb/telegraf/pull/158): Apache Plugin. Thanks @KPACHbIuLLIAnO4
- [#159](https://github.com/influxdb/telegraf/pull/159): Use second precision for InfluxDB writes
- [#165](https://github.com/influxdb/telegraf/pull/165): Add additional metrics to mysql plugin. Thanks @nickscript0
- [#162](https://github.com/influxdb/telegraf/pull/162): Write UTC by default, provide option
- [#166](https://github.com/influxdb/telegraf/pull/166): Upload binaries to S3
- [#169](https://github.com/influxdb/telegraf/pull/169): Ping plugin
### Bugfixes
## v0.1.7 [2015-08-28]
### Features
- [#38](https://github.com/influxdata/telegraf/pull/38): Kafka output producer.
- [#133](https://github.com/influxdata/telegraf/pull/133): Add plugin.Gather error logging. Thanks @nickscript0!
- [#136](https://github.com/influxdata/telegraf/issues/136): Add a -usage flag for printing usage of a single plugin.
- [#137](https://github.com/influxdata/telegraf/issues/137): Memcached: fix when a value contains a space
- [#138](https://github.com/influxdata/telegraf/issues/138): MySQL server address tag.
- [#142](https://github.com/influxdata/telegraf/pull/142): Add Description and SampleConfig funcs to output interface
- [#38](https://github.com/influxdb/telegraf/pull/38): Kafka output producer.
- [#133](https://github.com/influxdb/telegraf/pull/133): Add plugin.Gather error logging. Thanks @nickscript0!
- [#136](https://github.com/influxdb/telegraf/issues/136): Add a -usage flag for printing usage of a single plugin.
- [#137](https://github.com/influxdb/telegraf/issues/137): Memcached: fix when a value contains a space
- [#138](https://github.com/influxdb/telegraf/issues/138): MySQL server address tag.
- [#142](https://github.com/influxdb/telegraf/pull/142): Add Description and SampleConfig funcs to output interface
- Indent the toml config file for readability
### Bugfixes
- [#128](https://github.com/influxdata/telegraf/issues/128): system_load measurement missing.
- [#129](https://github.com/influxdata/telegraf/issues/129): Latest pkg url fix.
- [#131](https://github.com/influxdata/telegraf/issues/131): Fix memory reporting on linux & darwin. Thanks @subhachandrachandra!
- [#140](https://github.com/influxdata/telegraf/issues/140): Memory plugin prec->perc typo fix. Thanks @brunoqc!
- [#128](https://github.com/influxdb/telegraf/issues/128): system_load measurement missing.
- [#129](https://github.com/influxdb/telegraf/issues/129): Latest pkg url fix.
- [#131](https://github.com/influxdb/telegraf/issues/131): Fix memory reporting on linux & darwin. Thanks @subhachandrachandra!
- [#140](https://github.com/influxdb/telegraf/issues/140): Memory plugin prec->perc typo fix. Thanks @brunoqc!
## v0.1.6 [2015-08-20]
### Features
- [#112](https://github.com/influxdata/telegraf/pull/112): Datadog output. Thanks @jipperinbham!
- [#116](https://github.com/influxdata/telegraf/pull/116): Use godep to vendor all dependencies
- [#120](https://github.com/influxdata/telegraf/pull/120): Httpjson plugin. Thanks @jpalay & @alvaromorales!
- [#112](https://github.com/influxdb/telegraf/pull/112): Datadog output. Thanks @jipperinbham!
- [#116](https://github.com/influxdb/telegraf/pull/116): Use godep to vendor all dependencies
- [#120](https://github.com/influxdb/telegraf/pull/120): Httpjson plugin. Thanks @jpalay & @alvaromorales!
### Bugfixes
- [#113](https://github.com/influxdata/telegraf/issues/113): Update README with Telegraf/InfluxDB compatibility
- [#118](https://github.com/influxdata/telegraf/pull/118): Fix for disk usage stats in Windows. Thanks @srfraser!
- [#122](https://github.com/influxdata/telegraf/issues/122): Fix for DiskUsage segv fault. Thanks @srfraser!
- [#126](https://github.com/influxdata/telegraf/issues/126): Nginx plugin not catching net.SplitHostPort error
- [#113](https://github.com/influxdb/telegraf/issues/113): Update README with Telegraf/InfluxDB compatibility
- [#118](https://github.com/influxdb/telegraf/pull/118): Fix for disk usage stats in Windows. Thanks @srfraser!
- [#122](https://github.com/influxdb/telegraf/issues/122): Fix for DiskUsage segv fault. Thanks @srfraser!
- [#126](https://github.com/influxdb/telegraf/issues/126): Nginx plugin not catching net.SplitHostPort error
## v0.1.5 [2015-08-13]
### Features
- [#54](https://github.com/influxdata/telegraf/pull/54): MongoDB plugin. Thanks @jipperinbham!
- [#55](https://github.com/influxdata/telegraf/pull/55): Elasticsearch plugin. Thanks @brocaar!
- [#71](https://github.com/influxdata/telegraf/pull/71): HAProxy plugin. Thanks @kureikain!
- [#72](https://github.com/influxdata/telegraf/pull/72): Adding TokuDB metrics to MySQL. Thanks vadimtk!
- [#73](https://github.com/influxdata/telegraf/pull/73): RabbitMQ plugin. Thanks @ianunruh!
- [#77](https://github.com/influxdata/telegraf/issues/77): Automatically create database.
- [#79](https://github.com/influxdata/telegraf/pull/56): Nginx plugin. Thanks @codeb2cc!
- [#86](https://github.com/influxdata/telegraf/pull/86): Lustre2 plugin. Thanks srfraser!
- [#91](https://github.com/influxdata/telegraf/pull/91): Unit testing
- [#92](https://github.com/influxdata/telegraf/pull/92): Exec plugin. Thanks @alvaromorales!
- [#98](https://github.com/influxdata/telegraf/pull/98): LeoFS plugin. Thanks @mocchira!
- [#103](https://github.com/influxdata/telegraf/pull/103): Filter by metric tags. Thanks @srfraser!
- [#106](https://github.com/influxdata/telegraf/pull/106): Options to filter plugins on startup. Thanks @zepouet!
- [#107](https://github.com/influxdata/telegraf/pull/107): Multiple outputs beyond influxdb. Thanks @jipperinbham!
- [#108](https://github.com/influxdata/telegraf/issues/108): Support setting per-CPU and total-CPU gathering.
- [#111](https://github.com/influxdata/telegraf/pull/111): Report CPU Usage in cpu plugin. Thanks @jpalay!
- [#54](https://github.com/influxdb/telegraf/pull/54): MongoDB plugin. Thanks @jipperinbham!
- [#55](https://github.com/influxdb/telegraf/pull/55): Elasticsearch plugin. Thanks @brocaar!
- [#71](https://github.com/influxdb/telegraf/pull/71): HAProxy plugin. Thanks @kureikain!
- [#72](https://github.com/influxdb/telegraf/pull/72): Adding TokuDB metrics to MySQL. Thanks vadimtk!
- [#73](https://github.com/influxdb/telegraf/pull/73): RabbitMQ plugin. Thanks @ianunruh!
- [#77](https://github.com/influxdb/telegraf/issues/77): Automatically create database.
- [#79](https://github.com/influxdb/telegraf/pull/56): Nginx plugin. Thanks @codeb2cc!
- [#86](https://github.com/influxdb/telegraf/pull/86): Lustre2 plugin. Thanks srfraser!
- [#91](https://github.com/influxdb/telegraf/pull/91): Unit testing
- [#92](https://github.com/influxdb/telegraf/pull/92): Exec plugin. Thanks @alvaromorales!
- [#98](https://github.com/influxdb/telegraf/pull/98): LeoFS plugin. Thanks @mocchira!
- [#103](https://github.com/influxdb/telegraf/pull/103): Filter by metric tags. Thanks @srfraser!
- [#106](https://github.com/influxdb/telegraf/pull/106): Options to filter plugins on startup. Thanks @zepouet!
- [#107](https://github.com/influxdb/telegraf/pull/107): Multiple outputs beyond influxdb. Thanks @jipperinbham!
- [#108](https://github.com/influxdb/telegraf/issues/108): Support setting per-CPU and total-CPU gathering.
- [#111](https://github.com/influxdb/telegraf/pull/111): Report CPU Usage in cpu plugin. Thanks @jpalay!
### Bugfixes
- [#85](https://github.com/influxdata/telegraf/pull/85): Fix GetLocalHost testutil function for mac users
- [#89](https://github.com/influxdata/telegraf/pull/89): go fmt fixes
- [#94](https://github.com/influxdata/telegraf/pull/94): Fix for issue #93, explicitly call sarama.v1 -> sarama
- [#101](https://github.com/influxdata/telegraf/issues/101): switch back from master branch if building locally
- [#99](https://github.com/influxdata/telegraf/issues/99): update integer output to new InfluxDB line protocol format
- [#85](https://github.com/influxdb/telegraf/pull/85): Fix GetLocalHost testutil function for mac users
- [#89](https://github.com/influxdb/telegraf/pull/89): go fmt fixes
- [#94](https://github.com/influxdb/telegraf/pull/94): Fix for issue #93, explicitly call sarama.v1 -> sarama
- [#101](https://github.com/influxdb/telegraf/issues/101): switch back from master branch if building locally
- [#99](https://github.com/influxdb/telegraf/issues/99): update integer output to new InfluxDB line protocol format
## v0.1.4 [2015-07-09]
### Features
- [#56](https://github.com/influxdata/telegraf/pull/56): Update README for Kafka plugin. Thanks @EmilS!
- [#56](https://github.com/influxdb/telegraf/pull/56): Update README for Kafka plugin. Thanks @EmilS!
### Bugfixes
- [#50](https://github.com/influxdata/telegraf/pull/50): Fix init.sh script to use telegraf directory. Thanks @jseriff!
- [#52](https://github.com/influxdata/telegraf/pull/52): Update CHANGELOG to reference updated directory. Thanks @benfb!
- [#50](https://github.com/influxdb/telegraf/pull/50): Fix init.sh script to use telegraf directory. Thanks @jseriff!
- [#52](https://github.com/influxdb/telegraf/pull/52): Update CHANGELOG to reference updated directory. Thanks @benfb!
## v0.1.3 [2015-07-05]
### Features
- [#35](https://github.com/influxdata/telegraf/pull/35): Add Kafka plugin. Thanks @EmilS!
- [#47](https://github.com/influxdata/telegraf/pull/47): Add RethinkDB plugin. Thanks @jipperinbham!
- [#35](https://github.com/influxdb/telegraf/pull/35): Add Kafka plugin. Thanks @EmilS!
- [#47](https://github.com/influxdb/telegraf/pull/47): Add RethinkDB plugin. Thanks @jipperinbham!
### Bugfixes
- [#45](https://github.com/influxdata/telegraf/pull/45): Skip disk tags that don't have a value. Thanks @jhofeditz!
- [#43](https://github.com/influxdata/telegraf/pull/43): Fix bug in MySQL plugin. Thanks @marcosnils!
- [#45](https://github.com/influxdb/telegraf/pull/45): Skip disk tags that don't have a value. Thanks @jhofeditz!
- [#43](https://github.com/influxdb/telegraf/pull/43): Fix bug in MySQL plugin. Thanks @marcosnils!
## v0.1.2 [2015-07-01]
### Features
- [#12](https://github.com/influxdata/telegraf/pull/12): Add Linux/ARM to the list of built binaries. Thanks @voxxit!
- [#14](https://github.com/influxdata/telegraf/pull/14): Clarify the S3 buckets that Telegraf is pushed to.
- [#16](https://github.com/influxdata/telegraf/pull/16): Convert Redis to use URI, support Redis AUTH. Thanks @jipperinbham!
- [#21](https://github.com/influxdata/telegraf/pull/21): Add memcached plugin. Thanks @Yukki!
- [#12](https://github.com/influxdb/telegraf/pull/12): Add Linux/ARM to the list of built binaries. Thanks @voxxit!
- [#14](https://github.com/influxdb/telegraf/pull/14): Clarify the S3 buckets that Telegraf is pushed to.
- [#16](https://github.com/influxdb/telegraf/pull/16): Convert Redis to use URI, support Redis AUTH. Thanks @jipperinbham!
- [#21](https://github.com/influxdb/telegraf/pull/21): Add memcached plugin. Thanks @Yukki!
### Bugfixes
- [#13](https://github.com/influxdata/telegraf/pull/13): Fix the packaging script.
- [#19](https://github.com/influxdata/telegraf/pull/19): Add host name to metric tags. Thanks @sherifzain!
- [#20](https://github.com/influxdata/telegraf/pull/20): Fix race condition with accumulator mutex. Thanks @nkatsaros!
- [#23](https://github.com/influxdata/telegraf/pull/23): Change name of folder for packages. Thanks @colinrymer!
- [#32](https://github.com/influxdata/telegraf/pull/32): Fix spelling of memoory -> memory. Thanks @tylernisonoff!
- [#13](https://github.com/influxdb/telegraf/pull/13): Fix the packaging script.
- [#19](https://github.com/influxdb/telegraf/pull/19): Add host name to metric tags. Thanks @sherifzain!
- [#20](https://github.com/influxdb/telegraf/pull/20): Fix race condition with accumulator mutex. Thanks @nkatsaros!
- [#23](https://github.com/influxdb/telegraf/pull/23): Change name of folder for packages. Thanks @colinrymer!
- [#32](https://github.com/influxdb/telegraf/pull/32): Fix spelling of memoory -> memory. Thanks @tylernisonoff!
## v0.1.1 [2015-06-19]

CONFIGURATION.md (new file)

@@ -0,0 +1,177 @@
# Telegraf Configuration
## Plugin Configuration
There are some configuration options that are configurable per plugin:
* **name_override**: Override the base name of the measurement.
(Default is the name of the plugin).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific plugin's measurements.
### Plugin Filters
There are also filters that can be configured per plugin:
* **pass**: An array of strings that is used to filter metrics generated by the
current plugin. Each string in the array is tested as a glob match against field names
and if it matches, the field is emitted.
* **drop**: The inverse of pass, if a field name matches, it is not emitted.
* **tagpass**: tag names and arrays of strings that are used to filter
measurements by the current plugin. Each string in the array is tested as a glob
match against the tag name, and if it matches the measurement is emitted.
* **tagdrop**: The inverse of tagpass. If a tag matches, the measurement is not emitted.
This is tested on measurements that have passed the tagpass test.
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular plugin should be run less or more often,
you can configure that here (see the example below).
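For example, a short sketch of a per-plugin interval override (the plugin and
interval value are illustrative):

```toml
[[plugins.cpu]]
  percpu = true
  totalcpu = false
  # gather this plugin every 30s instead of the global agent interval
  interval = "30s"
```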
### Plugin Configuration Examples
This is a full working config that will output CPU data to an InfluxDB instance
at 192.168.59.103:8086, tagging measurements with dc="denver-1". It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.
```toml
[tags]
dc = "denver-1"
[agent]
interval = "10s"
# OUTPUTS
[outputs]
[[outputs.influxdb]]
url = "http://192.168.59.103:8086" # required.
database = "telegraf" # required.
precision = "s"
# PLUGINS
[plugins]
[[plugins.cpu]]
percpu = true
totalcpu = false
# filter all fields beginning with 'time_'
drop = ["time_*"]
```
### Plugin Config: tagpass and tagdrop
```toml
[plugins]
[[plugins.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time"]
# Don't collect CPU data for cpu6 & cpu7
[plugins.cpu.tagdrop]
cpu = [ "cpu6", "cpu7" ]
[[plugins.disk]]
[plugins.disk.tagpass]
# tagpass conditions are OR, not AND.
# If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
# then the metric passes
fstype = [ "ext4", "xfs" ]
# Globs can also be used on the tag values
path = [ "/opt", "/home*" ]
```
### Plugin Config: pass and drop
```toml
# Drop all metrics for guest & steal CPU usage
[[plugins.cpu]]
percpu = false
totalcpu = true
drop = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[plugins.disk]]
pass = ["inodes*"]
```
### Plugin config: prefix, suffix, and override
This plugin will emit measurements with the name `cpu_total`
```toml
[[plugins.cpu]]
name_suffix = "_total"
percpu = false
totalcpu = true
```
This will emit measurements with the name `foobar`
```toml
[[plugins.cpu]]
name_override = "foobar"
percpu = false
totalcpu = true
```
### Plugin config: tags
This plugin will emit measurements with two additional tags: `tag1=foo` and
`tag2=bar`
```toml
[[plugins.cpu]]
percpu = false
totalcpu = true
[plugins.cpu.tags]
tag1 = "foo"
tag2 = "bar"
```
### Multiple plugins of the same type
Additional plugins (or outputs) of the same type can be specified;
just define more instances in the config file:
```toml
[[plugins.cpu]]
percpu = false
totalcpu = true
[[plugins.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time*"]
```
## Output Configuration
Telegraf also supports specifying multiple output sinks to send data to.
Configuring each output sink is different, but examples can be
found by running `telegraf -sample-config`.
Outputs also support the same configurable options as plugins
(pass, drop, tagpass, tagdrop), which were added in 0.2.4.
```toml
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf"
precision = "s"
# Drop all measurements that start with "aerospike"
drop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
precision = "s"
# Only accept aerospike data:
pass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-cpu0-data"
precision = "s"
# Only store measurements where the tag "cpu" matches the value "cpu0"
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
```

CONTRIBUTING.md

@@ -1,72 +1,103 @@
## Steps for Contributing:
1. [Sign the CLA](http://influxdb.com/community/cla.html)
1. Make changes or write plugin (see below for details)
1. Add your plugin to `plugins/inputs/all/all.go` or `plugins/outputs/all/all.go`
1. If your plugin requires a new Go package,
[add it](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md#adding-a-dependency)
1. Write a README for your plugin. If it's an input plugin, it should be structured
like the [input example here](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/EXAMPLE_README.md).
Output plugins READMEs are less structured,
but any information you can provide on how the data will look is appreciated.
See the [OpenTSDB output](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
for a good example.
## GoDoc
Public interfaces for inputs, outputs, metrics, and the accumulator can be found
on the GoDoc
[![GoDoc](https://godoc.org/github.com/influxdata/telegraf?status.svg)](https://godoc.org/github.com/influxdata/telegraf)
## Sign the CLA
Before we can merge a pull request, you will need to sign the CLA,
which can be found [on our website](http://influxdb.com/community/cla.html)
## Adding a dependency
## Plugins
Assuming you can already build the project, run these in the telegraf directory:
1. `go get github.com/sparrc/gdm`
1. `gdm restore`
1. `gdm save`
## Input Plugins
This section is for developers who want to create new collection inputs.
This section is for developers who want to create new collection plugins.
Telegraf is entirely plugin driven. This interface allows for operators to
pick and choose what is gathered and makes it easy for developers
pick and choose what is gathered as well as makes it easy for developers
to create new ways of generating metrics.
Plugin authorship is kept as simple as possible to promote people to develop
and submit new inputs.
and submit new plugins.
### Input Plugin Guidelines
### Plugin Guidelines
* A plugin must conform to the `telegraf.Input` interface.
* Input Plugins should call `inputs.Add` in their `init` function to register themselves.
* A plugin must conform to the `plugins.Plugin` interface.
* Each generated metric automatically has the name of the plugin that generated
it prepended. This is to keep plugins honest.
* Plugins should call `plugins.Add` in their `init` function to register themselves.
See below for a quick example.
* Input Plugins must be added to the
`github.com/influxdata/telegraf/plugins/inputs/all/all.go` file.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdb/telegraf/plugins/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
plugin can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this plugin does.
Let's say you've written a plugin that emits metrics about processes on the
current host.
### Plugin interface
### Input Plugin Example
```go
type Plugin interface {
    SampleConfig() string
    Description() string
    Gather(Accumulator) error
}

type Accumulator interface {
    Add(measurement string,
        value interface{},
        tags map[string]string,
        timestamp ...time.Time)

    AddFields(measurement string,
        fields map[string]interface{},
        tags map[string]string,
        timestamp ...time.Time)
}
```
### Accumulator
The way that a plugin emits metrics is by interacting with the Accumulator.
The `Add` function takes 3 arguments:
* **measurement**: A string description of the metric. For instance `bytes_read` or `faults`.
* **value**: A value for the metric. This accepts 5 different types of value:
* **int**: The most common type. All int types are accepted but favor using `int64`
Useful for counters, etc.
* **float**: Favor `float64`, useful for gauges, percentages, etc.
* **bool**: `true` or `false`, useful to indicate the presence of a state. `light_on`, etc.
* **string**: Typically used to indicate a message, or some kind of freeform information.
* **time.Time**: Useful for indicating when a state last occurred, for instance `light_on_since`.
* **tags**: This is a map of strings to strings that describes where the metric
came from or what it is about. For instance, the `net` plugin adds a tag named `"interface"`
set to the name of the network interface, like `"eth0"`.
The `AddFieldsWithTime` function allows multiple values for a point to be passed. The
values used are the same type profile as **value** above. The **timestamp** argument
allows a point to be registered as having occurred at an arbitrary time.
Let's say you've written a plugin that emits metrics about processes on the current host.
```go
type Process struct {
    CPUTime     float64
    MemoryBytes int64
    PID         int
}

func Gather(acc plugins.Accumulator) error {
    for _, process := range system.Processes() {
        tags := map[string]string{
            "pid": fmt.Sprintf("%d", process.Pid),
        }
        acc.Add("cpu", process.CPUTime, tags, time.Now())
        acc.Add("memory", process.MemoryBytes, tags, time.Now())
    }
    return nil
}
```
### Plugin Example
```go
package simple
// simple.go
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
import "github.com/influxdb/telegraf/plugins"
type Simple struct {
Ok bool
@@ -80,7 +111,7 @@ func (s *Simple) SampleConfig() string {
return "ok = true # indicate if everything is fine"
}
func (s *Simple) Gather(acc inputs.Accumulator) error {
func (s *Simple) Gather(acc plugins.Accumulator) error {
if s.Ok {
acc.Add("state", "pretty good", nil)
} else {
@@ -91,65 +122,19 @@ func (s *Simple) Gather(acc inputs.Accumulator) error {
}
func init() {
inputs.Add("simple", func() telegraf.Input { return &Simple{} })
plugins.Add("simple", func() plugins.Plugin { return &Simple{} })
}
```
## Input Plugins Accepting Arbitrary Data Formats
Some input plugins (such as
[exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec))
accept arbitrary input data formats. An overview of these data formats can
be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
In order to enable this, you must specify a `SetParser(parser parsers.Parser)`
function on the plugin object (see the exec plugin for an example), as well as
defining `parser` as a field of the object.
You can then utilize the parser internally in your plugin, parsing data as you
see fit. Telegraf's configuration layer will take care of instantiating and
creating the `Parser` object.
You should also add the following to your SampleConfig() return:
```toml
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
Below is the `Parser` interface.
```go
// Parser is an interface defining functions that a parser plugin must satisfy.
type Parser interface {
    // Parse takes a byte buffer separated by newlines
    // ie, `cpu.usage.idle 90\ncpu.usage.busy 10`
    // and parses it into telegraf metrics
    Parse(buf []byte) ([]telegraf.Metric, error)

    // ParseLine takes a single string metric
    // ie, "cpu.usage.idle 90"
    // and parses it into a telegraf metric.
    ParseLine(line string) (telegraf.Metric, error)
}
```
And you can view the code
[here.](https://github.com/influxdata/telegraf/blob/henrypfhu-master/plugins/parsers/registry.go)
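As a rough sketch of how such a plugin might hold and use its parser (the struct, the
shelling-out, and every name below are illustrative rather than the exec plugin's actual
code, and the `Description`/`SampleConfig`/registration boilerplate is omitted):
```go
package runandparse

import (
    "os/exec"

    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/plugins/parsers"
)

type RunAndParse struct {
    Command string

    parser parsers.Parser
}

// SetParser is called by Telegraf's configuration layer with the parser
// selected by the `data_format` option.
func (r *RunAndParse) SetParser(parser parsers.Parser) {
    r.parser = parser
}

func (r *RunAndParse) Gather(acc telegraf.Accumulator) error {
    out, err := exec.Command("sh", "-c", r.Command).Output()
    if err != nil {
        return err
    }
    metrics, err := r.parser.Parse(out)
    if err != nil {
        return err
    }
    for _, m := range metrics {
        acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
    }
    return nil
}
```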
## Service Input Plugins
## Service Plugins
This section is for developers who want to create new "service" collection
inputs. A service plugin differs from a regular plugin in that it operates
plugins. A service plugin differs from a regular plugin in that it operates
a background service while Telegraf is running. One example would be the `statsd`
plugin, which operates a statsd server.
Service Input Plugins are substantially more complicated than a regular plugin, as they
will require threads and locks to verify data integrity. Service Input Plugins should
Service Plugins are substantially more complicated than a regular plugin, as they
will require threads and locks to verify data integrity. Service Plugins should
be avoided unless there is no way to create their behavior with a regular plugin.
Their interface is quite similar to a regular plugin, with the addition of `Start()`
@@ -158,25 +143,49 @@ and `Stop()` methods.
### Service Plugin Guidelines
* Same as the `Plugin` guidelines, except that they must conform to the
`inputs.ServiceInput` interface.
`plugins.ServicePlugin` interface.
## Output Plugins
### Service Plugin interface
```go
type ServicePlugin interface {
    SampleConfig() string
    Description() string
    Gather(Accumulator) error
    Start() error
    Stop()
}
```
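For illustration, here is a minimal sketch of a service plugin conforming to the interface
above. It is not taken from any real plugin: the background goroutine is faked with a
ticker where a real plugin would read from a socket, and all of the names are made up:
```go
package listener

import (
    "sync"
    "time"

    "github.com/influxdb/telegraf/plugins"
)

type Listener struct {
    sync.Mutex
    values []float64
    done   chan struct{}
}

func (l *Listener) Description() string  { return "a sketch of a service plugin" }
func (l *Listener) SampleConfig() string { return "" }

// Start launches the background collection goroutine.
func (l *Listener) Start() error {
    l.done = make(chan struct{})
    go func() {
        ticker := time.NewTicker(time.Second)
        defer ticker.Stop()
        for {
            select {
            case <-l.done:
                return
            case <-ticker.C:
                // A real service plugin would read from a socket here.
                l.Lock()
                l.values = append(l.values, 1.0)
                l.Unlock()
            }
        }
    }()
    return nil
}

// Stop shuts the background goroutine down.
func (l *Listener) Stop() {
    close(l.done)
}

// Gather drains whatever the background goroutine has buffered.
func (l *Listener) Gather(acc plugins.Accumulator) error {
    l.Lock()
    defer l.Unlock()
    for _, v := range l.values {
        acc.Add("listener_value", v, nil)
    }
    l.values = nil
    return nil
}

func init() {
    plugins.Add("listener", func() plugins.Plugin { return &Listener{} })
}
```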
## Outputs
This section is for developers who want to create a new output sink. Outputs
are created in a similar manner as collection plugins, and their interface has
similar constructs.
### Output Plugin Guidelines
### Output Guidelines
* An output must conform to the `outputs.Output` interface.
* Outputs should call `outputs.Add` in their `init` function to register themselves.
See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdata/telegraf/plugins/outputs/all/all.go` file.
`github.com/influxdb/telegraf/outputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
output can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this output does.
### Output interface
```go
type Output interface {
    Connect() error
    Close() error
    Description() string
    SampleConfig() string
    Write(points []*client.Point) error
}
```
### Output Example
```go
@@ -184,10 +193,7 @@ package simpleoutput
// simpleoutput.go
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/outputs"
)
import "github.com/influxdb/telegraf/outputs"
type Simple struct {
Ok bool
@@ -211,7 +217,7 @@ func (s *Simple) Close() error {
return nil
}
func (s *Simple) Write(metrics []telegraf.Metric) error {
func (s *Simple) Write(points []*client.Point) error {
for _, pt := range points {
// write `pt` to the output sink here
}
@@ -219,39 +225,12 @@ func (s *Simple) Write(metrics []telegraf.Metric) error {
}
func init() {
outputs.Add("simpleoutput", func() telegraf.Output { return &Simple{} })
outputs.Add("simpleoutput", func() outputs.Output { return &Simple{} })
}
```
## Output Plugins Writing Arbitrary Data Formats
Some output plugins (such as
[file](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/file))
can write arbitrary output data formats. An overview of these data formats can
be found
[here](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md).
To enable this, define a
`SetSerializer(serializer serializers.Serializer)`
method on the plugin object (see the file plugin, or the sketch below, for an
example), and declare `serializer` as a field of the object.
You can then utilize the serializer internally in your plugin, serializing data
before it's written. Telegraf's configuration layer will take care of
instantiating and configuring the `Serializer` object.
You should also add the following to your SampleConfig() return:
```toml
## Data format to output. This can be "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```
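As a rough sketch of how an output might hold and use its serializer (this assumes the
`Serializer` interface of this era, which returns one serialized string per metric; the
names are illustrative, and the `Connect`/`Close`/`Description`/`SampleConfig`/registration
boilerplate is omitted):
```go
package printer

import (
    "os"

    "github.com/influxdata/telegraf"
    "github.com/influxdata/telegraf/plugins/serializers"
)

type Printer struct {
    serializer serializers.Serializer
}

// SetSerializer is called by Telegraf's configuration layer with the
// serializer selected by the `data_format` option.
func (p *Printer) SetSerializer(serializer serializers.Serializer) {
    p.serializer = serializer
}

func (p *Printer) Write(metrics []telegraf.Metric) error {
    for _, m := range metrics {
        lines, err := p.serializer.Serialize(m)
        if err != nil {
            return err
        }
        for _, line := range lines {
            // A real output would hand this to its sink; stdout keeps
            // the sketch self-contained.
            if _, err := os.Stdout.WriteString(line + "\n"); err != nil {
                return err
            }
        }
    }
    return nil
}
```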
## Service Output Plugins
## Service Outputs
This section is for developers who want to create a new "service" output. A
service output differs from a regular output in that it operates a background service
@@ -264,7 +243,21 @@ and `Stop()` methods.
### Service Output Guidelines
* Same as the `Output` guidelines, except that they must conform to the
`output.ServiceOutput` interface.
`plugins.ServiceOutput` interface.
### Service Output interface
```go
type ServiceOutput interface {
    Connect() error
    Close() error
    Description() string
    SampleConfig() string
    Write(points []*client.Point) error
    Start() error
    Stop()
}
```
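As a minimal illustration (not based on any real output; the listen address and names are
made up), a service output typically opens its listener in `Start()`, serves from a
snapshot guarded by a lock, and closes the listener in `Stop()`:
```go
package lastbatch

import (
    "fmt"
    "net"
    "net/http"
    "sync"

    "github.com/influxdb/influxdb/client/v2"
    "github.com/influxdb/telegraf/outputs"
)

// LastBatch is a sketch of a service output: Start opens an HTTP listener,
// Write replaces the snapshot it serves, and Stop closes the listener.
type LastBatch struct {
    sync.Mutex
    latest []*client.Point
    ln     net.Listener

    // Listen is the address to serve on; illustrative only.
    Listen string
}

func (l *LastBatch) Description() string  { return "serve the most recent batch over HTTP" }
func (l *LastBatch) SampleConfig() string { return `listen = "127.0.0.1:8186"` }
func (l *LastBatch) Connect() error       { return nil }
func (l *LastBatch) Close() error         { return nil }

func (l *LastBatch) Start() error {
    ln, err := net.Listen("tcp", l.Listen)
    if err != nil {
        return err
    }
    l.ln = ln
    go http.Serve(ln, l)
    return nil
}

func (l *LastBatch) Stop() {
    l.ln.Close()
}

func (l *LastBatch) Write(points []*client.Point) error {
    l.Lock()
    l.latest = points
    l.Unlock()
    return nil
}

// ServeHTTP reports how many points arrived in the last flush.
func (l *LastBatch) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    l.Lock()
    defer l.Unlock()
    fmt.Fprintf(w, "%d points in last batch\n", len(l.latest))
}

func init() {
    outputs.Add("lastbatch", func() outputs.Output { return &LastBatch{} })
}
```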
## Unit Tests
@@ -281,7 +274,7 @@ which would take some time to replicate.
To overcome this situation we've decided to use docker containers to provide a
fast and reproducible environment to test those services which require it.
For other situations
(i.e: https://github.com/influxdata/telegraf/blob/master/plugins/inputs/redis/redis_test.go)
(i.e: https://github.com/influxdb/telegraf/blob/master/plugins/redis/redis_test.go )
a simple mock will suffice.
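As a sketch of the mock approach, the test below exercises the `Simple` plugin from the
example earlier in this document with a hand-rolled accumulator. The mock stubs the method
set of the `Accumulator` interface reproduced later in this diff; depending on the version
you build against, the exact set of methods to stub may differ:
```go
package simple

import (
    "testing"
    "time"
)

// mockAccumulator records everything the plugin adds so the test can inspect it.
type mockAccumulator struct {
    added map[string]interface{}
}

func (m *mockAccumulator) Add(
    measurement string, value interface{},
    tags map[string]string, t ...time.Time,
) {
    m.added[measurement] = value
}

func (m *mockAccumulator) AddFields(
    measurement string, fields map[string]interface{},
    tags map[string]string, t ...time.Time,
) {
    for k, v := range fields {
        m.added[measurement+"_"+k] = v
    }
}

// No-op stubs so the mock satisfies the rest of the interface.
func (m *mockAccumulator) SetDefaultTags(tags map[string]string) {}
func (m *mockAccumulator) AddDefaultTag(key, value string)       {}
func (m *mockAccumulator) Prefix() string                        { return "" }
func (m *mockAccumulator) SetPrefix(prefix string)               {}
func (m *mockAccumulator) Debug() bool                           { return false }
func (m *mockAccumulator) SetDebug(enabled bool)                 {}

func TestSimpleGather(t *testing.T) {
    s := &Simple{Ok: true}
    acc := &mockAccumulator{added: make(map[string]interface{})}

    if err := s.Gather(acc); err != nil {
        t.Fatal(err)
    }
    if acc.added["state"] != "pretty good" {
        t.Errorf("expected state %q, got %v", "pretty good", acc.added["state"])
    }
}
```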
To execute Telegraf tests follow these simple steps:

59
Godeps

@@ -1,53 +1,52 @@
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git 617c801af238c3af2d9e72c5d4a0f02edad03ce5
github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git dbd8d5c40a582eb9adacde36b47932b3a3ad0034
github.com/Shopify/sarama 159e9990b0796511607dd0d7aaa3eb37d1829d16
github.com/Sirupsen/logrus 446d1c146faa8ed3f4218f056fcd165f6bcfda81
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
github.com/armon/go-metrics 06b60999766278efd6d2b5d8418a58c3d5b99e87
github.com/aws/aws-sdk-go 999b1591218c36d5050d1ba7266eba956e65965f
github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
github.com/boltdb/bolt b34b35ea8d06bb9ae69d9a349119252e4c1d8ee0
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
github.com/dancannon/gorethink a124c9663325ed9f7fb669d17c69961b59151e6e
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/go-resiliency f341fb4dca45128e4aa86389fa6a675d55fe25e1
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
github.com/fsouza/go-dockerclient 7177a9e3543b0891a5d91dbf7051e0f71455c8ef
github.com/go-ini/ini 9314fb0ef64171d6a3d0a4fa570dfa33441cba05
github.com/go-sql-driver/mysql d512f204a577a4ab037a1816604c48c9c13210be
github.com/gogo/protobuf e492fd34b12d0230755c45aa5fb1e1eea6a84aa9
github.com/golang/protobuf 68415e7123da32b07eab49c96d2c4d6158360e9b
github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
github.com/influxdata/influxdb ef571fc104dc24b77cd3710c156cd95e5cfd7aa5
github.com/hailocab/go-hostpool 0637eae892be221164aff5fcbccc57171aea6406
github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
github.com/hashicorp/raft d136cd15dfb7876fd7c89cad1995bc4f19ceb294
github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
github.com/influxdb/influxdb 69a7664f2d4b75aec300b7cbfc7e57c971721f04
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
github.com/klauspost/crc32 0aff1ea9c20474c3901672b5b6ead0ac611156de
github.com/lib/pq 11fc39a580a008f1f39bb3d11d984fb34ed778d9
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
github.com/nats-io/nats 6a83f1a633cfbfd90aa648ac99fb38c06a8b40df
github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/pborman/uuid cccd189d45f7ac3368a0d127efb7f4d08ae0b655
github.com/pmezard/go-difflib e8554b8641db39598be7f6342874b958f12ae1d4
github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
github.com/prometheus/common 56b90312e937d43b930f06a59bf0d6a4ae1944bc
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil e77438504d45b9985c99a75730fe65220ceea00e
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/shirou/gopsutil fc932d9090f13a84fb4b3cb8baa124610cab184c
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
github.com/stretchr/testify e3a8ff8ce36581f87a15341206f205b1da467059
github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/crypto 1f22c0103821b9390939b6776727195525381532
golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
golang.org/x/text 6d3c22c4525a4da167968fa2479be5524d2e8bd0
gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
golang.org/x/crypto 7b85b097bf7527677d54d3220065e966a0e3b613
golang.org/x/net 1796f9b8b7178e3c7587dff118d3bb9d37f9b0b3
gopkg.in/dancannon/gorethink.v1 a124c9663325ed9f7fb669d17c69961b59151e6e
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
gopkg.in/mgo.v2 e30de8ac9ae3b30df7065f766c71f88bba7d4e49
gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4
github.com/miekg/dns e0d84d97e59bcb6561eae269c4e94d25b66822cb


@@ -1,56 +0,0 @@
git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git 617c801af238c3af2d9e72c5d4a0f02edad03ce5
github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
github.com/go-ole/go-ole 50055884d646dd9434f16bbb5c9801749b9bafe4
github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
github.com/influxdata/influxdb ef571fc104dc24b77cd3710c156cd95e5cfd7aa5
github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
github.com/lxn/win 9a7734ea4db26bc593d52f6a8a957afdad39c5c1
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
github.com/miekg/dns e0d84d97e59bcb6561eae269c4e94d25b66822cb
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
github.com/nats-io/nats 6a83f1a633cfbfd90aa648ac99fb38c06a8b40df
github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil e77438504d45b9985c99a75730fe65220ceea00e
github.com/shirou/w32 ada3ba68f000aa1b58580e45c9d308fe0b7fc5c5
github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
golang.org/x/text 6d3c22c4525a4da167968fa2479be5524d2e8bd0
gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4


@@ -9,41 +9,36 @@ endif
# Standard Telegraf build
default: prepare build
# Windows build
windows: prepare-windows build-windows
# Only run the build (no dependency grabbing)
build:
go install -ldflags "-X main.Version=$(VERSION)" ./...
build-windows:
go build -o telegraf.exe -ldflags \
go build -o telegraf -ldflags \
"-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
build-for-docker:
CGO_ENABLED=0 GOOS=linux go build -o telegraf -ldflags \
"-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
# Build with race detector
dev: prepare
go build -race -ldflags "-X main.Version=$(VERSION)" ./...
go build -race -o telegraf -ldflags \
"-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
# run package script
package:
./scripts/build.py --package --version="$(VERSION)" --platform=linux --arch=all --upload
# Build linux 64-bit, 32-bit and arm architectures
build-linux-bins: prepare
GOARCH=amd64 GOOS=linux go build -o telegraf_linux_amd64 \
-ldflags "-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
GOARCH=386 GOOS=linux go build -o telegraf_linux_386 \
-ldflags "-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
GOARCH=arm GOOS=linux go build -o telegraf_linux_arm \
-ldflags "-X main.Version=$(VERSION)" \
./cmd/telegraf/telegraf.go
# Get dependencies and use gdm to checkout changesets
prepare:
go get ./...
go get github.com/sparrc/gdm
gdm restore
# Use the windows godeps file to prepare dependencies
prepare-windows:
go get github.com/sparrc/gdm
gdm restore -f Godeps_windows
# Run all docker containers necessary for unit tests
docker-run:
ifeq ($(UNAME), Darwin)
@@ -70,7 +65,6 @@ endif
docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
docker run --name riemann -p "5555:5555" -d blalor/riemann
docker run --name snmp -p "31161:31161/udp" -d titilambert/snmpsim
# Run docker containers necessary for CircleCI unit tests
docker-run-circle:
@@ -84,25 +78,21 @@ docker-run-circle:
docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
docker run --name riemann -p "5555:5555" -d blalor/riemann
docker run --name snmp -p "31161:31161/udp" -d titilambert/snmpsim
# Kill all docker containers, ignore errors
docker-kill:
-docker kill nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann snmp
-docker kill nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann
-docker rm nsq aerospike redis opentsdb rabbitmq postgres memcached mysql kafka mqtt riemann
# Run full unit tests using docker containers (includes setup and teardown)
test: vet docker-kill docker-run
test: docker-kill docker-run
# Sleeping for kafka leadership election, TSDB setup, etc.
sleep 60
# SUCCESS, running tests
go test -race ./...
# Run "short" unit tests
test-short: vet
test-short:
go test -short ./...
vet:
go vet ./...
.PHONY: test test-short vet build default
.PHONY: test

195
README.md

@@ -1,87 +1,48 @@
# Telegraf [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)
# Telegraf - A native agent for InfluxDB [![Circle CI](https://circleci.com/gh/influxdb/telegraf.svg?style=svg)](https://circleci.com/gh/influxdb/telegraf)
Telegraf is an agent written in Go for collecting metrics from the system it's
running on, or from other services, and writing them into InfluxDB or other
[outputs](https://github.com/influxdata/telegraf#supported-output-plugins).
running on, or from other services, and writing them into InfluxDB.
Design goals are to have a minimal memory footprint with a plugin system so
that developers in the community can easily add support for collecting metrics
from well known services (like Hadoop, Postgres, or Redis) and third party
APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
New input and output plugins are designed to be easy to contribute; we'll
eagerly accept pull requests and will manage the set of plugins that Telegraf
supports.
See the [contributing guide](CONTRIBUTING.md) for instructions on writing
new plugins.
We'll eagerly accept pull requests for new plugins and will manage the set of
plugins that Telegraf supports. See the
[contributing guide](CONTRIBUTING.md) for instructions on
writing new plugins.
## Installation:
NOTE: Telegraf 0.10.x is **not** backwards-compatible with previous versions
of telegraf, both in the database layout and the configuration file. 0.2.x
will continue to be supported; see below for download links.
For more details on the differences between Telegraf 0.2.x and 0.10.x, see
the [release blog post](https://influxdata.com/blog/announcing-telegraf-0-10-0/).
### Linux deb and rpm Packages:
### Linux deb and rpm packages:
Latest:
* http://get.influxdb.org/telegraf/telegraf_0.10.4-1_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.10.4-1.x86_64.rpm
Latest (arm):
* http://get.influxdb.org/telegraf/telegraf_0.10.4-1_arm.deb
* http://get.influxdb.org/telegraf/telegraf-0.10.4-1.arm.rpm
0.2.x:
* http://get.influxdb.org/telegraf/telegraf_0.2.4_amd64.deb
* http://get.influxdb.org/telegraf/telegraf-0.2.4-1.x86_64.rpm
##### Package Instructions:
##### Package instructions:
* Telegraf binary is installed in `/usr/bin/telegraf`
* Telegraf daemon configuration file is in `/etc/telegraf/telegraf.conf`
* Telegraf binary is installed in `/opt/telegraf/telegraf`
* Telegraf daemon configuration file is in `/etc/opt/telegraf/telegraf.conf`
* On sysv systems, the telegraf daemon can be controlled via
`service telegraf [action]`
* On systemd systems (such as Ubuntu 15+), the telegraf daemon can be
controlled via `systemctl [action] telegraf`
### yum/apt Repositories:
There is a yum/apt repo available for the whole InfluxData stack, see
[here](https://docs.influxdata.com/influxdb/v0.9/introduction/installation/#installation)
for instructions, replacing the `influxdb` package name with `telegraf`.
### Linux tarballs:
### Linux binaries:
Latest:
* http://get.influxdb.org/telegraf/telegraf-0.10.4-1_linux_amd64.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.4-1_linux_i386.tar.gz
* http://get.influxdb.org/telegraf/telegraf-0.10.4-1_linux_arm.tar.gz
0.2.x:
* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.2.4.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_386_0.2.4.tar.gz
* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.2.4.tar.gz
##### tarball Instructions:
##### Binary instructions:
To install the full directory structure with config file, run:
```
sudo tar -C / -zxvf ./telegraf-0.10.4-1_linux_amd64.tar.gz
```
To extract only the binary, run:
```
tar -zxvf telegraf-0.10.4-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
```
### Ansible Role:
Ansible role: https://github.com/rossmcdonald/telegraf
These are standalone binaries that can be unpacked and executed on any linux
system. They can be renamed and placed in a location such as
`/usr/local/bin` for convenience. A config file will need to be generated;
see "How to use it" below.
### OSX via Homebrew:
@@ -90,88 +51,63 @@ brew update
brew install telegraf
```
### Windows Binaries (EXPERIMENTAL)
Latest:
* http://get.influxdb.org/telegraf/telegraf-0.10.4-1_windows_amd64.zip
* http://get.influxdb.org/telegraf/telegraf-0.10.4-1_windows_i386.zip
### From Source:
Telegraf manages dependencies via [gdm](https://github.com/sparrc/gdm),
which gets installed via the Makefile
if you don't have it already. You also must build with golang version 1.5+.
if you don't have it already. You also must build with golang version 1.4+.
1. [Install Go](https://golang.org/doc/install)
2. [Setup your GOPATH](https://golang.org/doc/code.html#GOPATH)
3. Run `go get github.com/influxdata/telegraf`
4. Run `cd $GOPATH/src/github.com/influxdata/telegraf`
3. Run `go get github.com/influxdb/telegraf`
4. Run `cd $GOPATH/src/github.com/influxdb/telegraf`
5. Run `make`
## How to use it:
### How to use it:
```console
$ telegraf -help
Telegraf, The plugin-driven server agent for collecting and reporting metrics.
* Run `telegraf -sample-config > telegraf.conf` to create an initial configuration.
* Or run `telegraf -sample-config -filter cpu:mem -outputfilter influxdb > telegraf.conf`
to create a config file with only CPU and memory plugins defined, and InfluxDB
output defined.
* Edit the configuration to match your needs.
* Run `telegraf -config telegraf.conf -test` to output one full measurement
sample to STDOUT. NOTE: you may want to run as the telegraf user if you are using
the linux packages `sudo -u telegraf telegraf -config telegraf.conf -test`
* Run `telegraf -config telegraf.conf` to gather and send metrics to configured outputs.
* Run `telegraf -config telegraf.conf -filter system:swap`
to run telegraf with only the system & swap plugins defined in the config.
Usage:
## Telegraf Options
telegraf <flags>
Telegraf has a few options you can configure under the `agent` section of the
config.
The flags are:
-config <file> configuration file to load
-test gather metrics once, print them to stdout, and exit
-sample-config print out full sample configuration to stdout
-config-directory directory containing additional *.conf files
-input-filter filter the input plugins to enable, separator is :
-output-filter filter the output plugins to enable, separator is :
-usage print usage for a plugin, ie, 'telegraf -usage mysql'
-debug print metrics as they're generated to stdout
-quiet run in quiet mode
-version print the version to stdout
Examples:
# generate a telegraf config file:
telegraf -sample-config > telegraf.conf
# generate config with only cpu input & influxdb output plugins defined
telegraf -sample-config -input-filter cpu -output-filter influxdb
# run a single telegraf collection, outputting metrics to stdout
telegraf -config telegraf.conf -test
# run telegraf with all plugins defined in config file
telegraf -config telegraf.conf
# run telegraf, enabling the cpu & memory input, and influxdb output plugins
telegraf -config telegraf.conf -input-filter cpu:mem -output-filter influxdb
```
* **hostname**: The hostname is passed as a tag. By default this will be
the value returned by `hostname` on the machine running Telegraf.
You can override that value here.
* **interval**: How often to gather metrics. Uses a simple number +
unit parser, e.g. "10s" for 10 seconds or "5m" for 5 minutes.
* **debug**: Set to true to gather and send metrics to STDOUT as well as
InfluxDB.
## Configuration
See the [configuration guide](docs/CONFIGURATION.md) for a rundown of the more advanced
See the [configuration guide](CONFIGURATION.md) for a rundown of the more advanced
configuration options.
## Supported Input Plugins
## Supported Plugins
Telegraf currently has support for collecting metrics from many sources. For
more information on each, please look at the directory of the same name in
`plugins/inputs`.
**You can view usage instructions for each plugin by running**
`telegraf -usage <pluginname>`.
Currently implemented sources:
Telegraf currently has support for collecting metrics from:
* aerospike
* apache
* bcache
* couchdb
* disque
* dns query time
* docker
* dovecot
* elasticsearch
* exec (generic executable plugin, supporting JSON, influx and graphite)
* exec (generic JSON-emitting executable plugin)
* haproxy
* httpjson (generic JSON-emitting http service plugin)
* influxdb
@@ -180,32 +116,21 @@ Currently implemented sources:
* lustre2
* mailchimp
* memcached
* mesos
* mongodb
* mysql
* net_response
* nginx
* nsq
* phpfpm
* phusion passenger
* ping
* postgresql
* powerdns
* procstat
* prometheus
* puppetagent
* rabbitmq
* raindrops
* redis
* rethinkdb
* riak
* sensors (only available if built from source)
* snmp
* sql server (microsoft)
* twemproxy
* zfs
* zookeeper
* win_perf_counters (windows performance counters)
* system
* cpu
* mem
@@ -215,36 +140,32 @@ Currently implemented sources:
* diskio
* swap
Telegraf can also collect metrics via the following service plugins:
## Supported Service Plugins
Telegraf can collect metrics via the following services:
* statsd
* mqtt_consumer
* kafka_consumer
* nats_consumer
* github_webhooks
We'll be adding support for many more over the coming months. Read on if you
want to add support for another service or third-party API.
## Supported Output Plugins
## Supported Outputs
* influxdb
* amon
* amqp
* aws kinesis
* aws cloudwatch
* datadog
* graphite
* kafka
* librato
* mqtt
* nsq
* kafka
* datadog
* opentsdb
* amqp (rabbitmq)
* mqtt
* librato
* prometheus
* amon
* riemann
## Contributing
Please see the
[contributing guide](CONTRIBUTING.md)
for details on contributing a plugin to Telegraf.
for details on contributing a plugin or output to Telegraf.


@@ -1,21 +1,184 @@
package telegraf
import "time"
import (
"fmt"
"log"
"math"
"sync"
"time"
"github.com/influxdb/telegraf/internal/config"
"github.com/influxdb/influxdb/client/v2"
)
type Accumulator interface {
// Create a point with a value, decorating it with tags
// NOTE: tags is expected to be owned by the caller, don't mutate
// it after passing to Add.
Add(measurement string,
value interface{},
tags map[string]string,
t ...time.Time)
Add(measurement string, value interface{},
tags map[string]string, t ...time.Time)
AddFields(measurement string, fields map[string]interface{},
tags map[string]string, t ...time.Time)
AddFields(measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time)
SetDefaultTags(tags map[string]string)
AddDefaultTag(key, value string)
Prefix() string
SetPrefix(prefix string)
Debug() bool
SetDebug(enabled bool)
}
func NewAccumulator(
pluginConfig *config.PluginConfig,
points chan *client.Point,
) Accumulator {
acc := accumulator{}
acc.points = points
acc.pluginConfig = pluginConfig
return &acc
}
type accumulator struct {
sync.Mutex
points chan *client.Point
defaultTags map[string]string
debug bool
pluginConfig *config.PluginConfig
prefix string
}
func (ac *accumulator) Add(
measurement string,
value interface{},
tags map[string]string,
t ...time.Time,
) {
fields := make(map[string]interface{})
fields["value"] = value
ac.AddFields(measurement, fields, tags, t...)
}
func (ac *accumulator) AddFields(
measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time,
) {
if !ac.pluginConfig.Filter.ShouldTagsPass(tags) {
return
}
// Override measurement name if set
if len(ac.pluginConfig.NameOverride) != 0 {
measurement = ac.pluginConfig.NameOverride
}
// Apply measurement prefix and suffix if set
if len(ac.pluginConfig.MeasurementPrefix) != 0 {
measurement = ac.pluginConfig.MeasurementPrefix + measurement
}
if len(ac.pluginConfig.MeasurementSuffix) != 0 {
measurement = measurement + ac.pluginConfig.MeasurementSuffix
}
if tags == nil {
tags = make(map[string]string)
}
// Apply plugin-wide tags if set
for k, v := range ac.pluginConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
result := make(map[string]interface{})
for k, v := range fields {
// Filter out any filtered fields
if ac.pluginConfig != nil {
if !ac.pluginConfig.Filter.ShouldPass(k) {
continue
}
}
result[k] = v
// Validate uint64 and float64 fields
switch val := v.(type) {
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
result[k] = int64(val)
} else {
result[k] = int64(9223372036854775807)
}
case float64:
// NaNs are invalid values in influxdb, skip measurement
if math.IsNaN(val) || math.IsInf(val, 0) {
if ac.debug {
log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
continue
}
}
}
fields = nil
if len(result) == 0 {
return
}
var timestamp time.Time
if len(t) > 0 {
timestamp = t[0]
} else {
timestamp = time.Now()
}
if ac.prefix != "" {
measurement = ac.prefix + measurement
}
pt, err := client.NewPoint(measurement, tags, result, timestamp)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return
}
if ac.debug {
fmt.Println("> " + pt.String())
}
ac.points <- pt
}
func (ac *accumulator) SetDefaultTags(tags map[string]string) {
ac.defaultTags = tags
}
func (ac *accumulator) AddDefaultTag(key, value string) {
ac.defaultTags[key] = value
}
func (ac *accumulator) Prefix() string {
return ac.prefix
}
func (ac *accumulator) SetPrefix(prefix string) {
ac.prefix = prefix
}
func (ac *accumulator) Debug() bool {
return ac.debug
}
func (ac *accumulator) SetDebug(debug bool) {
ac.debug = debug
}

397
agent.go Normal file

@@ -0,0 +1,397 @@
package telegraf
import (
"crypto/rand"
"fmt"
"log"
"math/big"
"os"
"sync"
"time"
"github.com/influxdb/telegraf/internal/config"
"github.com/influxdb/telegraf/outputs"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/influxdb/client/v2"
)
// Agent runs telegraf and collects data based on the given config
type Agent struct {
Config *config.Config
}
// NewAgent returns an Agent struct based off the given Config
func NewAgent(config *config.Config) (*Agent, error) {
a := &Agent{
Config: config,
}
if a.Config.Agent.Hostname == "" {
hostname, err := os.Hostname()
if err != nil {
return nil, err
}
a.Config.Agent.Hostname = hostname
}
config.Tags["host"] = a.Config.Agent.Hostname
return a, nil
}
// Connect connects to all configured outputs
func (a *Agent) Connect() error {
for _, o := range a.Config.Outputs {
switch ot := o.Output.(type) {
case outputs.ServiceOutput:
if err := ot.Start(); err != nil {
log.Printf("Service for output %s failed to start, exiting\n%s\n",
o.Name, err.Error())
return err
}
}
if a.Config.Agent.Debug {
log.Printf("Attempting connection to output: %s\n", o.Name)
}
err := o.Output.Connect()
if err != nil {
log.Printf("Failed to connect to output %s, retrying in 15s\n", o.Name)
time.Sleep(15 * time.Second)
err = o.Output.Connect()
if err != nil {
return err
}
}
if a.Config.Agent.Debug {
log.Printf("Successfully connected to output: %s\n", o.Name)
}
}
return nil
}
// Close closes the connection to all configured outputs
func (a *Agent) Close() error {
var err error
for _, o := range a.Config.Outputs {
err = o.Output.Close()
switch ot := o.Output.(type) {
case outputs.ServiceOutput:
ot.Stop()
}
}
return err
}
// gatherParallel runs the plugins that are using the same reporting interval
// as the telegraf agent.
func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
var wg sync.WaitGroup
start := time.Now()
counter := 0
for _, plugin := range a.Config.Plugins {
if plugin.Config.Interval != 0 {
continue
}
wg.Add(1)
counter++
go func(plugin *config.RunningPlugin) {
defer wg.Done()
acc := NewAccumulator(plugin.Config, pointChan)
acc.SetDebug(a.Config.Agent.Debug)
// acc.SetPrefix(plugin.Name + "_")
acc.SetDefaultTags(a.Config.Tags)
if err := plugin.Plugin.Gather(acc); err != nil {
log.Printf("Error in plugin [%s]: %s", plugin.Name, err)
}
}(plugin)
}
if counter == 0 {
return nil
}
wg.Wait()
elapsed := time.Since(start)
log.Printf("Gathered metrics, (%s interval), from %d plugins in %s\n",
a.Config.Agent.Interval, counter, elapsed)
return nil
}
// gatherSeparate runs the plugins that have been configured with their own
// reporting interval.
func (a *Agent) gatherSeparate(
shutdown chan struct{},
plugin *config.RunningPlugin,
pointChan chan *client.Point,
) error {
ticker := time.NewTicker(plugin.Config.Interval)
for {
var outerr error
start := time.Now()
acc := NewAccumulator(plugin.Config, pointChan)
acc.SetDebug(a.Config.Agent.Debug)
// acc.SetPrefix(plugin.Name + "_")
acc.SetDefaultTags(a.Config.Tags)
if err := plugin.Plugin.Gather(acc); err != nil {
log.Printf("Error in plugin [%s]: %s", plugin.Name, err)
}
elapsed := time.Since(start)
log.Printf("Gathered metrics, (separate %s interval), from %s in %s\n",
plugin.Config.Interval, plugin.Name, elapsed)
if outerr != nil {
return outerr
}
select {
case <-shutdown:
return nil
case <-ticker.C:
continue
}
}
}
// Test verifies that we can 'Gather' from all plugins with their configured
// Config struct
func (a *Agent) Test() error {
shutdown := make(chan struct{})
defer close(shutdown)
pointChan := make(chan *client.Point)
// dummy receiver for the point channel
go func() {
for {
select {
case <-pointChan:
// do nothing
case <-shutdown:
return
}
}
}()
for _, plugin := range a.Config.Plugins {
acc := NewAccumulator(plugin.Config, pointChan)
acc.SetDebug(true)
// acc.SetPrefix(plugin.Name + "_")
fmt.Printf("* Plugin: %s, Collection 1\n", plugin.Name)
if plugin.Config.Interval != 0 {
fmt.Printf("* Internal: %s\n", plugin.Config.Interval)
}
if err := plugin.Plugin.Gather(acc); err != nil {
return err
}
// Special instructions for some plugins. cpu, for example, needs to be
// run twice in order to return cpu usage percentages.
switch plugin.Name {
case "cpu", "mongodb":
time.Sleep(500 * time.Millisecond)
fmt.Printf("* Plugin: %s, Collection 2\n", plugin.Name)
if err := plugin.Plugin.Gather(acc); err != nil {
return err
}
}
}
return nil
}
// writeOutput writes a list of points to a single output, with retries.
// Optionally takes a `done` channel to indicate that it is done writing.
func (a *Agent) writeOutput(
points []*client.Point,
ro *config.RunningOutput,
shutdown chan struct{},
wg *sync.WaitGroup,
) {
defer wg.Done()
if len(points) == 0 {
return
}
retry := 0
retries := a.Config.Agent.FlushRetries
start := time.Now()
for {
filtered := ro.FilterPoints(points)
err := ro.Output.Write(filtered)
if err == nil {
// Write successful
elapsed := time.Since(start)
log.Printf("Flushed %d metrics to output %s in %s\n",
len(filtered), ro.Name, elapsed)
return
}
select {
case <-shutdown:
return
default:
if retry >= retries {
// No more retries
msg := "FATAL: Write to output [%s] failed %d times, dropping" +
" %d metrics\n"
log.Printf(msg, ro.Name, retries+1, len(points))
return
} else if err != nil {
// Sleep for a retry
log.Printf("Error in output [%s]: %s, retrying in %s",
ro.Name, err.Error(), a.Config.Agent.FlushInterval.Duration)
time.Sleep(a.Config.Agent.FlushInterval.Duration)
}
}
retry++
}
}
// flush writes a list of points to all configured outputs
func (a *Agent) flush(
points []*client.Point,
shutdown chan struct{},
wait bool,
) {
var wg sync.WaitGroup
for _, o := range a.Config.Outputs {
wg.Add(1)
go a.writeOutput(points, o, shutdown, &wg)
}
if wait {
wg.Wait()
}
}
// flusher monitors the points input channel and flushes on the minimum interval
func (a *Agent) flusher(shutdown chan struct{}, pointChan chan *client.Point) error {
// Inelegant, but this sleep is to allow the Gather threads to run, so that
// the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 100)
ticker := time.NewTicker(a.Config.Agent.FlushInterval.Duration)
points := make([]*client.Point, 0)
for {
select {
case <-shutdown:
log.Println("Hang on, flushing any cached points before shutdown")
a.flush(points, shutdown, true)
return nil
case <-ticker.C:
a.flush(points, shutdown, false)
points = make([]*client.Point, 0)
case pt := <-pointChan:
points = append(points, pt)
}
}
}
// jitterInterval applies the interval jitter to the flush interval using
// crypto/rand number generator
func jitterInterval(ininterval, injitter time.Duration) time.Duration {
var jitter int64
outinterval := ininterval
if injitter.Nanoseconds() != 0 {
maxjitter := big.NewInt(injitter.Nanoseconds())
if j, err := rand.Int(rand.Reader, maxjitter); err == nil {
jitter = j.Int64()
}
outinterval = time.Duration(jitter + ininterval.Nanoseconds())
}
if outinterval.Nanoseconds() < time.Duration(500*time.Millisecond).Nanoseconds() {
log.Printf("Flush interval %s too low, setting to 500ms\n", outinterval)
outinterval = time.Duration(500 * time.Millisecond)
}
return outinterval
}
// Run runs the agent daemon, gathering every Interval
func (a *Agent) Run(shutdown chan struct{}) error {
var wg sync.WaitGroup
a.Config.Agent.FlushInterval.Duration = jitterInterval(a.Config.Agent.FlushInterval.Duration,
a.Config.Agent.FlushJitter.Duration)
log.Printf("Agent Config: Interval:%s, Debug:%#v, Hostname:%#v, "+
"Flush Interval:%s\n",
a.Config.Agent.Interval, a.Config.Agent.Debug,
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval)
// channel shared between all plugin threads for accumulating points
pointChan := make(chan *client.Point, 1000)
// Round collection to nearest interval by sleeping
if a.Config.Agent.RoundInterval {
i := int64(a.Config.Agent.Interval.Duration)
time.Sleep(time.Duration(i - (time.Now().UnixNano() % i)))
}
ticker := time.NewTicker(a.Config.Agent.Interval.Duration)
wg.Add(1)
go func() {
defer wg.Done()
if err := a.flusher(shutdown, pointChan); err != nil {
log.Printf("Flusher routine failed, exiting: %s\n", err.Error())
close(shutdown)
}
}()
for _, plugin := range a.Config.Plugins {
// Start service of any ServicePlugins
switch p := plugin.Plugin.(type) {
case plugins.ServicePlugin:
if err := p.Start(); err != nil {
log.Printf("Service for plugin %s failed to start, exiting\n%s\n",
plugin.Name, err.Error())
return err
}
defer p.Stop()
}
// Special handling for plugins that have their own collection interval
// configured. Default intervals are handled below with gatherParallel
if plugin.Config.Interval != 0 {
wg.Add(1)
go func(plugin *config.RunningPlugin) {
defer wg.Done()
if err := a.gatherSeparate(shutdown, plugin, pointChan); err != nil {
log.Printf(err.Error())
}
}(plugin)
}
}
defer wg.Wait()
for {
if err := a.gatherParallel(pointChan); err != nil {
log.Printf(err.Error())
}
select {
case <-shutdown:
return nil
case <-ticker.C:
continue
}
}
}


@@ -1,172 +0,0 @@
package agent
import (
"fmt"
"log"
"math"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/models"
)
func NewAccumulator(
inputConfig *internal_models.InputConfig,
metrics chan telegraf.Metric,
) *accumulator {
acc := accumulator{}
acc.metrics = metrics
acc.inputConfig = inputConfig
return &acc
}
type accumulator struct {
sync.Mutex
metrics chan telegraf.Metric
defaultTags map[string]string
debug bool
inputConfig *internal_models.InputConfig
prefix string
}
func (ac *accumulator) Add(
measurement string,
value interface{},
tags map[string]string,
t ...time.Time,
) {
fields := make(map[string]interface{})
fields["value"] = value
if !ac.inputConfig.Filter.ShouldNamePass(measurement) {
return
}
ac.AddFields(measurement, fields, tags, t...)
}
func (ac *accumulator) AddFields(
measurement string,
fields map[string]interface{},
tags map[string]string,
t ...time.Time,
) {
if len(fields) == 0 || len(measurement) == 0 {
return
}
if !ac.inputConfig.Filter.ShouldNamePass(measurement) {
return
}
if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
return
}
// Override measurement name if set
if len(ac.inputConfig.NameOverride) != 0 {
measurement = ac.inputConfig.NameOverride
}
// Apply measurement prefix and suffix if set
if len(ac.inputConfig.MeasurementPrefix) != 0 {
measurement = ac.inputConfig.MeasurementPrefix + measurement
}
if len(ac.inputConfig.MeasurementSuffix) != 0 {
measurement = measurement + ac.inputConfig.MeasurementSuffix
}
if tags == nil {
tags = make(map[string]string)
}
// Apply plugin-wide tags if set
for k, v := range ac.inputConfig.Tags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
// Apply daemon-wide tags if set
for k, v := range ac.defaultTags {
if _, ok := tags[k]; !ok {
tags[k] = v
}
}
result := make(map[string]interface{})
for k, v := range fields {
// Filter out any filtered fields
if ac.inputConfig != nil {
if !ac.inputConfig.Filter.ShouldFieldsPass(k) {
continue
}
}
result[k] = v
// Validate uint64 and float64 fields
switch val := v.(type) {
case uint64:
// InfluxDB does not support writing uint64
if val < uint64(9223372036854775808) {
result[k] = int64(val)
} else {
result[k] = int64(9223372036854775807)
}
case float64:
// NaNs are invalid values in influxdb, skip measurement
if math.IsNaN(val) || math.IsInf(val, 0) {
if ac.debug {
log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
"field, skipping",
measurement, k)
}
continue
}
}
}
fields = nil
if len(result) == 0 {
return
}
var timestamp time.Time
if len(t) > 0 {
timestamp = t[0]
} else {
timestamp = time.Now()
}
if ac.prefix != "" {
measurement = ac.prefix + measurement
}
m, err := telegraf.NewMetric(measurement, tags, result, timestamp)
if err != nil {
log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
return
}
if ac.debug {
fmt.Println("> " + m.String())
}
ac.metrics <- m
}
func (ac *accumulator) Debug() bool {
return ac.debug
}
func (ac *accumulator) SetDebug(debug bool) {
ac.debug = debug
}
func (ac *accumulator) setDefaultTags(tags map[string]string) {
ac.defaultTags = tags
}
func (ac *accumulator) addDefaultTag(key, value string) {
ac.defaultTags[key] = value
}


@@ -1,387 +0,0 @@
package agent
import (
cryptorand "crypto/rand"
"fmt"
"log"
"math/big"
"math/rand"
"os"
"runtime"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/config"
"github.com/influxdata/telegraf/internal/models"
)
// Agent runs telegraf and collects data based on the given config
type Agent struct {
Config *config.Config
}
// NewAgent returns an Agent struct based off the given Config
func NewAgent(config *config.Config) (*Agent, error) {
a := &Agent{
Config: config,
}
if a.Config.Agent.Hostname == "" {
hostname, err := os.Hostname()
if err != nil {
return nil, err
}
a.Config.Agent.Hostname = hostname
}
config.Tags["host"] = a.Config.Agent.Hostname
return a, nil
}
// Connect connects to all configured outputs
func (a *Agent) Connect() error {
for _, o := range a.Config.Outputs {
o.Quiet = a.Config.Agent.Quiet
switch ot := o.Output.(type) {
case telegraf.ServiceOutput:
if err := ot.Start(); err != nil {
log.Printf("Service for output %s failed to start, exiting\n%s\n",
o.Name, err.Error())
return err
}
}
if a.Config.Agent.Debug {
log.Printf("Attempting connection to output: %s\n", o.Name)
}
err := o.Output.Connect()
if err != nil {
log.Printf("Failed to connect to output %s, retrying in 15s, "+
"error was '%s' \n", o.Name, err)
time.Sleep(15 * time.Second)
err = o.Output.Connect()
if err != nil {
return err
}
}
if a.Config.Agent.Debug {
log.Printf("Successfully connected to output: %s\n", o.Name)
}
}
return nil
}
// Close closes the connection to all configured outputs
func (a *Agent) Close() error {
var err error
for _, o := range a.Config.Outputs {
err = o.Output.Close()
switch ot := o.Output.(type) {
case telegraf.ServiceOutput:
ot.Stop()
}
}
return err
}
func panicRecover(input *internal_models.RunningInput) {
if err := recover(); err != nil {
trace := make([]byte, 2048)
runtime.Stack(trace, true)
log.Printf("FATAL: Input [%s] panicked: %s, Stack:\n%s\n",
input.Name, err, trace)
log.Println("PLEASE REPORT THIS PANIC ON GITHUB with " +
"stack trace, configuration, and OS information: " +
"https://github.com/influxdata/telegraf/issues/new")
}
}
// gatherParallel runs the inputs that are using the same reporting interval
// as the telegraf agent.
func (a *Agent) gatherParallel(metricC chan telegraf.Metric) error {
var wg sync.WaitGroup
start := time.Now()
counter := 0
jitter := a.Config.Agent.CollectionJitter.Duration.Nanoseconds()
for _, input := range a.Config.Inputs {
if input.Config.Interval != 0 {
continue
}
wg.Add(1)
counter++
go func(input *internal_models.RunningInput) {
defer panicRecover(input)
defer wg.Done()
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.setDefaultTags(a.Config.Tags)
if jitter != 0 {
nanoSleep := rand.Int63n(jitter)
d, err := time.ParseDuration(fmt.Sprintf("%dns", nanoSleep))
if err != nil {
log.Printf("Jittering collection interval failed for plugin %s",
input.Name)
} else {
time.Sleep(d)
}
}
if err := input.Input.Gather(acc); err != nil {
log.Printf("Error in input [%s]: %s", input.Name, err)
}
}(input)
}
if counter == 0 {
return nil
}
wg.Wait()
elapsed := time.Since(start)
if !a.Config.Agent.Quiet {
log.Printf("Gathered metrics, (%s interval), from %d inputs in %s\n",
a.Config.Agent.Interval.Duration, counter, elapsed)
}
return nil
}
// gatherSeparate runs the inputs that have been configured with their own
// reporting interval.
func (a *Agent) gatherSeparate(
shutdown chan struct{},
input *internal_models.RunningInput,
metricC chan telegraf.Metric,
) error {
defer panicRecover(input)
ticker := time.NewTicker(input.Config.Interval)
for {
var outerr error
start := time.Now()
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.setDefaultTags(a.Config.Tags)
if err := input.Input.Gather(acc); err != nil {
log.Printf("Error in input [%s]: %s", input.Name, err)
}
elapsed := time.Since(start)
if !a.Config.Agent.Quiet {
log.Printf("Gathered metrics, (separate %s interval), from %s in %s\n",
input.Config.Interval, input.Name, elapsed)
}
if outerr != nil {
return outerr
}
select {
case <-shutdown:
return nil
case <-ticker.C:
continue
}
}
}
// Test verifies that we can 'Gather' from all inputs with their configured
// Config struct
func (a *Agent) Test() error {
shutdown := make(chan struct{})
defer close(shutdown)
metricC := make(chan telegraf.Metric)
// dummy receiver for the point channel
go func() {
for {
select {
case <-metricC:
// do nothing
case <-shutdown:
return
}
}
}()
for _, input := range a.Config.Inputs {
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(true)
fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
if input.Config.Interval != 0 {
fmt.Printf("* Internal: %s\n", input.Config.Interval)
}
if err := input.Input.Gather(acc); err != nil {
return err
}
// Special instructions for some inputs. cpu, for example, needs to be
// run twice in order to return cpu usage percentages.
switch input.Name {
case "cpu", "mongodb", "procstat":
time.Sleep(500 * time.Millisecond)
fmt.Printf("* Plugin: %s, Collection 2\n", input.Name)
if err := input.Input.Gather(acc); err != nil {
return err
}
}
}
return nil
}
// flush writes a list of metrics to all configured outputs
func (a *Agent) flush() {
var wg sync.WaitGroup
wg.Add(len(a.Config.Outputs))
for _, o := range a.Config.Outputs {
go func(output *internal_models.RunningOutput) {
defer wg.Done()
err := output.Write()
if err != nil {
log.Printf("Error writing to output [%s]: %s\n",
output.Name, err.Error())
}
}(o)
}
wg.Wait()
}
// flusher monitors the metrics input channel and flushes on the minimum interval
func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) error {
// Inelegant, but this sleep is to allow the Gather threads to run, so that
// the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 200)
ticker := time.NewTicker(a.Config.Agent.FlushInterval.Duration)
for {
select {
case <-shutdown:
log.Println("Hang on, flushing any cached metrics before shutdown")
a.flush()
return nil
case <-ticker.C:
a.flush()
case m := <-metricC:
for _, o := range a.Config.Outputs {
o.AddMetric(m)
}
}
}
}
// jitterInterval applies the interval jitter to the flush interval using
// crypto/rand number generator
func jitterInterval(ininterval, injitter time.Duration) time.Duration {
var jitter int64
outinterval := ininterval
if injitter.Nanoseconds() != 0 {
maxjitter := big.NewInt(injitter.Nanoseconds())
if j, err := cryptorand.Int(cryptorand.Reader, maxjitter); err == nil {
jitter = j.Int64()
}
outinterval = time.Duration(jitter + ininterval.Nanoseconds())
}
if outinterval.Nanoseconds() < time.Duration(500*time.Millisecond).Nanoseconds() {
log.Printf("Flush interval %s too low, setting to 500ms\n", outinterval)
outinterval = time.Duration(500 * time.Millisecond)
}
return outinterval
}
// Run runs the agent daemon, gathering every Interval
func (a *Agent) Run(shutdown chan struct{}) error {
var wg sync.WaitGroup
a.Config.Agent.FlushInterval.Duration = jitterInterval(
a.Config.Agent.FlushInterval.Duration,
a.Config.Agent.FlushJitter.Duration)
log.Printf("Agent Config: Interval:%s, Debug:%#v, Quiet:%#v, Hostname:%#v, "+
"Flush Interval:%s \n",
a.Config.Agent.Interval.Duration, a.Config.Agent.Debug, a.Config.Agent.Quiet,
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)
// channel shared between all input threads for accumulating metrics
metricC := make(chan telegraf.Metric, 10000)
for _, input := range a.Config.Inputs {
// Start service of any ServicePlugins
switch p := input.Input.(type) {
case telegraf.ServiceInput:
acc := NewAccumulator(input.Config, metricC)
acc.SetDebug(a.Config.Agent.Debug)
acc.setDefaultTags(a.Config.Tags)
if err := p.Start(acc); err != nil {
log.Printf("Service for input %s failed to start, exiting\n%s\n",
input.Name, err.Error())
return err
}
defer p.Stop()
}
}
// Round collection to nearest interval by sleeping
if a.Config.Agent.RoundInterval {
i := int64(a.Config.Agent.Interval.Duration)
time.Sleep(time.Duration(i - (time.Now().UnixNano() % i)))
}
ticker := time.NewTicker(a.Config.Agent.Interval.Duration)
wg.Add(1)
go func() {
defer wg.Done()
if err := a.flusher(shutdown, metricC); err != nil {
log.Printf("Flusher routine failed, exiting: %s\n", err.Error())
close(shutdown)
}
}()
for _, input := range a.Config.Inputs {
// Special handling for inputs that have their own collection interval
// configured. Default intervals are handled below with gatherParallel
if input.Config.Interval != 0 {
wg.Add(1)
go func(input *internal_models.RunningInput) {
defer wg.Done()
if err := a.gatherSeparate(shutdown, input, metricC); err != nil {
log.Printf(err.Error())
}
}(input)
}
}
defer wg.Wait()
for {
if err := a.gatherParallel(metricC); err != nil {
log.Printf(err.Error())
}
select {
case <-shutdown:
return nil
case <-ticker.C:
continue
}
}
}


@@ -1,103 +1,84 @@
package agent
package telegraf
import (
"github.com/stretchr/testify/assert"
"testing"
"time"
"github.com/influxdata/telegraf/internal/config"
"github.com/influxdb/telegraf/internal/config"
// needing to load the plugins
_ "github.com/influxdata/telegraf/plugins/inputs/all"
_ "github.com/influxdb/telegraf/plugins/all"
// needing to load the outputs
_ "github.com/influxdata/telegraf/plugins/outputs/all"
_ "github.com/influxdb/telegraf/outputs/all"
)
func TestAgent_LoadPlugin(t *testing.T) {
c := config.NewConfig()
c.InputFilters = []string{"mysql"}
err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.PluginFilters = []string{"mysql"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ := NewAgent(c)
assert.Equal(t, 1, len(a.Config.Inputs))
assert.Equal(t, 1, len(a.Config.Plugins))
c = config.NewConfig()
c.InputFilters = []string{"foo"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.PluginFilters = []string{"foo"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 0, len(a.Config.Inputs))
assert.Equal(t, 0, len(a.Config.Plugins))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "foo"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.PluginFilters = []string{"mysql", "foo"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 1, len(a.Config.Inputs))
assert.Equal(t, 1, len(a.Config.Plugins))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "redis"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.PluginFilters = []string{"mysql", "redis"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Inputs))
assert.Equal(t, 2, len(a.Config.Plugins))
c = config.NewConfig()
c.InputFilters = []string{"mysql", "foo", "redis", "bar"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.PluginFilters = []string{"mysql", "foo", "redis", "bar"}
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Inputs))
assert.Equal(t, 2, len(a.Config.Plugins))
}
func TestAgent_LoadOutput(t *testing.T) {
c := config.NewConfig()
c.OutputFilters = []string{"influxdb"}
err := c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ := NewAgent(c)
assert.Equal(t, 2, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"kafka"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
a, _ = NewAgent(c)
assert.Equal(t, 1, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"foo"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 0, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "foo"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 2, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "kafka"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
assert.Equal(t, 3, len(c.Outputs))
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))
c = config.NewConfig()
c.OutputFilters = []string{"influxdb", "foo", "kafka", "bar"}
err = c.LoadConfig("../internal/config/testdata/telegraf-agent.toml")
assert.NoError(t, err)
c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
a, _ = NewAgent(c)
assert.Equal(t, 3, len(a.Config.Outputs))
}

View File

@@ -4,17 +4,16 @@ machine:
post:
- sudo service zookeeper stop
- go version
- go version | grep 1.5.3 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.5.3.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.5.3.linux-amd64.tar.gz
- go version | grep 1.5.1 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.5.1.linux-amd64.tar.gz
- go version
dependencies:
cache_directories:
- "~/telegraf-build/src"
override:
- docker info
post:
- gem install fpm
- sudo apt-get install -y rpm python-boto
test:
override:

View File

@@ -7,235 +7,146 @@ import (
"os"
"os/signal"
"strings"
"syscall"
"github.com/influxdata/telegraf/agent"
"github.com/influxdata/telegraf/internal/config"
_ "github.com/influxdata/telegraf/plugins/inputs/all"
_ "github.com/influxdata/telegraf/plugins/outputs/all"
"github.com/influxdb/telegraf"
"github.com/influxdb/telegraf/internal/config"
_ "github.com/influxdb/telegraf/outputs/all"
_ "github.com/influxdb/telegraf/plugins/all"
)
var fDebug = flag.Bool("debug", false,
"show metrics as they're generated to stdout")
var fQuiet = flag.Bool("quiet", false,
"run in quiet mode")
var fTest = flag.Bool("test", false, "gather metrics, print them out, and exit")
var fConfig = flag.String("config", "", "configuration file to load")
var fConfigDirectory = flag.String("config-directory", "",
var fConfigDirectory = flag.String("configdirectory", "",
"directory containing additional *.conf files")
var fVersion = flag.Bool("version", false, "display the version")
var fSampleConfig = flag.Bool("sample-config", false,
"print out full sample configuration")
var fPidfile = flag.String("pidfile", "", "file to write our pid to")
var fInputFilters = flag.String("input-filter", "",
"filter the inputs to enable, separator is :")
var fOutputFilters = flag.String("output-filter", "",
var fPLuginFilters = flag.String("filter", "",
"filter the plugins to enable, separator is :")
var fOutputFilters = flag.String("outputfilter", "",
"filter the outputs to enable, separator is :")
var fUsage = flag.String("usage", "",
"print usage for a plugin, ie, 'telegraf -usage mysql'")
var fInputFiltersLegacy = flag.String("filter", "",
"filter the inputs to enable, separator is :")
var fOutputFiltersLegacy = flag.String("outputfilter", "",
"filter the outputs to enable, separator is :")
var fConfigDirectoryLegacy = flag.String("configdirectory", "",
"directory containing additional *.conf files")
// Telegraf version
// -ldflags "-X main.Version=`git describe --always --tags`"
var Version string
const usage = `Telegraf, The plugin-driven server agent for collecting and reporting metrics.
Usage:
telegraf <flags>
The flags are:
-config <file> configuration file to load
-test gather metrics once, print them to stdout, and exit
-sample-config print out full sample configuration to stdout
-config-directory directory containing additional *.conf files
-input-filter filter the input plugins to enable, separator is :
-output-filter filter the output plugins to enable, separator is :
-usage print usage for a plugin, ie, 'telegraf -usage mysql'
-debug print metrics as they're generated to stdout
-quiet run in quiet mode
-version print the version to stdout
Examples:
# generate a telegraf config file:
telegraf -sample-config > telegraf.conf
# generate config with only cpu input & influxdb output plugins defined
telegraf -sample-config -input-filter cpu -output-filter influxdb
# run a single telegraf collection, outputting metrics to stdout
telegraf -config telegraf.conf -test
# run telegraf with all plugins defined in config file
telegraf -config telegraf.conf
# run telegraf, enabling the cpu & memory input, and influxdb output plugins
telegraf -config telegraf.conf -input-filter cpu:mem -output-filter influxdb
`
func main() {
reload := make(chan bool, 1)
reload <- true
for <-reload {
reload <- false
flag.Usage = func() { usageExit(0) }
flag.Parse()
flag.Parse()
if flag.NFlag() == 0 {
usageExit(0)
}
var inputFilters []string
if *fInputFiltersLegacy != "" {
inputFilter := strings.TrimSpace(*fInputFiltersLegacy)
inputFilters = strings.Split(":"+inputFilter+":", ":")
}
if *fInputFilters != "" {
inputFilter := strings.TrimSpace(*fInputFilters)
inputFilters = strings.Split(":"+inputFilter+":", ":")
}
var outputFilters []string
if *fOutputFiltersLegacy != "" {
outputFilter := strings.TrimSpace(*fOutputFiltersLegacy)
outputFilters = strings.Split(":"+outputFilter+":", ":")
}
if *fOutputFilters != "" {
outputFilter := strings.TrimSpace(*fOutputFilters)
outputFilters = strings.Split(":"+outputFilter+":", ":")
}
if *fVersion {
v := fmt.Sprintf("Telegraf - Version %s", Version)
fmt.Println(v)
return
}
if *fSampleConfig {
config.PrintSampleConfig(inputFilters, outputFilters)
return
}
if *fUsage != "" {
if err := config.PrintInputConfig(*fUsage); err != nil {
if err2 := config.PrintOutputConfig(*fUsage); err2 != nil {
log.Fatalf("%s and %s", err, err2)
}
}
return
}
var (
c *config.Config
err error
)
if *fConfig != "" {
c = config.NewConfig()
c.OutputFilters = outputFilters
c.InputFilters = inputFilters
err = c.LoadConfig(*fConfig)
if err != nil {
log.Fatal(err)
}
} else {
fmt.Println("You must specify a config file. See telegraf --help")
os.Exit(1)
}
if *fConfigDirectoryLegacy != "" {
err = c.LoadDirectory(*fConfigDirectoryLegacy)
if err != nil {
log.Fatal(err)
}
}
if *fConfigDirectory != "" {
err = c.LoadDirectory(*fConfigDirectory)
if err != nil {
log.Fatal(err)
}
}
if len(c.Outputs) == 0 {
log.Fatalf("Error: no outputs found, did you provide a valid config file?")
}
if len(c.Inputs) == 0 {
log.Fatalf("Error: no inputs found, did you provide a valid config file?")
}
ag, err := agent.NewAgent(c)
if err != nil {
log.Fatal(err)
}
if *fDebug {
ag.Config.Agent.Debug = true
}
if *fQuiet {
ag.Config.Agent.Quiet = true
}
if *fTest {
err = ag.Test()
if err != nil {
log.Fatal(err)
}
return
}
err = ag.Connect()
if err != nil {
log.Fatal(err)
}
shutdown := make(chan struct{})
signals := make(chan os.Signal)
signal.Notify(signals, os.Interrupt, syscall.SIGHUP)
go func() {
sig := <-signals
if sig == os.Interrupt {
close(shutdown)
}
if sig == syscall.SIGHUP {
log.Printf("Reloading Telegraf config\n")
<-reload
reload <- true
close(shutdown)
}
}()
log.Printf("Starting Telegraf (version %s)\n", Version)
log.Printf("Loaded outputs: %s", strings.Join(c.OutputNames(), " "))
log.Printf("Loaded inputs: %s", strings.Join(c.InputNames(), " "))
log.Printf("Tags enabled: %s", c.ListTags())
if *fPidfile != "" {
f, err := os.Create(*fPidfile)
if err != nil {
log.Fatalf("Unable to create pidfile: %s", err)
}
fmt.Fprintf(f, "%d\n", os.Getpid())
f.Close()
}
ag.Run(shutdown)
var pluginFilters []string
if *fPLuginFilters != "" {
pluginsFilter := strings.TrimSpace(*fPLuginFilters)
pluginFilters = strings.Split(":"+pluginsFilter+":", ":")
}
}
func usageExit(rc int) {
fmt.Println(usage)
os.Exit(rc)
var outputFilters []string
if *fOutputFilters != "" {
outputFilter := strings.TrimSpace(*fOutputFilters)
outputFilters = strings.Split(":"+outputFilter+":", ":")
}
if *fVersion {
v := fmt.Sprintf("Telegraf - Version %s", Version)
fmt.Println(v)
return
}
if *fSampleConfig {
config.PrintSampleConfig(pluginFilters, outputFilters)
return
}
if *fUsage != "" {
if err := config.PrintPluginConfig(*fUsage); err != nil {
if err2 := config.PrintOutputConfig(*fUsage); err2 != nil {
log.Fatalf("%s and %s", err, err2)
}
}
return
}
var (
c *config.Config
err error
)
if *fConfig != "" {
c = config.NewConfig()
c.OutputFilters = outputFilters
c.PluginFilters = pluginFilters
err = c.LoadConfig(*fConfig)
if err != nil {
log.Fatal(err)
}
} else {
fmt.Println("Usage: Telegraf")
flag.PrintDefaults()
return
}
if *fConfigDirectory != "" {
err = c.LoadDirectory(*fConfigDirectory)
if err != nil {
log.Fatal(err)
}
}
if len(c.Outputs) == 0 {
log.Fatalf("Error: no outputs found, did you provide a valid config file?")
}
if len(c.Plugins) == 0 {
log.Fatalf("Error: no plugins found, did you provide a valid config file?")
}
ag, err := telegraf.NewAgent(c)
if err != nil {
log.Fatal(err)
}
if *fDebug {
ag.Config.Agent.Debug = true
}
if *fTest {
err = ag.Test()
if err != nil {
log.Fatal(err)
}
return
}
err = ag.Connect()
if err != nil {
log.Fatal(err)
}
shutdown := make(chan struct{})
signals := make(chan os.Signal)
signal.Notify(signals, os.Interrupt)
go func() {
<-signals
close(shutdown)
}()
log.Printf("Starting Telegraf (version %s)\n", Version)
log.Printf("Loaded outputs: %s", strings.Join(c.OutputNames(), " "))
log.Printf("Loaded plugins: %s", strings.Join(c.PluginNames(), " "))
log.Printf("Tags enabled: %s", c.ListTags())
if *fPidfile != "" {
f, err := os.Create(*fPidfile)
if err != nil {
log.Fatalf("Unable to create pidfile: %s", err)
}
fmt.Fprintf(f, "%d\n", os.Getpid())
f.Close()
}
ag.Run(shutdown)
}

View File

@@ -1,236 +0,0 @@
# Telegraf Configuration
## Generating a Configuration File
A default Telegraf config file can be generated using the -sample-config flag:
`telegraf -sample-config > telegraf.conf`
To generate a file with specific inputs and outputs, you can use the
-input-filter and -output-filter flags:
`telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka`
## `[global_tags]` Configuration
Global tags can be specified in the `[global_tags]` section of the config file in
key="value" format. All metrics being gathered on this host will be tagged
with the tags specified here.
## `[agent]` Configuration
Telegraf has a few options you can configure under the `agent` section of the
config.
* **interval**: Default data collection interval for all inputs
* **round_interval**: Rounds collection interval to 'interval'
ie, if interval="10s" then always collect on :00, :10, :20, etc.
* **metric_buffer_limit**: Telegraf will cache metric_buffer_limit metrics
for each output, and will flush this buffer on a successful write.
* **collection_jitter**: Collection jitter is used to jitter
the collection by a random amount.
Each plugin will sleep for a random time within jitter before collecting.
This can be used to avoid many plugins querying things like sysfs at the
same time, which can have a measurable effect on the system.
* **flush_interval**: Default data flushing interval for all outputs. You should
not set this below interval. Maximum flush_interval will be
flush_interval + flush_jitter.
* **flush_jitter**: Jitter the flush interval by a random amount. This is
primarily to avoid large write spikes for users running a large number of
telegraf instances. ie, a jitter of 5s and flush_interval 10s means flushes
will happen every 10-15s (see the sketch after this list).
* **debug**: Run telegraf in debug mode.
* **quiet**: Run telegraf in quiet mode.
* **hostname**: Override default hostname, if empty use os.Hostname().
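The interval-plus-jitter arithmetic above can be illustrated with a minimal Go sketch; this is only an illustration of the timing behavior, not Telegraf's scheduler code.
```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	flushInterval := 10 * time.Second
	flushJitter := 5 * time.Second

	// The effective delay is the configured interval plus a random jitter
	// in [0, flush_jitter), so flushes land every 10-15s in this example.
	// (A zero jitter would simply be skipped in practice.)
	jitter := time.Duration(rand.Int63n(int64(flushJitter)))
	fmt.Println("next flush in", flushInterval+jitter)
}
```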
## `[inputs.xxx]` Configuration
There are some configuration options that are configurable per input:
* **name_override**: Override the base name of the measurement.
(Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular input should be run less or more often,
you can configure that here.
#### Input Filters
There are also filters that can be configured per input:
* **namepass**: An array of glob patterns used to filter metrics generated by the
current input. Each pattern is tested against the measurement name and, if it
matches, the metric is emitted (a short glob-matching sketch follows this list).
* **namedrop**: The inverse of namepass; if a measurement name matches, it is not emitted.
* **fieldpass**: An array of glob patterns used to filter fields generated by the
current input. Each pattern is tested against the field name and, if it matches,
the field is emitted.
* **fielddrop**: The inverse of fieldpass; if a field name matches, it is not emitted.
* **tagpass**: A map of tag names to arrays of glob patterns, used to filter
measurements from the current input. Each pattern is tested against the value of
the named tag, and if it matches, the measurement is emitted.
* **tagdrop**: The inverse of tagpass. If a tag matches, the measurement is not
emitted. This is tested on measurements that have passed the tagpass test.
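The glob matching used by these filters can be sketched in a few lines of Go. The sketch below uses the standard library's `path.Match` and is only an approximation of the behavior described above, not Telegraf's internal matcher.
```go
package main

import (
	"fmt"
	"path"
)

// shouldPass mimics the namepass/fieldpass idea: the item is emitted only
// if it matches at least one of the configured glob patterns.
func shouldPass(patterns []string, name string) bool {
	for _, p := range patterns {
		if ok, _ := path.Match(p, name); ok {
			return true
		}
	}
	return false
}

func main() {
	fieldpass := []string{"inodes*"}
	fmt.Println(shouldPass(fieldpass, "inodes_free"))  // true
	fmt.Println(shouldPass(fieldpass, "used_percent")) // false
}
```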
#### Input Configuration Examples
This is a full working config that will output CPU data to an InfluxDB instance
at 192.168.59.103:8086, tagging measurements with dc="denver-1". It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.
```toml
[global_tags]
dc = "denver-1"
[agent]
interval = "10s"
# OUTPUTS
[[outputs.influxdb]]
url = "http://192.168.59.103:8086" # required.
database = "telegraf" # required.
precision = "s"
# INPUTS
[[inputs.cpu]]
percpu = true
totalcpu = false
# filter all fields beginning with 'time_'
drop = ["time_*"]
```
#### Input Config: tagpass and tagdrop
```toml
[[inputs.cpu]]
percpu = true
totalcpu = false
drop = ["cpu_time"]
# Don't collect CPU data for cpu6 & cpu7
[inputs.cpu.tagdrop]
cpu = [ "cpu6", "cpu7" ]
[[inputs.disk]]
[inputs.disk.tagpass]
# tagpass conditions are OR, not AND.
# If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
# then the metric passes
fstype = [ "ext4", "xfs" ]
# Globs can also be used on the tag values
path = [ "/opt", "/home*" ]
```
#### Input Config: fieldpass and fielddrop
```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
percpu = false
totalcpu = true
fielddrop = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[inputs.disk]]
fieldpass = ["inodes*"]
```
#### Input Config: namepass and namedrop
```toml
# Drop all metrics about containers for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namedrop = ["container_"]
# Only store rest client related metrics for kubelet
[[inputs.prometheus]]
urls = ["http://kube-node-1:4194/metrics"]
namepass = ["rest_client_"]
```
#### Input config: prefix, suffix, and override
This plugin will emit measurements with the name `cpu_total`
```toml
[[inputs.cpu]]
name_suffix = "_total"
percpu = false
totalcpu = true
```
This will emit measurements with the name `foobar`
```toml
[[inputs.cpu]]
name_override = "foobar"
percpu = false
totalcpu = true
```
#### Input config: tags
This plugin will emit measurements with two additional tags: `tag1=foo` and
`tag2=bar`
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[inputs.cpu.tags]
tag1 = "foo"
tag2 = "bar"
```
#### Multiple inputs of the same type
Additional inputs (or outputs) of the same type can be specified,
just define more instances in the config file. It is highly recommended that
you utilize `name_override`, `name_prefix`, or `name_suffix` config options
to avoid measurement collisions:
```toml
[[inputs.cpu]]
percpu = false
totalcpu = true
[[inputs.cpu]]
percpu = true
totalcpu = false
name_override = "percpu_usage"
drop = ["cpu_time*"]
```
## `[outputs.xxx]` Configuration
Telegraf also supports specifying multiple output sinks to send data to,
configuring each output sink is different, but examples can be
found by running `telegraf -sample-config`.
Outputs also support the same configurable options as inputs
(namepass, namedrop, tagpass, tagdrop)
```toml
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf"
precision = "s"
# Drop all measurements that start with "aerospike"
namedrop = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-aerospike-data"
precision = "s"
# Only accept aerospike data:
namepass = ["aerospike*"]
[[outputs.influxdb]]
urls = [ "http://localhost:8086" ]
database = "telegraf-cpu0-data"
precision = "s"
# Only store measurements where the tag "cpu" matches the value "cpu0"
[outputs.influxdb.tagpass]
cpu = ["cpu0"]
```

View File

@@ -1,274 +0,0 @@
# Telegraf Input Data Formats
Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:
1. Measurement Name
1. Tags
1. Fields
1. Timestamp
These four parts are easily defined when using InfluxDB line-protocol as a
data format. But there are other data formats that users may want to use which
require more advanced configuration to create usable Telegraf metrics.
Plugins such as `exec` and `kafka_consumer` parse textual data. Up until now,
these plugins were statically configured to parse just a single
data format. `exec` mostly only supported parsing JSON, and `kafka_consumer` only
supported data in InfluxDB line-protocol.
But now we are normalizing the parsing of various data formats across all
plugins that can support it. You will be able to identify a plugin that supports
different data formats by the presence of a `data_format` config option, for
example, in the exec plugin:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "json"
## Additional configuration options go here
```
Each data_format has an additional set of configuration options available, which
I'll go over below.
## Influx:
There are no additional configuration options for InfluxDB line-protocol. The
metrics are parsed directly into Telegraf metrics.
#### Influx Configuration:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx"
```
## JSON:
The JSON data format flattens JSON into metric _fields_. For example, this JSON:
```json
{
"a": 5,
"b": {
"c": 6
}
}
```
Would get translated into _fields_ of a measurement:
```
myjsonmetric a=5,b_c=6
```
The _measurement_ _name_ is usually the name of the plugin,
but can be overridden using the `name_override` config option.
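As a rough illustration of this flattening, the standalone Go sketch below joins nested keys with an underscore; it is not Telegraf's actual JSON parser.
```go
package main

import (
	"encoding/json"
	"fmt"
)

// flatten walks a decoded JSON object and joins nested keys with "_",
// keeping numeric values as fields.
func flatten(prefix string, in map[string]interface{}, out map[string]float64) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "_" + k
		}
		switch val := v.(type) {
		case float64: // encoding/json decodes all JSON numbers as float64
			out[key] = val
		case map[string]interface{}:
			flatten(key, val, out)
		}
	}
}

func main() {
	var parsed map[string]interface{}
	if err := json.Unmarshal([]byte(`{"a": 5, "b": {"c": 6}}`), &parsed); err != nil {
		panic(err)
	}
	fields := map[string]float64{}
	flatten("", parsed, fields)
	fmt.Println(fields) // map[a:5 b_c:6]
}
```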
#### JSON Configuration:
The JSON data format supports specifying "tag keys". If specified, keys
will be searched for in the root-level of the JSON blob. If the key(s) exist,
they will be applied as tags to the Telegraf metrics.
For example, if you had this configuration:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "json"
## List of tag names to extract from top-level of JSON server response
tag_keys = [
"my_tag_1",
"my_tag_2"
]
```
with this JSON output from a command:
```json
{
"a": 5,
"b": {
"c": 6
},
"my_tag_1": "foo"
}
```
Your Telegraf metrics would get tagged with "my_tag_1"
```
exec_mycollector,my_tag_1=foo a=5,b_c=6
```
## Graphite:
The Graphite data format translates graphite _dot_ buckets directly into
telegraf measurement names, with a single value field, and without any tags. For
more advanced options, Telegraf supports specifying "templates" to translate
graphite buckets into Telegraf metrics.
#### Separator:
You can specify a separator to use for the parsed metrics.
By default, it will leave the metrics with a "." separator.
Setting `separator = "_"` will translate:
```
cpu.usage.idle 99
=> cpu_usage_idle value=99
```
#### Measurement/Tag Templates:
The most basic template is to specify a single transformation to apply to all
incoming metrics. _measurement_ is a special keyword that tells Telegraf which
parts of the graphite bucket to combine into the measurement name. It can have a
trailing `*` to indicate that the remainder of the metric should be used.
Other words are considered tag keys. So the following template:
```toml
templates = [
"region.measurement*"
]
```
would result in the following Graphite -> Telegraf transformation.
```
us-west.cpu.load 100
=> cpu.load,region=us-west value=100
```
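To make the mapping concrete, here is a small Go sketch that applies a template of this form to a graphite bucket. It illustrates only the rule described above, not Telegraf's graphite parser (which also handles field keywords, filters, and extra tags).
```go
package main

import (
	"fmt"
	"strings"
)

// applyTemplate labels each bucket segment with the corresponding template
// word; "measurement*" consumes the remainder, and any other word becomes
// a tag key.
func applyTemplate(template, bucket string) (string, map[string]string) {
	tags := map[string]string{}
	tmplParts := strings.Split(template, ".")
	bucketParts := strings.Split(bucket, ".")
	var measurement []string
	for i, part := range tmplParts {
		if i >= len(bucketParts) {
			break
		}
		switch part {
		case "measurement*":
			measurement = append(measurement, bucketParts[i:]...)
			return strings.Join(measurement, "."), tags
		case "measurement":
			measurement = append(measurement, bucketParts[i])
		default:
			tags[part] = bucketParts[i]
		}
	}
	return strings.Join(measurement, "."), tags
}

func main() {
	m, tags := applyTemplate("region.measurement*", "us-west.cpu.load")
	fmt.Println(m, tags) // cpu.load map[region:us-west]
}
```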
#### Field Templates:
There is also a _field_ keyword, which can only be specified once.
The field keyword tells Telegraf to give the metric that field name.
So the following template:
```toml
templates = [
"measurement.measurement.field.region"
]
```
would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.idle.us-west 100
=> cpu_usage,region=us-west idle=100
```
#### Filter Templates:
Users can also filter the template(s) to use based on the name of the bucket,
using glob matching, like so:
```toml
templates = [
"cpu.* measurement.measurement.region",
"mem.* measurement.measurement.host"
]
```
which would result in the following transformation:
```
cpu.load.us-west 100
=> cpu_load,region=us-west value=100
mem.cached.localhost 256
=> mem_cached,host=localhost value=256
```
#### Adding Tags:
Additional tags that don't exist on the received metric can be added to it
by specifying them after the template pattern.
Tags have the same format as the line protocol.
Multiple tags are separated by commas.
```toml
templates = [
"measurement.measurement.field.region datacenter=1a"
]
```
would result in the following Graphite -> Telegraf transformation.
```
cpu.usage.idle.us-west 100
=> cpu_usage,region=us-west,datacenter=1a idle=100
```
There are many more options available;
[more details can be found here](https://github.com/influxdata/influxdb/tree/master/services/graphite#templates).
#### Graphite Configuration:
```toml
[[inputs.exec]]
## Commands array
commands = ["/tmp/test.sh", "/usr/bin/mycollector --foo=bar"]
## measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
## Data format to consume. This can be "json", "influx" or "graphite" (line-protocol)
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "graphite"
## This string will be used to join the matched values.
separator = "_"
## Each template line requires a template pattern. It can have an optional
## filter before the template, separated by spaces. It can also have optional extra
## tags following the template. Multiple tags should be separated by commas with no
## spaces, similar to the line protocol format. There can be only one default template.
## Templates support the following formats:
## 1. filter + template
## 2. filter + template + extra tag
## 3. filter + template with field key
## 4. default template
templates = [
"*.app env.service.resource.measurement",
"stats.* .host.measurement* region=us-west,agent=sensu",
"stats2.* .host.measurement.field",
"measurement*"
]
```

View File

@@ -1,97 +0,0 @@
# Telegraf Output Data Formats
Telegraf metrics, like InfluxDB
[points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
are a combination of four basic parts:
1. Measurement Name
1. Tags
1. Fields
1. Timestamp
In InfluxDB line protocol, these 4 parts are easily defined in textual form:
```
measurement_name[,tag1=val1,...] field1=val1[,field2=val2,...] [timestamp]
```
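As a rough illustration, the four parts assemble into a single line like this (character escaping is omitted; this is a sketch, not Telegraf's serializer):
```go
package main

import "fmt"

func main() {
	point := fmt.Sprintf("%s,%s %s %d",
		"cpu",                      // measurement name
		"cpu=cpu-total,host=tars",  // tags, comma-separated key=value pairs
		"usage_idle=98.09",         // fields, comma-separated key=value pairs
		int64(1455320660004257758), // timestamp in nanoseconds
	)
	fmt.Println(point)
	// cpu,cpu=cpu-total,host=tars usage_idle=98.09 1455320660004257758
}
```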
For Telegraf outputs that write textual data (such as `kafka`, `mqtt`, and `file`),
InfluxDB line protocol was originally the only available output format. But now
we are normalizing telegraf metric "serializers" into a
[plugin-like interface](https://github.com/influxdata/telegraf/tree/master/plugins/serializers)
across all output plugins that can support it.
You will be able to identify a plugin that supports different data formats
by the presence of a `data_format`
config option, for example, in the `file` output plugin:
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout"]
## Data format to output. This can be "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
## Additional configuration options go here
```
Each data_format has an additional set of configuration options available, which
I'll go over below.
## Influx:
There are no additional configuration options for InfluxDB line-protocol. The
metrics are serialized directly into InfluxDB line-protocol.
#### Influx Configuration:
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output. This can be "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```
## Graphite:
The Graphite data format translates Telegraf metrics into _dot_ buckets.
The format is:
```
[prefix].[host tag].[all tags (alphabetical)].[measurement name].[field name] value timestamp
```
This means the following influx metric -> graphite conversion would happen:
```
cpu,cpu=cpu-total,dc=us-east-1,host=tars usage_idle=98.09,usage_user=0.89 1455320660004257758
=>
tars.cpu-total.us-east-1.cpu.usage_user 0.89 1455320690
tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
```
`prefix` is a configuration option when using the graphite output data format.
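A small Go sketch of this bucket layout (prefix, host tag, remaining tag values in alphabetical key order, measurement name, field name) may help; it is an illustration only, not Telegraf's graphite serializer, and it omits character sanitization.
```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

func serializeGraphite(prefix, measurement string, tags map[string]string,
	fields map[string]float64, ts int64) []string {

	// Base of every bucket: optional prefix, then the host tag, then the
	// remaining tag values sorted alphabetically by tag key.
	var parts []string
	if prefix != "" {
		parts = append(parts, prefix)
	}
	parts = append(parts, tags["host"])
	var keys []string
	for k := range tags {
		if k != "host" {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	for _, k := range keys {
		parts = append(parts, tags[k])
	}

	// One line per field: <base>.<measurement>.<field> value timestamp
	var lines []string
	for name, value := range fields {
		full := append(append([]string{}, parts...), measurement, name)
		lines = append(lines, fmt.Sprintf("%s %v %d", strings.Join(full, "."), value, ts))
	}
	return lines
}

func main() {
	lines := serializeGraphite("telegraf", "cpu",
		map[string]string{"cpu": "cpu-total", "dc": "us-east-1", "host": "tars"},
		map[string]float64{"usage_idle": 98.09}, 1455320690)
	fmt.Println(strings.Join(lines, "\n"))
	// telegraf.tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
}
```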
#### Graphite Configuration:
```toml
[[outputs.file]]
## Files to write to, "stdout" is a specially handled file.
files = ["stdout", "/tmp/metrics.out"]
## Data format to output. This can be "influx" or "graphite"
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "graphite"
prefix = "telegraf"
```

View File

@@ -1,52 +1,47 @@
# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
# declared plugins.
# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.
# Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name
# as a section with no variables. To deactivate a plugin, comment
# out the name and any variables.
# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# Use 'telegraf -config telegraf.toml -test' to see what metrics a config
# file would generate.
# Global tags can be specified here in key="value" format.
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
# One rule that plugins conform to is wherever a connection string
# can be passed, the values '' and 'localhost' are treated specially.
# They indicate to the plugin to use their own builtin configuration to
# connect to the local system.
# NOTE: The configuration has a few required parameters. They are marked
# with 'required'. Be sure to edit those to make this configuration work.
# Tags can also be specified via a normal map, but only one form at a time:
[tags]
# dc = "us-east-1"
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
# Default data collection interval for all plugins
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
# Rounds collection interval to 'interval'
# ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will cache metric_buffer_limit metrics for each output, and will
## flush this buffer on a successful write.
metric_buffer_limit = 10000
## Flush the buffer whenever full, regardless of flush_interval.
flush_buffer_when_full = true
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. You shouldn't set this below
## interval. Maximum flush_interval will be flush_interval + flush_jitter
# Default data flushing interval for all outputs. You should not set this below
# interval. Maximum flush_interval will be flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
# Jitter the flush interval by a random amount. This is primarily to avoid
# large write spikes for users running a large number of telegraf instances.
# ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## Run telegraf in debug mode
# Run telegraf in debug mode
debug = false
## Run telegraf in quiet mode
quiet = false
## Override default hostname, if empty use os.Hostname()
# Override default hostname, if empty use os.Hostname()
hostname = ""
@@ -54,6 +49,8 @@
# OUTPUTS #
###############################################################################
[outputs]
# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
# The full HTTP or UDP endpoint URL for your InfluxDB instance.
@@ -63,13 +60,13 @@
urls = ["http://localhost:8086"] # required
# The target database for metrics (telegraf will create it if not exists)
database = "telegraf" # required
# Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# Precision of writes, valid values are n, u, ms, s, m, and h
# note: using second precision greatly helps InfluxDB compression
precision = "s"
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
# Connection timeout (for the connection with InfluxDB), formatted as a string.
# If not provided, will default to 0 (no timeout)
# timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
# Set the user agent for HTTP POSTs (can be useful for log differentiation)
@@ -79,50 +76,48 @@
###############################################################################
# INPUTS #
# PLUGINS #
###############################################################################
[plugins]
# Read metrics about cpu usage
[[inputs.cpu]]
[[plugins.cpu]]
# Whether to report per-cpu stats or not
percpu = true
# Whether to report total system cpu stats or not
totalcpu = true
# Comment this line if you want the raw CPU time metrics
fielddrop = ["time_*"]
drop = ["cpu_time"]
# Read metrics about disk usage by mount point
[[inputs.disk]]
[[plugins.disk]]
# By default, telegraf gathers stats for all mountpoints.
# Setting mountpoints will restrict the stats to the specified mountpoints.
# mount_points=["/"]
# Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
# present on /run, /var/run, /dev/shm or /dev).
ignore_fs = ["tmpfs", "devtmpfs"]
# Mountpoints=["/"]
# Read metrics about disk IO by device
[[inputs.diskio]]
[[plugins.diskio]]
# By default, telegraf will gather stats for all devices including
# disk partitions.
# Setting devices will restrict the stats to the specified devices.
# devices = ["sda", "sdb"]
# Setting devices will restrict the stats to the specified devices.
# Devices=["sda","sdb"]
# Uncomment the following line if you do not need disk serial numbers.
# skip_serial_number = true
# SkipSerialNumber = true
# Read metrics about memory usage
[[inputs.mem]]
[[plugins.mem]]
# no configuration
# Read metrics about swap memory usage
[[inputs.swap]]
[[plugins.swap]]
# no configuration
# Read metrics about system load & uptime
[[inputs.system]]
[[plugins.system]]
# no configuration
###############################################################################
# SERVICE INPUTS #
# SERVICE PLUGINS #
###############################################################################

View File

@@ -1,164 +0,0 @@
# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.
# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# file would generate.
# Global tags can be specified here in key="value" format.
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will cache metric_buffer_limit metrics for each output, and will
## flush this buffer on a successful write.
metric_buffer_limit = 10000
## Flush the buffer whenever full, regardless of flush_interval.
flush_buffer_when_full = true
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. You shouldn't set this below
## interval. Maximum flush_interval will be flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## Run telegraf in debug mode
debug = false
## Run telegraf in quiet mode
quiet = false
## Override default hostname, if empty use os.Hostname()
hostname = ""
###############################################################################
# OUTPUTS #
###############################################################################
# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
# The full HTTP or UDP endpoint URL for your InfluxDB instance.
# Multiple urls can be specified but it is assumed that they are part of the same
# cluster; this means that only ONE of the urls will be written to each interval.
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://localhost:8086"] # required
# The target database for metrics (telegraf will create it if not exists)
database = "telegraf" # required
# Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# note: using second precision greatly helps InfluxDB compression
precision = "s"
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
# Set the user agent for HTTP POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
# Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
# udp_payload = 512
###############################################################################
# INPUTS #
###############################################################################
# Windows Performance Counters plugin.
# These are the recommended method of monitoring system metrics on windows,
# as the regular system plugins (inputs.cpu, inputs.mem, etc.) rely on WMI,
# which utilizes a lot of system resources.
#
# See more configuration examples at:
# https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters
[[inputs.win_perf_counters]]
[[inputs.win_perf_counters.object]]
# Processor usage, alternative to native, reported on a per-core basis.
ObjectName = "Processor"
Instances = ["*"]
Counters = ["% Idle Time", "% Interrupt Time", "% Privileged Time", "% User Time", "% Processor Time"]
Measurement = "win_cpu"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# Disk times and queues
ObjectName = "LogicalDisk"
Instances = ["*"]
Counters = ["% Idle Time", "% Disk Time","% Disk Read Time", "% Disk Write Time", "% User Time", "Current Disk Queue Length"]
Measurement = "win_disk"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
ObjectName = "System"
Counters = ["Context Switches/sec","System Calls/sec"]
Instances = ["------"]
Measurement = "win_system"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
[[inputs.win_perf_counters.object]]
# Example query where the Instance portion must be removed to get data back, such as from the Memory object.
ObjectName = "Memory"
Counters = ["Available Bytes","Cache Faults/sec","Demand Zero Faults/sec","Page Faults/sec","Pages/sec","Transition Faults/sec","Pool Nonpaged Bytes","Pool Paged Bytes"]
Instances = ["------"] # Use 6 x - to remove the Instance bit from the query.
Measurement = "win_mem"
#IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
# Windows system plugins using WMI (disabled by default, using
# win_perf_counters over WMI is recommended)
# Read metrics about cpu usage
#[[inputs.cpu]]
## Whether to report per-cpu stats or not
#percpu = true
## Whether to report total system cpu stats or not
#totalcpu = true
## Comment this line if you want the raw CPU time metrics
#fielddrop = ["time_*"]
# Read metrics about disk usage by mount point
#[[inputs.disk]]
## By default, telegraf gathers stats for all mountpoints.
## Setting mountpoints will restrict the stats to the specified mountpoints.
## mount_points=["/"]
## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
## present on /run, /var/run, /dev/shm or /dev).
#ignore_fs = ["tmpfs", "devtmpfs"]
# Read metrics about disk IO by device
#[[inputs.diskio]]
## By default, telegraf will gather stats for all devices including
## disk partitions.
## Setting devices will restrict the stats to the specified devices.
## devices = ["sda", "sdb"]
## Uncomment the following line if you do not need disk serial numbers.
## skip_serial_number = true
# Read metrics about memory usage
#[[inputs.mem]]
# no configuration
# Read metrics about swap memory usage
#[[inputs.swap]]
# no configuration

View File

@@ -1,31 +0,0 @@
package telegraf
type Input interface {
// SampleConfig returns the default configuration of the Input
SampleConfig() string
// Description returns a one-sentence description on the Input
Description() string
// Gather takes in an accumulator and adds the metrics that the Input
// gathers. This is called every "interval"
Gather(Accumulator) error
}
type ServiceInput interface {
// SampleConfig returns the default configuration of the Input
SampleConfig() string
// Description returns a one-sentence description on the Input
Description() string
// Gather takes in an accumulator and adds the metrics that the Input
// gathers. This is called every "interval"
Gather(Accumulator) error
// Start starts the ServiceInput's service, whatever that may be
Start(Accumulator) error
// Stop stops the services and closes any necessary channels and connections
Stop()
}
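// For illustration only (not part of the original file): a minimal plugin
// satisfying the Input interface above might look like the sketch below.
// The "example" measurement, the field names, and the Accumulator.AddFields
// call are assumptions made for the sketch.
//
//	type Example struct{}
//
//	func (e *Example) SampleConfig() string { return "# no configuration\n" }
//	func (e *Example) Description() string  { return "emit a constant example value" }
//
//	func (e *Example) Gather(acc Accumulator) error {
//		acc.AddFields("example",
//			map[string]interface{}{"value": 42},
//			map[string]string{"source": "example"})
//		return nil
//	}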

View File

@@ -10,16 +10,14 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/outputs"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdata/config"
"github.com/naoina/toml"
"github.com/naoina/toml/ast"
"github.com/influxdb/influxdb/client/v2"
)
// Config specifies the URL/user/password for the database that telegraf
@@ -27,12 +25,12 @@ import (
// specified
type Config struct {
Tags map[string]string
InputFilters []string
PluginFilters []string
OutputFilters []string
Agent *AgentConfig
Inputs []*internal_models.RunningInput
Outputs []*internal_models.RunningOutput
Plugins []*RunningPlugin
Outputs []*RunningOutput
}
func NewConfig() *Config {
@@ -42,13 +40,14 @@ func NewConfig() *Config {
Interval: internal.Duration{Duration: 10 * time.Second},
RoundInterval: true,
FlushInterval: internal.Duration{Duration: 10 * time.Second},
FlushRetries: 2,
FlushJitter: internal.Duration{Duration: 5 * time.Second},
},
Tags: make(map[string]string),
Inputs: make([]*internal_models.RunningInput, 0),
Outputs: make([]*internal_models.RunningOutput, 0),
InputFilters: make([]string, 0),
Plugins: make([]*RunningPlugin, 0),
Outputs: make([]*RunningOutput, 0),
PluginFilters: make([]string, 0),
OutputFilters: make([]string, 0),
}
return c
@@ -62,55 +61,159 @@ type AgentConfig struct {
// ie, if Interval=10s then always collect on :00, :10, :20, etc.
RoundInterval bool
// CollectionJitter is used to jitter the collection by a random amount.
// Each plugin will sleep for a random time within jitter before collecting.
// This can be used to avoid many plugins querying things like sysfs at the
// same time, which can have a measurable effect on the system.
CollectionJitter internal.Duration
// FlushInterval is the Interval at which to flush data
// Interval at which to flush data
FlushInterval internal.Duration
// FlushJitter Jitters the flush interval by a random amount.
// This is primarily to avoid large write spikes for users running a large
// number of telegraf instances.
// ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
// FlushRetries is the number of times to retry each data flush
FlushRetries int
// FlushJitter tells the agent to jitter the flush interval by a random amount
FlushJitter internal.Duration
// MetricBufferLimit is the max number of metrics that each output plugin
// will cache. The buffer is cleared when a successful write occurs. When
// full, the oldest metrics will be overwritten.
MetricBufferLimit int
// FlushBufferWhenFull tells Telegraf to flush the metric buffer whenever
// it fills up, regardless of FlushInterval. Setting this option to true
// does _not_ deactivate FlushInterval.
FlushBufferWhenFull bool
// TODO(cam): Remove UTC and Precision parameters, they are no longer
// valid for the agent config. Leaving them here for now for backwards-
// compatibility
UTC bool `toml:"utc"`
Precision string
// Debug is the option for running in debug mode
Debug bool
// Quiet is the option for running in quiet mode
Quiet bool
// Option for running in debug mode
Debug bool
Hostname string
}
// Inputs returns a list of strings of the configured inputs.
func (c *Config) InputNames() []string {
// TagFilter is the name of a tag, and the values on which to filter
type TagFilter struct {
Name string
Filter []string
}
type RunningOutput struct {
Name string
Output outputs.Output
Config *OutputConfig
}
type RunningPlugin struct {
Name string
Plugin plugins.Plugin
Config *PluginConfig
}
// Filter containing drop/pass and tagdrop/tagpass rules
type Filter struct {
Drop []string
Pass []string
TagDrop []TagFilter
TagPass []TagFilter
IsActive bool
}
// PluginConfig containing a name, interval, and filter
type PluginConfig struct {
Name string
NameOverride string
MeasurementPrefix string
MeasurementSuffix string
Tags map[string]string
Filter Filter
Interval time.Duration
}
// OutputConfig containing name and filter
type OutputConfig struct {
Name string
Filter Filter
}
// Filter returns filtered slice of client.Points based on whether filters
// are active for this RunningOutput.
func (ro *RunningOutput) FilterPoints(points []*client.Point) []*client.Point {
if !ro.Config.Filter.IsActive {
return points
}
var filteredPoints []*client.Point
for i := range points {
if !ro.Config.Filter.ShouldPass(points[i].Name()) || !ro.Config.Filter.ShouldTagsPass(points[i].Tags()) {
continue
}
filteredPoints = append(filteredPoints, points[i])
}
return filteredPoints
}
// ShouldPass returns true if the metric should pass, false if should drop
// based on the drop/pass filter parameters
func (f Filter) ShouldPass(fieldkey string) bool {
if f.Pass != nil {
for _, pat := range f.Pass {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(fieldkey, pat) || internal.Glob(pat, fieldkey) {
return true
}
}
return false
}
if f.Drop != nil {
for _, pat := range f.Drop {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(fieldkey, pat) || internal.Glob(pat, fieldkey) {
return false
}
}
return true
}
return true
}
// ShouldTagsPass returns true if the metric should pass, false if should drop
// based on the tagdrop/tagpass filter parameters
func (f Filter) ShouldTagsPass(tags map[string]string) bool {
if f.TagPass != nil {
for _, pat := range f.TagPass {
if tagval, ok := tags[pat.Name]; ok {
for _, filter := range pat.Filter {
if internal.Glob(filter, tagval) {
return true
}
}
}
}
return false
}
if f.TagDrop != nil {
for _, pat := range f.TagDrop {
if tagval, ok := tags[pat.Name]; ok {
for _, filter := range pat.Filter {
if internal.Glob(filter, tagval) {
return false
}
}
}
}
return true
}
return true
}
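// Example (illustration only, not part of the original file): a Filter built
// from `pass = ["cpu_"]` and `tagpass = { cpu = ["cpu0"] }` behaves like this
// with the functions above:
//
//	f := Filter{
//		Pass:     []string{"cpu_"},
//		TagPass:  []TagFilter{{Name: "cpu", Filter: []string{"cpu0"}}},
//		IsActive: true,
//	}
//	f.ShouldPass("cpu_usage_idle")                     // true  (legacy prefix match)
//	f.ShouldTagsPass(map[string]string{"cpu": "cpu1"}) // false (no tagpass match)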
// Plugins returns a list of strings of the configured plugins.
func (c *Config) PluginNames() []string {
var name []string
for _, input := range c.Inputs {
name = append(name, input.Name)
for _, plugin := range c.Plugins {
name = append(name, plugin.Name)
}
return name
}
// Outputs returns a list of strings of the configured inputs.
// Outputs returns a list of strings of the configured plugins.
func (c *Config) OutputNames() []string {
var name []string
for _, output := range c.Outputs {
@@ -133,74 +236,74 @@ func (c *Config) ListTags() string {
return strings.Join(tags, " ")
}
var header = `# Telegraf Configuration
var header = `# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs, and sent to the declared outputs.
# declared plugins.
# Plugins must be declared in here to be active.
# To deactivate a plugin, comment out the name and any variables.
# Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name
# as a section with no variables. To deactivate a plugin, comment
# out the name and any variables.
# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
# Use 'telegraf -config telegraf.toml -test' to see what metrics a config
# file would generate.
# Global tags can be specified here in key="value" format.
[global_tags]
# dc = "us-east-1" # will tag all metrics with dc=us-east-1
# rack = "1a"
# One rule that plugins conform to is wherever a connection string
# can be passed, the values '' and 'localhost' are treated specially.
# They indicate to the plugin to use their own builtin configuration to
# connect to the local system.
# NOTE: The configuration has a few required parameters. They are marked
# with 'required'. Be sure to edit those to make this configuration work.
# Tags can also be specified via a normal map, but only one form at a time:
[tags]
# dc = "us-east-1"
# Configuration for telegraf agent
[agent]
## Default data collection interval for all inputs
# Default data collection interval for all plugins
interval = "10s"
## Rounds collection interval to 'interval'
## ie, if interval="10s" then always collect on :00, :10, :20, etc.
# Rounds collection interval to 'interval'
# ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true
## Telegraf will cache metric_buffer_limit metrics for each output, and will
## flush this buffer on a successful write.
metric_buffer_limit = 10000
## Flush the buffer whenever full, regardless of flush_interval.
flush_buffer_when_full = true
## Collection jitter is used to jitter the collection by a random amount.
## Each plugin will sleep for a random time within jitter before collecting.
## This can be used to avoid many plugins querying things like sysfs at the
## same time, which can have a measurable effect on the system.
collection_jitter = "0s"
## Default flushing interval for all outputs. You shouldn't set this below
## interval. Maximum flush_interval will be flush_interval + flush_jitter
# Default data flushing interval for all outputs. You should not set this below
# interval. Maximum flush_interval will be flush_interval + flush_jitter
flush_interval = "10s"
## Jitter the flush interval by a random amount. This is primarily to avoid
## large write spikes for users running a large number of telegraf instances.
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
# Jitter the flush interval by a random amount. This is primarily to avoid
# large write spikes for users running a large number of telegraf instances.
# ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s"
## Run telegraf in debug mode
# Run telegraf in debug mode
debug = false
## Run telegraf in quiet mode
quiet = false
## Override default hostname, if empty use os.Hostname()
# Override default hostname, if empty use os.Hostname()
hostname = ""
#
# OUTPUTS:
#
###############################################################################
# OUTPUTS #
###############################################################################
[outputs]
`
var pluginHeader = `
#
# INPUTS:
#
###############################################################################
# PLUGINS #
###############################################################################
[plugins]
`
var serviceInputHeader = `
#
# SERVICE INPUTS:
#
var servicePluginHeader = `
###############################################################################
# SERVICE PLUGINS #
###############################################################################
`
// PrintSampleConfig prints the sample config
@@ -223,35 +326,35 @@ func PrintSampleConfig(pluginFilters []string, outputFilters []string) {
printConfig(oname, output, "outputs")
}
// Filter inputs
// Filter plugins
var pnames []string
for pname := range inputs.Inputs {
for pname := range plugins.Plugins {
if len(pluginFilters) == 0 || sliceContains(pname, pluginFilters) {
pnames = append(pnames, pname)
}
}
sort.Strings(pnames)
// Print Inputs
// Print Plugins
fmt.Printf(pluginHeader)
servInputs := make(map[string]telegraf.ServiceInput)
servPlugins := make(map[string]plugins.ServicePlugin)
for _, pname := range pnames {
creator := inputs.Inputs[pname]
input := creator()
creator := plugins.Plugins[pname]
plugin := creator()
switch p := input.(type) {
case telegraf.ServiceInput:
servInputs[pname] = p
switch p := plugin.(type) {
case plugins.ServicePlugin:
servPlugins[pname] = p
continue
}
printConfig(pname, input, "inputs")
printConfig(pname, plugin, "plugins")
}
// Print Service Inputs
fmt.Printf(serviceInputHeader)
for name, input := range servInputs {
printConfig(name, input, "inputs")
// Print Service Plugins
fmt.Printf(servicePluginHeader)
for name, plugin := range servPlugins {
printConfig(name, plugin, "plugins")
}
}
@@ -279,12 +382,12 @@ func sliceContains(name string, list []string) bool {
return false
}
// PrintInputConfig prints the config usage of a single input.
func PrintInputConfig(name string) error {
if creator, ok := inputs.Inputs[name]; ok {
printConfig(name, creator(), "inputs")
// PrintPluginConfig prints the config usage of a single plugin.
func PrintPluginConfig(name string) error {
if creator, ok := plugins.Plugins[name]; ok {
printConfig(name, creator(), "plugins")
} else {
return errors.New(fmt.Sprintf("Input %s not found", name))
return errors.New(fmt.Sprintf("Plugin %s not found", name))
}
return nil
}
@@ -322,7 +425,12 @@ func (c *Config) LoadDirectory(path string) error {
// LoadConfig loads the given config file and applies it to c
func (c *Config) LoadConfig(path string) error {
tbl, err := config.ParseFile(path)
data, err := ioutil.ReadFile(path)
if err != nil {
return err
}
tbl, err := toml.Parse(data)
if err != nil {
return err
}
@@ -335,25 +443,43 @@ func (c *Config) LoadConfig(path string) error {
switch name {
case "agent":
if err = config.UnmarshalTable(subTable, c.Agent); err != nil {
if err = toml.UnmarshalTable(subTable, c.Agent); err != nil {
log.Printf("Could not parse [agent] config\n")
return err
}
case "global_tags", "tags":
if err = config.UnmarshalTable(subTable, c.Tags); err != nil {
log.Printf("Could not parse [global_tags] config\n")
case "tags":
if err = toml.UnmarshalTable(subTable, c.Tags); err != nil {
log.Printf("Could not parse [tags] config\n")
return err
}
case "outputs":
for outputName, outputVal := range subTable.Fields {
switch outputSubTable := outputVal.(type) {
case *ast.Table:
if err = c.addOutput(outputName, outputSubTable); err != nil {
return err
}
case []*ast.Table:
for _, t := range outputSubTable {
if err = c.addOutput(outputName, t); err != nil {
return err
}
}
default:
return fmt.Errorf("Unsupported config format: %s",
outputName)
}
}
case "plugins":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
case *ast.Table:
if err = c.addOutput(pluginName, pluginSubTable); err != nil {
if err = c.addPlugin(pluginName, pluginSubTable); err != nil {
return err
}
case []*ast.Table:
for _, t := range pluginSubTable {
if err = c.addOutput(pluginName, t); err != nil {
if err = c.addPlugin(pluginName, t); err != nil {
return err
}
}
@@ -362,28 +488,10 @@ func (c *Config) LoadConfig(path string) error {
pluginName)
}
}
case "inputs", "plugins":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
case *ast.Table:
if err = c.addInput(pluginName, pluginSubTable); err != nil {
return err
}
case []*ast.Table:
for _, t := range pluginSubTable {
if err = c.addInput(pluginName, t); err != nil {
return err
}
}
default:
return fmt.Errorf("Unsupported config format: %s",
pluginName)
}
}
// Assume it's an input input for legacy config file support if no other
// Assume it's a plugin for legacy config file support if no other
// identifiers are present
default:
if err = c.addInput(name, subTable); err != nil {
if err = c.addPlugin(name, subTable); err != nil {
return err
}
}
@@ -401,92 +509,69 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
}
output := creator()
// If the output has a SetSerializer function, then this means it can write
// arbitrary types of output, so build the serializer and set it.
switch t := output.(type) {
case serializers.SerializerOutput:
serializer, err := buildSerializer(name, table)
if err != nil {
return err
}
t.SetSerializer(serializer)
}
outputConfig, err := buildOutput(name, table)
if err != nil {
return err
}
if err := config.UnmarshalTable(table, output); err != nil {
if err := toml.UnmarshalTable(table, output); err != nil {
return err
}
ro := internal_models.NewRunningOutput(name, output, outputConfig)
if c.Agent.MetricBufferLimit > 0 {
ro.MetricBufferLimit = c.Agent.MetricBufferLimit
ro := &RunningOutput{
Name: name,
Output: output,
Config: outputConfig,
}
ro.FlushBufferWhenFull = c.Agent.FlushBufferWhenFull
c.Outputs = append(c.Outputs, ro)
return nil
}
func (c *Config) addInput(name string, table *ast.Table) error {
if len(c.InputFilters) > 0 && !sliceContains(name, c.InputFilters) {
func (c *Config) addPlugin(name string, table *ast.Table) error {
if len(c.PluginFilters) > 0 && !sliceContains(name, c.PluginFilters) {
return nil
}
// Legacy support renaming io input to diskio
// Legacy support renaming io plugin to diskio
if name == "io" {
name = "diskio"
}
creator, ok := inputs.Inputs[name]
creator, ok := plugins.Plugins[name]
if !ok {
return fmt.Errorf("Undefined but requested input: %s", name)
return fmt.Errorf("Undefined but requested plugin: %s", name)
}
input := creator()
plugin := creator()
// If the input has a SetParser function, then this means it can accept
// arbitrary types of input, so build the parser and set it.
switch t := input.(type) {
case parsers.ParserInput:
parser, err := buildParser(name, table)
if err != nil {
return err
}
t.SetParser(parser)
}
pluginConfig, err := buildInput(name, table)
pluginConfig, err := buildPlugin(name, table)
if err != nil {
return err
}
if err := config.UnmarshalTable(table, input); err != nil {
if err := toml.UnmarshalTable(table, plugin); err != nil {
return err
}
rp := &internal_models.RunningInput{
rp := &RunningPlugin{
Name: name,
Input: input,
Plugin: plugin,
Config: pluginConfig,
}
c.Inputs = append(c.Inputs, rp)
c.Plugins = append(c.Plugins, rp)
return nil
}
// buildFilter builds a Filter
// (tagpass/tagdrop/namepass/namedrop/fieldpass/fielddrop) to
// be inserted into the internal_models.OutputConfig/internal_models.InputConfig to be used for prefix
// buildFilter builds a Filter (tagpass/tagdrop/pass/drop) to
// be inserted into the OutputConfig/PluginConfig to be used for prefix
// filtering on tags and measurements
func buildFilter(tbl *ast.Table) internal_models.Filter {
f := internal_models.Filter{}
func buildFilter(tbl *ast.Table) Filter {
f := Filter{}
if node, ok := tbl.Fields["namepass"]; ok {
if node, ok := tbl.Fields["pass"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
f.NamePass = append(f.NamePass, str.Value)
f.Pass = append(f.Pass, str.Value)
f.IsActive = true
}
}
@@ -494,12 +579,12 @@ func buildFilter(tbl *ast.Table) internal_models.Filter {
}
}
if node, ok := tbl.Fields["namedrop"]; ok {
if node, ok := tbl.Fields["drop"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
f.NameDrop = append(f.NameDrop, str.Value)
f.Drop = append(f.Drop, str.Value)
f.IsActive = true
}
}
@@ -507,43 +592,11 @@ func buildFilter(tbl *ast.Table) internal_models.Filter {
}
}
fields := []string{"pass", "fieldpass"}
for _, field := range fields {
if node, ok := tbl.Fields[field]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
f.FieldPass = append(f.FieldPass, str.Value)
f.IsActive = true
}
}
}
}
}
}
fields = []string{"drop", "fielddrop"}
for _, field := range fields {
if node, ok := tbl.Fields[field]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
f.FieldDrop = append(f.FieldDrop, str.Value)
f.IsActive = true
}
}
}
}
}
}
if node, ok := tbl.Fields["tagpass"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
tagfilter := &internal_models.TagFilter{Name: name}
tagfilter := &TagFilter{Name: name}
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
@@ -562,7 +615,7 @@ func buildFilter(tbl *ast.Table) internal_models.Filter {
if subtbl, ok := node.(*ast.Table); ok {
for name, val := range subtbl.Fields {
if kv, ok := val.(*ast.KeyValue); ok {
tagfilter := &internal_models.TagFilter{Name: name}
tagfilter := &TagFilter{Name: name}
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
@@ -577,10 +630,6 @@ func buildFilter(tbl *ast.Table) internal_models.Filter {
}
}
delete(tbl.Fields, "namedrop")
delete(tbl.Fields, "namepass")
delete(tbl.Fields, "fielddrop")
delete(tbl.Fields, "fieldpass")
delete(tbl.Fields, "drop")
delete(tbl.Fields, "pass")
delete(tbl.Fields, "tagdrop")
@@ -588,11 +637,11 @@ func buildFilter(tbl *ast.Table) internal_models.Filter {
return f
}
// buildInput parses input specific items from the ast.Table,
// buildPlugin parses plugin specific items from the ast.Table,
// builds the filter and returns a
// internal_models.InputConfig to be inserted into internal_models.RunningInput
func buildInput(name string, tbl *ast.Table) (*internal_models.InputConfig, error) {
cp := &internal_models.InputConfig{Name: name}
// PluginConfig to be inserted into RunningPlugin
func buildPlugin(name string, tbl *ast.Table) (*PluginConfig, error) {
cp := &PluginConfig{Name: name}
if node, ok := tbl.Fields["interval"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
@@ -633,8 +682,8 @@ func buildInput(name string, tbl *ast.Table) (*internal_models.InputConfig, erro
cp.Tags = make(map[string]string)
if node, ok := tbl.Fields["tags"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
if err := config.UnmarshalTable(subtbl, cp.Tags); err != nil {
log.Printf("Could not parse tags for input %s\n", name)
if err := toml.UnmarshalTable(subtbl, cp.Tags); err != nil {
log.Printf("Could not parse tags for plugin %s\n", name)
}
}
}
@@ -648,114 +697,13 @@ func buildInput(name string, tbl *ast.Table) (*internal_models.InputConfig, erro
return cp, nil
}
// buildParser grabs the necessary entries from the ast.Table for creating
// a parsers.Parser object, and creates it, which can then be added onto
// an Input object.
func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
c := &parsers.Config{}
if node, ok := tbl.Fields["data_format"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DataFormat = str.Value
}
}
}
// Legacy support, exec plugin originally parsed JSON by default.
if name == "exec" && c.DataFormat == "" {
c.DataFormat = "json"
} else if c.DataFormat == "" {
c.DataFormat = "influx"
}
if node, ok := tbl.Fields["separator"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.Separator = str.Value
}
}
}
if node, ok := tbl.Fields["templates"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
c.Templates = append(c.Templates, str.Value)
}
}
}
}
}
if node, ok := tbl.Fields["tag_keys"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
c.TagKeys = append(c.TagKeys, str.Value)
}
}
}
}
}
c.MetricName = name
delete(tbl.Fields, "data_format")
delete(tbl.Fields, "separator")
delete(tbl.Fields, "templates")
delete(tbl.Fields, "tag_keys")
return parsers.NewParser(c)
}
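
A minimal sketch (not part of the diff) of how a parser built this way gets attached to an input, using only calls that appear elsewhere in this changeset (parsers.NewJSONParser, SetParser, and the inputs registry); the command string is a placeholder:

```go
package main

import (
	"github.com/influxdata/telegraf/plugins/inputs"
	"github.com/influxdata/telegraf/plugins/inputs/exec"
	"github.com/influxdata/telegraf/plugins/parsers"
)

func main() {
	// The exec input historically defaulted to the JSON data format,
	// which is the legacy behavior buildParser preserves above.
	p, err := parsers.NewJSONParser("exec", nil, nil)
	if err != nil {
		panic(err)
	}

	// Importing the exec package registers it in the inputs.Inputs map,
	// the same lookup pattern the config tests below rely on.
	ex := inputs.Inputs["exec"]().(*exec.Exec)
	ex.SetParser(p)
	ex.Command = "/usr/bin/mycollector --foo=bar" // placeholder command
}
```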
// buildSerializer grabs the necessary entries from the ast.Table for creating
// a serializers.Serializer object, and creates it, which can then be added onto
// an Output object.
func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error) {
c := &serializers.Config{}
if node, ok := tbl.Fields["data_format"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.DataFormat = str.Value
}
}
}
if c.DataFormat == "" {
c.DataFormat = "influx"
}
if node, ok := tbl.Fields["prefix"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.Prefix = str.Value
}
}
}
delete(tbl.Fields, "data_format")
delete(tbl.Fields, "prefix")
return serializers.NewSerializer(c)
}
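
And a companion sketch for the output side: building a serializer the way buildSerializer above does and serializing one metric. The Serialize call returning a slice of line strings is inferred from the AMQP output further down, and the metric values are placeholders.

```go
package main

import (
	"fmt"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/plugins/serializers"
)

func main() {
	// "influx" is the default; "graphite" is the other format mentioned
	// in the sample configs in this changeset. Prefix is assumed to only
	// matter for the graphite format.
	c := &serializers.Config{
		DataFormat: "graphite",
		Prefix:     "telegraf",
	}
	s, err := serializers.NewSerializer(c)
	if err != nil {
		panic(err)
	}

	m, err := telegraf.NewMetric("cpu",
		map[string]string{"host": "localhost"},
		map[string]interface{}{"usage_idle": float64(99)},
		time.Now())
	if err != nil {
		panic(err)
	}

	// Serialize returns one or more serialized lines per metric,
	// matching how the AMQP output iterates over its result.
	lines, err := s.Serialize(m)
	if err != nil {
		panic(err)
	}
	for _, line := range lines {
		fmt.Println(line)
	}
}
```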
// buildOutput parses output specific items from the ast.Table, builds the filter and returns an
// internal_models.OutputConfig to be inserted into internal_models.RunningOutput
// OutputConfig to be inserted into RunningOutput
// Note: error exists in the return for future calls that might require error
func buildOutput(name string, tbl *ast.Table) (*internal_models.OutputConfig, error) {
oc := &internal_models.OutputConfig{
func buildOutput(name string, tbl *ast.Table) (*OutputConfig, error) {
oc := &OutputConfig{
Name: name,
Filter: buildFilter(tbl),
}
// Outputs don't support FieldDrop/FieldPass, so set to NameDrop/NamePass
if len(oc.Filter.FieldDrop) > 0 {
oc.Filter.NameDrop = oc.Filter.FieldDrop
}
if len(oc.Filter.FieldPass) > 0 {
oc.Filter.NamePass = oc.Filter.FieldPass
}
return oc, nil
}

View File

@@ -4,37 +4,33 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/inputs/exec"
"github.com/influxdata/telegraf/plugins/inputs/memcached"
"github.com/influxdata/telegraf/plugins/inputs/procstat"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdb/telegraf/plugins"
"github.com/influxdb/telegraf/plugins/exec"
"github.com/influxdb/telegraf/plugins/memcached"
"github.com/influxdb/telegraf/plugins/procstat"
"github.com/stretchr/testify/assert"
)
func TestConfig_LoadSingleInput(t *testing.T) {
func TestConfig_LoadSinglePlugin(t *testing.T) {
c := NewConfig()
c.LoadConfig("./testdata/single_plugin.toml")
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached := plugins.Plugins["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"}
mConfig := &internal_models.InputConfig{
mConfig := &PluginConfig{
Name: "memcached",
Filter: internal_models.Filter{
NameDrop: []string{"metricname2"},
NamePass: []string{"metricname1"},
FieldDrop: []string{"other", "stuff"},
FieldPass: []string{"some", "strings"},
TagDrop: []internal_models.TagFilter{
internal_models.TagFilter{
Filter: Filter{
Drop: []string{"other", "stuff"},
Pass: []string{"some", "strings"},
TagDrop: []TagFilter{
TagFilter{
Name: "badtag",
Filter: []string{"othertag"},
},
},
TagPass: []internal_models.TagFilter{
internal_models.TagFilter{
TagPass: []TagFilter{
TagFilter{
Name: "goodtag",
Filter: []string{"mytag"},
},
@@ -43,11 +39,10 @@ func TestConfig_LoadSingleInput(t *testing.T) {
},
Interval: 5 * time.Second,
}
mConfig.Tags = make(map[string]string)
assert.Equal(t, memcached, c.Inputs[0].Input,
assert.Equal(t, memcached, c.Plugins[0].Plugin,
"Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Inputs[0].Config,
assert.Equal(t, mConfig, c.Plugins[0].Config,
"Testdata did not produce correct memcached metadata.")
}
@@ -62,24 +57,22 @@ func TestConfig_LoadDirectory(t *testing.T) {
t.Error(err)
}
memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached := plugins.Plugins["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"}
mConfig := &internal_models.InputConfig{
mConfig := &PluginConfig{
Name: "memcached",
Filter: internal_models.Filter{
NameDrop: []string{"metricname2"},
NamePass: []string{"metricname1"},
FieldDrop: []string{"other", "stuff"},
FieldPass: []string{"some", "strings"},
TagDrop: []internal_models.TagFilter{
internal_models.TagFilter{
Filter: Filter{
Drop: []string{"other", "stuff"},
Pass: []string{"some", "strings"},
TagDrop: []TagFilter{
TagFilter{
Name: "badtag",
Filter: []string{"othertag"},
},
},
TagPass: []internal_models.TagFilter{
internal_models.TagFilter{
TagPass: []TagFilter{
TagFilter{
Name: "goodtag",
Filter: []string{"mytag"},
},
@@ -88,42 +81,216 @@ func TestConfig_LoadDirectory(t *testing.T) {
},
Interval: 5 * time.Second,
}
mConfig.Tags = make(map[string]string)
assert.Equal(t, memcached, c.Inputs[0].Input,
assert.Equal(t, memcached, c.Plugins[0].Plugin,
"Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Inputs[0].Config,
assert.Equal(t, mConfig, c.Plugins[0].Config,
"Testdata did not produce correct memcached metadata.")
ex := inputs.Inputs["exec"]().(*exec.Exec)
p, err := parsers.NewJSONParser("exec", nil, nil)
assert.NoError(t, err)
ex.SetParser(p)
ex.Command = "/usr/bin/myothercollector --foo=bar"
eConfig := &internal_models.InputConfig{
Name: "exec",
MeasurementSuffix: "_myothercollector",
ex := plugins.Plugins["exec"]().(*exec.Exec)
ex.Commands = []*exec.Command{
&exec.Command{
Command: "/usr/bin/myothercollector --foo=bar",
Name: "myothercollector",
},
}
eConfig.Tags = make(map[string]string)
assert.Equal(t, ex, c.Inputs[1].Input,
eConfig := &PluginConfig{Name: "exec"}
assert.Equal(t, ex, c.Plugins[1].Plugin,
"Merged Testdata did not produce a correct exec struct.")
assert.Equal(t, eConfig, c.Inputs[1].Config,
assert.Equal(t, eConfig, c.Plugins[1].Config,
"Merged Testdata did not produce correct exec metadata.")
memcached.Servers = []string{"192.168.1.1"}
assert.Equal(t, memcached, c.Inputs[2].Input,
assert.Equal(t, memcached, c.Plugins[2].Plugin,
"Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Inputs[2].Config,
assert.Equal(t, mConfig, c.Plugins[2].Config,
"Testdata did not produce correct memcached metadata.")
pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
pstat.PidFile = "/var/run/grafana-server.pid"
pstat := plugins.Plugins["procstat"]().(*procstat.Procstat)
pstat.Specifications = []*procstat.Specification{
&procstat.Specification{
PidFile: "/var/run/grafana-server.pid",
},
&procstat.Specification{
PidFile: "/var/run/influxdb/influxd.pid",
},
}
pConfig := &internal_models.InputConfig{Name: "procstat"}
pConfig.Tags = make(map[string]string)
pConfig := &PluginConfig{Name: "procstat"}
assert.Equal(t, pstat, c.Inputs[3].Input,
assert.Equal(t, pstat, c.Plugins[3].Plugin,
"Merged Testdata did not produce a correct procstat struct.")
assert.Equal(t, pConfig, c.Inputs[3].Config,
assert.Equal(t, pConfig, c.Plugins[3].Config,
"Merged Testdata did not produce correct procstat metadata.")
}
func TestFilter_Empty(t *testing.T) {
f := Filter{}
measurements := []string{
"foo",
"bar",
"barfoo",
"foo_bar",
"foo.bar",
"foo-bar",
"supercalifradjulisticexpialidocious",
}
for _, measurement := range measurements {
if !f.ShouldPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
}
func TestFilter_Pass(t *testing.T) {
f := Filter{
Pass: []string{"foo*", "cpu_usage_idle"},
}
passes := []string{
"foo",
"foo_bar",
"foo.bar",
"foo-bar",
"cpu_usage_idle",
}
drops := []string{
"bar",
"barfoo",
"bar_foo",
"cpu_usage_busy",
}
for _, measurement := range passes {
if !f.ShouldPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldPass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_Drop(t *testing.T) {
f := Filter{
Drop: []string{"foo*", "cpu_usage_idle"},
}
drops := []string{
"foo",
"foo_bar",
"foo.bar",
"foo-bar",
"cpu_usage_idle",
}
passes := []string{
"bar",
"barfoo",
"bar_foo",
"cpu_usage_busy",
}
for _, measurement := range passes {
if !f.ShouldPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldPass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_TagPass(t *testing.T) {
filters := []TagFilter{
TagFilter{
Name: "cpu",
Filter: []string{"cpu-*"},
},
TagFilter{
Name: "mem",
Filter: []string{"mem_free"},
}}
f := Filter{
TagPass: filters,
}
passes := []map[string]string{
{"cpu": "cpu-total"},
{"cpu": "cpu-0"},
{"cpu": "cpu-1"},
{"cpu": "cpu-2"},
{"mem": "mem_free"},
}
drops := []map[string]string{
{"cpu": "cputotal"},
{"cpu": "cpu0"},
{"cpu": "cpu1"},
{"cpu": "cpu2"},
{"mem": "mem_used"},
}
for _, tags := range passes {
if !f.ShouldTagsPass(tags) {
t.Errorf("Expected tags %v to pass", tags)
}
}
for _, tags := range drops {
if f.ShouldTagsPass(tags) {
t.Errorf("Expected tags %v to drop", tags)
}
}
}
func TestFilter_TagDrop(t *testing.T) {
filters := []TagFilter{
TagFilter{
Name: "cpu",
Filter: []string{"cpu-*"},
},
TagFilter{
Name: "mem",
Filter: []string{"mem_free"},
}}
f := Filter{
TagDrop: filters,
}
drops := []map[string]string{
{"cpu": "cpu-total"},
{"cpu": "cpu-0"},
{"cpu": "cpu-1"},
{"cpu": "cpu-2"},
{"mem": "mem_free"},
}
passes := []map[string]string{
{"cpu": "cputotal"},
{"cpu": "cpu0"},
{"cpu": "cpu1"},
{"cpu": "cpu2"},
{"mem": "mem_used"},
}
for _, tags := range passes {
if !f.ShouldTagsPass(tags) {
t.Errorf("Expected tags %v to pass", tags)
}
}
for _, tags := range drops {
if f.ShouldTagsPass(tags) {
t.Errorf("Expected tags %v to drop", tags)
}
}
}

View File

@@ -1,11 +1,9 @@
[[inputs.memcached]]
[[plugins.memcached]]
servers = ["localhost"]
namepass = ["metricname1"]
namedrop = ["metricname2"]
fieldpass = ["some", "strings"]
fielddrop = ["other", "stuff"]
pass = ["some", "strings"]
drop = ["other", "stuff"]
interval = "5s"
[inputs.memcached.tagpass]
[plugins.memcached.tagpass]
goodtag = ["mytag"]
[inputs.memcached.tagdrop]
[plugins.memcached.tagdrop]
badtag = ["othertag"]

View File

@@ -1,4 +1,8 @@
[[inputs.exec]]
[[plugins.exec]]
# specify commands via an array of tables
[[plugins.exec.commands]]
# the command to run
command = "/usr/bin/myothercollector --foo=bar"
name_suffix = "_myothercollector"
# name of the command (used as a prefix for measurements)
name = "myothercollector"

View File

@@ -1,11 +1,9 @@
[[inputs.memcached]]
[[plugins.memcached]]
servers = ["192.168.1.1"]
namepass = ["metricname1"]
namedrop = ["metricname2"]
pass = ["some", "strings"]
drop = ["other", "stuff"]
interval = "5s"
[inputs.memcached.tagpass]
[plugins.memcached.tagpass]
goodtag = ["mytag"]
[inputs.memcached.tagdrop]
[plugins.memcached.tagdrop]
badtag = ["othertag"]

View File

@@ -1,2 +1,5 @@
[[inputs.procstat]]
[[plugins.procstat]]
[[plugins.procstat.specifications]]
pid_file = "/var/run/grafana-server.pid"
[[plugins.procstat.specifications]]
pid_file = "/var/run/influxdb/influxd.pid"

View File

@@ -1,7 +1,7 @@
# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the
# declared inputs.
# declared plugins.
# Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name
@@ -20,14 +20,21 @@
# with 'required'. Be sure to edit those to make this configuration work.
# Tags can also be specified via a normal map, but only one form at a time:
[global_tags]
dc = "us-east-1"
[tags]
# dc = "us-east-1"
# Configuration for telegraf agent
[agent]
# Default data collection interval for all plugins
interval = "10s"
# If utc = false, uses local time (utc is highly recommended)
utc = true
# Precision of writes, valid values are n, u, ms, s, m, and h
# note: using second precision greatly helps InfluxDB compression
precision = "s"
# run telegraf in debug mode
debug = false
@@ -39,6 +46,8 @@
# OUTPUTS #
###############################################################################
[outputs]
# Configuration for influxdb server to send metrics to
[[outputs.influxdb]]
# The full HTTP endpoint URL for your InfluxDB instance
@@ -49,6 +58,17 @@
# The target database for metrics. This database must already exist
database = "telegraf" # required.
# Connection timeout (for the connection with InfluxDB), formatted as a string.
# Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# If not provided, will default to 0 (no timeout)
# timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
# Set the user agent for the POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
[[outputs.influxdb]]
urls = ["udp://localhost:8089"]
database = "udp-telegraf"
@@ -68,13 +88,15 @@
# PLUGINS #
###############################################################################
[plugins]
# Read Apache status information (mod_status)
[[inputs.apache]]
# An array of Apache status URI to gather stats.
urls = ["http://localhost/server-status?auto"]
[[plugins.apache]]
# An array of Apache status URI to gather stats.
urls = ["http://localhost/server-status?auto"]
# Read metrics about cpu usage
[[inputs.cpu]]
[[plugins.cpu]]
# Whether to report per-cpu stats or not
percpu = true
# Whether to report total system cpu stats or not
@@ -83,11 +105,11 @@
drop = ["cpu_time"]
# Read metrics about disk usage by mount point
[[inputs.diskio]]
[[plugins.diskio]]
# no configuration
# Read metrics from one or many disque servers
[[inputs.disque]]
[[plugins.disque]]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
# 10.0.0.1:10000, etc.
@@ -96,7 +118,7 @@
servers = ["localhost"]
# Read stats from one or more Elasticsearch servers or clusters
[[inputs.elasticsearch]]
[[plugins.elasticsearch]]
# specify a list of one or more Elasticsearch servers
servers = ["http://localhost:9200"]
@@ -105,13 +127,17 @@
local = true
# Read flattened metrics from one or more commands that output JSON to stdout
[[inputs.exec]]
[[plugins.exec]]
# specify commands via an array of tables
[[exec.commands]]
# the command to run
command = "/usr/bin/mycollector --foo=bar"
name_suffix = "_mycollector"
# name of the command (used as a prefix for measurements)
name = "mycollector"
# Read metrics of haproxy, via socket or csv stats page
[[inputs.haproxy]]
[[plugins.haproxy]]
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port. ie localhost, 10.10.3.33:1936, etc.
#
@@ -121,30 +147,33 @@
# servers = ["socket:/run/haproxy/admin.sock"]
# Read flattened metrics from one or more JSON HTTP endpoints
[[inputs.httpjson]]
# a name for the service being polled
name = "webserver_stats"
[[plugins.httpjson]]
# Specify services via an array of tables
[[httpjson.services]]
# URL of each server in the service's cluster
servers = [
"http://localhost:9999/stats/",
"http://localhost:9998/stats/",
]
# a name for the service being polled
name = "webserver_stats"
# HTTP method to use (case-sensitive)
method = "GET"
# URL of each server in the service's cluster
servers = [
"http://localhost:9999/stats/",
"http://localhost:9998/stats/",
]
# HTTP parameters (all values must be strings)
[httpjson.parameters]
event_type = "cpu_spike"
threshold = "0.75"
# HTTP method to use (case-sensitive)
method = "GET"
# HTTP parameters (all values must be strings)
[httpjson.services.parameters]
event_type = "cpu_spike"
threshold = "0.75"
# Read metrics about disk IO by device
[[inputs.diskio]]
[[plugins.io]]
# no configuration
# read metrics from a Kafka topic
[[inputs.kafka_consumer]]
[[plugins.kafka_consumer]]
# topic(s) to consume
topics = ["telegraf"]
# an array of Zookeeper connection strings
@@ -157,7 +186,7 @@
offset = "oldest"
# Read metrics from a LeoFS Server via SNMP
[[inputs.leofs]]
[[plugins.leofs]]
# An array of URI to gather stats about LeoFS.
# Specify an ip or hostname with port. ie 127.0.0.1:4020
#
@@ -165,7 +194,7 @@
servers = ["127.0.0.1:4021"]
# Read metrics from local Lustre service on OST, MDS
[[inputs.lustre2]]
[[plugins.lustre2]]
# An array of /proc globs to search for Lustre stats
# If not specified, the default will work on Lustre 2.5.x
#
@@ -173,28 +202,19 @@
# mds_procfiles = ["/proc/fs/lustre/mdt/*/md_stats"]
# Read metrics about memory usage
[[inputs.mem]]
[[plugins.mem]]
# no configuration
# Read metrics from one or many memcached servers
[[inputs.memcached]]
[[plugins.memcached]]
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port. ie localhost, 10.0.0.1:11211, etc.
#
# If no servers are specified, then localhost is used as the host.
servers = ["localhost"]
# Telegraf plugin for gathering metrics from N Mesos masters
[[inputs.mesos]]
# Timeout, in ms.
timeout = 100
# A list of Mesos masters, default value is localhost:5050.
masters = ["localhost:5050"]
# Metrics groups to be collected, by default, all enabled.
master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"]
# Read metrics from one or many MongoDB servers
[[inputs.mongodb]]
[[plugins.mongodb]]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie mongodb://user:auth_key@10.10.3.30:27017,
# mongodb://10.10.3.33:18832, 10.0.0.1:10000, etc.
@@ -203,7 +223,7 @@
servers = ["127.0.0.1:27017"]
# Read metrics from one or many mysql servers
[[inputs.mysql]]
[[plugins.mysql]]
# specify servers via a url matching:
# [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
# e.g.
@@ -214,7 +234,7 @@
servers = ["localhost"]
# Read metrics about network interface usage
[[inputs.net]]
[[plugins.net]]
# By default, telegraf gathers stats from any up interface (excluding loopback)
# Setting interfaces will tell it to gather these explicit interfaces,
# regardless of status.
@@ -222,12 +242,12 @@
# interfaces = ["eth0", ... ]
# Read Nginx's basic status information (ngx_http_stub_status_module)
[[inputs.nginx]]
[[plugins.nginx]]
# An array of Nginx stub_status URI to gather stats.
urls = ["http://localhost/status"]
# Ping given url(s) and return statistics
[[inputs.ping]]
[[plugins.ping]]
# urls to ping
urls = ["www.google.com"] # required
# number of pings to send (ping -c <COUNT>)
@@ -240,7 +260,10 @@
interface = ""
# Read metrics from one or many postgresql servers
[[inputs.postgresql]]
[[plugins.postgresql]]
# specify servers via an array of tables
[[postgresql.servers]]
# specify address via a url matching:
# postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
# or a simple string:
@@ -267,13 +290,14 @@
# address = "influx@remoteserver"
# Read metrics from one or many prometheus clients
[[inputs.prometheus]]
[[plugins.prometheus]]
# An array of urls to scrape metrics from.
urls = ["http://localhost:9100/metrics"]
# Read metrics from one or many RabbitMQ servers via the management API
[[inputs.rabbitmq]]
[[plugins.rabbitmq]]
# Specify servers via an array of tables
[[rabbitmq.servers]]
# name = "rmq-server-1" # optional tag
# url = "http://localhost:15672"
# username = "guest"
@@ -284,7 +308,7 @@
# nodes = ["rabbit@node1", "rabbit@node2"]
# Read metrics from one or many redis servers
[[inputs.redis]]
[[plugins.redis]]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie redis://localhost, redis://10.10.3.33:18832,
# 10.0.0.1:10000, etc.
@@ -293,7 +317,7 @@
servers = ["localhost"]
# Read metrics from one or many RethinkDB servers
[[inputs.rethinkdb]]
[[plugins.rethinkdb]]
# An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie rethinkdb://user:auth_key@10.10.3.30:28105,
# rethinkdb://10.10.3.33:18832, 10.0.0.1:10000, etc.
@@ -302,9 +326,9 @@
servers = ["127.0.0.1:28015"]
# Read metrics about swap memory usage
[[inputs.swap]]
[[plugins.swap]]
# no configuration
# Read metrics about system load & uptime
[[inputs.system]]
[[plugins.system]]
# no configuration

View File

@@ -2,19 +2,13 @@ package internal
import (
"bufio"
"crypto/rand"
"crypto/tls"
"crypto/x509"
"errors"
"fmt"
"io/ioutil"
"os"
"strings"
"time"
)
const alphanum string = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
// Duration just wraps time.Duration
type Duration struct {
Duration time.Duration
@@ -34,6 +28,39 @@ func (d *Duration) UnmarshalTOML(b []byte) error {
var NotImplementedError = errors.New("not implemented yet")
type JSONFlattener struct {
Fields map[string]interface{}
}
// FlattenJSON flattens nested maps/interfaces into a fields map
func (f *JSONFlattener) FlattenJSON(
fieldname string,
v interface{},
) error {
if f.Fields == nil {
f.Fields = make(map[string]interface{})
}
fieldname = strings.Trim(fieldname, "_")
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return err
}
}
case float64:
f.Fields[fieldname] = t
case bool, string, []interface{}:
// ignored types
return nil
default:
return fmt.Errorf("JSON Flattener: got unexpected type %T with value %v (%s)",
t, t, fieldname)
}
return nil
}
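
A quick sketch of what the flattener above produces for a nested document, assuming the 0.3.0-era import path used elsewhere in this changeset; note that bool, string, and array values are deliberately dropped, as the type switch shows.

```go
package main

import (
	"encoding/json"
	"fmt"

	"github.com/influxdb/telegraf/internal"
)

func main() {
	raw := []byte(`{"redis": {"keyspace_hits": 12, "clients": {"connected": 3}}, "role": "master"}`)

	var parsed map[string]interface{}
	if err := json.Unmarshal(raw, &parsed); err != nil {
		panic(err)
	}

	f := internal.JSONFlattener{}
	if err := f.FlattenJSON("", parsed); err != nil {
		panic(err)
	}

	// Numeric leaves become underscore-joined fields, e.g.
	//   redis_keyspace_hits = 12
	//   redis_clients_connected = 3
	// while the string value of "role" is ignored.
	fmt.Println(f.Fields)
}
```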
// ReadLines reads contents from a file and splits them by new lines.
// A convenience wrapper to ReadLinesOffsetN(filename, 0, -1).
func ReadLines(filename string) ([]string, error) {
@@ -69,57 +96,6 @@ func ReadLinesOffsetN(filename string, offset uint, n int) ([]string, error) {
return ret, nil
}
// RandomString returns a random string of alpha-numeric characters
func RandomString(n int) string {
var bytes = make([]byte, n)
rand.Read(bytes)
for i, b := range bytes {
bytes[i] = alphanum[b%byte(len(alphanum))]
}
return string(bytes)
}
// GetTLSConfig gets a tls.Config object from the given certs, key, and CA files.
// you must give the full path to the files.
// If all files are blank and InsecureSkipVerify=false, returns a nil pointer.
func GetTLSConfig(
SSLCert, SSLKey, SSLCA string,
InsecureSkipVerify bool,
) (*tls.Config, error) {
t := &tls.Config{}
if SSLCert != "" && SSLKey != "" && SSLCA != "" {
cert, err := tls.LoadX509KeyPair(SSLCert, SSLKey)
if err != nil {
return nil, errors.New(fmt.Sprintf(
"Could not load TLS client key/certificate: %s",
err))
}
caCert, err := ioutil.ReadFile(SSLCA)
if err != nil {
return nil, errors.New(fmt.Sprintf("Could not load TLS CA: %s",
err))
}
caCertPool := x509.NewCertPool()
caCertPool.AppendCertsFromPEM(caCert)
t = &tls.Config{
Certificates: []tls.Certificate{cert},
RootCAs: caCertPool,
InsecureSkipVerify: InsecureSkipVerify,
}
} else {
if InsecureSkipVerify {
t.InsecureSkipVerify = true
} else {
return nil, nil
}
}
// will be nil by default if nothing is provided
return t, nil
}
// Glob will test a string pattern, potentially containing globs, against a
// subject string. The result is a simple true/false, determining whether or
// not the glob pattern matched the subject text.
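
As a rough illustration of the matching described above (a sketch only; it assumes Glob(pattern, subject) bool, which is how the filter code in this changeset calls it, and the 0.3.0-era import path):

```go
package main

import (
	"fmt"

	"github.com/influxdb/telegraf/internal"
)

func main() {
	// Patterns like "cpu-*" are the same ones used in tagpass/tagdrop entries.
	fmt.Println(internal.Glob("cpu-*", "cpu-total")) // true
	fmt.Println(internal.Glob("cpu-*", "cputotal"))  // false
	fmt.Println(internal.Glob("cpu_usage_idle", "cpu_usage_idle")) // true, an exact pattern matches itself
}
```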

View File

@@ -1,123 +0,0 @@
package internal_models
import (
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
)
// TagFilter is the name of a tag, and the values on which to filter
type TagFilter struct {
Name string
Filter []string
}
// Filter containing drop/pass and tagdrop/tagpass rules
type Filter struct {
NameDrop []string
NamePass []string
FieldDrop []string
FieldPass []string
TagDrop []TagFilter
TagPass []TagFilter
IsActive bool
}
func (f Filter) ShouldMetricPass(metric telegraf.Metric) bool {
if f.ShouldNamePass(metric.Name()) && f.ShouldTagsPass(metric.Tags()) {
return true
}
return false
}
// ShouldNamePass returns true if the metric should pass, false if should drop
// based on the drop/pass filter parameters
func (f Filter) ShouldNamePass(key string) bool {
if f.NamePass != nil {
for _, pat := range f.NamePass {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
return true
}
}
return false
}
if f.NameDrop != nil {
for _, pat := range f.NameDrop {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
return false
}
}
return true
}
return true
}
// ShouldFieldsPass returns true if the metric should pass, false if should drop
// based on the drop/pass filter parameters
func (f Filter) ShouldFieldsPass(key string) bool {
if f.FieldPass != nil {
for _, pat := range f.FieldPass {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
return true
}
}
return false
}
if f.FieldDrop != nil {
for _, pat := range f.FieldDrop {
// TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07
if strings.HasPrefix(key, pat) || internal.Glob(pat, key) {
return false
}
}
return true
}
return true
}
// ShouldTagsPass returns true if the metric should pass, false if should drop
// based on the tagdrop/tagpass filter parameters
func (f Filter) ShouldTagsPass(tags map[string]string) bool {
if f.TagPass != nil {
for _, pat := range f.TagPass {
if tagval, ok := tags[pat.Name]; ok {
for _, filter := range pat.Filter {
if internal.Glob(filter, tagval) {
return true
}
}
}
}
return false
}
if f.TagDrop != nil {
for _, pat := range f.TagDrop {
if tagval, ok := tags[pat.Name]; ok {
for _, filter := range pat.Filter {
if internal.Glob(filter, tagval) {
return false
}
}
}
}
return true
}
return true
}

View File

@@ -1,243 +0,0 @@
package internal_models
import (
"testing"
)
func TestFilter_Empty(t *testing.T) {
f := Filter{}
measurements := []string{
"foo",
"bar",
"barfoo",
"foo_bar",
"foo.bar",
"foo-bar",
"supercalifradjulisticexpialidocious",
}
for _, measurement := range measurements {
if !f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
}
func TestFilter_NamePass(t *testing.T) {
f := Filter{
NamePass: []string{"foo*", "cpu_usage_idle"},
}
passes := []string{
"foo",
"foo_bar",
"foo.bar",
"foo-bar",
"cpu_usage_idle",
}
drops := []string{
"bar",
"barfoo",
"bar_foo",
"cpu_usage_busy",
}
for _, measurement := range passes {
if !f.ShouldNamePass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldNamePass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_NameDrop(t *testing.T) {
f := Filter{
NameDrop: []string{"foo*", "cpu_usage_idle"},
}
drops := []string{
"foo",
"foo_bar",
"foo.bar",
"foo-bar",
"cpu_usage_idle",
}
passes := []string{
"bar",
"barfoo",
"bar_foo",
"cpu_usage_busy",
}
for _, measurement := range passes {
if !f.ShouldNamePass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldNamePass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_FieldPass(t *testing.T) {
f := Filter{
FieldPass: []string{"foo*", "cpu_usage_idle"},
}
passes := []string{
"foo",
"foo_bar",
"foo.bar",
"foo-bar",
"cpu_usage_idle",
}
drops := []string{
"bar",
"barfoo",
"bar_foo",
"cpu_usage_busy",
}
for _, measurement := range passes {
if !f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_FieldDrop(t *testing.T) {
f := Filter{
FieldDrop: []string{"foo*", "cpu_usage_idle"},
}
drops := []string{
"foo",
"foo_bar",
"foo.bar",
"foo-bar",
"cpu_usage_idle",
}
passes := []string{
"bar",
"barfoo",
"bar_foo",
"cpu_usage_busy",
}
for _, measurement := range passes {
if !f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to pass", measurement)
}
}
for _, measurement := range drops {
if f.ShouldFieldsPass(measurement) {
t.Errorf("Expected measurement %s to drop", measurement)
}
}
}
func TestFilter_TagPass(t *testing.T) {
filters := []TagFilter{
TagFilter{
Name: "cpu",
Filter: []string{"cpu-*"},
},
TagFilter{
Name: "mem",
Filter: []string{"mem_free"},
}}
f := Filter{
TagPass: filters,
}
passes := []map[string]string{
{"cpu": "cpu-total"},
{"cpu": "cpu-0"},
{"cpu": "cpu-1"},
{"cpu": "cpu-2"},
{"mem": "mem_free"},
}
drops := []map[string]string{
{"cpu": "cputotal"},
{"cpu": "cpu0"},
{"cpu": "cpu1"},
{"cpu": "cpu2"},
{"mem": "mem_used"},
}
for _, tags := range passes {
if !f.ShouldTagsPass(tags) {
t.Errorf("Expected tags %v to pass", tags)
}
}
for _, tags := range drops {
if f.ShouldTagsPass(tags) {
t.Errorf("Expected tags %v to drop", tags)
}
}
}
func TestFilter_TagDrop(t *testing.T) {
filters := []TagFilter{
TagFilter{
Name: "cpu",
Filter: []string{"cpu-*"},
},
TagFilter{
Name: "mem",
Filter: []string{"mem_free"},
}}
f := Filter{
TagDrop: filters,
}
drops := []map[string]string{
{"cpu": "cpu-total"},
{"cpu": "cpu-0"},
{"cpu": "cpu-1"},
{"cpu": "cpu-2"},
{"mem": "mem_free"},
}
passes := []map[string]string{
{"cpu": "cputotal"},
{"cpu": "cpu0"},
{"cpu": "cpu1"},
{"cpu": "cpu2"},
{"mem": "mem_used"},
}
for _, tags := range passes {
if !f.ShouldTagsPass(tags) {
t.Errorf("Expected tags %v to pass", tags)
}
}
for _, tags := range drops {
if f.ShouldTagsPass(tags) {
t.Errorf("Expected tags %v to drop", tags)
}
}
}

View File

@@ -1,24 +0,0 @@
package internal_models
import (
"time"
"github.com/influxdata/telegraf"
)
type RunningInput struct {
Name string
Input telegraf.Input
Config *InputConfig
}
// InputConfig containing a name, interval, and filter
type InputConfig struct {
Name string
NameOverride string
MeasurementPrefix string
MeasurementSuffix string
Tags map[string]string
Filter Filter
Interval time.Duration
}

View File

@@ -1,138 +0,0 @@
package internal_models
import (
"log"
"sync"
"time"
"github.com/influxdata/telegraf"
)
const (
// Default number of metrics kept between flushes.
DEFAULT_METRIC_BUFFER_LIMIT = 10000
// Limit how many full metric buffers are kept due to failed writes.
FULL_METRIC_BUFFERS_LIMIT = 100
)
type RunningOutput struct {
Name string
Output telegraf.Output
Config *OutputConfig
Quiet bool
MetricBufferLimit int
FlushBufferWhenFull bool
metrics []telegraf.Metric
tmpmetrics map[int][]telegraf.Metric
overwriteI int
mapI int
sync.Mutex
}
func NewRunningOutput(
name string,
output telegraf.Output,
conf *OutputConfig,
) *RunningOutput {
ro := &RunningOutput{
Name: name,
metrics: make([]telegraf.Metric, 0),
tmpmetrics: make(map[int][]telegraf.Metric),
Output: output,
Config: conf,
MetricBufferLimit: DEFAULT_METRIC_BUFFER_LIMIT,
}
return ro
}
// AddMetric adds a metric to the output. This function can also write cached
// points if FlushBufferWhenFull is true.
func (ro *RunningOutput) AddMetric(metric telegraf.Metric) {
if ro.Config.Filter.IsActive {
if !ro.Config.Filter.ShouldMetricPass(metric) {
return
}
}
ro.Lock()
defer ro.Unlock()
if len(ro.metrics) < ro.MetricBufferLimit {
ro.metrics = append(ro.metrics, metric)
} else {
if ro.FlushBufferWhenFull {
ro.metrics = append(ro.metrics, metric)
tmpmetrics := make([]telegraf.Metric, len(ro.metrics))
copy(tmpmetrics, ro.metrics)
ro.metrics = make([]telegraf.Metric, 0)
err := ro.write(tmpmetrics)
if err != nil {
log.Printf("ERROR writing full metric buffer to output %s, %s",
ro.Name, err)
if len(ro.tmpmetrics) == FULL_METRIC_BUFFERS_LIMIT {
ro.mapI = 0
// overwrite one
ro.tmpmetrics[ro.mapI] = tmpmetrics
ro.mapI++
} else {
ro.tmpmetrics[ro.mapI] = tmpmetrics
ro.mapI++
}
}
} else {
log.Printf("WARNING: overwriting cached metrics, you may want to " +
"increase the metric_buffer_limit setting in your [agent] " +
"config if you do not wish to overwrite metrics.\n")
if ro.overwriteI == len(ro.metrics) {
ro.overwriteI = 0
}
ro.metrics[ro.overwriteI] = metric
ro.overwriteI++
}
}
}
// Write writes all cached points to this output.
func (ro *RunningOutput) Write() error {
ro.Lock()
defer ro.Unlock()
err := ro.write(ro.metrics)
if err != nil {
return err
} else {
ro.metrics = make([]telegraf.Metric, 0)
ro.overwriteI = 0
}
// Write any cached metric buffers that failed previously
for i, tmpmetrics := range ro.tmpmetrics {
if err := ro.write(tmpmetrics); err != nil {
return err
} else {
delete(ro.tmpmetrics, i)
}
}
return nil
}
func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
start := time.Now()
err := ro.Output.Write(metrics)
elapsed := time.Since(start)
if err == nil {
if !ro.Quiet {
log.Printf("Wrote %d metrics to output %s in %s\n",
len(metrics), ro.Name, elapsed)
}
}
return err
}
// OutputConfig containing name and filter
type OutputConfig struct {
Name string
Filter Filter
}
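
To make the buffering flow above concrete, here is a minimal sketch that wires a throwaway output through RunningOutput; the printOutput type is invented for illustration, while the constructor, interface methods, and Metric API follow the signatures shown in this file, the tests below, and metric.go further down.

```go
package main

import (
	"fmt"
	"time"

	"github.com/influxdata/telegraf"
	"github.com/influxdata/telegraf/internal/models"
)

// printOutput satisfies telegraf.Output by printing each metric batch.
type printOutput struct{}

func (p *printOutput) Connect() error       { return nil }
func (p *printOutput) Close() error         { return nil }
func (p *printOutput) Description() string  { return "print metrics to stdout" }
func (p *printOutput) SampleConfig() string { return "" }
func (p *printOutput) Write(metrics []telegraf.Metric) error {
	for _, m := range metrics {
		fmt.Println(m.String())
	}
	return nil
}

func main() {
	conf := &internal_models.OutputConfig{Name: "print"}
	ro := internal_models.NewRunningOutput("print", &printOutput{}, conf)

	m, err := telegraf.NewMetric("cpu",
		map[string]string{"host": "localhost"},
		map[string]interface{}{"usage_idle": float64(99)},
		time.Now())
	if err != nil {
		panic(err)
	}

	// Metrics are buffered until Write flushes them to the output.
	ro.AddMetric(m)
	if err := ro.Write(); err != nil {
		fmt.Println("write failed:", err)
	}
}
```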

View File

@@ -1,265 +0,0 @@
package internal_models
import (
"fmt"
"sort"
"sync"
"testing"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var first5 = []telegraf.Metric{
testutil.TestMetric(101, "metric1"),
testutil.TestMetric(101, "metric2"),
testutil.TestMetric(101, "metric3"),
testutil.TestMetric(101, "metric4"),
testutil.TestMetric(101, "metric5"),
}
var next5 = []telegraf.Metric{
testutil.TestMetric(101, "metric6"),
testutil.TestMetric(101, "metric7"),
testutil.TestMetric(101, "metric8"),
testutil.TestMetric(101, "metric9"),
testutil.TestMetric(101, "metric10"),
}
// Test that we can write metrics with simple default setup.
func TestRunningOutputDefault(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
for _, metric := range first5 {
ro.AddMetric(metric)
}
for _, metric := range next5 {
ro.AddMetric(metric)
}
assert.Len(t, m.Metrics(), 0)
err := ro.Write()
assert.NoError(t, err)
assert.Len(t, m.Metrics(), 10)
}
// Test that the first metric gets overwritten if there is a buffer overflow.
func TestRunningOutputOverwrite(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
ro.MetricBufferLimit = 4
for _, metric := range first5 {
ro.AddMetric(metric)
}
require.Len(t, m.Metrics(), 0)
err := ro.Write()
require.NoError(t, err)
require.Len(t, m.Metrics(), 4)
var expected, actual []string
for i, exp := range first5[1:] {
expected = append(expected, exp.String())
actual = append(actual, m.Metrics()[i].String())
}
sort.Strings(expected)
sort.Strings(actual)
assert.Equal(t, expected, actual)
}
// Test that multiple buffer overflows are handled properly.
func TestRunningOutputMultiOverwrite(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
ro.MetricBufferLimit = 3
for _, metric := range first5 {
ro.AddMetric(metric)
}
for _, metric := range next5 {
ro.AddMetric(metric)
}
require.Len(t, m.Metrics(), 0)
err := ro.Write()
require.NoError(t, err)
require.Len(t, m.Metrics(), 3)
var expected, actual []string
for i, exp := range next5[2:] {
expected = append(expected, exp.String())
actual = append(actual, m.Metrics()[i].String())
}
sort.Strings(expected)
sort.Strings(actual)
assert.Equal(t, expected, actual)
}
// Test that running output doesn't flush until it's full when
// FlushBufferWhenFull is set.
func TestRunningOutputFlushWhenFull(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
ro.FlushBufferWhenFull = true
ro.MetricBufferLimit = 5
// Fill buffer to limit
for _, metric := range first5 {
ro.AddMetric(metric)
}
// no flush yet
assert.Len(t, m.Metrics(), 0)
// add one more metric
ro.AddMetric(next5[0])
// now it flushed
assert.Len(t, m.Metrics(), 6)
// add one more metric and write it manually
ro.AddMetric(next5[1])
err := ro.Write()
assert.NoError(t, err)
assert.Len(t, m.Metrics(), 7)
}
// Test that running output doesn't flush until it's full when
// FlushBufferWhenFull is set, twice.
func TestRunningOutputMultiFlushWhenFull(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf)
ro.FlushBufferWhenFull = true
ro.MetricBufferLimit = 4
// Fill buffer past limit twice
for _, metric := range first5 {
ro.AddMetric(metric)
}
for _, metric := range next5 {
ro.AddMetric(metric)
}
// flushed twice
assert.Len(t, m.Metrics(), 10)
}
func TestRunningOutputWriteFail(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{
IsActive: false,
},
}
m := &mockOutput{}
m.failWrite = true
ro := NewRunningOutput("test", m, conf)
ro.FlushBufferWhenFull = true
ro.MetricBufferLimit = 4
// Fill buffer past limit twice
for _, metric := range first5 {
ro.AddMetric(metric)
}
for _, metric := range next5 {
ro.AddMetric(metric)
}
// no successful flush yet
assert.Len(t, m.Metrics(), 0)
// manual write fails
err := ro.Write()
require.Error(t, err)
// no successful flush yet
assert.Len(t, m.Metrics(), 0)
m.failWrite = false
err = ro.Write()
require.NoError(t, err)
assert.Len(t, m.Metrics(), 10)
}
type mockOutput struct {
sync.Mutex
metrics []telegraf.Metric
// if true, mock a write failure
failWrite bool
}
func (m *mockOutput) Connect() error {
return nil
}
func (m *mockOutput) Close() error {
return nil
}
func (m *mockOutput) Description() string {
return ""
}
func (m *mockOutput) SampleConfig() string {
return ""
}
func (m *mockOutput) Write(metrics []telegraf.Metric) error {
m.Lock()
defer m.Unlock()
if m.failWrite {
return fmt.Errorf("Failed Write!")
}
if m.metrics == nil {
m.metrics = []telegraf.Metric{}
}
for _, metric := range metrics {
m.metrics = append(m.metrics, metric)
}
return nil
}
func (m *mockOutput) Metrics() []telegraf.Metric {
m.Lock()
defer m.Unlock()
return m.metrics
}

View File

@@ -1,94 +0,0 @@
package telegraf
import (
"time"
"github.com/influxdata/influxdb/client/v2"
)
type Metric interface {
// Name returns the measurement name of the metric
Name() string
// Tags returns the tags associated with the metric
Tags() map[string]string
// Time returns the timestamp for the metric
Time() time.Time
// UnixNano returns the unix nano time of the metric
UnixNano() int64
// Fields returns the fields for the metric
Fields() map[string]interface{}
// String returns a line-protocol string of the metric
String() string
// PrecisionString returns a line-protocol string of the metric, at precision
PrecisionString(precison string) string
// Point returns a influxdb client.Point object
Point() *client.Point
}
// metric is a wrapper of the influxdb client.Point struct
type metric struct {
pt *client.Point
}
// NewMetric returns a metric with the given timestamp. If a timestamp is not
// given, then data is sent to the database without a timestamp, in which case
// the server will assign local time upon reception. NOTE: it is recommended to
// send data with a timestamp.
func NewMetric(
name string,
tags map[string]string,
fields map[string]interface{},
t ...time.Time,
) (Metric, error) {
var T time.Time
if len(t) > 0 {
T = t[0]
}
pt, err := client.NewPoint(name, tags, fields, T)
if err != nil {
return nil, err
}
return &metric{
pt: pt,
}, nil
}
func (m *metric) Name() string {
return m.pt.Name()
}
func (m *metric) Tags() map[string]string {
return m.pt.Tags()
}
func (m *metric) Time() time.Time {
return m.pt.Time()
}
func (m *metric) UnixNano() int64 {
return m.pt.UnixNano()
}
func (m *metric) Fields() map[string]interface{} {
return m.pt.Fields()
}
func (m *metric) String() string {
return m.pt.String()
}
func (m *metric) PrecisionString(precison string) string {
return m.pt.PrecisionString(precison)
}
func (m *metric) Point() *client.Point {
return m.pt
}

View File

@@ -1,83 +0,0 @@
package telegraf
import (
"fmt"
"math"
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestNewMetric(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"datacenter": "us-east-1",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
"usage_busy": float64(1),
}
m, err := NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
assert.Equal(t, now, m.Time())
assert.Equal(t, now.UnixNano(), m.UnixNano())
}
func TestNewMetricString(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := NewMetric("cpu", tags, fields, now)
assert.NoError(t, err)
lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d",
now.UnixNano())
assert.Equal(t, lineProto, m.String())
lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99 %d",
now.Unix())
assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}
func TestNewMetricStringNoTime(t *testing.T) {
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := NewMetric("cpu", tags, fields)
assert.NoError(t, err)
lineProto := fmt.Sprintf("cpu,host=localhost usage_idle=99")
assert.Equal(t, lineProto, m.String())
lineProtoPrecision := fmt.Sprintf("cpu,host=localhost usage_idle=99")
assert.Equal(t, lineProtoPrecision, m.PrecisionString("s"))
}
func TestNewMetricFailNaN(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": math.NaN(),
}
_, err := NewMetric("cpu", tags, fields, now)
assert.Error(t, err)
}

16
outputs/all/all.go Normal file
View File

@@ -0,0 +1,16 @@
package all
import (
_ "github.com/influxdb/telegraf/outputs/amon"
_ "github.com/influxdb/telegraf/outputs/amqp"
_ "github.com/influxdb/telegraf/outputs/datadog"
_ "github.com/influxdb/telegraf/outputs/influxdb"
_ "github.com/influxdb/telegraf/outputs/kafka"
_ "github.com/influxdb/telegraf/outputs/kinesis"
_ "github.com/influxdb/telegraf/outputs/librato"
_ "github.com/influxdb/telegraf/outputs/mqtt"
_ "github.com/influxdb/telegraf/outputs/nsq"
_ "github.com/influxdb/telegraf/outputs/opentsdb"
_ "github.com/influxdb/telegraf/outputs/prometheus_client"
_ "github.com/influxdb/telegraf/outputs/riemann"
)

View File

@@ -8,9 +8,9 @@ import (
"net/http"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/outputs"
)
type Amon struct {
@@ -22,13 +22,13 @@ type Amon struct {
}
var sampleConfig = `
## Amon Server Key
# Amon Server Key
server_key = "my-server-key" # required.
## Amon Instance URL
# Amon Instance URL
amon_instance = "https://youramoninstance" # required
## Connection timeout.
# Connection timeout.
# timeout = "5s"
`
@@ -38,7 +38,7 @@ type TimeSeries struct {
type Metric struct {
Metric string `json:"metric"`
Points [1]Point `json:"metrics"`
Points [1]Point `json:"points"`
}
type Point [2]float64
@@ -53,17 +53,17 @@ func (a *Amon) Connect() error {
return nil
}
func (a *Amon) Write(metrics []telegraf.Metric) error {
if len(metrics) == 0 {
func (a *Amon) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
ts := TimeSeries{}
tempSeries := []*Metric{}
metricCounter := 0
for _, m := range metrics {
mname := strings.Replace(m.Name(), "_", ".", -1)
if amonPts, err := buildMetrics(m); err == nil {
for _, pt := range points {
mname := strings.Replace(pt.Name(), "_", ".", -1)
if amonPts, err := buildPoints(pt); err == nil {
for fieldName, amonPt := range amonPts {
metric := &Metric{
Metric: mname + "_" + strings.Replace(fieldName, "_", ".", -1),
@@ -73,7 +73,7 @@ func (a *Amon) Write(metrics []telegraf.Metric) error {
metricCounter++
}
} else {
log.Printf("unable to build Metric for %s, skipping\n", m.Name())
log.Printf("unable to build Metric for %s, skipping\n", pt.Name())
}
}
@@ -115,17 +115,17 @@ func (a *Amon) authenticatedUrl() string {
return fmt.Sprintf("%s/api/system/%s", a.AmonInstance, a.ServerKey)
}
func buildMetrics(m telegraf.Metric) (map[string]Point, error) {
ms := make(map[string]Point)
for k, v := range m.Fields() {
func buildPoints(pt *client.Point) (map[string]Point, error) {
pts := make(map[string]Point)
for k, v := range pt.Fields() {
var p Point
if err := p.setValue(v); err != nil {
return ms, fmt.Errorf("unable to extract value from Fields, %s", err.Error())
return pts, fmt.Errorf("unable to extract value from Fields, %s", err.Error())
}
p[0] = float64(m.Time().Unix())
ms[k] = p
p[0] = float64(pt.Time().Unix())
pts[k] = p
}
return ms, nil
return pts, nil
}
func (p *Point) setValue(v interface{}) error {
@@ -151,7 +151,7 @@ func (a *Amon) Close() error {
}
func init() {
outputs.Add("amon", func() telegraf.Output {
outputs.Add("amon", func() outputs.Output {
return &Amon{}
})
}

View File

@@ -6,19 +6,19 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/influxdata/telegraf"
"github.com/influxdb/influxdb/client/v2"
)
func TestBuildPoint(t *testing.T) {
var tagtests = []struct {
ptIn telegraf.Metric
ptIn *client.Point
outPt Point
err error
}{
{
testutil.TestMetric(float64(0.0), "testpt"),
testutil.TestPoint(float64(0.0)),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
0.0,
@@ -26,7 +26,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(float64(1.0), "testpt"),
testutil.TestPoint(float64(1.0)),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
1.0,
@@ -34,7 +34,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(int(10), "testpt"),
testutil.TestPoint(int(10)),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
10.0,
@@ -42,7 +42,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(int32(112345), "testpt"),
testutil.TestPoint(int32(112345)),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
112345.0,
@@ -50,7 +50,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(int64(112345), "testpt"),
testutil.TestPoint(int64(112345)),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
112345.0,
@@ -58,7 +58,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(float32(11234.5), "testpt"),
testutil.TestPoint(float32(11234.5)),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
11234.5,
@@ -66,7 +66,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric("11234.5", "testpt"),
testutil.TestPoint("11234.5"),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
11234.5,
@@ -75,16 +75,15 @@ func TestBuildPoint(t *testing.T) {
},
}
for _, tt := range tagtests {
pt, err := buildMetrics(tt.ptIn)
pt, err := buildPoint(tt.ptIn)
if err != nil && tt.err == nil {
t.Errorf("%s: unexpected error, %+v\n", tt.ptIn.Name(), err)
}
if tt.err != nil && err == nil {
t.Errorf("%s: expected an error (%s) but none returned", tt.ptIn.Name(), tt.err.Error())
}
if !reflect.DeepEqual(pt["value"], tt.outPt) && tt.err == nil {
t.Errorf("%s: \nexpected %+v\ngot %+v\n",
tt.ptIn.Name(), tt.outPt, pt["value"])
if !reflect.DeepEqual(pt, tt.outPt) && tt.err == nil {
t.Errorf("%s: \nexpected %+v\ngot %+v\n", tt.ptIn.Name(), tt.outPt, pt)
}
}
}

View File

@@ -7,11 +7,8 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/outputs"
"github.com/streadway/amqp"
)
@@ -29,20 +26,9 @@ type AMQP struct {
// InfluxDB precision
Precision string
// Path to CA file
SSLCA string `toml:"ssl_ca"`
// Path to host cert file
SSLCert string `toml:"ssl_cert"`
// Path to cert key file
SSLKey string `toml:"ssl_key"`
// Use SSL but skip chain & host verification
InsecureSkipVerify bool
channel *amqp.Channel
sync.Mutex
headers amqp.Table
serializer serializers.Serializer
}
const (
@@ -52,39 +38,22 @@ const (
)
var sampleConfig = `
## AMQP url
# AMQP url
url = "amqp://localhost:5672/influxdb"
## AMQP exchange
# AMQP exchange
exchange = "telegraf"
## Telegraf tag to use as a routing key
## ie, if this tag exists, its value will be used as the routing key
# Telegraf tag to use as a routing key
# ie, if this tag exists, its value will be used as the routing key
routing_tag = "host"
## InfluxDB retention policy
# retention_policy = "default"
## InfluxDB database
# database = "telegraf"
## InfluxDB precision
# precision = "s"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## Data format to output. This can be "influx" or "graphite"
## Each data format has it's own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
# InfluxDB retention policy
#retention_policy = "default"
# InfluxDB database
#database = "telegraf"
# InfluxDB precision
#precision = "s"
`
func (a *AMQP) SetSerializer(serializer serializers.Serializer) {
a.serializer = serializer
}
func (q *AMQP) Connect() error {
q.Lock()
defer q.Unlock()
@@ -95,19 +64,7 @@ func (q *AMQP) Connect() error {
"retention_policy": q.RetentionPolicy,
}
var connection *amqp.Connection
// make new tls config
tls, err := internal.GetTLSConfig(
q.SSLCert, q.SSLKey, q.SSLCA, q.InsecureSkipVerify)
if err != nil {
return err
}
if tls != nil {
connection, err = amqp.DialTLS(q.URL, tls)
} else {
connection, err = amqp.Dial(q.URL)
}
connection, err := amqp.Dial(q.URL)
if err != nil {
return err
}
@@ -153,32 +110,28 @@ func (q *AMQP) Description() string {
return "Configuration for the AMQP server to send metrics to"
}
func (q *AMQP) Write(metrics []telegraf.Metric) error {
func (q *AMQP) Write(points []*client.Point) error {
q.Lock()
defer q.Unlock()
if len(metrics) == 0 {
if len(points) == 0 {
return nil
}
var outbuf = make(map[string][][]byte)
for _, metric := range metrics {
var key string
for _, p := range points {
// Combine tags from Point and BatchPoints and grab the resulting
// line-protocol output string to write to AMQP
var value, key string
value = p.String()
if q.RoutingTag != "" {
if h, ok := metric.Tags()[q.RoutingTag]; ok {
if h, ok := p.Tags()[q.RoutingTag]; ok {
key = h
}
}
outbuf[key] = append(outbuf[key], []byte(value))
values, err := q.serializer.Serialize(metric)
if err != nil {
return err
}
for _, value := range values {
outbuf[key] = append(outbuf[key], []byte(value))
}
}
for key, buf := range outbuf {
err := q.channel.Publish(
q.Exchange, // exchange
@@ -198,7 +151,7 @@ func (q *AMQP) Write(metrics []telegraf.Metric) error {
}
func init() {
outputs.Add("amqp", func() telegraf.Output {
outputs.Add("amqp", func() outputs.Output {
return &AMQP{
Database: DefaultDatabase,
Precision: DefaultPrecision,
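
For readers skimming the hunk above: on the client.Point side, Write batches points by the value of the configured routing tag before publishing, so each bucket can be sent with its own AMQP routing key. A minimal standalone sketch of that grouping (the helper name is ours, not part of this diff):

package amqp

import "github.com/influxdb/influxdb/client/v2"

// groupByRoutingTag mirrors the batching in Write above: each point's
// line-protocol string is bucketed under the value of the routing tag
// (an empty key when the tag is unset or absent on the point).
func groupByRoutingTag(points []*client.Point, routingTag string) map[string][][]byte {
	outbuf := make(map[string][][]byte)
	for _, p := range points {
		var key string
		if routingTag != "" {
			if h, ok := p.Tags()[routingTag]; ok {
				key = h
			}
		}
		outbuf[key] = append(outbuf[key], []byte(p.String()))
	}
	return outbuf
}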

View File

@@ -3,8 +3,7 @@ package amqp
import (
"testing"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@@ -14,11 +13,9 @@ func TestConnectAndWrite(t *testing.T) {
}
var url = "amqp://" + testutil.GetLocalHost() + ":5672/"
s, _ := serializers.NewInfluxSerializer()
q := &AMQP{
URL: url,
Exchange: "telegraf_test",
serializer: s,
URL: url,
Exchange: "telegraf_test",
}
// Verify that we can connect to the AMQP broker
@@ -26,6 +23,6 @@ func TestConnectAndWrite(t *testing.T) {
require.NoError(t, err)
// Verify that we can successfully write data to the amqp broker
err = q.Write(testutil.MockMetrics())
err = q.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}

View File

@@ -10,9 +10,9 @@ import (
"sort"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/outputs"
)
type Datadog struct {
@@ -24,10 +24,10 @@ type Datadog struct {
}
var sampleConfig = `
## Datadog API key
# Datadog API key
apikey = "my-secret-key" # required.
## Connection timeout.
# Connection timeout.
# timeout = "5s"
`
@@ -62,37 +62,27 @@ func (d *Datadog) Connect() error {
return nil
}
func (d *Datadog) Write(metrics []telegraf.Metric) error {
if len(metrics) == 0 {
func (d *Datadog) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
ts := TimeSeries{}
tempSeries := []*Metric{}
metricCounter := 0
for _, m := range metrics {
mname := strings.Replace(m.Name(), "_", ".", -1)
if dogMs, err := buildMetrics(m); err == nil {
for fieldName, dogM := range dogMs {
// name of the datadog measurement
var dname string
if fieldName == "value" {
// adding .value seems redundant here
dname = mname
} else {
dname = mname + "." + strings.Replace(fieldName, "_", ".", -1)
}
for _, pt := range points {
mname := strings.Replace(pt.Name(), "_", ".", -1)
if amonPts, err := buildPoints(pt); err == nil {
for fieldName, amonPt := range amonPts {
metric := &Metric{
Metric: dname,
Tags: buildTags(m.Tags()),
Host: m.Tags()["host"],
Metric: mname + strings.Replace(fieldName, "_", ".", -1),
}
metric.Points[0] = dogM
metric.Points[0] = amonPt
tempSeries = append(tempSeries, metric)
metricCounter++
}
} else {
log.Printf("unable to build Metric for %s, skipping\n", m.Name())
log.Printf("unable to build Metric for %s, skipping\n", pt.Name())
}
}
@@ -136,23 +126,23 @@ func (d *Datadog) authenticatedUrl() string {
return fmt.Sprintf("%s?%s", d.apiUrl, q.Encode())
}
func buildMetrics(m telegraf.Metric) (map[string]Point, error) {
ms := make(map[string]Point)
for k, v := range m.Fields() {
func buildPoints(pt *client.Point) (map[string]Point, error) {
pts := make(map[string]Point)
for k, v := range pt.Fields() {
var p Point
if err := p.setValue(v); err != nil {
return ms, fmt.Errorf("unable to extract value from Fields, %s", err.Error())
return pts, fmt.Errorf("unable to extract value from Fields, %s", err.Error())
}
p[0] = float64(m.Time().Unix())
ms[k] = p
p[0] = float64(pt.Time().Unix())
pts[k] = p
}
return ms, nil
return pts, nil
}
func buildTags(mTags map[string]string) []string {
tags := make([]string, len(mTags))
func buildTags(ptTags map[string]string) []string {
tags := make([]string, len(ptTags))
index := 0
for k, v := range mTags {
for k, v := range ptTags {
tags[index] = fmt.Sprintf("%s:%s", k, v)
index += 1
}
@@ -183,7 +173,7 @@ func (d *Datadog) Close() error {
}
func init() {
outputs.Add("datadog", func() telegraf.Output {
outputs.Add("datadog", func() outputs.Output {
return NewDatadog(datadog_api)
})
}
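
A rough standalone sketch of what the point-based buildPoints above produces (the helper name and the narrowed type switch are our assumptions; the real setValue accepts more numeric types): every field of a point becomes a [timestamp, value] pair keyed by the field name.

package datadog

import (
	"fmt"

	"github.com/influxdb/influxdb/client/v2"
)

// toDatadogPoints is an illustrative stand-in for buildPoints above: each
// field becomes a [unix-timestamp, value] pair; unsupported types error out.
func toDatadogPoints(pt *client.Point) (map[string][2]float64, error) {
	out := make(map[string][2]float64)
	ts := float64(pt.Time().Unix())
	for name, v := range pt.Fields() {
		var val float64
		switch d := v.(type) {
		case int64:
			val = float64(d)
		case float64:
			val = d
		default:
			return nil, fmt.Errorf("unable to extract value from field %s (%T)", name, v)
		}
		out[name] = [2]float64{ts, val}
	}
	return out, nil
}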

View File

@@ -9,9 +9,9 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/influxdata/telegraf"
"github.com/influxdb/influxdb/client/v2"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -38,7 +38,7 @@ func TestUriOverride(t *testing.T) {
d.Apikey = "123456"
err := d.Connect()
require.NoError(t, err)
err = d.Write(testutil.MockMetrics())
err = d.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}
@@ -57,7 +57,7 @@ func TestBadStatusCode(t *testing.T) {
d.Apikey = "123456"
err := d.Connect()
require.NoError(t, err)
err = d.Write(testutil.MockMetrics())
err = d.Write(testutil.MockBatchPoints().Points())
if err == nil {
t.Errorf("error expected but none returned")
} else {
@@ -100,12 +100,12 @@ func TestBuildTags(t *testing.T) {
func TestBuildPoint(t *testing.T) {
var tagtests = []struct {
ptIn telegraf.Metric
ptIn *client.Point
outPt Point
err error
}{
{
testutil.TestMetric(0.0, "test1"),
testutil.TestPoint(0.0, "test1"),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
0.0,
@@ -113,7 +113,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(1.0, "test2"),
testutil.TestPoint(1.0, "test2"),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
1.0,
@@ -121,7 +121,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(10, "test3"),
testutil.TestPoint(10, "test3"),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
10.0,
@@ -129,7 +129,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(int32(112345), "test4"),
testutil.TestPoint(int32(112345), "test4"),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
112345.0,
@@ -137,7 +137,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(int64(112345), "test5"),
testutil.TestPoint(int64(112345), "test5"),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
112345.0,
@@ -145,7 +145,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric(float32(11234.5), "test6"),
testutil.TestPoint(float32(11234.5), "test6"),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
11234.5,
@@ -153,7 +153,7 @@ func TestBuildPoint(t *testing.T) {
nil,
},
{
testutil.TestMetric("11234.5", "test7"),
testutil.TestPoint("11234.5", "test7"),
Point{
float64(time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix()),
11234.5,
@@ -162,16 +162,15 @@ func TestBuildPoint(t *testing.T) {
},
}
for _, tt := range tagtests {
pt, err := buildMetrics(tt.ptIn)
pt, err := buildPoint(tt.ptIn)
if err != nil && tt.err == nil {
t.Errorf("%s: unexpected error, %+v\n", tt.ptIn.Name(), err)
}
if tt.err != nil && err == nil {
t.Errorf("%s: expected an error (%s) but none returned", tt.ptIn.Name(), tt.err.Error())
}
if !reflect.DeepEqual(pt["value"], tt.outPt) && tt.err == nil {
t.Errorf("%s: \nexpected %+v\ngot %+v\n",
tt.ptIn.Name(), tt.outPt, pt["value"])
if !reflect.DeepEqual(pt, tt.outPt) && tt.err == nil {
t.Errorf("%s: \nexpected %+v\ngot %+v\n", tt.ptIn.Name(), tt.outPt, pt)
}
}
}

View File

@@ -9,11 +9,9 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/influxdb/client/v2"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/outputs"
)
type InfluxDB struct {
@@ -28,46 +26,30 @@ type InfluxDB struct {
Timeout internal.Duration
UDPPayload int `toml:"udp_payload"`
// Path to CA file
SSLCA string `toml:"ssl_ca"`
// Path to host cert file
SSLCert string `toml:"ssl_cert"`
// Path to cert key file
SSLKey string `toml:"ssl_key"`
// Use SSL but skip chain & host verification
InsecureSkipVerify bool
conns []client.Client
}
var sampleConfig = `
## The full HTTP or UDP endpoint URL for your InfluxDB instance.
## Multiple urls can be specified as part of the same cluster,
## this means that only ONE of the urls will be written to each interval.
# The full HTTP or UDP endpoint URL for your InfluxDB instance.
# Multiple urls can be specified but it is assumed that they are part of the same
# cluster, this means that only ONE of the urls will be written to each interval.
# urls = ["udp://localhost:8089"] # UDP endpoint example
urls = ["http://localhost:8086"] # required
## The target database for metrics (telegraf will create it if not exists)
# The target database for metrics (telegraf will create it if not exists)
database = "telegraf" # required
## Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
## note: using "s" precision greatly improves InfluxDB compression
# Precision of writes, valid values are n, u, ms, s, m, and h
# note: using second precision greatly helps InfluxDB compression
precision = "s"
## Write timeout (for the InfluxDB client), formatted as a string.
## If not provided, will default to 5s. 0s means no timeout (not recommended).
timeout = "5s"
# Connection timeout (for the connection with InfluxDB), formatted as a string.
# If not provided, will default to 0 (no timeout)
# timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
## Set the user agent for HTTP POSTs (can be useful for log differentiation)
# Set the user agent for HTTP POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
## Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
# Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
# udp_payload = 512
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
`
func (i *InfluxDB) Connect() error {
@@ -82,12 +64,6 @@ func (i *InfluxDB) Connect() error {
urls = append(urls, i.URL)
}
tlsCfg, err := internal.GetTLSConfig(
i.SSLCert, i.SSLKey, i.SSLCA, i.InsecureSkipVerify)
if err != nil {
return err
}
var conns []client.Client
for _, u := range urls {
switch {
@@ -116,7 +92,6 @@ func (i *InfluxDB) Connect() error {
Password: i.Password,
UserAgent: i.UserAgent,
Timeout: i.Timeout.Duration,
TLSConfig: tlsCfg,
})
if err != nil {
return err
@@ -155,21 +130,18 @@ func (i *InfluxDB) Description() string {
// Choose a random server in the cluster to write to until a successful write
// occurs, logging each unsuccessful. If all servers fail, return error.
func (i *InfluxDB) Write(metrics []telegraf.Metric) error {
bp, err := client.NewBatchPoints(client.BatchPointsConfig{
func (i *InfluxDB) Write(points []*client.Point) error {
bp, _ := client.NewBatchPoints(client.BatchPointsConfig{
Database: i.Database,
Precision: i.Precision,
})
if err != nil {
return err
}
for _, metric := range metrics {
bp.AddPoint(metric.Point())
for _, point := range points {
bp.AddPoint(point)
}
// This will get set to nil if a successful write occurs
err = errors.New("Could not write to any InfluxDB server in cluster")
err := errors.New("Could not write to any InfluxDB server in cluster")
p := rand.Perm(len(i.conns))
for _, n := range p {
@@ -184,9 +156,7 @@ func (i *InfluxDB) Write(metrics []telegraf.Metric) error {
}
func init() {
outputs.Add("influxdb", func() telegraf.Output {
return &InfluxDB{
Timeout: internal.Duration{Duration: time.Second * 5},
}
outputs.Add("influxdb", func() outputs.Output {
return &InfluxDB{}
})
}
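
The Write method above picks servers at random until one write succeeds. A compact standalone sketch of that failover loop (the helper name is ours):

package influxdb

import (
	"errors"
	"log"
	"math/rand"

	"github.com/influxdb/influxdb/client/v2"
)

// writeWithFailover tries the configured connections in random order and
// returns nil on the first successful write, matching Write above.
func writeWithFailover(conns []client.Client, bp client.BatchPoints) error {
	err := errors.New("Could not write to any InfluxDB server in cluster")
	for _, n := range rand.Perm(len(conns)) {
		e := conns[n].Write(bp)
		if e == nil {
			return nil
		}
		log.Println("ERROR: " + e.Error())
	}
	return err
}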

View File

@@ -6,7 +6,7 @@ import (
"net/http/httptest"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@@ -18,7 +18,7 @@ func TestUDPInflux(t *testing.T) {
err := i.Connect()
require.NoError(t, err)
err = i.Write(testutil.MockMetrics())
err = i.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}
@@ -36,6 +36,6 @@ func TestHTTPInflux(t *testing.T) {
err := i.Connect()
require.NoError(t, err)
err = i.Write(testutil.MockMetrics())
err = i.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}

85
outputs/kafka/kafka.go Normal file
View File

@@ -0,0 +1,85 @@
package kafka
import (
"errors"
"fmt"
"github.com/Shopify/sarama"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/outputs"
)
type Kafka struct {
// Kafka brokers to send metrics to
Brokers []string
// Kafka topic
Topic string
// Routing Key Tag
RoutingTag string `toml:"routing_tag"`
producer sarama.SyncProducer
}
var sampleConfig = `
# URLs of kafka brokers
brokers = ["localhost:9092"]
# Kafka topic for producer messages
topic = "telegraf"
# Telegraf tag to use as a routing key
# ie, if this tag exists, it's value will be used as the routing key
routing_tag = "host"
`
func (k *Kafka) Connect() error {
producer, err := sarama.NewSyncProducer(k.Brokers, nil)
if err != nil {
return err
}
k.producer = producer
return nil
}
func (k *Kafka) Close() error {
return k.producer.Close()
}
func (k *Kafka) SampleConfig() string {
return sampleConfig
}
func (k *Kafka) Description() string {
return "Configuration for the Kafka server to send metrics to"
}
func (k *Kafka) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
for _, p := range points {
// Combine tags from Point and BatchPoints and grab the resulting
// line-protocol output string to write to Kafka
value := p.String()
m := &sarama.ProducerMessage{
Topic: k.Topic,
Value: sarama.StringEncoder(value),
}
if h, ok := p.Tags()[k.RoutingTag]; ok {
m.Key = sarama.StringEncoder(h)
}
_, _, err := k.producer.SendMessage(m)
if err != nil {
return errors.New(fmt.Sprintf("FAILED to send kafka message: %s\n",
err))
}
}
return nil
}
func init() {
outputs.Add("kafka", func() outputs.Output {
return &Kafka{}
})
}
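
A minimal usage sketch of the new Kafka output outside of the agent loop (the broker address, topic, and point values are placeholders, and it assumes a reachable broker):

package main

import (
	"log"
	"time"

	"github.com/influxdb/influxdb/client/v2"
	"github.com/influxdb/telegraf/outputs/kafka"
)

func main() {
	k := &kafka.Kafka{
		Brokers:    []string{"localhost:9092"},
		Topic:      "telegraf",
		RoutingTag: "host",
	}
	if err := k.Connect(); err != nil {
		log.Fatal(err)
	}
	defer k.Close()

	// One point; its "host" tag becomes the Kafka message key per Write above.
	pt, err := client.NewPoint(
		"cpu",
		map[string]string{"host": "web01"},
		map[string]interface{}{"usage_idle": 98.2},
		time.Now(),
	)
	if err != nil {
		log.Fatal(err)
	}
	if err := k.Write([]*client.Point{pt}); err != nil {
		log.Fatal(err)
	}
}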

View File

@@ -3,8 +3,7 @@ package kafka
import (
"testing"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@@ -14,11 +13,9 @@ func TestConnectAndWrite(t *testing.T) {
}
brokers := []string{testutil.GetLocalHost() + ":9092"}
s, _ := serializers.NewInfluxSerializer()
k := &Kafka{
Brokers: brokers,
Topic: "Test",
serializer: s,
Brokers: brokers,
Topic: "Test",
}
// Verify that we can connect to the Kafka broker
@@ -26,6 +23,6 @@ func TestConnectAndWrite(t *testing.T) {
require.NoError(t, err)
// Verify that we can successfully write data to the kafka broker
err = k.Write(testutil.MockMetrics())
err = k.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}

View File

@@ -1,6 +1,7 @@
package kinesis
import (
"errors"
"fmt"
"log"
"os"
@@ -14,8 +15,8 @@ import (
"github.com/aws/aws-sdk-go/aws/session"
"github.com/aws/aws-sdk-go/service/kinesis"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/outputs"
)
type KinesisOutput struct {
@@ -28,16 +29,16 @@ type KinesisOutput struct {
}
var sampleConfig = `
## Amazon REGION of kinesis endpoint.
# Amazon REGION of kinesis endpoint.
region = "ap-southeast-2"
## Kinesis StreamName must exist prior to starting telegraf.
# Kinesis StreamName must exist prior to starting telegraf.
streamname = "StreamName"
## PartitionKey as used for sharding data.
# PartitionKey as used for sharding data.
partitionkey = "PartitionKey"
## format of the Data payload in the kinesis PutRecord, supported
## String and Custom.
# format of the Data payload in the kinesis PutRecord, supported
# String and Custom.
format = "string"
## debug will show upstream aws messages.
# debug will show upstream aws messages.
debug = false
`
@@ -100,10 +101,10 @@ func (k *KinesisOutput) Connect() error {
}
func (k *KinesisOutput) Close() error {
return nil
return errors.New("Error")
}
func FormatMetric(k *KinesisOutput, point telegraf.Metric) (string, error) {
func FormatMetric(k *KinesisOutput, point *client.Point) (string, error) {
if k.Format == "string" {
return point.String(), nil
} else {
@@ -138,16 +139,16 @@ func writekinesis(k *KinesisOutput, r []*kinesis.PutRecordsRequestEntry) time.Du
return time.Since(start)
}
func (k *KinesisOutput) Write(metrics []telegraf.Metric) error {
func (k *KinesisOutput) Write(points []*client.Point) error {
var sz uint32 = 0
if len(metrics) == 0 {
if len(points) == 0 {
return nil
}
r := []*kinesis.PutRecordsRequestEntry{}
for _, p := range metrics {
for _, p := range points {
atomic.AddUint32(&sz, 1)
metric, _ := FormatMetric(k, p)
@@ -172,7 +173,7 @@ func (k *KinesisOutput) Write(metrics []telegraf.Metric) error {
}
func init() {
outputs.Add("kinesis", func() telegraf.Output {
outputs.Add("kinesis", func() outputs.Output {
return &KinesisOutput{}
})
}

View File

@@ -1,7 +1,7 @@
package kinesis
import (
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
"testing"
)
@@ -15,7 +15,7 @@ func TestFormatMetric(t *testing.T) {
Format: "string",
}
p := testutil.MockMetrics()[0]
p := testutil.MockBatchPoints().Points()[0]
valid_string := "test1,tag1=value1 value=1 1257894000000000000"
func_string, err := FormatMetric(k, p)

View File

@@ -7,9 +7,9 @@ import (
"log"
"net/http"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/outputs"
)
type Librato struct {
@@ -23,24 +23,24 @@ type Librato struct {
}
var sampleConfig = `
## Librator API Docs
## http://dev.librato.com/v1/metrics-authentication
# Librator API Docs
# http://dev.librato.com/v1/metrics-authentication
## Librato API user
# Librato API user
api_user = "telegraf@influxdb.com" # required.
## Librato API token
# Librato API token
api_token = "my-secret-token" # required.
## Tag Field to populate source attribute (optional)
## This is typically the _hostname_ from which the metric was obtained.
# Tag Field to populate source attribute (optional)
# This is typically the _hostname_ from which the metric was obtained.
source_tag = "hostname"
## Connection timeout.
# Connection timeout.
# timeout = "5s"
`
type LMetrics struct {
type Metrics struct {
Gauges []*Gauge `json:"gauges"`
}
@@ -69,27 +69,27 @@ func (l *Librato) Connect() error {
return nil
}
func (l *Librato) Write(metrics []telegraf.Metric) error {
if len(metrics) == 0 {
func (l *Librato) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
lmetrics := LMetrics{}
metrics := Metrics{}
tempGauges := []*Gauge{}
metricCounter := 0
for _, m := range metrics {
if gauges, err := l.buildGauges(m); err == nil {
for _, pt := range points {
if gauges, err := l.buildGauges(pt); err == nil {
for _, gauge := range gauges {
tempGauges = append(tempGauges, gauge)
metricCounter++
}
} else {
log.Printf("unable to build Gauge for %s, skipping\n", m.Name())
log.Printf("unable to build Gauge for %s, skipping\n", pt.Name())
}
}
lmetrics.Gauges = make([]*Gauge, metricCounter)
copy(lmetrics.Gauges, tempGauges[0:])
metrics.Gauges = make([]*Gauge, metricCounter)
copy(metrics.Gauges, tempGauges[0:])
metricsBytes, err := json.Marshal(metrics)
if err != nil {
return fmt.Errorf("unable to marshal Metrics, %s\n", err.Error())
@@ -122,19 +122,19 @@ func (l *Librato) Description() string {
return "Configuration for Librato API to send metrics to."
}
func (l *Librato) buildGauges(m telegraf.Metric) ([]*Gauge, error) {
func (l *Librato) buildGauges(pt *client.Point) ([]*Gauge, error) {
gauges := []*Gauge{}
for fieldName, value := range m.Fields() {
for fieldName, value := range pt.Fields() {
gauge := &Gauge{
Name: m.Name() + "_" + fieldName,
MeasureTime: m.Time().Unix(),
Name: pt.Name() + "_" + fieldName,
MeasureTime: pt.Time().Unix(),
}
if err := gauge.setValue(value); err != nil {
return gauges, fmt.Errorf("unable to extract value from Fields, %s\n",
err.Error())
}
if l.SourceTag != "" {
if source, ok := m.Tags()[l.SourceTag]; ok {
if source, ok := pt.Tags()[l.SourceTag]; ok {
gauge.Source = source
} else {
return gauges,
@@ -169,7 +169,7 @@ func (l *Librato) Close() error {
}
func init() {
outputs.Add("librato", func() telegraf.Output {
outputs.Add("librato", func() outputs.Output {
return NewLibrato(librato_api)
})
}

View File

@@ -9,9 +9,9 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/influxdata/telegraf"
"github.com/influxdb/influxdb/client/v2"
"github.com/stretchr/testify/require"
)
@@ -39,7 +39,7 @@ func TestUriOverride(t *testing.T) {
l.ApiToken = "123456"
err := l.Connect()
require.NoError(t, err)
err = l.Write(testutil.MockMetrics())
err = l.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}
@@ -61,7 +61,7 @@ func TestBadStatusCode(t *testing.T) {
l.ApiToken = "123456"
err := l.Connect()
require.NoError(t, err)
err = l.Write(testutil.MockMetrics())
err = l.Write(testutil.MockBatchPoints().Points())
if err == nil {
t.Errorf("error expected but none returned")
} else {
@@ -71,12 +71,12 @@ func TestBadStatusCode(t *testing.T) {
func TestBuildGauge(t *testing.T) {
var gaugeTests = []struct {
ptIn telegraf.Metric
ptIn *client.Point
outGauge *Gauge
err error
}{
{
testutil.TestMetric(0.0, "test1"),
testutil.TestPoint(0.0, "test1"),
&Gauge{
Name: "test1",
MeasureTime: time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix(),
@@ -85,7 +85,7 @@ func TestBuildGauge(t *testing.T) {
nil,
},
{
testutil.TestMetric(1.0, "test2"),
testutil.TestPoint(1.0, "test2"),
&Gauge{
Name: "test2",
MeasureTime: time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix(),
@@ -94,7 +94,7 @@ func TestBuildGauge(t *testing.T) {
nil,
},
{
testutil.TestMetric(10, "test3"),
testutil.TestPoint(10, "test3"),
&Gauge{
Name: "test3",
MeasureTime: time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix(),
@@ -103,7 +103,7 @@ func TestBuildGauge(t *testing.T) {
nil,
},
{
testutil.TestMetric(int32(112345), "test4"),
testutil.TestPoint(int32(112345), "test4"),
&Gauge{
Name: "test4",
MeasureTime: time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix(),
@@ -112,7 +112,7 @@ func TestBuildGauge(t *testing.T) {
nil,
},
{
testutil.TestMetric(int64(112345), "test5"),
testutil.TestPoint(int64(112345), "test5"),
&Gauge{
Name: "test5",
MeasureTime: time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix(),
@@ -121,7 +121,7 @@ func TestBuildGauge(t *testing.T) {
nil,
},
{
testutil.TestMetric(float32(11234.5), "test6"),
testutil.TestPoint(float32(11234.5), "test6"),
&Gauge{
Name: "test6",
MeasureTime: time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix(),
@@ -130,7 +130,7 @@ func TestBuildGauge(t *testing.T) {
nil,
},
{
testutil.TestMetric("11234.5", "test7"),
testutil.TestPoint("11234.5", "test7"),
&Gauge{
Name: "test7",
MeasureTime: time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Unix(),
@@ -142,39 +142,34 @@ func TestBuildGauge(t *testing.T) {
l := NewLibrato(fakeUrl)
for _, gt := range gaugeTests {
gauges, err := l.buildGauges(gt.ptIn)
gauge, err := l.buildGauge(gt.ptIn)
if err != nil && gt.err == nil {
t.Errorf("%s: unexpected error, %+v\n", gt.ptIn.Name(), err)
}
if gt.err != nil && err == nil {
t.Errorf("%s: expected an error (%s) but none returned",
gt.ptIn.Name(), gt.err.Error())
t.Errorf("%s: expected an error (%s) but none returned", gt.ptIn.Name(), gt.err.Error())
}
if len(gauges) == 0 {
continue
}
if gt.err == nil && !reflect.DeepEqual(gauges[0], gt.outGauge) {
t.Errorf("%s: \nexpected %+v\ngot %+v\n",
gt.ptIn.Name(), gt.outGauge, gauges[0])
if !reflect.DeepEqual(gauge, gt.outGauge) && gt.err == nil {
t.Errorf("%s: \nexpected %+v\ngot %+v\n", gt.ptIn.Name(), gt.outGauge, gauge)
}
}
}
func TestBuildGaugeWithSource(t *testing.T) {
pt1, _ := telegraf.NewMetric(
pt1, _ := client.NewPoint(
"test1",
map[string]string{"hostname": "192.168.0.1"},
map[string]interface{}{"value": 0.0},
time.Date(2010, time.November, 10, 23, 0, 0, 0, time.UTC),
)
pt2, _ := telegraf.NewMetric(
pt2, _ := client.NewPoint(
"test2",
map[string]string{"hostnam": "192.168.0.1"},
map[string]interface{}{"value": 1.0},
time.Date(2010, time.December, 10, 23, 0, 0, 0, time.UTC),
)
var gaugeTests = []struct {
ptIn telegraf.Metric
ptIn *client.Point
outGauge *Gauge
err error
}{
@@ -203,18 +198,15 @@ func TestBuildGaugeWithSource(t *testing.T) {
l := NewLibrato(fakeUrl)
l.SourceTag = "hostname"
for _, gt := range gaugeTests {
gauges, err := l.buildGauges(gt.ptIn)
gauge, err := l.buildGauge(gt.ptIn)
if err != nil && gt.err == nil {
t.Errorf("%s: unexpected error, %+v\n", gt.ptIn.Name(), err)
}
if gt.err != nil && err == nil {
t.Errorf("%s: expected an error (%s) but none returned", gt.ptIn.Name(), gt.err.Error())
}
if len(gauges) == 0 {
continue
}
if gt.err == nil && !reflect.DeepEqual(gauges[0], gt.outGauge) {
t.Errorf("%s: \nexpected %+v\ngot %+v\n", gt.ptIn.Name(), gt.outGauge, gauges[0])
if !reflect.DeepEqual(gauge, gt.outGauge) && gt.err == nil {
t.Errorf("%s: \nexpected %+v\ngot %+v\n", gt.ptIn.Name(), gt.outGauge, gauge)
}
}
}

190
outputs/mqtt/mqtt.go Normal file
View File

@@ -0,0 +1,190 @@
package mqtt
import (
"crypto/rand"
"crypto/tls"
"crypto/x509"
"fmt"
"io/ioutil"
"strings"
"sync"
paho "git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/outputs"
)
const MaxClientIdLen = 8
const MaxRetryCount = 3
const ClientIdPrefix = "telegraf"
type MQTT struct {
Servers []string `toml:"servers"`
Username string
Password string
Database string
Timeout internal.Duration
TopicPrefix string
Client *paho.Client
Opts *paho.ClientOptions
sync.Mutex
}
var sampleConfig = `
servers = ["localhost:1883"] # required.
# MQTT outputs send metrics to this topic format
# "<topic_prefix>/host/<hostname>/<pluginname>/"
# ex: prefix/host/web01.example.com/mem/available
# topic_prefix = "prefix"
# username and password to connect MQTT server.
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
`
func (m *MQTT) Connect() error {
var err error
m.Lock()
defer m.Unlock()
m.Opts, err = m.CreateOpts()
if err != nil {
return err
}
m.Client = paho.NewClient(m.Opts)
if token := m.Client.Connect(); token.Wait() && token.Error() != nil {
return token.Error()
}
return nil
}
func (m *MQTT) Close() error {
if m.Client.IsConnected() {
m.Client.Disconnect(20)
}
return nil
}
func (m *MQTT) SampleConfig() string {
return sampleConfig
}
func (m *MQTT) Description() string {
return "Configuration for MQTT server to send metrics to"
}
func (m *MQTT) Write(points []*client.Point) error {
m.Lock()
defer m.Unlock()
if len(points) == 0 {
return nil
}
hostname, ok := points[0].Tags()["host"]
if !ok {
hostname = ""
}
for _, p := range points {
var t []string
if m.TopicPrefix != "" {
t = append(t, m.TopicPrefix)
}
tm := strings.Split(p.Name(), "_")
if len(tm) < 2 {
tm = []string{p.Name(), "stat"}
}
t = append(t, "host", hostname, tm[0], tm[1])
topic := strings.Join(t, "/")
value := p.String()
err := m.publish(topic, value)
if err != nil {
return fmt.Errorf("Could not write to MQTT server, %s", err)
}
}
return nil
}
func (m *MQTT) publish(topic, body string) error {
token := m.Client.Publish(topic, 0, false, body)
token.Wait()
if token.Error() != nil {
return token.Error()
}
return nil
}
func (m *MQTT) CreateOpts() (*paho.ClientOptions, error) {
opts := paho.NewClientOptions()
clientId := getRandomClientId()
opts.SetClientID(clientId)
TLSConfig := &tls.Config{InsecureSkipVerify: false}
ca := "" // TODO
scheme := "tcp"
if ca != "" {
scheme = "ssl"
certPool, err := getCertPool(ca)
if err != nil {
return nil, err
}
TLSConfig.RootCAs = certPool
}
TLSConfig.InsecureSkipVerify = true // TODO
opts.SetTLSConfig(TLSConfig)
user := m.Username
if user == "" {
opts.SetUsername(user)
}
password := m.Password
if password != "" {
opts.SetPassword(password)
}
if len(m.Servers) == 0 {
return opts, fmt.Errorf("could not get host information")
}
for _, host := range m.Servers {
server := fmt.Sprintf("%s://%s", scheme, host)
opts.AddBroker(server)
}
opts.SetAutoReconnect(true)
return opts, nil
}
func getRandomClientId() string {
const alphanum = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
var bytes = make([]byte, MaxClientIdLen)
rand.Read(bytes)
for i, b := range bytes {
bytes[i] = alphanum[b%byte(len(alphanum))]
}
return ClientIdPrefix + "-" + string(bytes)
}
func getCertPool(pemPath string) (*x509.CertPool, error) {
certs := x509.NewCertPool()
pemData, err := ioutil.ReadFile(pemPath)
if err != nil {
return nil, err
}
certs.AppendCertsFromPEM(pemData)
return certs, nil
}
func init() {
outputs.Add("mqtt", func() outputs.Output {
return &MQTT{}
})
}
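
The Write method above derives the topic from the measurement name: "<topic_prefix>/host/<hostname>/<measurement>/<stat>", so a "mem_available" point from web01.example.com with prefix "prefix" is published to "prefix/host/web01.example.com/mem/available". A standalone sketch of that layout (the helper name is ours):

package mqtt

import "strings"

// buildTopic mirrors the topic construction in Write above: the measurement
// name is split on "_" into <measurement>/<stat>, with "stat" as a fallback
// when there is no underscore.
func buildTopic(prefix, hostname, measurement string) string {
	var t []string
	if prefix != "" {
		t = append(t, prefix)
	}
	parts := strings.Split(measurement, "_")
	if len(parts) < 2 {
		parts = []string{measurement, "stat"}
	}
	t = append(t, "host", hostname, parts[0], parts[1])
	return strings.Join(t, "/")
}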

View File

@@ -3,9 +3,7 @@ package mqtt
import (
"testing"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@@ -15,10 +13,8 @@ func TestConnectAndWrite(t *testing.T) {
}
var url = testutil.GetLocalHost() + ":1883"
s, _ := serializers.NewInfluxSerializer()
m := &MQTT{
Servers: []string{url},
serializer: s,
Servers: []string{url},
}
// Verify that we can connect to the MQTT broker
@@ -26,6 +22,6 @@ func TestConnectAndWrite(t *testing.T) {
require.NoError(t, err)
// Verify that we can successfully write data to the mqtt broker
err = m.Write(testutil.MockMetrics())
err = m.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}

71
outputs/nsq/nsq.go Normal file
View File

@@ -0,0 +1,71 @@
package nsq
import (
"fmt"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/outputs"
"github.com/nsqio/go-nsq"
)
type NSQ struct {
Server string
Topic string
producer *nsq.Producer
}
var sampleConfig = `
# Location of nsqd instance listening on TCP
server = "localhost:4150"
# NSQ topic for producer messages
topic = "telegraf"
`
func (n *NSQ) Connect() error {
config := nsq.NewConfig()
producer, err := nsq.NewProducer(n.Server, config)
if err != nil {
return err
}
n.producer = producer
return nil
}
func (n *NSQ) Close() error {
n.producer.Stop()
return nil
}
func (n *NSQ) SampleConfig() string {
return sampleConfig
}
func (n *NSQ) Description() string {
return "Send telegraf measurements to NSQD"
}
func (n *NSQ) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
for _, p := range points {
// Combine tags from Point and BatchPoints and grab the resulting
// line-protocol output string to write to NSQ
value := p.String()
err := n.producer.Publish(n.Topic, []byte(value))
if err != nil {
return fmt.Errorf("FAILED to send NSQD message: %s", err)
}
}
return nil
}
func init() {
outputs.Add("nsq", func() outputs.Output {
return &NSQ{}
})
}

View File

@@ -3,8 +3,7 @@ package nsq
import (
"testing"
"github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@@ -14,11 +13,9 @@ func TestConnectAndWrite(t *testing.T) {
}
server := []string{testutil.GetLocalHost() + ":4150"}
s, _ := serializers.NewInfluxSerializer()
n := &NSQ{
Server: server[0],
Topic: "telegraf",
serializer: s,
Server: server[0],
Topic: "telegraf",
}
// Verify that we can connect to the NSQ daemon
@@ -26,6 +23,6 @@ func TestConnectAndWrite(t *testing.T) {
require.NoError(t, err)
// Verify that we can successfully write data to the NSQ daemon
err = n.Write(testutil.MockMetrics())
err = n.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}

View File

@@ -8,8 +8,8 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/outputs"
)
type OpenTSDB struct {
@@ -22,17 +22,17 @@ type OpenTSDB struct {
}
var sampleConfig = `
## prefix for metrics keys
# prefix for metrics keys
prefix = "my.specific.prefix."
## Telnet Mode ##
## DNS name of the OpenTSDB server in telnet mode
# DNS name of the OpenTSDB server in telnet mode
host = "opentsdb.example.com"
## Port of the OpenTSDB server in telnet mode
# Port of the OpenTSDB server in telnet mode
port = 4242
## Debug true - Prints OpenTSDB communication
# Debug true - Prints OpenTSDB communication
debug = false
`
@@ -58,8 +58,8 @@ func (o *OpenTSDB) Connect() error {
return nil
}
func (o *OpenTSDB) Write(metrics []telegraf.Metric) error {
if len(metrics) == 0 {
func (o *OpenTSDB) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
now := time.Now()
@@ -73,8 +73,8 @@ func (o *OpenTSDB) Write(metrics []telegraf.Metric) error {
}
defer connection.Close()
for _, m := range metrics {
for _, metric := range buildMetrics(m, now, o.Prefix) {
for _, pt := range points {
for _, metric := range buildMetrics(pt, now, o.Prefix) {
messageLine := fmt.Sprintf("put %s %v %s %s\n",
metric.Metric, metric.Timestamp, metric.Value, metric.Tags)
if o.Debug {
@@ -90,10 +90,10 @@ func (o *OpenTSDB) Write(metrics []telegraf.Metric) error {
return nil
}
func buildTags(mTags map[string]string) []string {
tags := make([]string, len(mTags))
func buildTags(ptTags map[string]string) []string {
tags := make([]string, len(ptTags))
index := 0
for k, v := range mTags {
for k, v := range ptTags {
tags[index] = fmt.Sprintf("%s=%s", k, v)
index += 1
}
@@ -101,11 +101,11 @@ func buildTags(mTags map[string]string) []string {
return tags
}
func buildMetrics(m telegraf.Metric, now time.Time, prefix string) []*MetricLine {
func buildMetrics(pt *client.Point, now time.Time, prefix string) []*MetricLine {
ret := []*MetricLine{}
for fieldName, value := range m.Fields() {
for fieldName, value := range pt.Fields() {
metric := &MetricLine{
Metric: fmt.Sprintf("%s%s_%s", prefix, m.Name(), fieldName),
Metric: fmt.Sprintf("%s%s_%s", prefix, pt.Name(), fieldName),
Timestamp: now.Unix(),
}
@@ -115,7 +115,7 @@ func buildMetrics(m telegraf.Metric, now time.Time, prefix string) []*MetricLine
continue
}
metric.Value = metricValue
tagsSlice := buildTags(m.Tags())
tagsSlice := buildTags(pt.Tags())
metric.Tags = fmt.Sprint(strings.Join(tagsSlice, " "))
ret = append(ret, metric)
}
@@ -162,7 +162,7 @@ func (o *OpenTSDB) Close() error {
}
func init() {
outputs.Add("opentsdb", func() telegraf.Output {
outputs.Add("opentsdb", func() outputs.Output {
return &OpenTSDB{}
})
}
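
For reference, a standalone sketch of the telnet payload that Write above produces, one "put" line per field (the helper name is ours; the real buildMetrics additionally drops fields whose value cannot be rendered as a number):

package opentsdb

import (
	"fmt"
	"strings"
	"time"

	"github.com/influxdb/influxdb/client/v2"
)

// putLines renders one OpenTSDB telnet line per field:
// "put <prefix><measurement>_<field> <unix-ts> <value> <tag1>=<v1> ...".
func putLines(pt *client.Point, prefix string, now time.Time) []string {
	var tags []string
	for k, v := range pt.Tags() {
		tags = append(tags, fmt.Sprintf("%s=%s", k, v))
	}
	var lines []string
	for field, value := range pt.Fields() {
		lines = append(lines, fmt.Sprintf("put %s%s_%s %v %v %s\n",
			prefix, pt.Name(), field, now.Unix(), value, strings.Join(tags, " ")))
	}
	return lines
}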

View File

@@ -4,7 +4,7 @@ import (
"reflect"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@@ -54,18 +54,18 @@ func TestWrite(t *testing.T) {
require.NoError(t, err)
// Verify that we can successfully write data to OpenTSDB
err = o.Write(testutil.MockMetrics())
err = o.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
// Verify positive and negative test cases of writing data
metrics := testutil.MockMetrics()
metrics = append(metrics, testutil.TestMetric(float64(1.0), "justametric.float"))
metrics = append(metrics, testutil.TestMetric(int64(123456789), "justametric.int"))
metrics = append(metrics, testutil.TestMetric(uint64(123456789012345), "justametric.uint"))
metrics = append(metrics, testutil.TestMetric("Lorem Ipsum", "justametric.string"))
metrics = append(metrics, testutil.TestMetric(float64(42.0), "justametric.anotherfloat"))
bp := testutil.MockBatchPoints()
bp.AddPoint(testutil.TestPoint(float64(1.0), "justametric.float"))
bp.AddPoint(testutil.TestPoint(int64(123456789), "justametric.int"))
bp.AddPoint(testutil.TestPoint(uint64(123456789012345), "justametric.uint"))
bp.AddPoint(testutil.TestPoint("Lorem Ipsum", "justametric.string"))
bp.AddPoint(testutil.TestPoint(float64(42.0), "justametric.anotherfloat"))
err = o.Write(metrics)
err = o.Write(bp.Points())
require.NoError(t, err)
}

View File

@@ -5,8 +5,8 @@ import (
"log"
"net/http"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/outputs"
"github.com/prometheus/client_golang/prometheus"
)
@@ -16,7 +16,7 @@ type PrometheusClient struct {
}
var sampleConfig = `
## Address to listen on
# Address to listen on
# listen = ":9126"
`
@@ -58,12 +58,12 @@ func (p *PrometheusClient) Description() string {
return "Configuration for the Prometheus client to spawn"
}
func (p *PrometheusClient) Write(metrics []telegraf.Metric) error {
if len(metrics) == 0 {
func (p *PrometheusClient) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
for _, point := range metrics {
for _, point := range points {
var labels []string
key := point.Name()
@@ -119,7 +119,7 @@ func (p *PrometheusClient) Write(metrics []telegraf.Metric) error {
}
func init() {
outputs.Add("prometheus_client", func() telegraf.Output {
outputs.Add("prometheus_client", func() outputs.Output {
return &PrometheusClient{}
})
}

View File

@@ -0,0 +1,98 @@
package prometheus_client
import (
"testing"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/plugins/prometheus"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var pTesting *PrometheusClient
func TestPrometheusWritePointEmptyTag(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
p := &prometheus.Prometheus{
Urls: []string{"http://localhost:9126/metrics"},
}
tags := make(map[string]string)
pt1, _ := client.NewPoint(
"test_point_1",
tags,
map[string]interface{}{"value": 0.0})
pt2, _ := client.NewPoint(
"test_point_2",
tags,
map[string]interface{}{"value": 1.0})
var points = []*client.Point{
pt1,
pt2,
}
require.NoError(t, pTesting.Write(points))
expected := []struct {
name string
value float64
tags map[string]string
}{
{"test_point_1", 0.0, tags},
{"test_point_2", 1.0, tags},
}
var acc testutil.Accumulator
require.NoError(t, p.Gather(&acc))
for _, e := range expected {
assert.NoError(t, acc.ValidateValue(e.name, e.value))
}
}
func TestPrometheusWritePointTag(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
p := &prometheus.Prometheus{
Urls: []string{"http://localhost:9126/metrics"},
}
tags := make(map[string]string)
tags["testtag"] = "testvalue"
pt1, _ := client.NewPoint(
"test_point_3",
tags,
map[string]interface{}{"value": 0.0})
pt2, _ := client.NewPoint(
"test_point_4",
tags,
map[string]interface{}{"value": 1.0})
var points = []*client.Point{
pt1,
pt2,
}
require.NoError(t, pTesting.Write(points))
expected := []struct {
name string
value float64
}{
{"test_point_3", 0.0},
{"test_point_4", 1.0},
}
var acc testutil.Accumulator
require.NoError(t, p.Gather(&acc))
for _, e := range expected {
assert.True(t, acc.CheckTaggedValue(e.name, e.value, tags))
}
}
func init() {
pTesting = &PrometheusClient{Listen: "localhost:9126"}
pTesting.Start()
}

View File

@@ -1,4 +1,8 @@
package telegraf
package outputs
import (
"github.com/influxdb/influxdb/client/v2"
)
type Output interface {
// Connect to the Output
@@ -10,7 +14,7 @@ type Output interface {
// SampleConfig returns the default configuration of the Output
SampleConfig() string
// Write takes in group of points to be written to the Output
Write(metrics []Metric) error
Write(points []*client.Point) error
}
type ServiceOutput interface {
@@ -23,9 +27,17 @@ type ServiceOutput interface {
// SampleConfig returns the default configuration of the Output
SampleConfig() string
// Write takes in group of points to be written to the Output
Write(metrics []Metric) error
Write(points []*client.Point) error
// Start the "service" that will provide an Output
Start() error
// Stop the "service" that will provide an Output
Stop()
}
type Creator func() Output
var Outputs = map[string]Creator{}
func Add(name string, creator Creator) {
Outputs[name] = creator
}
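
To make the registry above concrete, a minimal sketch of a point-based output that satisfies this interface and registers itself (the "discard" plugin is hypothetical, not part of this change):

package discard

import (
	"log"

	"github.com/influxdb/influxdb/client/v2"
	"github.com/influxdb/telegraf/outputs"
)

type Discard struct{}

func (d *Discard) Connect() error       { return nil }
func (d *Discard) Close() error         { return nil }
func (d *Discard) SampleConfig() string { return "" }
func (d *Discard) Description() string  { return "Log and drop every point" }

// Write receives the batch of points gathered during one flush interval.
func (d *Discard) Write(points []*client.Point) error {
	for _, p := range points {
		log.Printf("dropping %s", p.String())
	}
	return nil
}

func init() {
	outputs.Add("discard", func() outputs.Output {
		return &Discard{}
	})
}

Registering in init() means a blank import of the package from the agent binary is all that is needed to make the output selectable by name.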

101
outputs/riemann/riemann.go Normal file
View File

@@ -0,0 +1,101 @@
package riemann
import (
"errors"
"fmt"
"os"
"github.com/amir/raidman"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/outputs"
)
type Riemann struct {
URL string
Transport string
client *raidman.Client
}
var sampleConfig = `
# URL of server
url = "localhost:5555"
# transport protocol to use either tcp or udp
transport = "tcp"
`
func (r *Riemann) Connect() error {
c, err := raidman.Dial(r.Transport, r.URL)
if err != nil {
return err
}
r.client = c
return nil
}
func (r *Riemann) Close() error {
r.client.Close()
return nil
}
func (r *Riemann) SampleConfig() string {
return sampleConfig
}
func (r *Riemann) Description() string {
return "Configuration for the Riemann server to send metrics to"
}
func (r *Riemann) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
var events []*raidman.Event
for _, p := range points {
evs := buildEvents(p)
for _, ev := range evs {
events = append(events, ev)
}
}
var senderr = r.client.SendMulti(events)
if senderr != nil {
return errors.New(fmt.Sprintf("FAILED to send riemann message: %s\n",
senderr))
}
return nil
}
func buildEvents(p *client.Point) []*raidman.Event {
events := []*raidman.Event{}
for fieldName, value := range p.Fields() {
host, ok := p.Tags()["host"]
if !ok {
hostname, err := os.Hostname()
if err != nil {
host = "unknown"
} else {
host = hostname
}
}
event := &raidman.Event{
Host: host,
Service: p.Name() + "_" + fieldName,
Metric: value,
}
events = append(events, event)
}
return events
}
func init() {
outputs.Add("riemann", func() outputs.Output {
return &Riemann{}
})
}

View File

@@ -3,7 +3,7 @@ package riemann
import (
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@@ -22,6 +22,6 @@ func TestConnectAndWrite(t *testing.T) {
err := r.Connect()
require.NoError(t, err)
err = r.Write(testutil.MockMetrics())
err = r.Write(testutil.MockBatchPoints().Points())
require.NoError(t, err)
}

View File

@@ -4,8 +4,7 @@ import (
"bytes"
"encoding/binary"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins"
"net"
"strconv"
"strings"
@@ -104,9 +103,11 @@ type Aerospike struct {
}
var sampleConfig = `
## Aerospike servers to connect to (with port)
## This plugin will query all namespaces the aerospike
## server has configured and get stats for them.
# Aerospike servers to connect to (with port)
# Default: servers = ["localhost:3000"]
#
# This plugin will query all namespaces the aerospike
# server has configured and get stats for them.
servers = ["localhost:3000"]
`
@@ -118,7 +119,7 @@ func (a *Aerospike) Description() string {
return "Read stats from an aerospike server"
}
func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
func (a *Aerospike) Gather(acc plugins.Accumulator) error {
if len(a.Servers) == 0 {
return a.gatherServer("127.0.0.1:3000", acc)
}
@@ -139,7 +140,7 @@ func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
return outerr
}
func (a *Aerospike) gatherServer(host string, acc telegraf.Accumulator) error {
func (a *Aerospike) gatherServer(host string, acc plugins.Accumulator) error {
aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
if err != nil {
return fmt.Errorf("Aerospike info failed: %s", err)
@@ -248,7 +249,7 @@ func get(key []byte, host string) (map[string]string, error) {
func readAerospikeStats(
stats map[string]string,
acc telegraf.Accumulator,
acc plugins.Accumulator,
host string,
namespace string,
) {
@@ -335,7 +336,7 @@ func msgLenFromBytes(buf [6]byte) int64 {
}
func init() {
inputs.Add("aerospike", func() telegraf.Input {
plugins.Add("aerospike", func() plugins.Plugin {
return &Aerospike{}
})
}

View File

@@ -1,12 +1,11 @@
package aerospike
import (
"reflect"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"reflect"
"testing"
)
func TestAerospikeStatistics(t *testing.T) {
@@ -32,7 +31,7 @@ func TestAerospikeStatistics(t *testing.T) {
}
for _, metric := range asMetrics {
assert.True(t, acc.HasIntField("aerospike", metric), metric)
assert.True(t, acc.HasIntValue(metric), metric)
}
}
@@ -50,16 +49,13 @@ func TestReadAerospikeStatsNoNamespace(t *testing.T) {
"stat_read_reqs": "12345",
}
readAerospikeStats(stats, &acc, "host1", "")
fields := map[string]interface{}{
"stat_write_errs": int64(12345),
"stat_read_reqs": int64(12345),
for k := range stats {
if k == "stat-write-errs" {
k = "stat_write_errs"
}
assert.True(t, acc.HasMeasurement(k))
assert.True(t, acc.CheckValue(k, int64(12345)))
}
tags := map[string]string{
"aerospike_host": "host1",
"namespace": "_service",
}
acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
}
func TestReadAerospikeStatsNamespace(t *testing.T) {
@@ -70,15 +66,13 @@ func TestReadAerospikeStatsNamespace(t *testing.T) {
}
readAerospikeStats(stats, &acc, "host1", "test")
fields := map[string]interface{}{
"stat_write_errs": int64(12345),
"stat_read_reqs": int64(12345),
}
tags := map[string]string{
"aerospike_host": "host1",
"namespace": "test",
}
acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
for k := range stats {
assert.True(t, acc.ValidateTaggedValue(k, int64(12345), tags) == nil)
}
}
func TestAerospikeUnmarshalList(t *testing.T) {

37
plugins/all/all.go Normal file
View File

@@ -0,0 +1,37 @@
package all
import (
_ "github.com/influxdb/telegraf/plugins/aerospike"
_ "github.com/influxdb/telegraf/plugins/apache"
_ "github.com/influxdb/telegraf/plugins/bcache"
_ "github.com/influxdb/telegraf/plugins/disque"
_ "github.com/influxdb/telegraf/plugins/elasticsearch"
_ "github.com/influxdb/telegraf/plugins/exec"
_ "github.com/influxdb/telegraf/plugins/haproxy"
_ "github.com/influxdb/telegraf/plugins/httpjson"
_ "github.com/influxdb/telegraf/plugins/influxdb"
_ "github.com/influxdb/telegraf/plugins/jolokia"
_ "github.com/influxdb/telegraf/plugins/kafka_consumer"
_ "github.com/influxdb/telegraf/plugins/leofs"
_ "github.com/influxdb/telegraf/plugins/lustre2"
_ "github.com/influxdb/telegraf/plugins/mailchimp"
_ "github.com/influxdb/telegraf/plugins/memcached"
_ "github.com/influxdb/telegraf/plugins/mongodb"
_ "github.com/influxdb/telegraf/plugins/mysql"
_ "github.com/influxdb/telegraf/plugins/nginx"
_ "github.com/influxdb/telegraf/plugins/phpfpm"
_ "github.com/influxdb/telegraf/plugins/ping"
_ "github.com/influxdb/telegraf/plugins/postgresql"
_ "github.com/influxdb/telegraf/plugins/procstat"
_ "github.com/influxdb/telegraf/plugins/prometheus"
_ "github.com/influxdb/telegraf/plugins/puppetagent"
_ "github.com/influxdb/telegraf/plugins/rabbitmq"
_ "github.com/influxdb/telegraf/plugins/redis"
_ "github.com/influxdb/telegraf/plugins/rethinkdb"
_ "github.com/influxdb/telegraf/plugins/statsd"
_ "github.com/influxdb/telegraf/plugins/system"
_ "github.com/influxdb/telegraf/plugins/trig"
_ "github.com/influxdb/telegraf/plugins/twemproxy"
_ "github.com/influxdb/telegraf/plugins/zfs"
_ "github.com/influxdb/telegraf/plugins/zookeeper"
)
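
These blank imports exist only for their side effects: each plugin package calls plugins.Add from its init(), so importing plugins/all pulls every constructor into the binary. A small consumer sketch (assuming the plugins registry mirrors the outputs registry shown earlier, i.e. an exported plugins.Plugins map):

package main

import (
	"fmt"

	"github.com/influxdb/telegraf/plugins"
	_ "github.com/influxdb/telegraf/plugins/all"
)

func main() {
	// Assumes the plugins package exposes its registry as plugins.Plugins,
	// populated by the init() calls triggered by the blank import above.
	for name := range plugins.Plugins {
		fmt.Println("registered plugin:", name)
	}
}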

View File

@@ -11,8 +11,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins"
)
type Apache struct {
@@ -20,7 +19,7 @@ type Apache struct {
}
var sampleConfig = `
## An array of Apache status URI to gather stats.
# An array of Apache status URI to gather stats.
urls = ["http://localhost/server-status?auto"]
`
@@ -32,7 +31,7 @@ func (n *Apache) Description() string {
return "Read Apache status information (mod_status)"
}
func (n *Apache) Gather(acc telegraf.Accumulator) error {
func (n *Apache) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
var outerr error
@@ -60,7 +59,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr}
func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
func (n *Apache) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
resp, err := client.Get(addr.String())
if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@@ -165,7 +164,7 @@ func getTags(addr *url.URL) map[string]string {
}
func init() {
inputs.Add("apache", func() telegraf.Input {
plugins.Add("apache", func() plugins.Plugin {
return &Apache{}
})
}

View File

@@ -6,8 +6,9 @@ import (
"net/http/httptest"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -43,31 +44,37 @@ func TestHTTPApache(t *testing.T) {
err := a.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"TotalAccesses": float64(1.29811861e+08),
"TotalkBytes": float64(5.213701865e+09),
"CPULoad": float64(6.51929),
"Uptime": float64(941553),
"ReqPerSec": float64(137.87),
"BytesPerSec": float64(5.67024e+06),
"BytesPerReq": float64(41127.4),
"BusyWorkers": float64(270),
"IdleWorkers": float64(630),
"ConnsTotal": float64(1451),
"ConnsAsyncWriting": float64(32),
"ConnsAsyncKeepAlive": float64(945),
"ConnsAsyncClosing": float64(205),
"scboard_waiting": float64(630),
"scboard_starting": float64(0),
"scboard_reading": float64(157),
"scboard_sending": float64(113),
"scboard_keepalive": float64(0),
"scboard_dnslookup": float64(0),
"scboard_closing": float64(0),
"scboard_logging": float64(0),
"scboard_finishing": float64(0),
"scboard_idle_cleanup": float64(0),
"scboard_open": float64(2850),
testInt := []struct {
measurement string
value float64
}{
{"TotalAccesses", 1.29811861e+08},
{"TotalkBytes", 5.213701865e+09},
{"CPULoad", 6.51929},
{"Uptime", 941553},
{"ReqPerSec", 137.87},
{"BytesPerSec", 5.67024e+06},
{"BytesPerReq", 41127.4},
{"BusyWorkers", 270},
{"IdleWorkers", 630},
{"ConnsTotal", 1451},
{"ConnsAsyncWriting", 32},
{"ConnsAsyncKeepAlive", 945},
{"ConnsAsyncClosing", 205},
{"scboard_waiting", 630},
{"scboard_starting", 0},
{"scboard_reading", 157},
{"scboard_sending", 113},
{"scboard_keepalive", 0},
{"scboard_dnslookup", 0},
{"scboard_closing", 0},
{"scboard_logging", 0},
{"scboard_finishing", 0},
{"scboard_idle_cleanup", 0},
{"scboard_open", 2850},
}
for _, test := range testInt {
assert.True(t, acc.CheckValue(test.measurement, test.value))
}
acc.AssertContainsFields(t, "apache", fields)
}

View File

@@ -26,27 +26,27 @@ Measurement names:
dirty_data
Amount of dirty data for this backing device in the cache. Continuously
updated unlike the cache set's version, but may be slightly off.
bypassed
Amount of IO (both reads and writes) that has bypassed the cache
cache_bypass_hits
cache_bypass_misses
Hits and misses for IO that is intended to skip the cache are still counted,
but broken out here.
cache_hits
cache_misses
cache_hit_ratio
Hits and misses are counted per individual IO as bcache sees them; a
partial hit is counted as a miss.
cache_miss_collisions
Counts instances where data was going to be inserted into the cache from a
cache miss, but raced with a write and data was already present (usually 0
since the synchronization for cache misses was rewritten)
cache_readaheads
Count of times readahead occurred.
```
@@ -70,7 +70,7 @@ Using this configuration:
When run with:
```
./telegraf -config telegraf.conf -input-filter bcache -test
./telegraf -config telegraf.conf -filter bcache -test
```
It produces:

View File

@@ -8,8 +8,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins"
)
type Bcache struct {
@@ -18,14 +17,14 @@ type Bcache struct {
}
var sampleConfig = `
## Bcache sets path
## If not specified, then default is:
bcachePath = "/sys/fs/bcache"
## By default, telegraf gather stats for all bcache devices
## Setting devices will restrict the stats to the specified
## bcache devices.
bcacheDevs = ["bcache0"]
# Bcache sets path
# If not specified, then default is:
# bcachePath = "/sys/fs/bcache"
#
# By default, telegraf gather stats for all bcache devices
# Setting devices will restrict the stats to the specified
# bcache devices.
# bcacheDevs = ["bcache0", ...]
`
func (b *Bcache) SampleConfig() string {
@@ -70,7 +69,7 @@ func prettyToBytes(v string) uint64 {
return uint64(result)
}
func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error {
func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error {
tags := getTags(bdev)
metrics, err := filepath.Glob(bdev + "/stats_total/*")
if len(metrics) < 0 {
@@ -105,7 +104,7 @@ func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error {
return nil
}
func (b *Bcache) Gather(acc telegraf.Accumulator) error {
func (b *Bcache) Gather(acc plugins.Accumulator) error {
bcacheDevsChecked := make(map[string]bool)
var restrictDevs bool
if len(b.BcacheDevs) != 0 {
@@ -136,7 +135,7 @@ func (b *Bcache) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("bcache", func() telegraf.Input {
plugins.Add("bcache", func() plugins.Plugin {
return &Bcache{}
})
}

View File

@@ -5,7 +5,8 @@ import (
"os"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -28,6 +29,11 @@ var (
testBcacheBackingDevPath = os.TempDir() + "/telegraf/sys/devices/virtual/block/md10"
)
type metrics struct {
name string
value uint64
}
func TestBcacheGeneratesMetrics(t *testing.T) {
err := os.MkdirAll(testBcacheUuidPath, 0755)
require.NoError(t, err)
@@ -47,52 +53,70 @@ func TestBcacheGeneratesMetrics(t *testing.T) {
err = os.MkdirAll(testBcacheUuidPath+"/bdev0/stats_total", 0755)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/dirty_data",
[]byte(dirty_data), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/dirty_data", []byte(dirty_data), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/bypassed",
[]byte(bypassed), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/bypassed", []byte(bypassed), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_hits",
[]byte(cache_bypass_hits), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_hits", []byte(cache_bypass_hits), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_misses",
[]byte(cache_bypass_misses), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_misses", []byte(cache_bypass_misses), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hit_ratio",
[]byte(cache_hit_ratio), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hit_ratio", []byte(cache_hit_ratio), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hits",
[]byte(cache_hits), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hits", []byte(cache_hits), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_miss_collisions",
[]byte(cache_miss_collisions), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_miss_collisions", []byte(cache_miss_collisions), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_misses",
[]byte(cache_misses), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_misses", []byte(cache_misses), 0644)
require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_readaheads",
[]byte(cache_readaheads), 0644)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_readaheads", []byte(cache_readaheads), 0644)
require.NoError(t, err)
fields := map[string]interface{}{
"dirty_data": uint64(1610612736),
"bypassed": uint64(5167704440832),
"cache_bypass_hits": uint64(146155333),
"cache_bypass_misses": uint64(0),
"cache_hit_ratio": uint64(90),
"cache_hits": uint64(511469583),
"cache_miss_collisions": uint64(157567),
"cache_misses": uint64(50616331),
"cache_readaheads": uint64(2),
intMetrics := []*metrics{
{
name: "dirty_data",
value: 1610612736,
},
{
name: "bypassed",
value: 5167704440832,
},
{
name: "cache_bypass_hits",
value: 146155333,
},
{
name: "cache_bypass_misses",
value: 0,
},
{
name: "cache_hit_ratio",
value: 90,
},
{
name: "cache_hits",
value: 511469583,
},
{
name: "cache_miss_collisions",
value: 157567,
},
{
name: "cache_misses",
value: 50616331,
},
{
name: "cache_readaheads",
value: 2,
},
}
tags := map[string]string{
@@ -102,19 +126,27 @@ func TestBcacheGeneratesMetrics(t *testing.T) {
var acc testutil.Accumulator
// all devs
//all devs
b := &Bcache{BcachePath: testBcachePath}
err = b.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t, "bcache", fields, tags)
// one existing dev
for _, metric := range intMetrics {
assert.True(t, acc.HasUIntValue(metric.name), metric.name)
assert.True(t, acc.CheckTaggedValue(metric.name, metric.value, tags))
}
//one existing dev
b = &Bcache{BcachePath: testBcachePath, BcacheDevs: []string{"bcache0"}}
err = b.Gather(&acc)
require.NoError(t, err)
acc.AssertContainsTaggedFields(t, "bcache", fields, tags)
for _, metric := range intMetrics {
assert.True(t, acc.HasUIntValue(metric.name), metric.name)
assert.True(t, acc.CheckTaggedValue(metric.name, metric.value, tags))
}
err = os.RemoveAll(os.TempDir() + "/telegraf")
require.NoError(t, err)

View File

@@ -10,8 +10,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins"
)
type Disque struct {
@@ -22,11 +21,11 @@ type Disque struct {
}
var sampleConfig = `
## An array of URIs to gather stats about. Specify an ip or hostname
## with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
## 10.0.0.1:10000, etc.
## If no servers are specified, then localhost is used as the host.
# An array of URIs to gather stats about. Specify an ip or hostname
# with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
# 10.0.0.1:10000, etc.
#
# If no servers are specified, then localhost is used as the host.
servers = ["localhost"]
`
@@ -62,7 +61,7 @@ var ErrProtocolError = errors.New("disque protocol error")
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (g *Disque) Gather(acc telegraf.Accumulator) error {
func (g *Disque) Gather(acc plugins.Accumulator) error {
if len(g.Servers) == 0 {
url := &url.URL{
Host: ":7711",
@@ -99,7 +98,7 @@ func (g *Disque) Gather(acc telegraf.Accumulator) error {
const defaultPort = "7711"
func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
if g.c == nil {
_, _, err := net.SplitHostPort(addr.Host)
@@ -199,7 +198,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("disque", func() telegraf.Input {
plugins.Add("disque", func() plugins.Plugin {
return &Disque{}
})
}

View File

@@ -6,7 +6,8 @@ import (
"net"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -54,26 +55,42 @@ func TestDisqueGeneratesMetrics(t *testing.T) {
err = r.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"uptime": uint64(1452705),
"clients": uint64(31),
"blocked_clients": uint64(13),
"used_memory": uint64(1840104),
"used_memory_rss": uint64(3227648),
"used_memory_peak": uint64(89603656),
"total_connections_received": uint64(5062777),
"total_commands_processed": uint64(12308396),
"instantaneous_ops_per_sec": uint64(18),
"latest_fork_usec": uint64(1644),
"registered_jobs": uint64(360),
"registered_queues": uint64(12),
"mem_fragmentation_ratio": float64(1.75),
"used_cpu_sys": float64(19585.73),
"used_cpu_user": float64(11255.96),
"used_cpu_sys_children": float64(1.75),
"used_cpu_user_children": float64(1.91),
checkInt := []struct {
name string
value uint64
}{
{"uptime", 1452705},
{"clients", 31},
{"blocked_clients", 13},
{"used_memory", 1840104},
{"used_memory_rss", 3227648},
{"used_memory_peak", 89603656},
{"total_connections_received", 5062777},
{"total_commands_processed", 12308396},
{"instantaneous_ops_per_sec", 18},
{"latest_fork_usec", 1644},
{"registered_jobs", 360},
{"registered_queues", 12},
}
for _, c := range checkInt {
assert.True(t, acc.CheckValue(c.name, c.value))
}
checkFloat := []struct {
name string
value float64
}{
{"mem_fragmentation_ratio", 1.75},
{"used_cpu_sys", 19585.73},
{"used_cpu_user", 11255.96},
{"used_cpu_sys_children", 1.75},
{"used_cpu_user_children", 1.91},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
acc.AssertContainsFields(t, "disque", fields)
}
func TestDisqueCanPullStatsFromMultipleServers(t *testing.T) {
@@ -120,26 +137,42 @@ func TestDisqueCanPullStatsFromMultipleServers(t *testing.T) {
err = r.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{
"uptime": uint64(1452705),
"clients": uint64(31),
"blocked_clients": uint64(13),
"used_memory": uint64(1840104),
"used_memory_rss": uint64(3227648),
"used_memory_peak": uint64(89603656),
"total_connections_received": uint64(5062777),
"total_commands_processed": uint64(12308396),
"instantaneous_ops_per_sec": uint64(18),
"latest_fork_usec": uint64(1644),
"registered_jobs": uint64(360),
"registered_queues": uint64(12),
"mem_fragmentation_ratio": float64(1.75),
"used_cpu_sys": float64(19585.73),
"used_cpu_user": float64(11255.96),
"used_cpu_sys_children": float64(1.75),
"used_cpu_user_children": float64(1.91),
checkInt := []struct {
name string
value uint64
}{
{"uptime", 1452705},
{"clients", 31},
{"blocked_clients", 13},
{"used_memory", 1840104},
{"used_memory_rss", 3227648},
{"used_memory_peak", 89603656},
{"total_connections_received", 5062777},
{"total_commands_processed", 12308396},
{"instantaneous_ops_per_sec", 18},
{"latest_fork_usec", 1644},
{"registered_jobs", 360},
{"registered_queues", 12},
}
for _, c := range checkInt {
assert.True(t, acc.CheckValue(c.name, c.value))
}
checkFloat := []struct {
name string
value float64
}{
{"mem_fragmentation_ratio", 1.75},
{"used_cpu_sys", 19585.73},
{"used_cpu_user", 11255.96},
{"used_cpu_sys_children", 1.75},
{"used_cpu_user_children", 1.91},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
acc.AssertContainsFields(t, "disque", fields)
}
const testOutput = `# Server

View File

@@ -2,16 +2,12 @@ package elasticsearch
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"strings"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/plugins"
)
const statsPath = "/_nodes/stats"
@@ -59,14 +55,14 @@ type indexHealth struct {
}
const sampleConfig = `
## specify a list of one or more Elasticsearch servers
# specify a list of one or more Elasticsearch servers
servers = ["http://localhost:9200"]
## set local to false when you want to read the indices stats from all nodes
## within the cluster
# set local to false when you want to read the indices stats from all nodes
# within the cluster
local = true
## set cluster_health to true when you want to also obtain cluster level stats
# set cluster_health to true when you want to also obtain cluster level stats
cluster_health = false
`
@@ -96,45 +92,25 @@ func (e *Elasticsearch) Description() string {
// Gather reads the stats from Elasticsearch and writes it to the
// Accumulator.
func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
errChan := make(chan error, len(e.Servers))
var wg sync.WaitGroup
wg.Add(len(e.Servers))
func (e *Elasticsearch) Gather(acc plugins.Accumulator) error {
for _, serv := range e.Servers {
go func(s string, acc telegraf.Accumulator) {
defer wg.Done()
var url string
if e.Local {
url = s + statsPathLocal
} else {
url = s + statsPath
}
if err := e.gatherNodeStats(url, acc); err != nil {
errChan <- err
return
}
if e.ClusterHealth {
e.gatherClusterStats(fmt.Sprintf("%s/_cluster/health?level=indices", s), acc)
}
}(serv, acc)
var url string
if e.Local {
url = serv + statsPathLocal
} else {
url = serv + statsPath
}
if err := e.gatherNodeStats(url, acc); err != nil {
return err
}
if e.ClusterHealth {
e.gatherClusterStats(fmt.Sprintf("%s/_cluster/health?level=indices", serv), acc)
}
}
wg.Wait()
close(errChan)
// Get all errors and return them as one giant error
errStrings := []string{}
for err := range errChan {
errStrings = append(errStrings, err.Error())
}
if len(errStrings) == 0 {
return nil
}
return errors.New(strings.Join(errStrings, "\n"))
return nil
}
func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
func (e *Elasticsearch) gatherNodeStats(url string, acc plugins.Accumulator) error {
nodeStats := &struct {
ClusterName string `json:"cluster_name"`
Nodes map[string]*node `json:"nodes"`
@@ -168,7 +144,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
now := time.Now()
for p, s := range stats {
f := jsonparser.JSONFlattener{}
f := internal.JSONFlattener{}
err := f.FlattenJSON("", s)
if err != nil {
return err
@@ -179,7 +155,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
return nil
}
func (e *Elasticsearch) gatherClusterStats(url string, acc telegraf.Accumulator) error {
func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator) error {
clusterStats := &clusterHealth{}
if err := e.gatherData(url, clusterStats); err != nil {
return err
@@ -244,7 +220,7 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
}
func init() {
inputs.Add("elasticsearch", func() telegraf.Input {
plugins.Add("elasticsearch", func() plugins.Plugin {
return NewElasticsearch()
})
}

View File

@@ -6,8 +6,8 @@ import (
"strings"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
@@ -52,15 +52,23 @@ func TestElasticsearch(t *testing.T) {
"node_host": "test",
}
acc.AssertContainsTaggedFields(t, "elasticsearch_indices", indicesExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_os", osExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_process", processExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_jvm", jvmExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_thread_pool", threadPoolExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_fs", fsExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_transport", transportExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_http", httpExpected, tags)
acc.AssertContainsTaggedFields(t, "elasticsearch_breakers", breakersExpected, tags)
testTables := []map[string]float64{
indicesExpected,
osExpected,
processExpected,
jvmExpected,
threadPoolExpected,
fsExpected,
transportExpected,
httpExpected,
breakersExpected,
}
for _, testTable := range testTables {
for k, v := range testTable {
assert.NoError(t, acc.ValidateTaggedValue(k, v, tags))
}
}
}
func TestGatherClusterStats(t *testing.T) {
@@ -72,15 +80,29 @@ func TestGatherClusterStats(t *testing.T) {
var acc testutil.Accumulator
require.NoError(t, es.Gather(&acc))
acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"})
var clusterHealthTests = []struct {
measurement string
fields map[string]interface{}
tags map[string]string
}{
{
"cluster_health",
clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"},
},
{
"indices",
v1IndexExpected,
map[string]string{"index": "v1"},
},
{
"indices",
v2IndexExpected,
map[string]string{"index": "v2"},
},
}
acc.AssertContainsTaggedFields(t, "elasticsearch_indices",
v1IndexExpected,
map[string]string{"index": "v1"})
acc.AssertContainsTaggedFields(t, "elasticsearch_indices",
v2IndexExpected,
map[string]string{"index": "v2"})
for _, exp := range clusterHealthTests {
assert.NoError(t, acc.ValidateTaggedFields(exp.measurement, exp.fields, exp.tags))
}
}

View File

@@ -0,0 +1,759 @@
package elasticsearch
const clusterResponse = `
{
"cluster_name": "elasticsearch_telegraf",
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"indices": {
"v1": {
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0
},
"v2": {
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20
}
}
}
`
var clusterHealthExpected = map[string]interface{}{
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v1IndexExpected = map[string]interface{}{
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v2IndexExpected = map[string]interface{}{
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20,
}
const statsResponse = `
{
"cluster_name": "es-testcluster",
"nodes": {
"SDFsfSDFsdfFSDSDfSFDSDF": {
"timestamp": 1436365550135,
"name": "test.host.com",
"transport_address": "inet[/127.0.0.1:9300]",
"host": "test",
"ip": [
"inet[/127.0.0.1:9300]",
"NONE"
],
"attributes": {
"master": "true"
},
"indices": {
"docs": {
"count": 29652,
"deleted": 5229
},
"store": {
"size_in_bytes": 37715234,
"throttle_time_in_millis": 215
},
"indexing": {
"index_total": 84790,
"index_time_in_millis": 29680,
"index_current": 0,
"delete_total": 13879,
"delete_time_in_millis": 1139,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 1,
"time_in_millis": 2,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 1,
"missing_time_in_millis": 2,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 1452,
"query_time_in_millis": 5695,
"query_current": 0,
"fetch_total": 414,
"fetch_time_in_millis": 146,
"fetch_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 133,
"total_time_in_millis": 21060,
"total_docs": 203672,
"total_size_in_bytes": 142900226
},
"refresh": {
"total": 1076,
"total_time_in_millis": 20078
},
"flush": {
"total": 115,
"total_time_in_millis": 2401
},
"warmer": {
"current": 0,
"total": 2319,
"total_time_in_millis": 448
},
"filter_cache": {
"memory_size_in_bytes": 7384,
"evictions": 0
},
"id_cache": {
"memory_size_in_bytes": 0
},
"fielddata": {
"memory_size_in_bytes": 12996,
"evictions": 0
},
"percolate": {
"total": 0,
"time_in_millis": 0,
"current": 0,
"memory_size_in_bytes": -1,
"memory_size": "-1b",
"queries": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 134,
"memory_in_bytes": 1285212,
"index_writer_memory_in_bytes": 0,
"index_writer_max_memory_in_bytes": 172368955,
"version_map_memory_in_bytes": 611844,
"fixed_bit_set_memory_in_bytes": 0
},
"translog": {
"operations": 17702,
"size_in_bytes": 17
},
"suggest": {
"total": 0,
"time_in_millis": 0,
"current": 0
},
"query_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
},
"os": {
"timestamp": 1436460392944,
"load_average": [
0.01,
0.04,
0.05
],
"mem": {
"free_in_bytes": 477761536,
"used_in_bytes": 1621868544,
"free_percent": 74,
"used_percent": 25,
"actual_free_in_bytes": 1565470720,
"actual_used_in_bytes": 534159360
},
"swap": {
"used_in_bytes": 0,
"free_in_bytes": 487997440
}
},
"process": {
"timestamp": 1436460392945,
"open_file_descriptors": 160,
"cpu": {
"percent": 2,
"sys_in_millis": 1870,
"user_in_millis": 13610,
"total_in_millis": 15480
},
"mem": {
"total_virtual_in_bytes": 4747890688
}
},
"jvm": {
"timestamp": 1436460392945,
"uptime_in_millis": 202245,
"mem": {
"heap_used_in_bytes": 52709568,
"heap_used_percent": 5,
"heap_committed_in_bytes": 259522560,
"heap_max_in_bytes": 1038876672,
"non_heap_used_in_bytes": 39634576,
"non_heap_committed_in_bytes": 40841216,
"pools": {
"young": {
"used_in_bytes": 32685760,
"max_in_bytes": 279183360,
"peak_used_in_bytes": 71630848,
"peak_max_in_bytes": 279183360
},
"survivor": {
"used_in_bytes": 8912880,
"max_in_bytes": 34865152,
"peak_used_in_bytes": 8912888,
"peak_max_in_bytes": 34865152
},
"old": {
"used_in_bytes": 11110928,
"max_in_bytes": 724828160,
"peak_used_in_bytes": 14354608,
"peak_max_in_bytes": 724828160
}
}
},
"threads": {
"count": 44,
"peak_count": 45
},
"gc": {
"collectors": {
"young": {
"collection_count": 2,
"collection_time_in_millis": 98
},
"old": {
"collection_count": 1,
"collection_time_in_millis": 24
}
}
},
"buffer_pools": {
"direct": {
"count": 40,
"used_in_bytes": 6304239,
"total_capacity_in_bytes": 6304239
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
}
},
"thread_pool": {
"percolate": {
"threads": 123,
"queue": 23,
"active": 13,
"rejected": 235,
"largest": 23,
"completed": 33
},
"fetch_shard_started": {
"threads": 3,
"queue": 1,
"active": 5,
"rejected": 6,
"largest": 4,
"completed": 54
},
"listener": {
"threads": 1,
"queue": 2,
"active": 4,
"rejected": 8,
"largest": 1,
"completed": 1
},
"index": {
"threads": 6,
"queue": 8,
"active": 4,
"rejected": 2,
"largest": 3,
"completed": 6
},
"refresh": {
"threads": 23,
"queue": 7,
"active": 3,
"rejected": 4,
"largest": 8,
"completed": 3
},
"suggest": {
"threads": 2,
"queue": 7,
"active": 2,
"rejected": 1,
"largest": 8,
"completed": 3
},
"generic": {
"threads": 1,
"queue": 4,
"active": 6,
"rejected": 3,
"largest": 2,
"completed": 27
},
"warmer": {
"threads": 2,
"queue": 7,
"active": 3,
"rejected": 2,
"largest": 3,
"completed": 1
},
"search": {
"threads": 5,
"queue": 7,
"active": 2,
"rejected": 7,
"largest": 2,
"completed": 4
},
"flush": {
"threads": 3,
"queue": 8,
"active": 0,
"rejected": 1,
"largest": 5,
"completed": 3
},
"optimize": {
"threads": 3,
"queue": 4,
"active": 1,
"rejected": 2,
"largest": 7,
"completed": 3
},
"fetch_shard_store": {
"threads": 1,
"queue": 7,
"active": 4,
"rejected": 2,
"largest": 4,
"completed": 1
},
"management": {
"threads": 2,
"queue": 3,
"active": 1,
"rejected": 6,
"largest": 2,
"completed": 22
},
"get": {
"threads": 1,
"queue": 8,
"active": 4,
"rejected": 3,
"largest": 2,
"completed": 1
},
"merge": {
"threads": 6,
"queue": 4,
"active": 5,
"rejected": 2,
"largest": 5,
"completed": 1
},
"bulk": {
"threads": 4,
"queue": 5,
"active": 7,
"rejected": 3,
"largest": 1,
"completed": 4
},
"snapshot": {
"threads": 8,
"queue": 5,
"active": 6,
"rejected": 2,
"largest": 1,
"completed": 0
}
},
"fs": {
"timestamp": 1436460392946,
"total": {
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
},
"data": [
{
"path": "/usr/share/elasticsearch/data/elasticsearch/nodes/0",
"mount": "/usr/share/elasticsearch/data",
"type": "ext4",
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
}
]
},
"transport": {
"server_open": 13,
"rx_count": 6,
"rx_size_in_bytes": 1380,
"tx_count": 6,
"tx_size_in_bytes": 1380
},
"http": {
"current_open": 3,
"total_opened": 3
},
"breakers": {
"fielddata": {
"limit_size_in_bytes": 623326003,
"limit_size": "594.4mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
},
"request": {
"limit_size_in_bytes": 415550668,
"limit_size": "396.2mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
},
"parent": {
"limit_size_in_bytes": 727213670,
"limit_size": "693.5mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
}
}
}
}
}
`
var indicesExpected = map[string]float64{
"indices_id_cache_memory_size_in_bytes": 0,
"indices_completion_size_in_bytes": 0,
"indices_suggest_total": 0,
"indices_suggest_time_in_millis": 0,
"indices_suggest_current": 0,
"indices_query_cache_memory_size_in_bytes": 0,
"indices_query_cache_evictions": 0,
"indices_query_cache_hit_count": 0,
"indices_query_cache_miss_count": 0,
"indices_store_size_in_bytes": 37715234,
"indices_store_throttle_time_in_millis": 215,
"indices_merges_current_docs": 0,
"indices_merges_current_size_in_bytes": 0,
"indices_merges_total": 133,
"indices_merges_total_time_in_millis": 21060,
"indices_merges_total_docs": 203672,
"indices_merges_total_size_in_bytes": 142900226,
"indices_merges_current": 0,
"indices_filter_cache_memory_size_in_bytes": 7384,
"indices_filter_cache_evictions": 0,
"indices_indexing_index_total": 84790,
"indices_indexing_index_time_in_millis": 29680,
"indices_indexing_index_current": 0,
"indices_indexing_noop_update_total": 0,
"indices_indexing_throttle_time_in_millis": 0,
"indices_indexing_delete_total": 13879,
"indices_indexing_delete_time_in_millis": 1139,
"indices_indexing_delete_current": 0,
"indices_get_exists_time_in_millis": 0,
"indices_get_missing_total": 1,
"indices_get_missing_time_in_millis": 2,
"indices_get_current": 0,
"indices_get_total": 1,
"indices_get_time_in_millis": 2,
"indices_get_exists_total": 0,
"indices_refresh_total": 1076,
"indices_refresh_total_time_in_millis": 20078,
"indices_percolate_current": 0,
"indices_percolate_memory_size_in_bytes": -1,
"indices_percolate_queries": 0,
"indices_percolate_total": 0,
"indices_percolate_time_in_millis": 0,
"indices_translog_operations": 17702,
"indices_translog_size_in_bytes": 17,
"indices_recovery_current_as_source": 0,
"indices_recovery_current_as_target": 0,
"indices_recovery_throttle_time_in_millis": 0,
"indices_docs_count": 29652,
"indices_docs_deleted": 5229,
"indices_flush_total_time_in_millis": 2401,
"indices_flush_total": 115,
"indices_fielddata_memory_size_in_bytes": 12996,
"indices_fielddata_evictions": 0,
"indices_search_fetch_current": 0,
"indices_search_open_contexts": 0,
"indices_search_query_total": 1452,
"indices_search_query_time_in_millis": 5695,
"indices_search_query_current": 0,
"indices_search_fetch_total": 414,
"indices_search_fetch_time_in_millis": 146,
"indices_warmer_current": 0,
"indices_warmer_total": 2319,
"indices_warmer_total_time_in_millis": 448,
"indices_segments_count": 134,
"indices_segments_memory_in_bytes": 1285212,
"indices_segments_index_writer_memory_in_bytes": 0,
"indices_segments_index_writer_max_memory_in_bytes": 172368955,
"indices_segments_version_map_memory_in_bytes": 611844,
"indices_segments_fixed_bit_set_memory_in_bytes": 0,
}
var osExpected = map[string]float64{
"os_swap_used_in_bytes": 0,
"os_swap_free_in_bytes": 487997440,
"os_timestamp": 1436460392944,
"os_mem_free_percent": 74,
"os_mem_used_percent": 25,
"os_mem_actual_free_in_bytes": 1565470720,
"os_mem_actual_used_in_bytes": 534159360,
"os_mem_free_in_bytes": 477761536,
"os_mem_used_in_bytes": 1621868544,
}
var processExpected = map[string]float64{
"process_mem_total_virtual_in_bytes": 4747890688,
"process_timestamp": 1436460392945,
"process_open_file_descriptors": 160,
"process_cpu_total_in_millis": 15480,
"process_cpu_percent": 2,
"process_cpu_sys_in_millis": 1870,
"process_cpu_user_in_millis": 13610,
}
var jvmExpected = map[string]float64{
"jvm_timestamp": 1436460392945,
"jvm_uptime_in_millis": 202245,
"jvm_mem_non_heap_used_in_bytes": 39634576,
"jvm_mem_non_heap_committed_in_bytes": 40841216,
"jvm_mem_pools_young_max_in_bytes": 279183360,
"jvm_mem_pools_young_peak_used_in_bytes": 71630848,
"jvm_mem_pools_young_peak_max_in_bytes": 279183360,
"jvm_mem_pools_young_used_in_bytes": 32685760,
"jvm_mem_pools_survivor_peak_used_in_bytes": 8912888,
"jvm_mem_pools_survivor_peak_max_in_bytes": 34865152,
"jvm_mem_pools_survivor_used_in_bytes": 8912880,
"jvm_mem_pools_survivor_max_in_bytes": 34865152,
"jvm_mem_pools_old_peak_max_in_bytes": 724828160,
"jvm_mem_pools_old_used_in_bytes": 11110928,
"jvm_mem_pools_old_max_in_bytes": 724828160,
"jvm_mem_pools_old_peak_used_in_bytes": 14354608,
"jvm_mem_heap_used_in_bytes": 52709568,
"jvm_mem_heap_used_percent": 5,
"jvm_mem_heap_committed_in_bytes": 259522560,
"jvm_mem_heap_max_in_bytes": 1038876672,
"jvm_threads_peak_count": 45,
"jvm_threads_count": 44,
"jvm_gc_collectors_young_collection_count": 2,
"jvm_gc_collectors_young_collection_time_in_millis": 98,
"jvm_gc_collectors_old_collection_count": 1,
"jvm_gc_collectors_old_collection_time_in_millis": 24,
"jvm_buffer_pools_direct_count": 40,
"jvm_buffer_pools_direct_used_in_bytes": 6304239,
"jvm_buffer_pools_direct_total_capacity_in_bytes": 6304239,
"jvm_buffer_pools_mapped_count": 0,
"jvm_buffer_pools_mapped_used_in_bytes": 0,
"jvm_buffer_pools_mapped_total_capacity_in_bytes": 0,
}
var threadPoolExpected = map[string]float64{
"thread_pool_merge_threads": 6,
"thread_pool_merge_queue": 4,
"thread_pool_merge_active": 5,
"thread_pool_merge_rejected": 2,
"thread_pool_merge_largest": 5,
"thread_pool_merge_completed": 1,
"thread_pool_bulk_threads": 4,
"thread_pool_bulk_queue": 5,
"thread_pool_bulk_active": 7,
"thread_pool_bulk_rejected": 3,
"thread_pool_bulk_largest": 1,
"thread_pool_bulk_completed": 4,
"thread_pool_warmer_threads": 2,
"thread_pool_warmer_queue": 7,
"thread_pool_warmer_active": 3,
"thread_pool_warmer_rejected": 2,
"thread_pool_warmer_largest": 3,
"thread_pool_warmer_completed": 1,
"thread_pool_get_largest": 2,
"thread_pool_get_completed": 1,
"thread_pool_get_threads": 1,
"thread_pool_get_queue": 8,
"thread_pool_get_active": 4,
"thread_pool_get_rejected": 3,
"thread_pool_index_threads": 6,
"thread_pool_index_queue": 8,
"thread_pool_index_active": 4,
"thread_pool_index_rejected": 2,
"thread_pool_index_largest": 3,
"thread_pool_index_completed": 6,
"thread_pool_suggest_threads": 2,
"thread_pool_suggest_queue": 7,
"thread_pool_suggest_active": 2,
"thread_pool_suggest_rejected": 1,
"thread_pool_suggest_largest": 8,
"thread_pool_suggest_completed": 3,
"thread_pool_fetch_shard_store_queue": 7,
"thread_pool_fetch_shard_store_active": 4,
"thread_pool_fetch_shard_store_rejected": 2,
"thread_pool_fetch_shard_store_largest": 4,
"thread_pool_fetch_shard_store_completed": 1,
"thread_pool_fetch_shard_store_threads": 1,
"thread_pool_management_threads": 2,
"thread_pool_management_queue": 3,
"thread_pool_management_active": 1,
"thread_pool_management_rejected": 6,
"thread_pool_management_largest": 2,
"thread_pool_management_completed": 22,
"thread_pool_percolate_queue": 23,
"thread_pool_percolate_active": 13,
"thread_pool_percolate_rejected": 235,
"thread_pool_percolate_largest": 23,
"thread_pool_percolate_completed": 33,
"thread_pool_percolate_threads": 123,
"thread_pool_listener_active": 4,
"thread_pool_listener_rejected": 8,
"thread_pool_listener_largest": 1,
"thread_pool_listener_completed": 1,
"thread_pool_listener_threads": 1,
"thread_pool_listener_queue": 2,
"thread_pool_search_rejected": 7,
"thread_pool_search_largest": 2,
"thread_pool_search_completed": 4,
"thread_pool_search_threads": 5,
"thread_pool_search_queue": 7,
"thread_pool_search_active": 2,
"thread_pool_fetch_shard_started_threads": 3,
"thread_pool_fetch_shard_started_queue": 1,
"thread_pool_fetch_shard_started_active": 5,
"thread_pool_fetch_shard_started_rejected": 6,
"thread_pool_fetch_shard_started_largest": 4,
"thread_pool_fetch_shard_started_completed": 54,
"thread_pool_refresh_rejected": 4,
"thread_pool_refresh_largest": 8,
"thread_pool_refresh_completed": 3,
"thread_pool_refresh_threads": 23,
"thread_pool_refresh_queue": 7,
"thread_pool_refresh_active": 3,
"thread_pool_optimize_threads": 3,
"thread_pool_optimize_queue": 4,
"thread_pool_optimize_active": 1,
"thread_pool_optimize_rejected": 2,
"thread_pool_optimize_largest": 7,
"thread_pool_optimize_completed": 3,
"thread_pool_snapshot_largest": 1,
"thread_pool_snapshot_completed": 0,
"thread_pool_snapshot_threads": 8,
"thread_pool_snapshot_queue": 5,
"thread_pool_snapshot_active": 6,
"thread_pool_snapshot_rejected": 2,
"thread_pool_generic_threads": 1,
"thread_pool_generic_queue": 4,
"thread_pool_generic_active": 6,
"thread_pool_generic_rejected": 3,
"thread_pool_generic_largest": 2,
"thread_pool_generic_completed": 27,
"thread_pool_flush_threads": 3,
"thread_pool_flush_queue": 8,
"thread_pool_flush_active": 0,
"thread_pool_flush_rejected": 1,
"thread_pool_flush_largest": 5,
"thread_pool_flush_completed": 3,
}
var fsExpected = map[string]float64{
"fs_timestamp": 1436460392946,
"fs_total_free_in_bytes": 16909316096,
"fs_total_available_in_bytes": 15894814720,
"fs_total_total_in_bytes": 19507089408,
}
var transportExpected = map[string]float64{
"transport_server_open": 13,
"transport_rx_count": 6,
"transport_rx_size_in_bytes": 1380,
"transport_tx_count": 6,
"transport_tx_size_in_bytes": 1380,
}
var httpExpected = map[string]float64{
"http_current_open": 3,
"http_total_opened": 3,
}
var breakersExpected = map[string]float64{
"breakers_fielddata_estimated_size_in_bytes": 0,
"breakers_fielddata_overhead": 1.03,
"breakers_fielddata_tripped": 0,
"breakers_fielddata_limit_size_in_bytes": 623326003,
"breakers_request_estimated_size_in_bytes": 0,
"breakers_request_overhead": 1.0,
"breakers_request_tripped": 0,
"breakers_request_limit_size_in_bytes": 415550668,
"breakers_parent_overhead": 1.0,
"breakers_parent_tripped": 0,
"breakers_parent_limit_size_in_bytes": 727213670,
"breakers_parent_estimated_size_in_bytes": 0,
}

42
plugins/exec/README.md Normal file
View File

@@ -0,0 +1,42 @@
# Exec Plugin
The exec plugin can execute arbitrary commands which output JSON. It then flattens the
JSON and collects all numeric values, treating them as floats.
For example, if you have a json-returning command called mycollector, you could
set up the exec plugin with:
```
[[exec.commands]]
command = "/usr/bin/mycollector --output=json"
name = "mycollector"
interval = 10
```
The name is used as a prefix for the measurements.
The interval determines how often a particular command should be run. Each time the
exec plugin runs, it only runs a particular command if at least `interval` seconds have
passed since the exec plugin last ran that command.
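That gate is just a timestamp comparison. A minimal sketch of the idea, assuming a per-command interval in seconds and a `lastRunAt` timestamp like the ones used in the plugin's test fixtures (the helper below is illustrative, not the plugin's actual code):
```go
package main

import (
	"fmt"
	"time"
)

// shouldRun reports whether a command is due to run again, given when it last
// ran and its configured interval in seconds. Sketch only; the names mirror
// the Command fixtures in exec_test.go, not a guaranteed API.
func shouldRun(lastRunAt time.Time, intervalSeconds int, now time.Time) bool {
	return now.Sub(lastRunAt) >= time.Duration(intervalSeconds)*time.Second
}

func main() {
	last := time.Unix(1442905200, 0) // the base time used by the plugin's tests
	fmt.Println(shouldRun(last, 10, last.Add(5*time.Second)))  // false: only 5s have passed
	fmt.Println(shouldRun(last, 10, last.Add(20*time.Second))) // true: 20s >= 10s
}
```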
# Sample
Let's say that we have a command named "mycollector", which gives the following output:
```json
{
"a": 0.5,
"b": {
"c": "some text",
"d": 0.1,
"e": 5
}
}
```
The collected metrics will be:
```
exec_mycollector_a value=0.5
exec_mycollector_b_d value=0.1
exec_mycollector_b_e value=5
```
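The flattening itself mirrors what `internal.JSONFlattener` does in `exec.go`: walk the decoded JSON, join nested keys with underscores, and keep only numeric leaves. A standalone sketch of that idea using the sample JSON above (an illustration, not the plugin's actual flattener):
```go
package main

import (
	"encoding/json"
	"fmt"
)

// flatten walks decoded JSON and collects numeric leaves under
// underscore-joined keys, e.g. {"b":{"d":0.1}} becomes {"b_d": 0.1}.
// Strings, nulls, and arrays are skipped, as in the example output above.
func flatten(prefix string, v interface{}, out map[string]float64) {
	switch t := v.(type) {
	case map[string]interface{}:
		for k, child := range t {
			key := k
			if prefix != "" {
				key = prefix + "_" + k
			}
			flatten(key, child, out)
		}
	case float64: // encoding/json decodes every JSON number as float64
		out[prefix] = t
	}
}

func main() {
	input := `{"a": 0.5, "b": {"c": "some text", "d": 0.1, "e": 5}}`
	var decoded interface{}
	if err := json.Unmarshal([]byte(input), &decoded); err != nil {
		panic(err)
	}
	fields := map[string]float64{}
	flatten("", decoded, fields)
	fmt.Println(fields) // map[a:0.5 b_d:0.1 b_e:5]
}
```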

98
plugins/exec/exec.go Normal file
View File

@@ -0,0 +1,98 @@
package exec
import (
"bytes"
"encoding/json"
"fmt"
"os/exec"
"github.com/gonuts/go-shellquote"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/plugins"
)
const sampleConfig = `
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# name of the command (used as a prefix for measurements)
name = "mycollector"
`
type Exec struct {
Command string
Name string
runner Runner
}
type Runner interface {
Run(*Exec) ([]byte, error)
}
type CommandRunner struct{}
func (c CommandRunner) Run(e *Exec) ([]byte, error) {
split_cmd, err := shellquote.Split(e.Command)
if err != nil || len(split_cmd) == 0 {
return nil, fmt.Errorf("exec: unable to parse command, %s", err)
}
cmd := exec.Command(split_cmd[0], split_cmd[1:]...)
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("exec: %s for command '%s'", err, e.Command)
}
return out.Bytes(), nil
}
func NewExec() *Exec {
return &Exec{runner: CommandRunner{}}
}
func (e *Exec) SampleConfig() string {
return sampleConfig
}
func (e *Exec) Description() string {
return "Read flattened metrics from one or more commands that output JSON to stdout"
}
func (e *Exec) Gather(acc plugins.Accumulator) error {
out, err := e.runner.Run(e)
if err != nil {
return err
}
var jsonOut interface{}
err = json.Unmarshal(out, &jsonOut)
if err != nil {
return fmt.Errorf("exec: unable to parse output of '%s' as JSON, %s",
e.Command, err)
}
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return err
}
var msrmnt_name string
if e.Name == "" {
msrmnt_name = "exec"
} else {
msrmnt_name = "exec_" + e.Name
}
acc.AddFields(msrmnt_name, f.Fields, nil)
return nil
}
func init() {
plugins.Add("exec", func() plugins.Plugin {
return NewExec()
})
}

262
plugins/exec/exec_test.go Normal file
View File

@@ -0,0 +1,262 @@
package exec
import (
"fmt"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"math"
"testing"
"time"
)
// Midnight 9/22/2015
const baseTimeSeconds = 1442905200
const validJson = `
{
"status": "green",
"num_processes": 82,
"cpu": {
"status": "red",
"nil_status": null,
"used": 8234,
"free": 32
},
"percent": 0.81,
"users": [0, 1, 2, 3]
}`
const malformedJson = `
{
"status": "green",
`
type runnerMock struct {
out []byte
err error
}
type clockMock struct {
now time.Time
}
func newRunnerMock(out []byte, err error) Runner {
return &runnerMock{
out: out,
err: err,
}
}
func (r runnerMock) Run(command *Command) ([]byte, error) {
if r.err != nil {
return nil, r.err
}
return r.out, nil
}
func newClockMock(now time.Time) Clock {
return &clockMock{now: now}
}
func (c clockMock) Now() time.Time {
return c.now
}
func TestExec(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "non-numeric measurements should be ignored")
}
func TestExecMalformed(t *testing.T) {
runner := newRunnerMock([]byte(malformedJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "badcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.Error(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestCommandError(t *testing.T) {
runner := newRunnerMock(nil, fmt.Errorf("exit status code 1"))
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "badcommand",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.Error(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestExecNotEnoughTime(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+5, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestExecUninitializedLastRunAt(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: math.MaxInt32,
// Uninitialized lastRunAt should default to time.Unix(0, 0), so this should
// run no matter what the interval is
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "non-numeric measurements should be ignored")
}
func TestExecOneNotEnoughTimeAndOneEnoughTime(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+5, 0))
notEnoughTimeCommand := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
enoughTimeCommand := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 3,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&notEnoughTimeCommand, &enoughTimeCommand},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "Only one command should have been run")
}

View File

@@ -3,8 +3,7 @@ package haproxy
import (
"encoding/csv"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins"
"io"
"net/http"
"net/url"
@@ -86,13 +85,13 @@ type haproxy struct {
}
var sampleConfig = `
## An array of addresses to gather stats about. Specify an ip or hostname
## with optional port. ie localhost, 10.10.3.33:1936, etc.
## If no servers are specified, then default to 127.0.0.1:1936
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port. ie localhost, 10.10.3.33:1936, etc.
#
# If no servers are specified, then default to 127.0.0.1:1936
servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
## Or you can also use a local socket (not working yet)
## servers = ["socket://run/haproxy/admin.sock"]
# Or you can also use a local socket (not working yet)
# servers = ["socket:/run/haproxy/admin.sock"]
`
func (r *haproxy) SampleConfig() string {
@@ -105,7 +104,7 @@ func (r *haproxy) Description() string {
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (g *haproxy) Gather(acc telegraf.Accumulator) error {
func (g *haproxy) Gather(acc plugins.Accumulator) error {
if len(g.Servers) == 0 {
return g.gatherServer("http://127.0.0.1:1936", acc)
}
@@ -127,7 +126,7 @@ func (g *haproxy) Gather(acc telegraf.Accumulator) error {
return outerr
}
func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
func (g *haproxy) gatherServer(addr string, acc plugins.Accumulator) error {
if g.client == nil {
client := &http.Client{}
@@ -157,7 +156,7 @@ func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
return importCsvResult(res.Body, acc, u.Host)
}
func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
func importCsvResult(r io.Reader, acc plugins.Accumulator, host string) error {
csv := csv.NewReader(r)
result, err := csv.ReadAll()
now := time.Now()
@@ -359,7 +358,7 @@ func importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
}
func init() {
inputs.Add("haproxy", func() telegraf.Input {
plugins.Add("haproxy", func() plugins.Plugin {
return &haproxy{}
})
}

View File

@@ -5,7 +5,7 @@ import (
"strings"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"net/http"
@@ -47,39 +47,52 @@ func TestHaproxyGeneratesMetricsWithAuthentication(t *testing.T) {
"sv": "host0",
}
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
assert.NoError(t, acc.ValidateTaggedValue("stot", uint64(171014), tags))
checkInt := []struct {
name string
value uint64
}{
{"qmax", 81},
{"scur", 288},
{"smax", 713},
{"bin", 5557055817},
{"bout", 24096715169},
{"dreq", 1102},
{"dresp", 80},
{"ereq", 95740},
{"econ", 0},
{"eresp", 0},
{"wretr", 17},
{"wredis", 19},
{"active_servers", 1},
{"backup_servers", 0},
{"downtime", 0},
{"throttle", 13},
{"lbtot", 114},
{"rate", 18},
{"rate_max", 102},
{"check_duration", 1},
{"http_response.1xx", 0},
{"http_response.2xx", 1314093},
{"http_response.3xx", 537036},
{"http_response.4xx", 123452},
{"http_response.5xx", 11966},
{"req_rate", 35},
{"req_rate_max", 140},
{"req_tot", 1987928},
{"cli_abort", 0},
{"srv_abort", 0},
{"qtime", 0},
{"ctime", 2},
{"rtime", 23},
{"ttime", 545},
}
for _, c := range checkInt {
assert.Equal(t, true, acc.CheckValue(c.name, c.value))
}
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
// Here, we should get an error because we don't pass authentication data
r = &haproxy{
@@ -111,39 +124,10 @@ func TestHaproxyGeneratesMetricsWithoutAuthentication(t *testing.T) {
"sv": "host0",
}
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
}
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
assert.NoError(t, acc.ValidateTaggedValue("stot", uint64(171014), tags))
assert.NoError(t, acc.ValidateTaggedValue("scur", uint64(1), tags))
assert.NoError(t, acc.ValidateTaggedValue("rate", uint64(3), tags))
assert.Equal(t, true, acc.CheckValue("bin", uint64(5557055817)))
}
//When not passing server config, we default to localhost

View File

@@ -45,16 +45,6 @@ You can also specify additional request parameters for the service:
```
You can also specify additional request header parameters for the service:
```
[[httpjson.services]]
...
[httpjson.services.headers]
X-Auth-Token = "my-xauth-token"
apiVersion = "v1"
```
# Example:

View File

@@ -1,6 +1,7 @@
package httpjson
import (
"encoding/json"
"errors"
"fmt"
"io/ioutil"
@@ -8,11 +9,9 @@ import (
"net/url"
"strings"
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdb/telegraf/internal"
"github.com/influxdb/telegraf/plugins"
)
type HttpJson struct {
@@ -21,9 +20,7 @@ type HttpJson struct {
Method string
TagKeys []string
Parameters map[string]string
Headers map[string]string
client HTTPClient
client HTTPClient
}
type HTTPClient interface {
@@ -47,36 +44,28 @@ func (c RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
}
var sampleConfig = `
## NOTE This plugin only reads numerical measurements; strings and booleans
## will be ignored.
## a name for the service being polled
# a name for the service being polled
name = "webserver_stats"
## URL of each server in the service's cluster
# URL of each server in the service's cluster
servers = [
"http://localhost:9999/stats/",
"http://localhost:9998/stats/",
]
## HTTP method to use: GET or POST (case-sensitive)
# HTTP method to use (case-sensitive)
method = "GET"
## List of tag names to extract from top-level of JSON server response
# List of tag names to extract from top-level of JSON server response
# tag_keys = [
# "my_tag_1",
# "my_tag_2"
# ]
## HTTP parameters (all values must be strings)
[inputs.httpjson.parameters]
# HTTP parameters (all values must be strings)
[plugins.httpjson.parameters]
event_type = "cpu_spike"
threshold = "0.75"
## HTTP Header parameters (all values must be strings)
# [inputs.httpjson.headers]
# X-Auth-Token = "my-xauth-token"
# apiVersion = "v1"
`
func (h *HttpJson) SampleConfig() string {
@@ -88,7 +77,7 @@ func (h *HttpJson) Description() string {
}
// Gathers data for all servers.
func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
func (h *HttpJson) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
errorChannel := make(chan error, len(h.Servers))
@@ -127,11 +116,33 @@ func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
// Returns:
// error: Any error that may have occurred
func (h *HttpJson) gatherServer(
acc telegraf.Accumulator,
acc plugins.Accumulator,
serverURL string,
) error {
resp, responseTime, err := h.sendRequest(serverURL)
resp, err := h.sendRequest(serverURL)
if err != nil {
return err
}
var jsonOut map[string]interface{}
if err = json.Unmarshal([]byte(resp), &jsonOut); err != nil {
return errors.New("Error decoding JSON response")
}
tags := map[string]string{
"server": serverURL,
}
for _, tag := range h.TagKeys {
switch v := jsonOut[tag].(type) {
case string:
tags[tag] = v
}
delete(jsonOut, tag)
}
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return err
}
@@ -142,90 +153,47 @@ func (h *HttpJson) gatherServer(
} else {
msrmnt_name = "httpjson_" + h.Name
}
tags := map[string]string{
"server": serverURL,
}
parser, err := parsers.NewJSONParser(msrmnt_name, h.TagKeys, tags)
if err != nil {
return err
}
metrics, err := parser.Parse([]byte(resp))
if err != nil {
return err
}
for _, metric := range metrics {
fields := make(map[string]interface{})
for k, v := range metric.Fields() {
fields[k] = v
}
fields["response_time"] = responseTime
acc.AddFields(metric.Name(), fields, metric.Tags())
}
acc.AddFields(msrmnt_name, f.Fields, nil)
return nil
}
// Sends an HTTP request to the server using the HttpJson object's HTTPClient.
// This request can be either a GET or a POST.
// Sends an HTTP request to the server using the HttpJson object's HTTPClient
// Parameters:
// serverURL: endpoint to send request to
//
// Returns:
// string: body of the response
// error : Any error that may have occurred
func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
func (h *HttpJson) sendRequest(serverURL string) (string, error) {
// Prepare URL
requestURL, err := url.Parse(serverURL)
if err != nil {
return "", -1, fmt.Errorf("Invalid server URL \"%s\"", serverURL)
return "", fmt.Errorf("Invalid server URL \"%s\"", serverURL)
}
data := url.Values{}
switch {
case h.Method == "GET":
params := requestURL.Query()
for k, v := range h.Parameters {
params.Add(k, v)
}
requestURL.RawQuery = params.Encode()
case h.Method == "POST":
requestURL.RawQuery = ""
for k, v := range h.Parameters {
data.Add(k, v)
}
params := url.Values{}
for k, v := range h.Parameters {
params.Add(k, v)
}
requestURL.RawQuery = params.Encode()
// Create + send request
req, err := http.NewRequest(h.Method, requestURL.String(),
strings.NewReader(data.Encode()))
req, err := http.NewRequest(h.Method, requestURL.String(), nil)
if err != nil {
return "", -1, err
return "", err
}
defer req.Body.Close()
// Add header parameters
for k, v := range h.Headers {
if strings.ToLower(k) == "host" {
req.Host = v
} else {
req.Header.Add(k, v)
}
}
start := time.Now()
resp, err := h.client.MakeRequest(req)
if err != nil {
return "", -1, err
return "", err
}
defer resp.Body.Close()
defer resp.Body.Close()
responseTime := time.Since(start).Seconds()
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return string(body), responseTime, err
return string(body), err
}
// Process response
@@ -236,14 +204,14 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
http.StatusText(resp.StatusCode),
http.StatusOK,
http.StatusText(http.StatusOK))
return string(body), responseTime, err
return string(body), err
}
return string(body), responseTime, err
return string(body), err
}
func init() {
inputs.Add("httpjson", func() telegraf.Input {
plugins.Add("httpjson", func() plugins.Plugin {
return &HttpJson{client: RealHTTPClient{client: &http.Client{}}}
})
}

View File

@@ -0,0 +1,223 @@
package httpjson
import (
"fmt"
"io/ioutil"
"net/http"
"strings"
"testing"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
const validJSON = `
{
"parent": {
"child": 3,
"ignored_child": "hi"
},
"ignored_null": null,
"integer": 4,
"ignored_list": [3, 4],
"ignored_parent": {
"another_ignored_list": [4],
"another_ignored_null": null,
"ignored_string": "hello, world!"
}
}`
const validJSONTags = `
{
"value": 15,
"role": "master",
"build": "123"
}`
const invalidJSON = "I don't think this is JSON"
const empty = ""
type mockHTTPClient struct {
responseBody string
statusCode int
}
// Mock implementation of MakeRequest. Usually returns an http.Response with
// hard-coded responseBody and statusCode. However, if the request uses a
// nonstandard method, it uses status code 405 (method not allowed)
func (c mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
resp := http.Response{}
resp.StatusCode = c.statusCode
// basic error checking on request method
allowedMethods := []string{"GET", "HEAD", "POST", "PUT", "DELETE", "TRACE", "CONNECT"}
methodValid := false
for _, method := range allowedMethods {
if req.Method == method {
methodValid = true
break
}
}
if !methodValid {
resp.StatusCode = 405 // Method not allowed
}
resp.Body = ioutil.NopCloser(strings.NewReader(c.responseBody))
return &resp, nil
}
// Generates a pointer to an HttpJson object that uses a mock HTTP client.
// Parameters:
// response : Body of the response that the mock HTTP client should return
// statusCode: HTTP status code the mock HTTP client should return
//
// Returns:
// *HttpJson: Pointer to an HttpJson object that uses the generated mock HTTP client
func genMockHttpJson(response string, statusCode int) *HttpJson {
return &HttpJson{
client: mockHTTPClient{responseBody: response, statusCode: statusCode},
Services: []Service{
Service{
Servers: []string{
"http://server1.example.com/metrics/",
"http://server2.example.com/metrics/",
},
Name: "my_webapp",
Method: "GET",
Parameters: map[string]string{
"httpParam1": "12",
"httpParam2": "the second parameter",
},
},
Service{
Servers: []string{
"http://server3.example.com/metrics/",
"http://server4.example.com/metrics/",
},
Name: "other_webapp",
Method: "POST",
Parameters: map[string]string{
"httpParam1": "12",
"httpParam2": "the second parameter",
},
TagKeys: []string{
"role",
"build",
},
},
},
}
}
// Test that the proper values are ignored or collected
func TestHttpJson200(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 200)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 8, len(acc.Points))
for _, service := range httpjson.Services {
for _, srv := range service.Servers {
require.NoError(t,
acc.ValidateTaggedValue(
fmt.Sprintf("%s_parent_child", service.Name),
3.0,
map[string]string{"server": srv},
),
)
require.NoError(t,
acc.ValidateTaggedValue(
fmt.Sprintf("%s_integer", service.Name),
4.0,
map[string]string{"server": srv},
),
)
}
}
}
// Test response to HTTP 500
func TestHttpJson500(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 500)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services)
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
}
// Test response to HTTP 405
func TestHttpJsonBadMethod(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 200)
httpjson.Services[0].Method = "NOT_A_REAL_METHOD"
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
assert.NotNil(t, err)
// 2 error lines for (2 urls) * (1 falied service)
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 2)
// (2 measurements) * (2 servers) * (1 successful service)
assert.Equal(t, 4, len(acc.Points))
}
// Test response to malformed JSON
func TestHttpJsonBadJson(t *testing.T) {
httpjson := genMockHttpJson(invalidJSON, 200)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services)
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
}
// Test response to empty string as response object
func TestHttpJsonEmptyResponse(t *testing.T) {
httpjson := genMockHttpJson(empty, 200)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services)
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
}
// Test that the proper values are ignored or collected
func TestHttpJson200Tags(t *testing.T) {
httpjson := genMockHttpJson(validJSONTags, 200)
var acc testutil.Accumulator
err := httpjson.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 4, len(acc.Points))
for _, service := range httpjson.Services {
if service.Name == "other_webapp" {
for _, srv := range service.Servers {
require.NoError(t,
acc.ValidateTaggedValue(
fmt.Sprintf("%s_value", service.Name),
15.0,
map[string]string{"server": srv, "role": "master", "build": "123"},
),
)
}
}
}
}

View File

@@ -5,7 +5,7 @@ The influxdb plugin collects InfluxDB-formatted data from JSON endpoints.
With a configuration of:
```toml
[[inputs.influxdb]]
[[plugins.influxdb]]
urls = [
"http://127.0.0.1:8086/debug/vars",
"http://192.168.2.1:8086/debug/vars"

View File

@@ -8,8 +8,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins"
)
type InfluxDB struct {
@@ -22,18 +21,18 @@ func (*InfluxDB) Description() string {
func (*InfluxDB) SampleConfig() string {
return `
## Works with InfluxDB debug endpoints out of the box,
## but other services can use this format too.
## See the influxdb plugin's README for more details.
# Works with InfluxDB debug endpoints out of the box,
# but other services can use this format too.
# See the influxdb plugin's README for more details.
## Multiple URLs from which to read InfluxDB-formatted JSON
# Multiple URLs from which to read InfluxDB-formatted JSON
urls = [
"http://localhost:8086/debug/vars"
]
`
}
func (i *InfluxDB) Gather(acc telegraf.Accumulator) error {
func (i *InfluxDB) Gather(acc plugins.Accumulator) error {
errorChannel := make(chan error, len(i.URLs))
var wg sync.WaitGroup
@@ -78,7 +77,7 @@ type point struct {
// Returns:
// error: Any error that may have occurred
func (i *InfluxDB) gatherURL(
acc telegraf.Accumulator,
acc plugins.Accumulator,
url string,
) error {
resp, err := http.Get(url)
@@ -131,7 +130,7 @@ func (i *InfluxDB) gatherURL(
p.Tags["url"] = url
acc.AddFields(
"influxdb_"+p.Name,
p.Name,
p.Values,
p.Tags,
)
@@ -141,7 +140,7 @@ func (i *InfluxDB) gatherURL(
}
func init() {
inputs.Add("influxdb", func() telegraf.Input {
plugins.Add("influxdb", func() plugins.Plugin {
return &InfluxDB{}
})
}

View File

@@ -5,8 +5,8 @@ import (
"net/http/httptest"
"testing"
"github.com/influxdata/telegraf/plugins/inputs/influxdb"
"github.com/influxdata/telegraf/testutil"
"github.com/influxdb/telegraf/plugins/influxdb"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/require"
)
@@ -71,27 +71,30 @@ func TestBasic(t *testing.T) {
var acc testutil.Accumulator
require.NoError(t, plugin.Gather(&acc))
require.Len(t, acc.Metrics, 2)
fields := map[string]interface{}{
// JSON will truncate floats to integer representations.
// Since there's no distinction in JSON, we can't assume it's an int.
"i": -1.0,
"f": 0.5,
"b": true,
"s": "string",
}
tags := map[string]string{
"id": "ex1",
"url": fakeServer.URL + "/endpoint",
}
acc.AssertContainsTaggedFields(t, "influxdb_foo", fields, tags)
fields = map[string]interface{}{
"x": "x",
}
tags = map[string]string{
"id": "ex2",
"url": fakeServer.URL + "/endpoint",
}
acc.AssertContainsTaggedFields(t, "influxdb_bar", fields, tags)
require.Len(t, acc.Points, 2)
require.NoError(t, acc.ValidateTaggedFieldsValue(
"foo",
map[string]interface{}{
// JSON will truncate floats to integer representations.
// Since there's no distinction in JSON, we can't assume it's an int.
"i": -1.0,
"f": 0.5,
"b": true,
"s": "string",
},
map[string]string{
"id": "ex1",
"url": fakeServer.URL + "/endpoint",
},
))
require.NoError(t, acc.ValidateTaggedFieldsValue(
"bar",
map[string]interface{}{
"x": "x",
},
map[string]string{
"id": "ex2",
"url": fakeServer.URL + "/endpoint",
},
))
}

View File

@@ -1,39 +0,0 @@
# Example Input Plugin
The example plugin gathers metrics about example things
### Configuration:
```toml
# Description
[[inputs.example]]
# SampleConfig
```
### Measurements & Fields:
<optional description>
- measurement1
- field1 (type, unit)
- field2 (float, percent)
- measurement2
- field3 (integer, bytes)
### Tags:
- All measurements have the following tags:
- tag1 (optional description)
- tag2
- measurement2 has the following tags:
- tag3
### Example Output:
Give an example `-test` output here
```
$ ./telegraf -config telegraf.conf -input-filter example -test
measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455
measurement2,tag1=foo,tag2=bar,tag3=baz field3=1i 1453831884664956455
```

Some files were not shown because too many files have changed in this diff.