Merge pull request #1 from influxdata/master

Update from base
RyanMnM 2016-01-22 11:25:46 +00:00
commit eef13b2983
216 changed files with 9647 additions and 6958 deletions

.gitignore (2 changes)

@ -2,3 +2,5 @@ tivan
.vagrant
/telegraf
.idea
*~
*#

CHANGELOG.md

@ -1,29 +1,104 @@
## v0.10.1 [unreleased]
### Release Notes
- The docker plugin has been significantly overhauled to add more metrics
and allow for docker-machine (incl. OS X) support.
[See the readme](https://github.com/influxdata/telegraf/blob/master/plugins/inputs/docker/README.md)
for the latest measurements, fields, and tags. There is also now support for
specifying a docker endpoint to get metrics from.
### Features
- [#509](https://github.com/influxdata/telegraf/pull/509): Flatten JSON arrays with indices. Thanks @psilva261!
- [#512](https://github.com/influxdata/telegraf/pull/512): Python 3 build script, add lsof dep to package. Thanks @Ormod!
- [#475](https://github.com/influxdata/telegraf/pull/475): Add response time to httpjson plugin. Thanks @titilambert!
- [#519](https://github.com/influxdata/telegraf/pull/519): Added a sensors input based on lm-sensors. Thanks @md14454!
- [#467](https://github.com/influxdata/telegraf/issues/467): Add option to disable statsd measurement name conversion.
- [#534](https://github.com/influxdata/telegraf/pull/534): NSQ input plugin. Thanks @allingeek!
- [#494](https://github.com/influxdata/telegraf/pull/494): Graphite output plugin. Thanks @titilambert!
- AMQP SSL support. Thanks @ekini!
- [#539](https://github.com/influxdata/telegraf/pull/539): Reload config on SIGHUP. Thanks @titilambert!
- [#522](https://github.com/influxdata/telegraf/pull/522): Phusion passenger input plugin. Thanks @kureikain!
- [#541](https://github.com/influxdata/telegraf/pull/541): Kafka output TLS cert support. Thanks @Ormod!
- [#551](https://github.com/influxdata/telegraf/pull/551): Statsd UDP read packet size now defaults to 1500 bytes, and is configurable.
- [#552](https://github.com/influxdata/telegraf/pull/552): Support for collection interval jittering.
- [#484](https://github.com/influxdata/telegraf/issues/484): Include usage percent with procstat metrics.
- [#553](https://github.com/influxdata/telegraf/pull/553): Amazon CloudWatch output. Thanks @skwong2!
- [#503](https://github.com/influxdata/telegraf/pull/503): Support docker endpoint configuration.
- [#563](https://github.com/influxdata/telegraf/pull/563): Docker plugin overhaul.
### Bugfixes
- [#506](https://github.com/influxdata/telegraf/pull/506): Ping input doesn't return a response time metric on timeout. Thanks @titilambert!
- [#508](https://github.com/influxdata/telegraf/pull/508): Fix prometheus cardinality issue with the `net` plugin
- [#499](https://github.com/influxdata/telegraf/issues/499) & [#502](https://github.com/influxdata/telegraf/issues/502): php fpm unix socket and other fixes, thanks @kureikain!
- [#543](https://github.com/influxdata/telegraf/issues/543): Statsd packet size sometimes truncated.
- [#440](https://github.com/influxdata/telegraf/issues/440): Don't query filtered devices for disk stats.
- [#463](https://github.com/influxdata/telegraf/issues/463): Docker plugin not working on AWS Linux
## v0.10.0 [2016-01-12]
### Release Notes
- Linux packages have been taken out of `opt`; the binary is now in `/usr/bin`
and configuration files are in `/etc/telegraf`.
- **breaking change** `plugins` have been renamed to `inputs`. This was done because
`plugins` is too generic, as there are now also "output plugins", and will likely
be "aggregator plugins" and "filter plugins" in the future. Additionally,
`inputs/` and `outputs/` directories have been placed in the root-level `plugins/`
directory.
- **breaking change** the `io` plugin has been renamed `diskio`
- **breaking change** plugin measurements aggregated into a single measurement.
- **breaking change** `jolokia` plugin: must use global tag/drop/pass parameters
for configuration.
- **breaking change** `twemproxy` plugin: `prefix` option removed.
- **breaking change** `procstat` cpu measurements are now prepended with `cpu_time_`
instead of only `cpu_`
- **breaking change** some command-line flags have been renamed to separate words.
`-configdirectory` -> `-config-directory`, `-filter` -> `-input-filter`,
`-outputfilter` -> `-output-filter`
- The prometheus plugin schema has not been changed (measurements have not been
aggregated).
### Packaging change note:
RHEL/CentOS users upgrading from 0.2.x to 0.10.0 will probably have their
configurations overwritten by the upgrade. There is a backup stored at
/etc/telegraf/telegraf.conf.$(date +%s).backup.
### Features
- Plugin measurements aggregated into a single measurement.
- Added ability to specify per-plugin tags
- Added ability to specify per-plugin measurement suffix and prefix.
(`name_prefix` and `name_suffix`)
- Added ability to override base plugin measurement name. (`name_override`)
### Bugfixes
## v0.2.5 [unreleased]
### Features
- [#427](https://github.com/influxdata/telegraf/pull/427): zfs plugin: pool stats added. Thanks @allenpetersen!
- [#428](https://github.com/influxdata/telegraf/pull/428): Amazon Kinesis output. Thanks @jimmystewpot!
- [#449](https://github.com/influxdata/telegraf/pull/449): influxdb plugin, thanks @mark-rushakoff
### Bugfixes
- [#430](https://github.com/influxdata/telegraf/issues/430): Network statistics removed in elasticsearch 2.1. Thanks @jipperinbham!
- [#452](https://github.com/influxdata/telegraf/issues/452): Elasticsearch open file handles error. Thanks @jipperinbham!
## v0.2.4 [2015-12-08]
### Features
- [#412](https://github.com/influxdata/telegraf/pull/412): Additional memcached stats. Thanks @mgresser!
- [#410](https://github.com/influxdata/telegraf/pull/410): Additional redis metrics. Thanks @vlaadbrain!
- [#414](https://github.com/influxdata/telegraf/issues/414): Jolokia plugin auth parameters
- [#415](https://github.com/influxdata/telegraf/issues/415): memcached plugin: support unix sockets
- [#418](https://github.com/influxdata/telegraf/pull/418): memcached plugin additional unit tests.
- [#408](https://github.com/influxdata/telegraf/pull/408): MailChimp plugin.
- [#382](https://github.com/influxdata/telegraf/pull/382): Add system wide network protocol stats to `net` plugin.
- [#401](https://github.com/influxdata/telegraf/pull/401): Support pass/drop/tagpass/tagdrop for outputs. Thanks @oldmantaiter!
### Bugfixes
- [#405](https://github.com/influxdata/telegraf/issues/405): Prometheus output cardinality issue
- [#388](https://github.com/influxdata/telegraf/issues/388): Fix collection hangup when cpu times decrement.
## v0.2.3 [2015-11-30]
@ -38,11 +113,11 @@ functional.
same type can be specified, like this:
```
[[inputs.cpu]]
  percpu = false
  totalcpu = true
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  drop = ["cpu_time"]
@ -52,15 +127,15 @@ same type can be specified, like this:
- Aerospike plugin: tag changed from `host` -> `aerospike_host`
### Features
- [#379](https://github.com/influxdata/telegraf/pull/379): Riemann output, thanks @allenj!
- [#375](https://github.com/influxdata/telegraf/pull/375): kafka_consumer service plugin.
- [#392](https://github.com/influxdata/telegraf/pull/392): Procstat plugin can now accept pgrep -f pattern, thanks @ecarreras!
- [#383](https://github.com/influxdata/telegraf/pull/383): Specify plugins as a list.
- [#354](https://github.com/influxdata/telegraf/pull/354): Add ability to specify multiple metrics in one statsd line. Thanks @MerlinDMC!
### Bugfixes
- [#371](https://github.com/influxdata/telegraf/issues/371): Kafka consumer plugin not functioning.
- [#389](https://github.com/influxdata/telegraf/issues/389): NaN value panic
## v0.2.2 [2015-11-18]
@ -69,7 +144,7 @@ same type can be specified, like this:
lists of servers/URLs. 0.2.2 is being released solely to fix that bug
### Bugfixes
- [#377](https://github.com/influxdata/telegraf/pull/377): Fix for duplicate slices in inputs.
## v0.2.1 [2015-11-16]
@ -86,22 +161,22 @@ changed to just run docker commands in the Makefile. See `make docker-run` and
same type.
### Features
- [#325](https://github.com/influxdata/telegraf/pull/325): NSQ output. Thanks @jrxFive!
- [#318](https://github.com/influxdata/telegraf/pull/318): Prometheus output. Thanks @oldmantaiter!
- [#338](https://github.com/influxdata/telegraf/pull/338): Restart Telegraf on package upgrade. Thanks @linsomniac!
- [#337](https://github.com/influxdata/telegraf/pull/337): Jolokia plugin, thanks @saiello!
- [#350](https://github.com/influxdata/telegraf/pull/350): Amon output.
- [#365](https://github.com/influxdata/telegraf/pull/365): Twemproxy plugin by @codeb2cc
- [#317](https://github.com/influxdata/telegraf/issues/317): ZFS plugin, thanks @cornerot!
- [#364](https://github.com/influxdata/telegraf/pull/364): Support InfluxDB UDP output.
- [#370](https://github.com/influxdata/telegraf/pull/370): Support specifying multiple outputs, as lists.
- [#372](https://github.com/influxdata/telegraf/pull/372): Remove gosigar and update go-dockerclient for FreeBSD support. Thanks @MerlinDMC!
### Bugfixes
- [#331](https://github.com/influxdata/telegraf/pull/331): Don't overwrite host tag in redis plugin.
- [#336](https://github.com/influxdata/telegraf/pull/336): Mongodb plugin should take 2 measurements.
- [#351](https://github.com/influxdata/telegraf/issues/317): Fix continual "CREATE DATABASE" in writes
- [#360](https://github.com/influxdata/telegraf/pull/360): Apply prefix before ShouldPass check. Thanks @sotfo!
## v0.2.0 [2015-10-27]
@ -122,38 +197,38 @@ be controlled via the `round_interval` and `flush_jitter` config options.
- Telegraf will now retry metric flushes twice
### Features
- [#205](https://github.com/influxdata/telegraf/issues/205): Include per-db redis keyspace info
- [#226](https://github.com/influxdata/telegraf/pull/226): Add timestamps to points in Kafka/AMQP outputs. Thanks @ekini
- [#90](https://github.com/influxdata/telegraf/issues/90): Add Docker labels to tags in docker plugin
- [#223](https://github.com/influxdata/telegraf/pull/223): Add port tag to nginx plugin. Thanks @neezgee!
- [#227](https://github.com/influxdata/telegraf/pull/227): Add command intervals to exec plugin. Thanks @jpalay!
- [#241](https://github.com/influxdata/telegraf/pull/241): MQTT Output. Thanks @shirou!
- Memory plugin: cached and buffered measurements re-added
- Logging: additional logging for each collection interval, track the number
of metrics collected and from how many inputs.
- [#240](https://github.com/influxdata/telegraf/pull/240): procstat plugin, thanks @ranjib!
- [#244](https://github.com/influxdata/telegraf/pull/244): netstat plugin, thanks @shirou!
- [#262](https://github.com/influxdata/telegraf/pull/262): zookeeper plugin, thanks @jrxFive!
- [#237](https://github.com/influxdata/telegraf/pull/237): statsd service plugin, thanks @sparrc
- [#273](https://github.com/influxdata/telegraf/pull/273): puppet agent plugin, thanks @jrxFive!
- [#280](https://github.com/influxdata/telegraf/issues/280): Use InfluxDB client v2.
- [#281](https://github.com/influxdata/telegraf/issues/281): Eliminate need to deep copy Batch Points.
- [#286](https://github.com/influxdata/telegraf/issues/286): bcache plugin, thanks @cornerot!
- [#287](https://github.com/influxdata/telegraf/issues/287): Batch AMQP output, thanks @ekini!
- [#301](https://github.com/influxdata/telegraf/issues/301): Collect on even intervals
- [#298](https://github.com/influxdata/telegraf/pull/298): Support retrying output writes
- [#300](https://github.com/influxdata/telegraf/issues/300): aerospike plugin. Thanks @oldmantaiter!
- [#322](https://github.com/influxdata/telegraf/issues/322): Librato output. Thanks @jipperinbham!
### Bugfixes
- [#228](https://github.com/influxdata/telegraf/pull/228): New version of package will replace old one. Thanks @ekini!
- [#232](https://github.com/influxdata/telegraf/pull/232): Fix bashism run during deb package installation. Thanks @yankcrime!
- [#261](https://github.com/influxdata/telegraf/issues/260): RabbitMQ panics if wrong credentials given. Thanks @ekini!
- [#245](https://github.com/influxdata/telegraf/issues/245): Document Exec plugin example. Thanks @ekini!
- [#264](https://github.com/influxdata/telegraf/issues/264): logrotate config file fixes. Thanks @linsomniac!
- [#290](https://github.com/influxdata/telegraf/issues/290): Fix some plugins sending their values as strings.
- [#289](https://github.com/influxdata/telegraf/issues/289): Fix accumulator panic on nil tags.
- [#302](https://github.com/influxdata/telegraf/issues/302): Fix `[tags]` getting applied, thanks @gotyaoi!
## v0.1.9 [2015-09-22]
@ -163,7 +238,7 @@ will still be backwards compatible if only `url` is specified.
- The -test flag will now output two metric collections
- Support for filtering telegraf outputs on the CLI -- Telegraf will now
allow filtering of output sinks on the command-line using the `-outputfilter`
flag, much like how the `-filter` flag works for inputs.
- Support for filtering on config-file creation -- Telegraf now supports
passing input and output filters to the `-sample-config` command. You can now run
`telegraf -sample-config -filter cpu -outputfilter influxdb` to get a config
@ -179,27 +254,27 @@ have been renamed for consistency. Some measurements have also been removed from
re-added in a "verbose" mode if there is demand for it.
### Features
- [#143](https://github.com/influxdata/telegraf/issues/143): InfluxDB clustering support
- [#181](https://github.com/influxdata/telegraf/issues/181): Makefile GOBIN support. Thanks @Vye!
- [#203](https://github.com/influxdata/telegraf/pull/200): AMQP output. Thanks @ekini!
- [#182](https://github.com/influxdata/telegraf/pull/182): OpenTSDB output. Thanks @rplessl!
- [#187](https://github.com/influxdata/telegraf/pull/187): Retry output sink connections on startup.
- [#220](https://github.com/influxdata/telegraf/pull/220): Add port tag to apache plugin. Thanks @neezgee!
- [#217](https://github.com/influxdata/telegraf/pull/217): Add filtering for output sinks
and filtering when specifying a config file.
### Bugfixes
- [#170](https://github.com/influxdata/telegraf/issues/170): Systemd support
- [#175](https://github.com/influxdata/telegraf/issues/175): Set write precision before gathering metrics
- [#178](https://github.com/influxdata/telegraf/issues/178): redis plugin, multiple server thread hang bug
- Fix net plugin on darwin
- [#84](https://github.com/influxdata/telegraf/issues/84): Fix docker plugin on CentOS. Thanks @neezgee!
- [#189](https://github.com/influxdata/telegraf/pull/189): Fix mem_used_perc. Thanks @mced!
- [#192](https://github.com/influxdata/telegraf/issues/192): Increase compatibility of postgresql plugin. Now supports versions 8.1+
- [#203](https://github.com/influxdata/telegraf/issues/203): EL5 rpm support. Thanks @ekini!
- [#206](https://github.com/influxdata/telegraf/issues/206): CPU steal/guest values wrong on linux.
- [#212](https://github.com/influxdata/telegraf/issues/212): Add hashbang to postinstall script. Thanks @ekini!
- [#212](https://github.com/influxdata/telegraf/issues/212): Fix makefile warning. Thanks @ekini!
## v0.1.8 [2015-09-04]
@ -208,106 +283,106 @@ and filtering when specifying a config file.
- Now using Go 1.5 to build telegraf
### Features
- [#150](https://github.com/influxdata/telegraf/pull/150): Add Host Uptime metric to system plugin
- [#158](https://github.com/influxdata/telegraf/pull/158): Apache Plugin. Thanks @KPACHbIuLLIAnO4
- [#159](https://github.com/influxdata/telegraf/pull/159): Use second precision for InfluxDB writes
- [#165](https://github.com/influxdata/telegraf/pull/165): Add additional metrics to mysql plugin. Thanks @nickscript0
- [#162](https://github.com/influxdata/telegraf/pull/162): Write UTC by default, provide option
- [#166](https://github.com/influxdata/telegraf/pull/166): Upload binaries to S3
- [#169](https://github.com/influxdata/telegraf/pull/169): Ping plugin
### Bugfixes
## v0.1.7 [2015-08-28]
### Features
- [#38](https://github.com/influxdata/telegraf/pull/38): Kafka output producer.
- [#133](https://github.com/influxdata/telegraf/pull/133): Add plugin.Gather error logging. Thanks @nickscript0!
- [#136](https://github.com/influxdata/telegraf/issues/136): Add a -usage flag for printing usage of a single plugin.
- [#137](https://github.com/influxdata/telegraf/issues/137): Memcached: fix when a value contains a space
- [#138](https://github.com/influxdata/telegraf/issues/138): MySQL server address tag.
- [#142](https://github.com/influxdata/telegraf/pull/142): Add Description and SampleConfig funcs to output interface
- Indent the toml config file for readability
### Bugfixes
- [#128](https://github.com/influxdata/telegraf/issues/128): system_load measurement missing.
- [#129](https://github.com/influxdata/telegraf/issues/129): Latest pkg url fix.
- [#131](https://github.com/influxdata/telegraf/issues/131): Fix memory reporting on linux & darwin. Thanks @subhachandrachandra!
- [#140](https://github.com/influxdata/telegraf/issues/140): Memory plugin prec->perc typo fix. Thanks @brunoqc!
## v0.1.6 [2015-08-20]
### Features
- [#112](https://github.com/influxdata/telegraf/pull/112): Datadog output. Thanks @jipperinbham!
- [#116](https://github.com/influxdata/telegraf/pull/116): Use godep to vendor all dependencies
- [#120](https://github.com/influxdata/telegraf/pull/120): Httpjson plugin. Thanks @jpalay & @alvaromorales!
### Bugfixes
- [#113](https://github.com/influxdata/telegraf/issues/113): Update README with Telegraf/InfluxDB compatibility
- [#118](https://github.com/influxdata/telegraf/pull/118): Fix for disk usage stats in Windows. Thanks @srfraser!
- [#122](https://github.com/influxdata/telegraf/issues/122): Fix for DiskUsage segv fault. Thanks @srfraser!
- [#126](https://github.com/influxdata/telegraf/issues/126): Nginx plugin not catching net.SplitHostPort error
## v0.1.5 [2015-08-13]
### Features
- [#54](https://github.com/influxdata/telegraf/pull/54): MongoDB plugin. Thanks @jipperinbham!
- [#55](https://github.com/influxdata/telegraf/pull/55): Elasticsearch plugin. Thanks @brocaar!
- [#71](https://github.com/influxdata/telegraf/pull/71): HAProxy plugin. Thanks @kureikain!
- [#72](https://github.com/influxdata/telegraf/pull/72): Adding TokuDB metrics to MySQL. Thanks @vadimtk!
- [#73](https://github.com/influxdata/telegraf/pull/73): RabbitMQ plugin. Thanks @ianunruh!
- [#77](https://github.com/influxdata/telegraf/issues/77): Automatically create database.
- [#79](https://github.com/influxdata/telegraf/pull/56): Nginx plugin. Thanks @codeb2cc!
- [#86](https://github.com/influxdata/telegraf/pull/86): Lustre2 plugin. Thanks @srfraser!
- [#91](https://github.com/influxdata/telegraf/pull/91): Unit testing
- [#92](https://github.com/influxdata/telegraf/pull/92): Exec plugin. Thanks @alvaromorales!
- [#98](https://github.com/influxdata/telegraf/pull/98): LeoFS plugin. Thanks @mocchira!
- [#103](https://github.com/influxdata/telegraf/pull/103): Filter by metric tags. Thanks @srfraser!
- [#106](https://github.com/influxdata/telegraf/pull/106): Options to filter plugins on startup. Thanks @zepouet!
- [#107](https://github.com/influxdata/telegraf/pull/107): Multiple outputs beyond influxdb. Thanks @jipperinbham!
- [#108](https://github.com/influxdata/telegraf/issues/108): Support setting per-CPU and total-CPU gathering.
- [#111](https://github.com/influxdata/telegraf/pull/111): Report CPU Usage in cpu plugin. Thanks @jpalay!
### Bugfixes
- [#85](https://github.com/influxdata/telegraf/pull/85): Fix GetLocalHost testutil function for mac users
- [#89](https://github.com/influxdata/telegraf/pull/89): go fmt fixes
- [#94](https://github.com/influxdata/telegraf/pull/94): Fix for issue #93, explicitly call sarama.v1 -> sarama
- [#101](https://github.com/influxdata/telegraf/issues/101): switch back from master branch if building locally
- [#99](https://github.com/influxdata/telegraf/issues/99): update integer output to new InfluxDB line protocol format
## v0.1.4 [2015-07-09]
### Features
- [#56](https://github.com/influxdata/telegraf/pull/56): Update README for Kafka plugin. Thanks @EmilS!
### Bugfixes
- [#50](https://github.com/influxdata/telegraf/pull/50): Fix init.sh script to use telegraf directory. Thanks @jseriff!
- [#52](https://github.com/influxdata/telegraf/pull/52): Update CHANGELOG to reference updated directory. Thanks @benfb!
## v0.1.3 [2015-07-05]
### Features
- [#35](https://github.com/influxdata/telegraf/pull/35): Add Kafka plugin. Thanks @EmilS!
- [#47](https://github.com/influxdata/telegraf/pull/47): Add RethinkDB plugin. Thanks @jipperinbham!
### Bugfixes
- [#45](https://github.com/influxdata/telegraf/pull/45): Skip disk tags that don't have a value. Thanks @jhofeditz!
- [#43](https://github.com/influxdata/telegraf/pull/43): Fix bug in MySQL plugin. Thanks @marcosnils!
## v0.1.2 [2015-07-01]
### Features
- [#12](https://github.com/influxdata/telegraf/pull/12): Add Linux/ARM to the list of built binaries. Thanks @voxxit!
- [#14](https://github.com/influxdata/telegraf/pull/14): Clarify the S3 buckets that Telegraf is pushed to.
- [#16](https://github.com/influxdata/telegraf/pull/16): Convert Redis to use URI, support Redis AUTH. Thanks @jipperinbham!
- [#21](https://github.com/influxdata/telegraf/pull/21): Add memcached plugin. Thanks @Yukki!
### Bugfixes
- [#13](https://github.com/influxdata/telegraf/pull/13): Fix the packaging script.
- [#19](https://github.com/influxdata/telegraf/pull/19): Add host name to metric tags. Thanks @sherifzain!
- [#20](https://github.com/influxdata/telegraf/pull/20): Fix race condition with accumulator mutex. Thanks @nkatsaros!
- [#23](https://github.com/influxdata/telegraf/pull/23): Change name of folder for packages. Thanks @colinrymer!
- [#32](https://github.com/influxdata/telegraf/pull/32): Fix spelling of memoory -> memory. Thanks @tylernisonoff!
## v0.1.1 [2015-06-19]

CONFIGURATION.md (new file, 199 lines)

@ -0,0 +1,199 @@
# Telegraf Configuration
## Generating a Configuration File
A default Telegraf config file can be generated using the `-sample-config` flag,
like this: `telegraf -sample-config`
To generate a file with specific inputs and outputs, you can use the
`-input-filter` and `-output-filter` flags, like this:
`telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka`
## Telegraf Agent Configuration
Telegraf has a few options you can configure under the `agent` section of the
config.
* **hostname**: The hostname is passed as a tag. By default this will be
the value returned by `hostname` on the machine running Telegraf.
You can override that value here.
* **interval**: How often to gather metrics. Uses a simple number +
unit parser, e.g. "10s" for 10 seconds or "5m" for 5 minutes.
* **debug**: Set to true to gather and send metrics to STDOUT as well as
InfluxDB.
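For example, a minimal `[agent]` section using the options above might look like this (the hostname and interval values are illustrative):
```toml
[agent]
  # override the hostname tag; defaults to the output of `hostname`
  hostname = "web-01"
  # gather metrics every 10 seconds
  interval = "10s"
  # also send gathered metrics to STDOUT
  debug = true
```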
## Input Configuration
There are some options that can be configured per input:
* **name_override**: Override the base name of the measurement.
(Default is the name of the input).
* **name_prefix**: Specifies a prefix to attach to the measurement name.
* **name_suffix**: Specifies a suffix to attach to the measurement name.
* **tags**: A map of tags to apply to a specific input's measurements.
* **interval**: How often to gather this metric. Normal plugins use a single
global interval, but if one particular input should be run less or more often,
you can configure that here.
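For instance, a sketch of a per-input interval override (assuming Telegraf's stock `mem` input; the value is illustrative):
```toml
[[inputs.mem]]
  # gather memory metrics once per minute instead of at the global interval
  interval = "1m"
```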
### Input Filters
There are also filters that can be configured per input:
* **pass**: An array of strings that is used to filter metrics generated by the
current input. Each string in the array is tested as a glob match against field names
and if it matches, the field is emitted.
* **drop**: The inverse of pass, if a field name matches, it is not emitted.
* **tagpass**: tag names and arrays of strings that are used to filter
measurements by the current input. Each string in the array is tested as a glob
match against the value of the tag, and if it matches, the measurement is emitted.
* **tagdrop**: The inverse of tagpass. If a tag matches, the measurement is not
emitted. This is tested on measurements that have passed the tagpass test.
### Input Configuration Examples
This is a full working config that will output CPU data to an InfluxDB instance
at 192.168.59.103:8086, tagging measurements with dc="denver-1". It will output
measurements at a 10s interval and will collect per-cpu data, dropping any
fields which begin with `time_`.
```toml
[tags]
  dc = "denver-1"
[agent]
  interval = "10s"
# OUTPUTS
[[outputs.influxdb]]
  url = "http://192.168.59.103:8086" # required.
  database = "telegraf" # required.
  precision = "s"
# INPUTS
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  # filter all fields beginning with 'time_'
  drop = ["time_*"]
```
### Input Config: tagpass and tagdrop
```toml
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  drop = ["cpu_time"]
  # Don't collect CPU data for cpu6 & cpu7
  [inputs.cpu.tagdrop]
    cpu = [ "cpu6", "cpu7" ]
[[inputs.disk]]
  [inputs.disk.tagpass]
    # tagpass conditions are OR, not AND.
    # If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
    # then the metric passes
    fstype = [ "ext4", "xfs" ]
    # Globs can also be used on the tag values
    path = [ "/opt", "/home*" ]
```
### Input Config: pass and drop
```toml
# Drop all metrics for guest & steal CPU usage
[[inputs.cpu]]
  percpu = false
  totalcpu = true
  drop = ["usage_guest", "usage_steal"]
# Only store inode related metrics for disks
[[inputs.disk]]
  pass = ["inodes*"]
```
### Input Config: prefix, suffix, and override
This plugin will emit measurements with the name `cpu_total`:
```toml
[[inputs.cpu]]
  name_suffix = "_total"
  percpu = false
  totalcpu = true
```
This will emit measurements with the name `foobar`:
```toml
[[inputs.cpu]]
  name_override = "foobar"
  percpu = false
  totalcpu = true
```
### Input Config: tags
This plugin will emit measurements with two additional tags: `tag1=foo` and
`tag2=bar`:
```toml
[[inputs.cpu]]
  percpu = false
  totalcpu = true
  [inputs.cpu.tags]
    tag1 = "foo"
    tag2 = "bar"
```
### Multiple inputs of the same type
Additional inputs (or outputs) of the same type can be specified; just define
more instances in the config file. It is highly recommended that
you use the `name_override`, `name_prefix`, or `name_suffix` config options
to avoid measurement collisions:
```toml
[[inputs.cpu]]
  percpu = false
  totalcpu = true
[[inputs.cpu]]
  percpu = true
  totalcpu = false
  name_override = "percpu_usage"
  drop = ["cpu_time*"]
```
## Output Configuration
Telegraf also supports specifying multiple output sinks to send data to. Configuring
each output sink is different, but examples can be
found by running `telegraf -sample-config`.
Outputs also support the same configurable options as inputs
(pass, drop, tagpass, tagdrop):
```toml
[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf"
  precision = "s"
  # Drop all measurements that start with "aerospike"
  drop = ["aerospike*"]
[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf-aerospike-data"
  precision = "s"
  # Only accept aerospike data:
  pass = ["aerospike*"]
[[outputs.influxdb]]
  urls = [ "http://localhost:8086" ]
  database = "telegraf-cpu0-data"
  precision = "s"
  # Only store measurements where the tag "cpu" matches the value "cpu0"
  [outputs.influxdb.tagpass]
    cpu = ["cpu0"]
```

CONTRIBUTING.md

@ -1,35 +1,48 @@
## Steps for Contributing:
1. [Sign the CLA](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md#sign-the-cla)
1. Write your input or output plugin (see below for details)
1. Add your plugin to `plugins/inputs/all/all.go` or `plugins/outputs/all/all.go`
1. If your plugin requires a new Go package,
[add it](https://github.com/influxdata/telegraf/blob/master/CONTRIBUTING.md#adding-a-dependency)
## Sign the CLA
Before we can merge a pull request, you will need to sign the CLA,
which can be found [on our website](http://influxdb.com/community/cla.html)
## Adding a dependency
Assuming you can already build the project:
1. `go get github.com/sparrc/gdm`
1. `gdm save`
## Input Plugins
This section is for developers who want to create new collection inputs.
Telegraf is entirely plugin driven. This interface allows for operators to
pick and choose what is gathered as well as makes it easy for developers
to create new ways of generating metrics.
Plugin authorship is kept as simple as possible to encourage people to develop
and submit new inputs.
### Input Plugin Guidelines
* A plugin must conform to the `inputs.Input` interface.
* Input Plugins should call `inputs.Add` in their `init` function to register themselves.
See below for a quick example.
* Input Plugins must be added to the
`github.com/influxdata/telegraf/plugins/inputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
plugin can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this plugin does.
### Input interface
```go
type Input interface {
	SampleConfig() string
	Description() string
	Gather(Accumulator) error
@ -52,52 +65,32 @@ type Accumulator interface {
The way that a plugin emits metrics is by interacting with the Accumulator.
The `Add` function takes 3 arguments:
* **measurement**: A string description of the metric. For instance `bytes_read` or `faults`.
* **value**: A value for the metric. This accepts 5 different types of value:
  * **int**: The most common type. All int types are accepted but favor using `int64`.
    Useful for counters, etc.
  * **float**: Favor `float64`, useful for gauges, percentages, etc.
  * **bool**: `true` or `false`, useful to indicate the presence of a state. `light_on`, etc.
  * **string**: Typically used to indicate a message, or some kind of freeform information.
  * **time.Time**: Useful for indicating when a state last occurred, for instance `light_on_since`.
* **tags**: This is a map of strings to strings to describe the where or who
about the metric. For instance, the `net` plugin adds a tag named `"interface"`
set to the name of the network interface, like `"eth0"`.
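As a quick illustration of these three arguments, here is a minimal sketch (the `Example` type, measurement names, values, and tag are invented for illustration and are not part of the Telegraf source):
```go
func (e *Example) Gather(acc inputs.Accumulator) error {
	// tags describe the "where or who" of the metric
	tags := map[string]string{"interface": "eth0"}
	// an int value; favor int64 for counters
	acc.Add("bytes_read", int64(2048), tags)
	// a bool value, indicating the presence of a state
	acc.Add("link_up", true, tags)
	return nil
}
```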
Let's say you've written a plugin that emits metrics about processes on the current host.
### Input Plugin Example
```go
package simple
// simple.go
import "github.com/influxdata/telegraf/plugins/inputs"
type Simple struct {
	Ok bool
@ -111,7 +104,7 @@ func (s *Simple) SampleConfig() string {
return "ok = true # indicate if everything is fine" return "ok = true # indicate if everything is fine"
} }
func (s *Simple) Gather(acc plugins.Accumulator) error { func (s *Simple) Gather(acc inputs.Accumulator) error {
if s.Ok { if s.Ok {
acc.Add("state", "pretty good", nil) acc.Add("state", "pretty good", nil)
} else { } else {
@ -122,19 +115,19 @@ func (s *Simple) Gather(acc plugins.Accumulator) error {
}
func init() {
	inputs.Add("simple", func() inputs.Input { return &Simple{} })
}
```
## Service Input Plugins
This section is for developers who want to create new "service" collection
inputs. A service plugin differs from a regular plugin in that it operates
a background service while Telegraf is running. One example would be the `statsd`
plugin, which operates a statsd server.
Service Input Plugins are substantially more complicated than a regular plugin, as they
will require threads and locks to verify data integrity. Service Input Plugins should
be avoided unless there is no way to create their behavior with a regular plugin.
Their interface is quite similar to a regular plugin, with the addition of `Start()`
@ -143,7 +136,7 @@ and `Stop()` methods.
### Service Plugin Guidelines
* Same as the `Plugin` guidelines, except that they must conform to the
`inputs.ServiceInput` interface.
### Service Plugin interface
@ -157,19 +150,19 @@ type ServicePlugin interface {
}
```
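The interface body is elided from this diff; a rough sketch of its likely shape, assuming it mirrors the `Input` interface plus the `Start()` and `Stop()` methods described above:
```go
type ServiceInput interface {
	SampleConfig() string
	Description() string
	Gather(Accumulator) error
	// Start the background service; return an error if it cannot start
	Start() error
	// Stop the service and clean up
	Stop()
}
```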
## Output Plugins
This section is for developers who want to create a new output sink. Outputs
are created in a similar manner as collection plugins, and their interface has
similar constructs.
### Output Plugin Guidelines
* An output must conform to the `outputs.Output` interface.
* Outputs should call `outputs.Add` in their `init` function to register themselves.
See below for a quick example.
* To be available within Telegraf itself, plugins must add themselves to the
`github.com/influxdata/telegraf/plugins/outputs/all/all.go` file.
* The `SampleConfig` function should return valid toml that describes how the
output can be configured. This is included in `telegraf -sample-config`.
* The `Description` function should say in one line what this output does.
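The `outputs.Output` interface itself is elided from this diff; a rough sketch of its likely shape at this point in Telegraf's history (assuming the InfluxDB client v2 points mentioned in the changelog; not shown in this diff):
```go
type Output interface {
	Connect() error
	Close() error
	Description() string
	SampleConfig() string
	// Write takes a batch of points, as produced by the InfluxDB client v2
	Write(points []*client.Point) error
}
```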
@ -193,7 +186,7 @@ package simpleoutput
// simpleoutput.go
import "github.com/influxdata/telegraf/plugins/outputs"
type Simple struct {
	Ok bool
@ -230,7 +223,7 @@ func init() {
```
## Service Output Plugins
This section is for developers who want to create a new "service" output. A
service output differs from a regular output in that it operates a background service
@ -243,7 +236,7 @@ and `Stop()` methods.
### Service Output Guidelines
* Same as the `Output` guidelines, except that they must conform to the
`output.ServiceOutput` interface.
### Service Output interface
@ -274,7 +267,7 @@ which would take some time to replicate.
To overcome this situation we've decided to use docker containers to provide a
fast and reproducible environment to test those services which require it.
For other situations For other situations
(e.g. https://github.com/influxdata/telegraf/blob/master/plugins/redis/redis_test.go)
a simple mock will suffice.
To execute Telegraf tests follow these simple steps:
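The steps themselves are elided from this diff; based on the `make docker-run` target referenced above, the flow is roughly (a sketch, assuming the repository's Makefile):
```
# start the service containers the integration tests rely on
make docker-run
# run the full test suite
go test ./...
```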

Godeps (54 changes)

@ -1,52 +1,54 @@
 git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git dbd8d5c40a582eb9adacde36b47932b3a3ad0034
-github.com/Shopify/sarama 159e9990b0796511607dd0d7aaa3eb37d1829d16
+github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
-github.com/Sirupsen/logrus 446d1c146faa8ed3f4218f056fcd165f6bcfda81
+github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
 github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
-github.com/armon/go-metrics 06b60999766278efd6d2b5d8418a58c3d5b99e87
+github.com/armon/go-metrics 345426c77237ece5dab0e1605c3e4b35c3f54757
-github.com/aws/aws-sdk-go 999b1591218c36d5050d1ba7266eba956e65965f
+github.com/aws/aws-sdk-go 3ad0b07b44c22c21c734d1094981540b7a11e942
 github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
-github.com/boltdb/bolt b34b35ea8d06bb9ae69d9a349119252e4c1d8ee0
+github.com/boltdb/bolt 6465994716bf6400605746e79224cf1e7ed68725
 github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
-github.com/dancannon/gorethink a124c9663325ed9f7fb669d17c69961b59151e6e
+github.com/dancannon/gorethink ff457cac6a529d9749d841a733d76e8305cba3c8
 github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
-github.com/eapache/go-resiliency f341fb4dca45128e4aa86389fa6a675d55fe25e1
+github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
 github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
-github.com/fsouza/go-dockerclient 7177a9e3543b0891a5d91dbf7051e0f71455c8ef
+github.com/fsouza/go-dockerclient 6fb38e6bb3d544d7eb5b55fd396cd4e6850802d8
-github.com/go-ini/ini 9314fb0ef64171d6a3d0a4fa570dfa33441cba05
+github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
-github.com/go-sql-driver/mysql d512f204a577a4ab037a1816604c48c9c13210be
+github.com/go-sql-driver/mysql 72ea5d0b32a04c67710bf63e97095d82aea5f352
-github.com/gogo/protobuf e492fd34b12d0230755c45aa5fb1e1eea6a84aa9
+github.com/gogo/protobuf c57e439bad574c2e0877ff18d514badcfced004d
-github.com/golang/protobuf 68415e7123da32b07eab49c96d2c4d6158360e9b
+github.com/golang/protobuf 2402d76f3d41f928c7902a765dfc872356dd3aad
 github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
 github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
-github.com/hailocab/go-hostpool 0637eae892be221164aff5fcbccc57171aea6406
+github.com/hailocab/go-hostpool 50839ee41f32bfca8d03a183031aa634b2dc1c64
 github.com/hashicorp/go-msgpack fa3f63826f7c23912c15263591e65d54d080b458
-github.com/hashicorp/raft d136cd15dfb7876fd7c89cad1995bc4f19ceb294
+github.com/hashicorp/raft b95f335efee1992886864389183ebda0c0a5d0f6
 github.com/hashicorp/raft-boltdb d1e82c1ec3f15ee991f7cc7ffd5b67ff6f5bbaee
-github.com/influxdb/influxdb 69a7664f2d4b75aec300b7cbfc7e57c971721f04
+github.com/influxdata/influxdb 0e0f85a0c1fd1788ae4f9145531b02c539cfa5b5
+github.com/influxdb/influxdb 0e0f85a0c1fd1788ae4f9145531b02c539cfa5b5
 github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
-github.com/klauspost/crc32 0aff1ea9c20474c3901672b5b6ead0ac611156de
+github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
-github.com/lib/pq 11fc39a580a008f1f39bb3d11d984fb34ed778d9
+github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
 github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
 github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
 github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
 github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
 github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
-github.com/pborman/uuid cccd189d45f7ac3368a0d127efb7f4d08ae0b655
+github.com/pborman/uuid dee7705ef7b324f27ceb85a121c61f2c2e8ce988
-github.com/pmezard/go-difflib e8554b8641db39598be7f6342874b958f12ae1d4
+github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
 github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
 github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
-github.com/prometheus/common 56b90312e937d43b930f06a59bf0d6a4ae1944bc
+github.com/prometheus/common 0a3005bb37bc411040083a55372e77c405f6464c
 github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
 github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
-github.com/shirou/gopsutil fc932d9090f13a84fb4b3cb8baa124610cab184c
+github.com/shirou/gopsutil 8850f58d7035653e1ab90711481954c8ca1b9813
 github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
 github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
-github.com/stretchr/testify e3a8ff8ce36581f87a15341206f205b1da467059
+github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
 github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
 github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
-golang.org/x/crypto 7b85b097bf7527677d54d3220065e966a0e3b613
+golang.org/x/crypto 3760e016850398b85094c4c99e955b8c3dea5711
-golang.org/x/net 1796f9b8b7178e3c7587dff118d3bb9d37f9b0b3
+golang.org/x/net 72aa00c6241a8013dc9b040abb45f57edbe73945
-gopkg.in/dancannon/gorethink.v1 a124c9663325ed9f7fb669d17c69961b59151e6e
+golang.org/x/text cf4986612c83df6c55578ba198316d1684a9a287
+gopkg.in/dancannon/gorethink.v1 e2cef022d0495329dfb0635991de76efcab5cf50
 gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
-gopkg.in/mgo.v2 e30de8ac9ae3b30df7065f766c71f88bba7d4e49
+gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
 gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4
Makefile
@@ -21,21 +21,8 @@ dev: prepare
 		"-X main.Version=$(VERSION)" \
 		./cmd/telegraf/telegraf.go
-# Build linux 64-bit, 32-bit and arm architectures
-build-linux-bins: prepare
-	GOARCH=amd64 GOOS=linux go build -o telegraf_linux_amd64 \
-		-ldflags "-X main.Version=$(VERSION)" \
-		./cmd/telegraf/telegraf.go
-	GOARCH=386 GOOS=linux go build -o telegraf_linux_386 \
-		-ldflags "-X main.Version=$(VERSION)" \
-		./cmd/telegraf/telegraf.go
-	GOARCH=arm GOOS=linux go build -o telegraf_linux_arm \
-		-ldflags "-X main.Version=$(VERSION)" \
-		./cmd/telegraf/telegraf.go
 # Get dependencies and use gdm to checkout changesets
 prepare:
-	go get ./...
 	go get github.com/sparrc/gdm
 	gdm restore
README.md
@@ -1,30 +1,43 @@
-# Telegraf - A native agent for InfluxDB [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)
+# Telegraf [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)
 Telegraf is an agent written in Go for collecting metrics from the system it's
-running on, or from other services, and writing them into InfluxDB.
+running on, or from other services, and writing them into InfluxDB or other
+[outputs](https://github.com/influxdata/telegraf#supported-output-plugins).
 Design goals are to have a minimal memory footprint with a plugin system so
 that developers in the community can easily add support for collecting metrics
 from well-known services (like Hadoop, Postgres, or Redis) and third-party
 APIs (like Mailchimp, AWS CloudWatch, or Google Analytics).
-We'll eagerly accept pull requests for new plugins and will manage the set of
-plugins that Telegraf supports. See the
-[contributing guide](CONTRIBUTING.md) for instructions on
-writing new plugins.
+New input and output plugins are designed to be easy to contribute:
+we'll eagerly accept pull
+requests and will manage the set of plugins that Telegraf supports.
+See the [contributing guide](CONTRIBUTING.md) for instructions on writing
+new plugins.
 ## Installation:
+NOTE: Telegraf 0.10.x is **not** backwards-compatible with previous versions
+of telegraf, in both the database layout and the configuration file. 0.2.x
+will continue to be supported; see below for download links.
+For more details on the differences between Telegraf 0.2.x and 0.10.x, see
+the [release blog post](https://influxdata.com/blog/announcing-telegraf-0-10-0/).
 ### Linux deb and rpm packages:
 Latest:
+* http://get.influxdb.org/telegraf/telegraf_0.10.0-1_amd64.deb
+* http://get.influxdb.org/telegraf/telegraf-0.10.0-1.x86_64.rpm
+0.2.x:
 * http://get.influxdb.org/telegraf/telegraf_0.2.4_amd64.deb
 * http://get.influxdb.org/telegraf/telegraf-0.2.4-1.x86_64.rpm
 ##### Package instructions:
-* The Telegraf binary is installed in `/opt/telegraf/telegraf`
-* The Telegraf daemon configuration file is in `/etc/opt/telegraf/telegraf.conf`
+* The Telegraf binary is installed in `/usr/bin/telegraf`
+* The Telegraf daemon configuration file is in `/etc/telegraf/telegraf.conf`
 * On sysv systems, the telegraf daemon can be controlled via
 `service telegraf [action]`
 * On systemd systems (such as Ubuntu 15+), the telegraf daemon can be
@@ -33,6 +46,11 @@ controlled via `systemctl [action] telegraf`
 ### Linux binaries:
 Latest:
+* http://get.influxdb.org/telegraf/telegraf-0.10.0_linux_amd64.tar.gz
+* http://get.influxdb.org/telegraf/telegraf-0.10.0_linux_386.tar.gz
+* http://get.influxdb.org/telegraf/telegraf-0.10.0_linux_arm.tar.gz
+0.2.x:
 * http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.2.4.tar.gz
 * http://get.influxdb.org/telegraf/telegraf_linux_386_0.2.4.tar.gz
 * http://get.influxdb.org/telegraf/telegraf_linux_arm_0.2.4.tar.gz
@@ -51,182 +69,83 @@ brew update
 brew install telegraf
 ```
-### Version 0.3.0 Beta
-Version 0.3.0 will introduce many new breaking changes to Telegraf. For starters,
-plugin measurements will be aggregated into fields. This means that there will no
-longer be a `cpu_usage_idle` measurement; there will be a `cpu` measurement with
-a `usage_idle` field.
-There will also be config file changes, meaning that your 0.2.x Telegraf config
-files will no longer work properly. It is recommended that you use the
-`-sample-config` flag to generate a new config file to see what the changes are.
-You can also read the
-[0.3.0 configuration guide](https://github.com/influxdb/telegraf/blob/0.3.0/CONFIGURATION.md)
-to see some of the new features and options available.
-You can read more about the justifications for the aggregated measurements
-[here](https://github.com/influxdb/telegraf/issues/152), and a more detailed
-breakdown of the work [here](https://github.com/influxdb/telegraf/pull/437).
-Once we're closer to a full release, there will be a detailed blog post
-explaining all the changes.
-* http://get.influxdb.org/telegraf/telegraf_0.3.0-beta2_amd64.deb
-* http://get.influxdb.org/telegraf/telegraf-0.3.0_beta2-1.x86_64.rpm
-* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.3.0-beta2.tar.gz
-* http://get.influxdb.org/telegraf/telegraf_linux_386_0.3.0-beta2.tar.gz
-* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.3.0-beta2.tar.gz
 ### From Source:
 Telegraf manages dependencies via [gdm](https://github.com/sparrc/gdm),
 which gets installed via the Makefile
-if you don't have it already. You also must build with golang version 1.4+.
+if you don't have it already. You also must build with golang version 1.5+.
 1. [Install Go](https://golang.org/doc/install)
 2. [Setup your GOPATH](https://golang.org/doc/code.html#GOPATH)
-3. Run `go get github.com/influxdb/telegraf`
-4. Run `cd $GOPATH/src/github.com/influxdb/telegraf`
+3. Run `go get github.com/influxdata/telegraf`
+4. Run `cd $GOPATH/src/github.com/influxdata/telegraf`
 5. Run `make`
 ### How to use it:
-* Run `telegraf -sample-config > telegraf.conf` to create an initial configuration.
-* Or run `telegraf -sample-config -filter cpu:mem -outputfilter influxdb > telegraf.conf`
-to create a config file with only CPU and memory plugins defined, and InfluxDB
-output defined.
-* Edit the configuration to match your needs.
-* Run `telegraf -config telegraf.conf -test` to output one full measurement
-sample to STDOUT. NOTE: you may want to run as the telegraf user if you are using
-the linux packages: `sudo -u telegraf telegraf -config telegraf.conf -test`
-* Run `telegraf -config telegraf.conf` to gather and send metrics to configured outputs.
-* Run `telegraf -config telegraf.conf -filter system:swap`
-to run telegraf with only the system & swap plugins defined in the config.
-## Telegraf Options
-Telegraf has a few options you can configure under the `agent` section of the
-config.
-* **hostname**: The hostname is passed as a tag. By default this will be
-the value returned by `hostname` on the machine running Telegraf.
-You can override that value here.
-* **interval**: How often to gather metrics. Uses a simple number +
-unit parser, e.g. "10s" for 10 seconds or "5m" for 5 minutes.
-* **debug**: Set to true to gather and send metrics to STDOUT as well as
-InfluxDB.
-## Plugin Options
-There are 5 configuration options that are configurable per plugin:
-* **pass**: An array of strings that is used to filter metrics generated by the
-current plugin. Each string in the array is tested as a glob match against metric names,
-and if it matches, the metric is emitted.
-* **drop**: The inverse of pass; if a metric name matches, it is not emitted.
-* **tagpass**: Tag names and arrays of strings that are used to filter metrics by the current plugin. Each string in the array is tested as a glob match against
-the tag name, and if it matches the metric is emitted.
-* **tagdrop**: The inverse of tagpass. If a tag matches, the metric is not emitted.
-This is tested on metrics that have passed the tagpass test.
-* **interval**: How often to gather this metric. Normal plugins use a single
-global interval, but if one particular plugin should be run less or more often,
-you can configure that here.
-### Plugin Configuration Examples
-This is a full working config that will output CPU data to an InfluxDB instance
-at 192.168.59.103:8086, tagging measurements with dc="denver-1". It will output
-measurements at a 10s interval and will collect per-cpu data, dropping any
-measurements which begin with `cpu_time`.
-```toml
-[tags]
-    dc = "denver-1"
-[agent]
-    interval = "10s"
-# OUTPUTS
-[outputs]
-[[outputs.influxdb]]
-    url = "http://192.168.59.103:8086" # required.
-    database = "telegraf" # required.
-    precision = "s"
-# PLUGINS
-[plugins]
-[[plugins.cpu]]
-    percpu = true
-    totalcpu = false
-    drop = ["cpu_time*"]
-```
+```console
+$ telegraf -help
+Telegraf, The plugin-driven server agent for collecting and reporting metrics.
+
+Usage:
+
+  telegraf <flags>
+
+The flags are:
+
+  -config <file>      configuration file to load
+  -test               gather metrics once, print them to stdout, and exit
+  -sample-config      print out full sample configuration to stdout
+  -config-directory   directory containing additional *.conf files
+  -input-filter       filter the input plugins to enable, separator is :
+  -output-filter      filter the output plugins to enable, separator is :
+  -usage              print usage for a plugin, ie, 'telegraf -usage mysql'
+  -debug              print metrics as they're generated to stdout
+  -quiet              run in quiet mode
+  -version            print the version to stdout
+
+Examples:
+
+  # generate a telegraf config file:
+  telegraf -sample-config > telegraf.conf
+
+  # generate config with only cpu input & influxdb output plugins defined
+  telegraf -sample-config -input-filter cpu -output-filter influxdb
+
+  # run a single telegraf collection, outputting metrics to stdout
+  telegraf -config telegraf.conf -test
+
+  # run telegraf with all plugins defined in config file
+  telegraf -config telegraf.conf
+
+  # run telegraf, enabling the cpu & memory input, and influxdb output plugins
+  telegraf -config telegraf.conf -input-filter cpu:mem -output-filter influxdb
+```
-Below is how to configure `tagpass` and `tagdrop` parameters
-```toml
-[plugins]
-[[plugins.cpu]]
-    percpu = true
-    totalcpu = false
-    drop = ["cpu_time"]
-    # Don't collect CPU data for cpu6 & cpu7
-    [plugins.cpu.tagdrop]
-        cpu = [ "cpu6", "cpu7" ]
-[[plugins.disk]]
-    [plugins.disk.tagpass]
-        # tagpass conditions are OR, not AND.
-        # If the (filesystem is ext4 or xfs) OR (the path is /opt or /home)
-        # then the metric passes
-        fstype = [ "ext4", "xfs" ]
-        # Globs can also be used on the tag values
-        path = [ "/opt", "/home*" ]
-```
-Below is how to configure `pass` and `drop` parameters
-```toml
-# Drop all metrics for guest CPU usage
-[[plugins.cpu]]
-    drop = [ "cpu_usage_guest" ]
-# Only store inode related metrics for disks
-[[plugins.disk]]
-    pass = [ "disk_inodes*" ]
-```
-Additional plugins (or outputs) of the same type can be specified;
-just define more instances in the config file:
-```toml
-[[plugins.cpu]]
-    percpu = false
-    totalcpu = true
-[[plugins.cpu]]
-    percpu = true
-    totalcpu = false
-    drop = ["cpu_time*"]
-```
-## Supported Plugins
-**You can view usage instructions for each plugin by running**
-`telegraf -usage <pluginname>`.
-Telegraf currently has support for collecting metrics from:
+## Configuration
+See the [configuration guide](CONFIGURATION.md) for a rundown of the more advanced
+configuration options.
+## Supported Input Plugins
+Telegraf currently has support for collecting metrics from many sources. For
+more information on each, please look at the directory of the same name in
+`plugins/inputs`.
+Currently implemented sources:
 * aerospike
 * apache
 * bcache
 * disque
+* docker
 * elasticsearch
 * exec (generic JSON-emitting executable plugin)
 * haproxy
 * httpjson (generic JSON-emitting http service plugin)
 * influxdb
-* jolokia (remote JMX with JSON over HTTP)
+* jolokia
 * leofs
 * lustre2
 * mailchimp
@@ -234,7 +153,9 @@ Telegraf currently has support for collecting metrics from:
 * mongodb
 * mysql
 * nginx
+* nsq
 * phpfpm
+* phusion passenger
 * ping
 * postgresql
 * procstat
@@ -246,18 +167,17 @@ Telegraf currently has support for collecting metrics from:
 * twemproxy
 * zfs
 * zookeeper
+* sensors
 * system
     * cpu
     * mem
-    * io
     * net
     * netstat
     * disk
+    * diskio
     * swap
-## Supported Service Plugins
-Telegraf can collect metrics via the following services:
+Telegraf can also collect metrics via the following service plugins:
 * statsd
 * kafka_consumer
@@ -265,52 +185,21 @@ Telegraf can collect metrics via the following services:
 We'll be adding support for many more over the coming months. Read on if you
 want to add support for another service or third-party API.
-## Output options
-Telegraf also supports specifying multiple output sinks to send data to.
-Configuring each output sink is different, but examples can be
-found by running `telegraf -sample-config`.
-Outputs also support the same configurable options as plugins
-(pass, drop, tagpass, tagdrop), added in 0.2.4.
-```toml
-[[outputs.influxdb]]
-    urls = [ "http://localhost:8086" ]
-    database = "telegraf"
-    precision = "s"
-    # Drop all measurements that start with "aerospike"
-    drop = ["aerospike*"]
-[[outputs.influxdb]]
-    urls = [ "http://localhost:8086" ]
-    database = "telegraf-aerospike-data"
-    precision = "s"
-    # Only accept aerospike data:
-    pass = ["aerospike*"]
-[[outputs.influxdb]]
-    urls = [ "http://localhost:8086" ]
-    database = "telegraf-cpu0-data"
-    precision = "s"
-    # Only store measurements where the tag "cpu" matches the value "cpu0"
-    [outputs.influxdb.tagpass]
-        cpu = ["cpu0"]
-```
-## Supported Outputs
+## Supported Output Plugins
 * influxdb
-* nsq
-* kafka
-* datadog
-* opentsdb
-* amqp (rabbitmq)
-* mqtt
-* librato
-* prometheus
+* amon
+* amqp
+* aws kinesis
+* aws cloudwatch
+* datadog
+* graphite
+* kafka
+* librato
+* mqtt
+* nsq
+* opentsdb
+* prometheus
 * riemann
 ## Contributing
accumulator.go
@@ -7,9 +7,9 @@ import (
 	"sync"
 	"time"
-	"github.com/influxdb/telegraf/internal/config"
+	"github.com/influxdata/telegraf/internal/config"
-	"github.com/influxdb/influxdb/client/v2"
+	"github.com/influxdata/influxdb/client/v2"
 )
 type Accumulator interface {
@@ -29,12 +29,12 @@
 }
 func NewAccumulator(
-	pluginConfig *config.PluginConfig,
+	inputConfig *config.InputConfig,
 	points chan *client.Point,
 ) Accumulator {
 	acc := accumulator{}
 	acc.points = points
-	acc.pluginConfig = pluginConfig
+	acc.inputConfig = inputConfig
 	return &acc
 }
@@ -47,7 +47,7 @@ type accumulator struct {
 	debug bool
-	pluginConfig *config.PluginConfig
+	inputConfig *config.InputConfig
 	prefix string
 }
@@ -69,31 +69,77 @@ func (ac *accumulator) AddFields(
 	tags map[string]string,
 	t ...time.Time,
 ) {
-	// Validate uint64 and float64 fields
+	if len(fields) == 0 || len(measurement) == 0 {
+		return
+	}
+
+	if !ac.inputConfig.Filter.ShouldTagsPass(tags) {
+		return
+	}
+
+	// Override measurement name if set
+	if len(ac.inputConfig.NameOverride) != 0 {
+		measurement = ac.inputConfig.NameOverride
+	}
+	// Apply measurement prefix and suffix if set
+	if len(ac.inputConfig.MeasurementPrefix) != 0 {
+		measurement = ac.inputConfig.MeasurementPrefix + measurement
+	}
+	if len(ac.inputConfig.MeasurementSuffix) != 0 {
+		measurement = measurement + ac.inputConfig.MeasurementSuffix
+	}
+
+	if tags == nil {
+		tags = make(map[string]string)
+	}
+	// Apply plugin-wide tags if set
+	for k, v := range ac.inputConfig.Tags {
+		if _, ok := tags[k]; !ok {
+			tags[k] = v
+		}
+	}
+	// Apply daemon-wide tags if set
+	for k, v := range ac.defaultTags {
+		if _, ok := tags[k]; !ok {
+			tags[k] = v
+		}
+	}
+
+	result := make(map[string]interface{})
 	for k, v := range fields {
+		// Filter out any filtered fields
+		if ac.inputConfig != nil {
+			if !ac.inputConfig.Filter.ShouldPass(k) {
+				continue
+			}
+		}
+		result[k] = v
+
+		// Validate uint64 and float64 fields
 		switch val := v.(type) {
 		case uint64:
 			// InfluxDB does not support writing uint64
 			if val < uint64(9223372036854775808) {
-				fields[k] = int64(val)
+				result[k] = int64(val)
 			} else {
-				fields[k] = int64(9223372036854775807)
+				result[k] = int64(9223372036854775807)
 			}
 		case float64:
 			// NaNs are invalid values in influxdb, skip measurement
 			if math.IsNaN(val) || math.IsInf(val, 0) {
 				if ac.debug {
-					log.Printf("Measurement [%s] has a NaN or Inf field, skipping",
-						measurement)
+					log.Printf("Measurement [%s] field [%s] has a NaN or Inf "+
+						"field, skipping",
+						measurement, k)
 				}
-				return
+				continue
 			}
 		}
 	}
-	if tags == nil {
-		tags = make(map[string]string)
-	}
+	fields = nil
+	if len(result) == 0 {
+		return
+	}
 	var timestamp time.Time
 	if len(t) > 0 {
@@ -106,19 +152,7 @@ func (ac *accumulator) AddFields(
 		measurement = ac.prefix + measurement
 	}
-	if ac.pluginConfig != nil {
-		if !ac.pluginConfig.Filter.ShouldPass(measurement) || !ac.pluginConfig.Filter.ShouldTagsPass(tags) {
-			return
-		}
-	}
-	for k, v := range ac.defaultTags {
-		if _, ok := tags[k]; !ok {
-			tags[k] = v
-		}
-	}
-	pt, err := client.NewPoint(measurement, tags, fields, timestamp)
+	pt, err := client.NewPoint(measurement, tags, result, timestamp)
 	if err != nil {
 		log.Printf("Error adding point [%s]: %s\n", measurement, err.Error())
 		return
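The per-field rules introduced above are easier to see in isolation. Here is a standalone sketch of the same clamping and NaN/Inf handling; `sanitizeField` is a name invented for this example, not a function in the tree:

```go
package main

import (
	"fmt"
	"math"
)

// sanitizeField mirrors the per-field rules AddFields now applies:
// uint64 values are clamped into int64 range (InfluxDB cannot store
// uint64), and NaN/Inf floats are rejected outright.
func sanitizeField(v interface{}) (interface{}, bool) {
	switch val := v.(type) {
	case uint64:
		if val < uint64(9223372036854775808) {
			return int64(val), true
		}
		return int64(9223372036854775807), true
	case float64:
		if math.IsNaN(val) || math.IsInf(val, 0) {
			return nil, false
		}
	}
	return v, true
}

func main() {
	fmt.Println(sanitizeField(uint64(18446744073709551615))) // 9223372036854775807 true
	fmt.Println(sanitizeField(math.NaN()))                   // <nil> false
	fmt.Println(sanitizeField(3.14))                         // 3.14 true
}
```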
agent.go
@@ -1,19 +1,20 @@
 package telegraf
 import (
-	"crypto/rand"
+	cryptorand "crypto/rand"
 	"fmt"
 	"log"
 	"math/big"
+	"math/rand"
 	"os"
 	"sync"
 	"time"
-	"github.com/influxdb/telegraf/internal/config"
-	"github.com/influxdb/telegraf/outputs"
-	"github.com/influxdb/telegraf/plugins"
-	"github.com/influxdb/influxdb/client/v2"
+	"github.com/influxdata/telegraf/internal/config"
+	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/plugins/outputs"
+	"github.com/influxdata/influxdb/client/v2"
 )
 // Agent runs telegraf and collects data based on the given config
@@ -58,7 +59,7 @@ func (a *Agent) Connect() error {
 		}
 		err := o.Output.Connect()
 		if err != nil {
-			log.Printf("Failed to connect to output %s, retrying in 15s\n", o.Name)
+			log.Printf("Failed to connect to output %s, retrying in 15s, error was '%s'\n", o.Name, err)
 			time.Sleep(15 * time.Second)
 			err = o.Output.Connect()
 			if err != nil {
@@ -85,33 +86,44 @@ func (a *Agent) Close() error {
 	return err
 }
-// gatherParallel runs the plugins that are using the same reporting interval
+// gatherParallel runs the inputs that are using the same reporting interval
 // as the telegraf agent.
 func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
 	var wg sync.WaitGroup
 	start := time.Now()
 	counter := 0
-	for _, plugin := range a.Config.Plugins {
-		if plugin.Config.Interval != 0 {
+	jitter := a.Config.Agent.CollectionJitter.Duration.Nanoseconds()
+	for _, input := range a.Config.Inputs {
+		if input.Config.Interval != 0 {
 			continue
 		}
 		wg.Add(1)
 		counter++
-		go func(plugin *config.RunningPlugin) {
+		go func(input *config.RunningInput) {
 			defer wg.Done()
-			acc := NewAccumulator(plugin.Config, pointChan)
+			acc := NewAccumulator(input.Config, pointChan)
 			acc.SetDebug(a.Config.Agent.Debug)
-			acc.SetPrefix(plugin.Name + "_")
 			acc.SetDefaultTags(a.Config.Tags)
-			if err := plugin.Plugin.Gather(acc); err != nil {
-				log.Printf("Error in plugin [%s]: %s", plugin.Name, err)
+			if jitter != 0 {
+				nanoSleep := rand.Int63n(jitter)
+				d, err := time.ParseDuration(fmt.Sprintf("%dns", nanoSleep))
+				if err != nil {
+					log.Printf("Jittering collection interval failed for plugin %s",
+						input.Name)
+				} else {
+					time.Sleep(d)
+				}
 			}
-		}(plugin)
+			if err := input.Input.Gather(acc); err != nil {
+				log.Printf("Error in input [%s]: %s", input.Name, err)
+			}
+		}(input)
 	}
 	if counter == 0 {
@@ -121,36 +133,39 @@ func (a *Agent) gatherParallel(pointChan chan *client.Point) error {
 	wg.Wait()
 	elapsed := time.Since(start)
-	log.Printf("Gathered metrics, (%s interval), from %d plugins in %s\n",
-		a.Config.Agent.Interval, counter, elapsed)
+	if !a.Config.Agent.Quiet {
+		log.Printf("Gathered metrics, (%s interval), from %d inputs in %s\n",
+			a.Config.Agent.Interval.Duration, counter, elapsed)
+	}
 	return nil
 }
-// gatherSeparate runs the plugins that have been configured with their own
+// gatherSeparate runs the inputs that have been configured with their own
 // reporting interval.
 func (a *Agent) gatherSeparate(
 	shutdown chan struct{},
-	plugin *config.RunningPlugin,
+	input *config.RunningInput,
 	pointChan chan *client.Point,
 ) error {
-	ticker := time.NewTicker(plugin.Config.Interval)
+	ticker := time.NewTicker(input.Config.Interval)
 	for {
 		var outerr error
 		start := time.Now()
-		acc := NewAccumulator(plugin.Config, pointChan)
+		acc := NewAccumulator(input.Config, pointChan)
 		acc.SetDebug(a.Config.Agent.Debug)
-		acc.SetPrefix(plugin.Name + "_")
 		acc.SetDefaultTags(a.Config.Tags)
-		if err := plugin.Plugin.Gather(acc); err != nil {
-			log.Printf("Error in plugin [%s]: %s", plugin.Name, err)
+		if err := input.Input.Gather(acc); err != nil {
+			log.Printf("Error in input [%s]: %s", input.Name, err)
 		}
 		elapsed := time.Since(start)
-		log.Printf("Gathered metrics, (separate %s interval), from %s in %s\n",
-			plugin.Config.Interval, plugin.Name, elapsed)
+		if !a.Config.Agent.Quiet {
+			log.Printf("Gathered metrics, (separate %s interval), from %s in %s\n",
+				input.Config.Interval, input.Name, elapsed)
+		}
 		if outerr != nil {
 			return outerr
@@ -165,7 +180,7 @@ func (a *Agent) gatherSeparate(
 		}
 	}
-// Test verifies that we can 'Gather' from all plugins with their configured
+// Test verifies that we can 'Gather' from all inputs with their configured
 // Config struct
 func (a *Agent) Test() error {
 	shutdown := make(chan struct{})
@@ -184,27 +199,27 @@ func (a *Agent) Test() error {
 		}
 	}()
-	for _, plugin := range a.Config.Plugins {
-		acc := NewAccumulator(plugin.Config, pointChan)
+	for _, input := range a.Config.Inputs {
+		acc := NewAccumulator(input.Config, pointChan)
 		acc.SetDebug(true)
-		acc.SetPrefix(plugin.Name + "_")
+		// acc.SetPrefix(input.Name + "_")
-		fmt.Printf("* Plugin: %s, Collection 1\n", plugin.Name)
-		if plugin.Config.Interval != 0 {
-			fmt.Printf("* Internal: %s\n", plugin.Config.Interval)
+		fmt.Printf("* Plugin: %s, Collection 1\n", input.Name)
+		if input.Config.Interval != 0 {
+			fmt.Printf("* Interval: %s\n", input.Config.Interval)
 		}
-		if err := plugin.Plugin.Gather(acc); err != nil {
+		if err := input.Input.Gather(acc); err != nil {
 			return err
 		}
-		// Special instructions for some plugins. cpu, for example, needs to be
+		// Special instructions for some inputs. cpu, for example, needs to be
 		// run twice in order to return cpu usage percentages.
-		switch plugin.Name {
-		case "cpu", "mongodb":
+		switch input.Name {
+		case "cpu", "mongodb", "procstat":
 			time.Sleep(500 * time.Millisecond)
-			fmt.Printf("* Plugin: %s, Collection 2\n", plugin.Name)
-			if err := plugin.Plugin.Gather(acc); err != nil {
+			fmt.Printf("* Plugin: %s, Collection 2\n", input.Name)
+			if err := input.Input.Gather(acc); err != nil {
 				return err
 			}
 		}
@@ -235,8 +250,10 @@ func (a *Agent) writeOutput(
 	if err == nil {
 		// Write successful
 		elapsed := time.Since(start)
-		log.Printf("Flushed %d metrics to output %s in %s\n",
-			len(filtered), ro.Name, elapsed)
+		if !a.Config.Agent.Quiet {
+			log.Printf("Flushed %d metrics to output %s in %s\n",
+				len(filtered), ro.Name, elapsed)
+		}
 		return
 	}
@@ -309,7 +326,7 @@ func jitterInterval(ininterval, injitter time.Duration) time.Duration {
 	outinterval := ininterval
 	if injitter.Nanoseconds() != 0 {
 		maxjitter := big.NewInt(injitter.Nanoseconds())
-		if j, err := rand.Int(rand.Reader, maxjitter); err == nil {
+		if j, err := cryptorand.Int(cryptorand.Reader, maxjitter); err == nil {
 			jitter = j.Int64()
 		}
 		outinterval = time.Duration(jitter + ininterval.Nanoseconds())
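A standalone sketch shows what this function produces; the body mirrors the diff above, so each call returns the base interval plus a random offset below the jitter bound, which keeps a fleet of agents from flushing at the same instant:

```go
package main

import (
	cryptorand "crypto/rand"
	"fmt"
	"math/big"
	"time"
)

// jitterInterval returns ininterval plus a random duration in [0, injitter),
// as in the agent code above.
func jitterInterval(ininterval, injitter time.Duration) time.Duration {
	var jitter int64
	outinterval := ininterval
	if injitter.Nanoseconds() != 0 {
		maxjitter := big.NewInt(injitter.Nanoseconds())
		if j, err := cryptorand.Int(cryptorand.Reader, maxjitter); err == nil {
			jitter = j.Int64()
		}
		outinterval = time.Duration(jitter + ininterval.Nanoseconds())
	}
	return outinterval
}

func main() {
	// Prints a duration somewhere in [10s, 15s).
	fmt.Println(jitterInterval(10*time.Second, 5*time.Second))
}
```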
@@ -327,15 +344,16 @@ func jitterInterval(ininterval, injitter time.Duration) time.Duration {
 func (a *Agent) Run(shutdown chan struct{}) error {
 	var wg sync.WaitGroup
-	a.Config.Agent.FlushInterval.Duration = jitterInterval(a.Config.Agent.FlushInterval.Duration,
+	a.Config.Agent.FlushInterval.Duration = jitterInterval(
+		a.Config.Agent.FlushInterval.Duration,
 		a.Config.Agent.FlushJitter.Duration)
-	log.Printf("Agent Config: Interval:%s, Debug:%#v, Hostname:%#v, "+
-		"Flush Interval:%s\n",
-		a.Config.Agent.Interval, a.Config.Agent.Debug,
-		a.Config.Agent.Hostname, a.Config.Agent.FlushInterval)
+	log.Printf("Agent Config: Interval:%s, Debug:%#v, Quiet:%#v, Hostname:%#v, "+
+		"Flush Interval:%s\n",
+		a.Config.Agent.Interval.Duration, a.Config.Agent.Debug, a.Config.Agent.Quiet,
+		a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)
-	// channel shared between all plugin threads for accumulating points
+	// channel shared between all input threads for accumulating points
 	pointChan := make(chan *client.Point, 1000)
 	// Round collection to nearest interval by sleeping
@@ -354,29 +372,29 @@ func (a *Agent) Run(shutdown chan struct{}) error {
 	}()
-	for _, plugin := range a.Config.Plugins {
+	for _, input := range a.Config.Inputs {
 		// Start service of any ServicePlugins
-		switch p := plugin.Plugin.(type) {
-		case plugins.ServicePlugin:
+		switch p := input.Input.(type) {
+		case inputs.ServiceInput:
 			if err := p.Start(); err != nil {
-				log.Printf("Service for plugin %s failed to start, exiting\n%s\n",
-					plugin.Name, err.Error())
+				log.Printf("Service for input %s failed to start, exiting\n%s\n",
+					input.Name, err.Error())
 				return err
 			}
 			defer p.Stop()
 		}
-		// Special handling for plugins that have their own collection interval
+		// Special handling for inputs that have their own collection interval
 		// configured. Default intervals are handled below with gatherParallel
-		if plugin.Config.Interval != 0 {
+		if input.Config.Interval != 0 {
 			wg.Add(1)
-			go func(plugin *config.RunningPlugin) {
+			go func(input *config.RunningInput) {
 				defer wg.Done()
-				if err := a.gatherSeparate(shutdown, plugin, pointChan); err != nil {
+				if err := a.gatherSeparate(shutdown, input, pointChan); err != nil {
 					log.Printf(err.Error())
 				}
-			}(plugin)
+			}(input)
 		}
 	}
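For reference, here is a rough sketch of the `inputs.ServiceInput` shape the agent starts and stops above. The buffered-listener structure is illustrative (statsd-style), not code from the tree; a real implementation would also carry the usual input methods such as `Gather`:

```go
// A service input runs its own listener between Start() and Stop(),
// and Gather() drains whatever the listener has buffered since the
// last collection interval.
type SimpleListener struct {
	lines chan string
	done  chan struct{}
}

func (s *SimpleListener) Start() error {
	s.lines = make(chan string, 1000)
	s.done = make(chan struct{})
	go func() {
		// read packets here, pushing parsed lines into s.lines,
		// until Stop() closes s.done
		<-s.done
	}()
	return nil
}

func (s *SimpleListener) Stop() { close(s.done) }
```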
agent_test.go
@@ -5,80 +5,99 @@ import (
 	"testing"
 	"time"
-	"github.com/influxdb/telegraf/internal/config"
+	"github.com/influxdata/telegraf/internal/config"
 	// needing to load the plugins
-	_ "github.com/influxdb/telegraf/plugins/all"
+	_ "github.com/influxdata/telegraf/plugins/inputs/all"
 	// needing to load the outputs
-	_ "github.com/influxdb/telegraf/outputs/all"
+	_ "github.com/influxdata/telegraf/plugins/outputs/all"
 )
 func TestAgent_LoadPlugin(t *testing.T) {
 	c := config.NewConfig()
-	c.PluginFilters = []string{"mysql"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	c.InputFilters = []string{"mysql"}
+	err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ := NewAgent(c)
-	assert.Equal(t, 1, len(a.Config.Plugins))
+	assert.Equal(t, 1, len(a.Config.Inputs))
 	c = config.NewConfig()
-	c.PluginFilters = []string{"foo"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	c.InputFilters = []string{"foo"}
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ = NewAgent(c)
-	assert.Equal(t, 0, len(a.Config.Plugins))
+	assert.Equal(t, 0, len(a.Config.Inputs))
 	c = config.NewConfig()
-	c.PluginFilters = []string{"mysql", "foo"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	c.InputFilters = []string{"mysql", "foo"}
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ = NewAgent(c)
-	assert.Equal(t, 1, len(a.Config.Plugins))
+	assert.Equal(t, 1, len(a.Config.Inputs))
 	c = config.NewConfig()
-	c.PluginFilters = []string{"mysql", "redis"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	c.InputFilters = []string{"mysql", "redis"}
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ = NewAgent(c)
-	assert.Equal(t, 2, len(a.Config.Plugins))
+	assert.Equal(t, 2, len(a.Config.Inputs))
 	c = config.NewConfig()
-	c.PluginFilters = []string{"mysql", "foo", "redis", "bar"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	c.InputFilters = []string{"mysql", "foo", "redis", "bar"}
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ = NewAgent(c)
-	assert.Equal(t, 2, len(a.Config.Plugins))
+	assert.Equal(t, 2, len(a.Config.Inputs))
 }
 func TestAgent_LoadOutput(t *testing.T) {
 	c := config.NewConfig()
 	c.OutputFilters = []string{"influxdb"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err := c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ := NewAgent(c)
 	assert.Equal(t, 2, len(a.Config.Outputs))
+	c = config.NewConfig()
+	c.OutputFilters = []string{"kafka"}
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
+	a, _ = NewAgent(c)
+	assert.Equal(t, 1, len(a.Config.Outputs))
 	c = config.NewConfig()
 	c.OutputFilters = []string{}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 3, len(a.Config.Outputs))
 	c = config.NewConfig()
 	c.OutputFilters = []string{"foo"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 0, len(a.Config.Outputs))
 	c = config.NewConfig()
 	c.OutputFilters = []string{"influxdb", "foo"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 2, len(a.Config.Outputs))
 	c = config.NewConfig()
 	c.OutputFilters = []string{"influxdb", "kafka"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
+	assert.Equal(t, 3, len(c.Outputs))
 	a, _ = NewAgent(c)
 	assert.Equal(t, 3, len(a.Config.Outputs))
 	c = config.NewConfig()
 	c.OutputFilters = []string{"influxdb", "foo", "kafka", "bar"}
-	c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	err = c.LoadConfig("./internal/config/testdata/telegraf-agent.toml")
+	assert.NoError(t, err)
 	a, _ = NewAgent(c)
 	assert.Equal(t, 3, len(a.Config.Outputs))
 }
build.py (new executable file)
@@ -0,0 +1,674 @@
#!/usr/bin/env python2.7
#
# This is the Telegraf build script.
#
# Current caveats:
# - Does not checkout the correct commit/branch (for now, you will need to do so manually)
# - Has external dependencies for packaging (fpm) and uploading (boto)
#
import sys
import os
import subprocess
import time
import datetime
import shutil
import tempfile
import hashlib
import re
try:
import boto
from boto.s3.key import Key
except ImportError:
pass
# PACKAGING VARIABLES
INSTALL_ROOT_DIR = "/usr/bin"
LOG_DIR = "/var/log/telegraf"
SCRIPT_DIR = "/usr/lib/telegraf/scripts"
CONFIG_DIR = "/etc/telegraf"
LOGROTATE_DIR = "/etc/logrotate.d"
INIT_SCRIPT = "scripts/init.sh"
SYSTEMD_SCRIPT = "scripts/telegraf.service"
LOGROTATE_SCRIPT = "etc/logrotate.d/telegraf"
DEFAULT_CONFIG = "etc/telegraf.conf"
POSTINST_SCRIPT = "scripts/post-install.sh"
PREINST_SCRIPT = "scripts/pre-install.sh"
# META-PACKAGE VARIABLES
PACKAGE_LICENSE = "MIT"
PACKAGE_URL = "https://github.com/influxdata/telegraf"
MAINTAINER = "support@influxdb.com"
VENDOR = "InfluxData"
DESCRIPTION = "Plugin-driven server agent for reporting metrics into InfluxDB."
# SCRIPT START
prereqs = [ 'git', 'go' ]
optional_prereqs = [ 'gvm', 'fpm', 'rpmbuild' ]
fpm_common_args = "-f -s dir --log error \
--vendor {} \
--url {} \
--license {} \
--maintainer {} \
--config-files {} \
--config-files {} \
--after-install {} \
--before-install {} \
--description \"{}\"".format(
VENDOR,
PACKAGE_URL,
PACKAGE_LICENSE,
MAINTAINER,
CONFIG_DIR + '/telegraf.conf',
LOGROTATE_DIR + '/telegraf',
POSTINST_SCRIPT,
PREINST_SCRIPT,
DESCRIPTION)
targets = {
'telegraf' : './cmd/telegraf/telegraf.go',
}
supported_builds = {
# TODO(rossmcdonald): Add support for multiple GOARM values
'darwin': [ "amd64", "386" ],
# 'windows': [ "amd64", "386", "arm", "arm64" ],
'linux': [ "amd64", "386", "arm" ]
}
supported_go = [ '1.5.1' ]
supported_packages = {
"darwin": [ "tar", "zip" ],
"linux": [ "deb", "rpm", "tar", "zip" ],
"windows": [ "tar", "zip" ],
}
def run(command, allow_failure=False, shell=False):
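# Run 'command' and return its combined stdout/stderr as a string; on
# failure, either continue (allow_failure=True) or report the error and
# exit the script.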
out = None
try:
if shell:
out = subprocess.check_output(command, stderr=subprocess.STDOUT, shell=shell)
else:
out = subprocess.check_output(command.split(), stderr=subprocess.STDOUT)
out = out.decode("utf8")
except subprocess.CalledProcessError as e:
print("")
print("")
print("Executed command failed!")
print("-- Command run was: {}".format(command))
print("-- Failure was: {}".format(e.output))
if allow_failure:
print("Continuing...")
return None
else:
print("")
print("Stopping.")
sys.exit(1)
except OSError as e:
print("")
print("")
print("Invalid command!")
print("-- Command run was: {}".format(command))
print("-- Failure was: {}".format(e))
if allow_failure:
print("Continuing...")
return out
else:
print("")
print("Stopping.")
sys.exit(1)
else:
return out
def create_temp_dir():
return tempfile.mkdtemp(prefix="telegraf-build.")
def get_current_version():
command = "git describe --always --tags --abbrev=0"
out = run(command)
return out.strip()
def get_current_commit(short=False):
command = None
if short:
command = "git log --pretty=format:'%h' -n 1"
else:
command = "git rev-parse HEAD"
out = run(command)
return out.strip('\'\n\r ')
def get_current_branch():
command = "git rev-parse --abbrev-ref HEAD"
out = run(command)
return out.strip()
def get_system_arch():
arch = os.uname()[4]
if arch == "x86_64":
arch = "amd64"
return arch
def get_system_platform():
if sys.platform.startswith("linux"):
return "linux"
else:
return sys.platform
def get_go_version():
out = run("go version")
matches = re.search(r'go version go(\S+)', out)
if matches is not None:
return matches.groups()[0].strip()
return None
def check_path_for(b):
def is_exe(fpath):
return os.path.isfile(fpath) and os.access(fpath, os.X_OK)
for path in os.environ["PATH"].split(os.pathsep):
path = path.strip('"')
full_path = os.path.join(path, b)
if is_exe(full_path):
return full_path
def check_environ(build_dir = None):
print("\nChecking environment:")
for v in [ "GOPATH", "GOBIN", "GOROOT" ]:
print("\t- {} -> {}".format(v, os.environ.get(v)))
cwd = os.getcwd()
if build_dir is None and os.environ.get("GOPATH") and os.environ.get("GOPATH") not in cwd:
print("\n!! WARNING: Your current directory is not under your GOPATH. This may lead to build failures.")
def check_prereqs():
print("\nChecking for dependencies:")
for req in prereqs:
print("\t- {} ->".format(req),)
path = check_path_for(req)
if path:
print("{}".format(path))
else:
print("?")
for req in optional_prereqs:
print("\t- {} (optional) ->".format(req))
path = check_path_for(req)
if path:
print("{}".format(path))
else:
print("?")
print("")
def upload_packages(packages, nightly=False):
print("Uploading packages to S3...")
print("")
c = boto.connect_s3()
# TODO(rossmcdonald) - Set to different S3 bucket for release vs nightly
bucket = c.get_bucket('get.influxdb.org')
for p in packages:
name = os.path.join('telegraf', os.path.basename(p))
if bucket.get_key(name) is None or nightly:
print("\t - Uploading {}...".format(name))
k = Key(bucket)
k.key = name
if nightly:
n = k.set_contents_from_filename(p, replace=True)
else:
n = k.set_contents_from_filename(p, replace=False)
k.make_public()
print("[ DONE ]")
else:
print("\t - Not uploading {}, already exists.".format(p))
print("")
def run_tests(race, parallel, timeout, no_vet):
get_command = "go get -d -t ./..."
print("Retrieving Go dependencies...")
sys.stdout.flush()
run(get_command)
print("done.")
print("Running tests:")
print("\tRace: ", race)
if parallel is not None:
print("\tParallel:", parallel)
if timeout is not None:
print("\tTimeout:", timeout)
sys.stdout.flush()
p = subprocess.Popen(["go", "fmt", "./..."], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if len(out) > 0 or len(err) > 0:
print("Code not formatted. Please use 'go fmt ./...' to fix formatting errors.")
print(out)
print(err)
return False
if not no_vet:
p = subprocess.Popen(["go", "tool", "vet", "-composites=false", "./"], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if len(out) > 0 or len(err) > 0:
print("Go vet failed. Please run 'go vet ./...' and fix any errors.")
print(out)
print(err)
return False
else:
print("Skipping go vet ...")
sys.stdout.flush()
test_command = "go test -v"
if race:
test_command += " -race"
if parallel is not None:
test_command += " -parallel {}".format(parallel)
if timeout is not None:
test_command += " -timeout {}".format(timeout)
test_command += " ./..."
code = os.system(test_command)
if code != 0:
print("Tests Failed")
return False
else:
print("Tests Passed")
return True
def build(version=None,
branch=None,
commit=None,
platform=None,
arch=None,
nightly=False,
rc=None,
race=False,
clean=False,
outdir=".",
goarm_version="6"):
print("-------------------------")
print("")
print("Build plan:")
print("\t- version: {}".format(version))
if rc:
print("\t- release candidate: {}".format(rc))
print("\t- commit: {}".format(commit))
print("\t- branch: {}".format(branch))
print("\t- platform: {}".format(platform))
print("\t- arch: {}".format(arch))
if arch == 'arm' and goarm_version:
print("\t- ARM version: {}".format(goarm_version))
print("\t- nightly? {}".format(str(nightly).lower()))
print("\t- race enabled? {}".format(str(race).lower()))
print("")
if not os.path.exists(outdir):
os.makedirs(outdir)
elif clean and outdir != '/':
print("Cleaning build directory...")
shutil.rmtree(outdir)
os.makedirs(outdir)
if rc:
# If a release candidate, update the version information accordingly
version = "{}rc{}".format(version, rc)
print("Starting build...")
for b, c in targets.items():
print("\t- Building '{}'...".format(os.path.join(outdir, b)),)
build_command = ""
build_command += "GOOS={} GOARCH={} ".format(platform, arch)
if arch == "arm" and goarm_version:
if goarm_version not in ["5", "6", "7", "arm64"]:
print("!! Invalid ARM build version: {}".format(goarm_version))
build_command += "GOARM={} ".format(goarm_version)
build_command += "go build -o {} ".format(os.path.join(outdir, b))
if race:
build_command += "-race "
go_version = get_go_version()
if "1.4" in go_version:
build_command += "-ldflags=\"-X main.buildTime '{}' ".format(datetime.datetime.utcnow().isoformat())
build_command += "-X main.Version {} ".format(version)
build_command += "-X main.Branch {} ".format(branch)
build_command += "-X main.Commit {}\" ".format(get_current_commit())
else:
build_command += "-ldflags=\"-X main.buildTime='{}' ".format(datetime.datetime.utcnow().isoformat())
build_command += "-X main.Version={} ".format(version)
build_command += "-X main.Branch={} ".format(branch)
build_command += "-X main.Commit={}\" ".format(get_current_commit())
build_command += c
run(build_command, shell=True)
print("[ DONE ]")
print("")
def create_dir(path):
try:
os.makedirs(path)
except OSError as e:
print(e)
def rename_file(fr, to):
try:
os.rename(fr, to)
except OSError as e:
print(e)
# Return the original filename
return fr
else:
# Return the new filename
return to
def copy_file(fr, to):
try:
shutil.copy(fr, to)
except OSError as e:
print(e)
def create_package_fs(build_root):
print("\t- Creating a filesystem hierarchy from directory: {}".format(build_root))
# Using [1:] for the path names due to them being absolute
# (will overwrite previous paths, per 'os.path.join' documentation)
dirs = [ INSTALL_ROOT_DIR[1:], LOG_DIR[1:], SCRIPT_DIR[1:], CONFIG_DIR[1:], LOGROTATE_DIR[1:] ]
for d in dirs:
create_dir(os.path.join(build_root, d))
os.chmod(os.path.join(build_root, d), 0o755)
def package_scripts(build_root):
print("\t- Copying scripts and sample configuration to build directory")
shutil.copyfile(INIT_SCRIPT, os.path.join(build_root, SCRIPT_DIR[1:], INIT_SCRIPT.split('/')[1]))
os.chmod(os.path.join(build_root, SCRIPT_DIR[1:], INIT_SCRIPT.split('/')[1]), 0o644)
shutil.copyfile(SYSTEMD_SCRIPT, os.path.join(build_root, SCRIPT_DIR[1:], SYSTEMD_SCRIPT.split('/')[1]))
os.chmod(os.path.join(build_root, SCRIPT_DIR[1:], SYSTEMD_SCRIPT.split('/')[1]), 0o644)
shutil.copyfile(LOGROTATE_SCRIPT, os.path.join(build_root, LOGROTATE_DIR[1:], "telegraf"))
os.chmod(os.path.join(build_root, LOGROTATE_DIR[1:], "telegraf"), 0o644)
shutil.copyfile(DEFAULT_CONFIG, os.path.join(build_root, CONFIG_DIR[1:], "telegraf.conf"))
os.chmod(os.path.join(build_root, CONFIG_DIR[1:], "telegraf.conf"), 0o644)
def go_get(update=False):
get_command = None
if update:
get_command = "go get -u -f -d ./..."
else:
get_command = "go get -d ./..."
print("Retrieving Go dependencies...")
run(get_command)
print("done.\n")
def generate_md5_from_file(path):
m = hashlib.md5()
with open(path, 'rb') as f:
while True:
data = f.read(4096)
if not data:
break
m.update(data)
return m.hexdigest()
def build_packages(build_output, version, nightly=False, rc=None, iteration=1):
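# Stage the built binaries into a package filesystem hierarchy, then invoke
# fpm once per supported package type, returning the generated artifact paths.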
outfiles = []
tmp_build_dir = create_temp_dir()
try:
print("-------------------------")
print("")
print("Packaging...")
for p in build_output:
# Create top-level folder displaying which platform (linux, etc)
create_dir(os.path.join(tmp_build_dir, p))
for a in build_output[p]:
current_location = build_output[p][a]
# Create second-level directory displaying the architecture (amd64, etc)
build_root = os.path.join(tmp_build_dir, p, a)
# Create directory tree to mimic file system of package
create_dir(build_root)
create_package_fs(build_root)
# Copy in packaging and miscellaneous scripts
package_scripts(build_root)
# Copy newly-built binaries to packaging directory
for b in targets:
if p == 'windows':
b = b + '.exe'
fr = os.path.join(current_location, b)
to = os.path.join(build_root, INSTALL_ROOT_DIR[1:], b)
print("\t- [{}][{}] - Moving from '{}' to '{}'".format(p, a, fr, to))
copy_file(fr, to)
# Package the directory structure
for package_type in supported_packages[p]:
print("\t- Packaging directory '{}' as '{}'...".format(build_root, package_type))
name = "telegraf"
package_version = version
package_iteration = iteration
if package_type in ['zip', 'tar']:
if nightly:
name = '{}-nightly_{}_{}'.format(name, p, a)
else:
name = '{}-{}_{}_{}'.format(name, version, p, a)
if package_type == 'tar':
# Append the .tar.gz filename to the output path for fpm
current_location = os.path.join(current_location, name + '.tar.gz')
if rc is not None:
package_iteration = "0.rc{}".format(rc)
fpm_command = "fpm {} --name {} -a {} -t {} --version {} --iteration {} -C {} -p {} ".format(
fpm_common_args,
name,
a,
package_type,
package_version,
package_iteration,
build_root,
current_location)
if package_type == "rpm":
fpm_command += "--depends coreutils "
fpm_command += "--depends lsof"
out = run(fpm_command, shell=True)
matches = re.search(':path=>"(.*)"', out)
outfile = None
if matches is not None:
outfile = matches.groups()[0]
if outfile is None:
print("[ COULD NOT DETERMINE OUTPUT ]")
else:
# Strip nightly version (the unix epoch) from filename
if nightly and package_type in ['deb', 'rpm']:
outfile = rename_file(outfile, outfile.replace("{}-{}".format(version, iteration), "nightly"))
outfiles.append(os.path.join(os.getcwd(), outfile))
print("[ DONE ]")
# Display MD5 hash for generated package
print("\t\tMD5 = {}".format(generate_md5_from_file(outfile)))
print("")
return outfiles
finally:
# Cleanup
shutil.rmtree(tmp_build_dir)
def print_usage():
print("Usage: ./build.py [options]")
print("")
print("Options:")
print("\t --outdir=<path> \n\t\t- Send build output to a specified path. Defaults to ./build.")
print("\t --arch=<arch> \n\t\t- Build for specified architecture. Acceptable values: x86_64|amd64, 386, arm, or all")
print("\t --goarm=<arm version> \n\t\t- Build for specified ARM version (when building for ARM). Default value is: 6")
print("\t --platform=<platform> \n\t\t- Build for specified platform. Acceptable values: linux, windows, darwin, or all")
print("\t --version=<version> \n\t\t- Version information to apply to build metadata. If not specified, will be pulled from repo tag.")
print("\t --commit=<commit> \n\t\t- Use specific commit for build (currently a NOOP).")
print("\t --branch=<branch> \n\t\t- Build from a specific branch (currently a NOOP).")
print("\t --rc=<rc number> \n\t\t- Whether or not the build is a release candidate (affects version information).")
print("\t --iteration=<iteration number> \n\t\t- The iteration to display on the package output (defaults to 0 for RC's, and 1 otherwise).")
print("\t --race \n\t\t- Whether the produced build should have race detection enabled.")
print("\t --package \n\t\t- Whether the produced builds should be packaged for the target platform(s).")
print("\t --nightly \n\t\t- Whether the produced build is a nightly (affects version information).")
print("\t --update \n\t\t- Whether dependencies should be updated prior to building.")
print("\t --test \n\t\t- Run Go tests. Will not produce a build.")
print("\t --parallel \n\t\t- Run Go tests in parallel up to the count specified.")
print("\t --timeout \n\t\t- Timeout for Go tests. Defaults to 480s.")
print("\t --clean \n\t\t- Clean the build output directory prior to creating build.")
print("")
def print_package_summary(packages):
print(packages)
def main():
# Command-line arguments
outdir = "build"
commit = None
target_platform = None
target_arch = None
nightly = False
race = False
branch = None
version = get_current_version()
rc = None
package = False
update = False
clean = False
upload = False
test = False
parallel = None
timeout = None
iteration = 1
no_vet = False
goarm_version = "6"
for arg in sys.argv[1:]:
if '--outdir' in arg:
# Output directory. If none is specified, then builds will be placed in the same directory.
outdir = arg.split("=")[1]
if '--commit' in arg:
# Commit to build from. If none is specified, then it will build from the most recent commit.
commit = arg.split("=")[1]
if '--branch' in arg:
# Branch to build from. If none is specified, then it will build from the current branch.
branch = arg.split("=")[1]
elif '--arch' in arg:
# Target architecture. If none is specified, then it will build for the current arch.
target_arch = arg.split("=")[1]
elif '--platform' in arg:
# Target platform. If none is specified, then it will build for the current platform.
target_platform = arg.split("=")[1]
elif '--version' in arg:
# Version to assign to this build (0.9.5, etc)
version = arg.split("=")[1]
elif '--rc' in arg:
# Signifies that this is a release candidate build.
rc = arg.split("=")[1]
elif '--race' in arg:
# Signifies that race detection should be enabled.
race = True
elif '--package' in arg:
# Signifies that packages should be built.
package = True
elif '--nightly' in arg:
# Signifies that this is a nightly build.
nightly = True
elif '--update' in arg:
# Signifies that dependencies should be updated.
update = True
elif '--upload' in arg:
# Signifies that the resulting packages should be uploaded to S3
upload = True
elif '--test' in arg:
# Run tests and exit
test = True
elif '--parallel' in arg:
# Set parallel for tests.
parallel = int(arg.split("=")[1])
elif '--timeout' in arg:
# Set timeout for tests.
timeout = arg.split("=")[1]
elif '--clean' in arg:
# Signifies that the outdir should be deleted before building
clean = True
elif '--iteration' in arg:
iteration = arg.split("=")[1]
elif '--no-vet' in arg:
no_vet = True
elif '--goarm' in arg:
# Signifies GOARM flag to pass to build command when compiling for ARM
goarm_version = arg.split("=")[1]
elif '--help' in arg:
print_usage()
return 0
else:
print("!! Unknown argument: {}".format(arg))
print_usage()
return 1
if nightly:
if rc:
print("!! Cannot be both nightly and a release candidate! Stopping.")
return 1
# In order to support nightly builds on the repository, we are adding the epoch timestamp
# to the version so that version numbers are always greater than the previous nightly.
version = "{}.n{}".format(version, int(time.time()))
# Pre-build checks
check_environ()
check_prereqs()
if not commit:
commit = get_current_commit(short=True)
if not branch:
branch = get_current_branch()
if not target_arch:
if 'arm' in get_system_arch():
# Normalize the ARM arch reported by uname (e.g. 'armv7l') to 'arm'
target_arch = "arm"
else:
target_arch = get_system_arch()
if not target_platform:
target_platform = get_system_platform()
if rc or nightly:
# If a release candidate or nightly, set iteration to 0 (instead of 1)
iteration = 0
build_output = {}
# TODO(rossmcdonald): Prepare git repo for build (checking out correct branch/commit, etc.)
# prepare(branch=branch, commit=commit)
if test:
if not run_tests(race, parallel, timeout, no_vet):
return 1
return 0
go_get(update=update)
platforms = []
single_build = True
if target_platform == 'all':
platforms = list(supported_builds.keys())
single_build = False
else:
platforms = [target_platform]
for platform in platforms:
build_output.update( { platform : {} } )
archs = []
if target_arch == "all":
single_build = False
archs = supported_builds.get(platform)
else:
archs = [target_arch]
for arch in archs:
od = outdir
if not single_build:
od = os.path.join(outdir, platform, arch)
build(version=version,
branch=branch,
commit=commit,
platform=platform,
arch=arch,
nightly=nightly,
rc=rc,
race=race,
clean=clean,
outdir=od,
goarm_version=goarm_version)
build_output.get(platform).update( { arch : od } )
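# e.g. (illustrative) after a --platform=all --arch=all run, build_output looks like:
# { "linux": { "amd64": "build/linux/amd64", "386": "build/linux/386", ... }, "darwin": { ... } }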
# Build packages
if package:
if not check_path_for("fpm"):
print("!! Cannot package without command 'fpm'. Stopping.")
return 1
packages = build_packages(build_output, version, nightly=nightly, rc=rc, iteration=iteration)
# TODO(rossmcdonald): Add nice output for print_package_summary()
# print_package_summary(packages)
# Optionally upload to S3
if upload:
upload_packages(packages, nightly=nightly)
return 0
if __name__ == '__main__':
sys.exit(main())
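# Example invocations (illustrative):
#   ./build.py --platform=linux --arch=amd64 --package
#   ./build.py --test --parallel=4 --timeout=600s
#   ./build.py --nightly --platform=all --arch=all --package --upload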

View File

@ -4,14 +4,12 @@ machine:
post: post:
- sudo service zookeeper stop - sudo service zookeeper stop
- go version - go version
- go version | grep 1.5.1 || sudo rm -rf /usr/local/go - go version | grep 1.5.2 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.5.1.linux-amd64.tar.gz - wget https://storage.googleapis.com/golang/go1.5.2.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.5.1.linux-amd64.tar.gz - sudo tar -C /usr/local -xzf go1.5.2.linux-amd64.tar.gz
- go version - go version
dependencies: dependencies:
cache_directories:
- "~/telegraf-build/src"
override: override:
- docker info - docker info

View File

@ -7,44 +7,108 @@ import (
"os" "os"
"os/signal" "os/signal"
"strings" "strings"
"syscall"
"github.com/influxdb/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdb/telegraf/internal/config" "github.com/influxdata/telegraf/internal/config"
_ "github.com/influxdb/telegraf/outputs/all" _ "github.com/influxdata/telegraf/plugins/inputs/all"
_ "github.com/influxdb/telegraf/plugins/all" _ "github.com/influxdata/telegraf/plugins/outputs/all"
) )
var fDebug = flag.Bool("debug", false, var fDebug = flag.Bool("debug", false,
"show metrics as they're generated to stdout") "show metrics as they're generated to stdout")
var fQuiet = flag.Bool("quiet", false,
"run in quiet mode")
var fTest = flag.Bool("test", false, "gather metrics, print them out, and exit") var fTest = flag.Bool("test", false, "gather metrics, print them out, and exit")
var fConfig = flag.String("config", "", "configuration file to load") var fConfig = flag.String("config", "", "configuration file to load")
var fConfigDirectory = flag.String("configdirectory", "", var fConfigDirectory = flag.String("config-directory", "",
"directory containing additional *.conf files") "directory containing additional *.conf files")
var fVersion = flag.Bool("version", false, "display the version") var fVersion = flag.Bool("version", false, "display the version")
var fSampleConfig = flag.Bool("sample-config", false, var fSampleConfig = flag.Bool("sample-config", false,
"print out full sample configuration") "print out full sample configuration")
var fPidfile = flag.String("pidfile", "", "file to write our pid to") var fPidfile = flag.String("pidfile", "", "file to write our pid to")
var fPLuginFilters = flag.String("filter", "", var fInputFilters = flag.String("input-filter", "",
"filter the plugins to enable, separator is :") "filter the inputs to enable, separator is :")
var fOutputFilters = flag.String("outputfilter", "", var fOutputFilters = flag.String("output-filter", "",
"filter the outputs to enable, separator is :") "filter the outputs to enable, separator is :")
var fUsage = flag.String("usage", "", var fUsage = flag.String("usage", "",
"print usage for a plugin, ie, 'telegraf -usage mysql'") "print usage for a plugin, ie, 'telegraf -usage mysql'")
var fInputFiltersLegacy = flag.String("filter", "",
"filter the inputs to enable, separator is :")
var fOutputFiltersLegacy = flag.String("outputfilter", "",
"filter the outputs to enable, separator is :")
var fConfigDirectoryLegacy = flag.String("configdirectory", "",
"directory containing additional *.conf files")
// Telegraf version // Telegraf version
// -ldflags "-X main.Version=`git describe --always --tags`" // -ldflags "-X main.Version=`git describe --always --tags`"
var Version string var Version string
const usage = `Telegraf, The plugin-driven server agent for collecting and reporting metrics.
Usage:
telegraf <flags>
The flags are:
-config <file> configuration file to load
-test gather metrics once, print them to stdout, and exit
-sample-config print out full sample configuration to stdout
-config-directory directory containing additional *.conf files
-input-filter filter the input plugins to enable, separator is :
-output-filter filter the output plugins to enable, separator is :
-usage print usage for a plugin, ie, 'telegraf -usage mysql'
-debug print metrics as they're generated to stdout
-quiet run in quiet mode
-version print the version to stdout
Examples:
# generate a telegraf config file:
telegraf -sample-config > telegraf.conf
# generate config with only cpu input & influxdb output plugins defined
telegraf -sample-config -input-filter cpu -output-filter influxdb
# run a single telegraf collection, outputting metrics to stdout
telegraf -config telegraf.conf -test
# run telegraf with all plugins defined in config file
telegraf -config telegraf.conf
# run telegraf, enabling the cpu & memory input, and influxdb output plugins
telegraf -config telegraf.conf -input-filter cpu:mem -output-filter influxdb
`
func main() { func main() {
reload := make(chan bool, 1)
reload <- true
for <-reload {
reload <- false
flag.Usage = usageExit
flag.Parse() flag.Parse()
var pluginFilters []string if flag.NFlag() == 0 {
if *fPLuginFilters != "" { usageExit()
pluginsFilter := strings.TrimSpace(*fPLuginFilters) }
pluginFilters = strings.Split(":"+pluginsFilter+":", ":")
var inputFilters []string
if *fInputFiltersLegacy != "" {
inputFilter := strings.TrimSpace(*fInputFiltersLegacy)
inputFilters = strings.Split(":"+inputFilter+":", ":")
}
if *fInputFilters != "" {
inputFilter := strings.TrimSpace(*fInputFilters)
inputFilters = strings.Split(":"+inputFilter+":", ":")
} }
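// Illustrative: `-input-filter cpu:mem` is split with wrapping colons into
// ["", "cpu", "mem", ""]; the empty boundary entries never match a plugin
// name, so only "cpu" and "mem" take effect.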
var outputFilters []string var outputFilters []string
if *fOutputFiltersLegacy != "" {
outputFilter := strings.TrimSpace(*fOutputFiltersLegacy)
outputFilters = strings.Split(":"+outputFilter+":", ":")
}
if *fOutputFilters != "" { if *fOutputFilters != "" {
outputFilter := strings.TrimSpace(*fOutputFilters) outputFilter := strings.TrimSpace(*fOutputFilters)
outputFilters = strings.Split(":"+outputFilter+":", ":") outputFilters = strings.Split(":"+outputFilter+":", ":")
@ -57,12 +121,12 @@ func main() {
} }
if *fSampleConfig { if *fSampleConfig {
config.PrintSampleConfig(pluginFilters, outputFilters) config.PrintSampleConfig(inputFilters, outputFilters)
return return
} }
if *fUsage != "" { if *fUsage != "" {
if err := config.PrintPluginConfig(*fUsage); err != nil { if err := config.PrintInputConfig(*fUsage); err != nil {
if err2 := config.PrintOutputConfig(*fUsage); err2 != nil { if err2 := config.PrintOutputConfig(*fUsage); err2 != nil {
log.Fatalf("%s and %s", err, err2) log.Fatalf("%s and %s", err, err2)
} }
@ -78,7 +142,7 @@ func main() {
if *fConfig != "" { if *fConfig != "" {
c = config.NewConfig() c = config.NewConfig()
c.OutputFilters = outputFilters c.OutputFilters = outputFilters
c.PluginFilters = pluginFilters c.InputFilters = inputFilters
err = c.LoadConfig(*fConfig) err = c.LoadConfig(*fConfig)
if err != nil { if err != nil {
log.Fatal(err) log.Fatal(err)
@ -89,6 +153,13 @@ func main() {
return return
} }
if *fConfigDirectoryLegacy != "" {
err = c.LoadDirectory(*fConfigDirectoryLegacy)
if err != nil {
log.Fatal(err)
}
}
if *fConfigDirectory != "" { if *fConfigDirectory != "" {
err = c.LoadDirectory(*fConfigDirectory) err = c.LoadDirectory(*fConfigDirectory)
if err != nil { if err != nil {
@ -98,8 +169,8 @@ func main() {
if len(c.Outputs) == 0 { if len(c.Outputs) == 0 {
log.Fatalf("Error: no outputs found, did you provide a valid config file?") log.Fatalf("Error: no outputs found, did you provide a valid config file?")
} }
if len(c.Plugins) == 0 { if len(c.Inputs) == 0 {
log.Fatalf("Error: no plugins found, did you provide a valid config file?") log.Fatalf("Error: no inputs found, did you provide a valid config file?")
} }
ag, err := telegraf.NewAgent(c) ag, err := telegraf.NewAgent(c)
@ -111,6 +182,10 @@ func main() {
ag.Config.Agent.Debug = true ag.Config.Agent.Debug = true
} }
if *fQuiet {
ag.Config.Agent.Quiet = true
}
if *fTest { if *fTest {
err = ag.Test() err = ag.Test()
if err != nil { if err != nil {
@ -126,15 +201,23 @@ func main() {
shutdown := make(chan struct{}) shutdown := make(chan struct{})
signals := make(chan os.Signal) signals := make(chan os.Signal)
signal.Notify(signals, os.Interrupt) signal.Notify(signals, os.Interrupt, syscall.SIGHUP)
go func() { go func() {
<-signals sig := <-signals
if sig == os.Interrupt {
close(shutdown) close(shutdown)
}
if sig == syscall.SIGHUP {
log.Printf("Reloading Telegraf config\n")
<-reload
reload <- true
close(shutdown)
}
}() }()
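// Illustrative control flow for the reload loop above: on SIGHUP the
// goroutine re-arms the buffered `reload` channel with true and closes
// `shutdown`, so ag.Run returns and the outer `for <-reload` iteration
// re-reads the config and restarts the agent; on os.Interrupt, `reload`
// still holds false and the loop exits.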
log.Printf("Starting Telegraf (version %s)\n", Version) log.Printf("Starting Telegraf (version %s)\n", Version)
log.Printf("Loaded outputs: %s", strings.Join(c.OutputNames(), " ")) log.Printf("Loaded outputs: %s", strings.Join(c.OutputNames(), " "))
log.Printf("Loaded plugins: %s", strings.Join(c.PluginNames(), " ")) log.Printf("Loaded inputs: %s", strings.Join(c.InputNames(), " "))
log.Printf("Tags enabled: %s", c.ListTags()) log.Printf("Tags enabled: %s", c.ListTags())
if *fPidfile != "" { if *fPidfile != "" {
@ -149,4 +232,10 @@ func main() {
} }
ag.Run(shutdown) ag.Run(shutdown)
}
}
func usageExit() {
fmt.Println(usage)
os.Exit(0)
} }

View File

@ -1,7 +1,7 @@
# Telegraf configuration # Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the # Telegraf is entirely plugin driven. All metrics are gathered from the
# declared plugins. # declared inputs.
# Even if a plugin has no configuration, it must be declared in here # Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name # to be active. Declaring a plugin means just specifying the name
@ -49,8 +49,6 @@
# OUTPUTS # # OUTPUTS #
############################################################################### ###############################################################################
[outputs]
# Configuration for influxdb server to send metrics to # Configuration for influxdb server to send metrics to
[[outputs.influxdb]] [[outputs.influxdb]]
# The full HTTP or UDP endpoint URL for your InfluxDB instance. # The full HTTP or UDP endpoint URL for your InfluxDB instance.
@ -76,13 +74,11 @@
############################################################################### ###############################################################################
# PLUGINS # # INPUTS #
############################################################################### ###############################################################################
[plugins]
# Read metrics about cpu usage # Read metrics about cpu usage
[[plugins.cpu]] [[inputs.cpu]]
# Whether to report per-cpu stats or not # Whether to report per-cpu stats or not
percpu = true percpu = true
# Whether to report total system cpu stats or not # Whether to report total system cpu stats or not
@ -91,13 +87,13 @@
drop = ["cpu_time"] drop = ["cpu_time"]
# Read metrics about disk usage by mount point # Read metrics about disk usage by mount point
[[plugins.disk]] [[inputs.disk]]
# By default, telegraf gather stats for all mountpoints. # By default, telegraf gather stats for all mountpoints.
# Setting mountpoints will restrict the stats to the specified mountpoints. # Setting mountpoints will restrict the stats to the specified mountpoints.
# Mountpoints=["/"] # mount_points=["/"]
# Read metrics about disk IO by device # Read metrics about disk IO by device
[[plugins.io]] [[inputs.diskio]]
# By default, telegraf will gather stats for all devices including # By default, telegraf will gather stats for all devices including
# disk partitions. # disk partitions.
# Setting devices will restrict the stats to the specified devices. # Setting devices will restrict the stats to the specified devices.
@ -106,18 +102,18 @@
# SkipSerialNumber = true # SkipSerialNumber = true
# Read metrics about memory usage # Read metrics about memory usage
[[plugins.mem]] [[inputs.mem]]
# no configuration # no configuration
# Read metrics about swap memory usage # Read metrics about swap memory usage
[[plugins.swap]] [[inputs.swap]]
# no configuration # no configuration
# Read metrics about system load & uptime # Read metrics about system load & uptime
[[plugins.system]] [[inputs.system]]
# no configuration # no configuration
############################################################################### ###############################################################################
# SERVICE PLUGINS # # SERVICE INPUTS #
############################################################################### ###############################################################################

View File

@ -10,14 +10,14 @@ import (
"strings" "strings"
"time" "time"
"github.com/influxdb/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdb/telegraf/outputs" "github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/outputs"
"github.com/naoina/toml" "github.com/naoina/toml"
"github.com/naoina/toml/ast" "github.com/naoina/toml/ast"
"github.com/influxdb/influxdb/client/v2" "github.com/influxdata/influxdb/client/v2"
) )
// Config specifies the URL/user/password for the database that telegraf // Config specifies the URL/user/password for the database that telegraf
@ -25,11 +25,11 @@ import (
// specified // specified
type Config struct { type Config struct {
Tags map[string]string Tags map[string]string
PluginFilters []string InputFilters []string
OutputFilters []string OutputFilters []string
Agent *AgentConfig Agent *AgentConfig
Plugins []*RunningPlugin Inputs []*RunningInput
Outputs []*RunningOutput Outputs []*RunningOutput
} }
@ -45,9 +45,9 @@ func NewConfig() *Config {
}, },
Tags: make(map[string]string), Tags: make(map[string]string),
Plugins: make([]*RunningPlugin, 0), Inputs: make([]*RunningInput, 0),
Outputs: make([]*RunningOutput, 0), Outputs: make([]*RunningOutput, 0),
PluginFilters: make([]string, 0), InputFilters: make([]string, 0),
OutputFilters: make([]string, 0), OutputFilters: make([]string, 0),
} }
return c return c
@ -61,13 +61,22 @@ type AgentConfig struct {
// ie, if Interval=10s then always collect on :00, :10, :20, etc. // ie, if Interval=10s then always collect on :00, :10, :20, etc.
RoundInterval bool RoundInterval bool
// CollectionJitter is used to jitter the collection by a random amount.
// Each plugin will sleep for a random time within jitter before collecting.
// This can be used to avoid many plugins querying things like sysfs at the
// same time, which can have a measurable effect on the system.
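// ie, a jitter of 5s and interval 10s means each collection is delayed
// by a random 0-5s within its 10s interval (illustrative)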
CollectionJitter internal.Duration
// Interval at which to flush data // Interval at which to flush data
FlushInterval internal.Duration FlushInterval internal.Duration
// FlushRetries is the number of times to retry each data flush // FlushRetries is the number of times to retry each data flush
FlushRetries int FlushRetries int
// FlushJitter tells // FlushJitter Jitters the flush interval by a random amount.
// This is primarily to avoid large write spikes for users running a large
// number of telegraf instances.
// ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
FlushJitter internal.Duration FlushJitter internal.Duration
// TODO(cam): Remove UTC and Precision parameters, they are no longer // TODO(cam): Remove UTC and Precision parameters, they are no longer
@ -76,8 +85,11 @@ type AgentConfig struct {
UTC bool `toml:"utc"` UTC bool `toml:"utc"`
Precision string Precision string
// Option for running in debug mode // Debug is the option for running in debug mode
Debug bool Debug bool
// Quiet is the option for running in quiet mode
Quiet bool
Hostname string Hostname string
} }
@ -93,10 +105,10 @@ type RunningOutput struct {
Config *OutputConfig Config *OutputConfig
} }
type RunningPlugin struct { type RunningInput struct {
Name string Name string
Plugin plugins.Plugin Input inputs.Input
Config *PluginConfig Config *InputConfig
} }
// Filter containing drop/pass and tagdrop/tagpass rules // Filter containing drop/pass and tagdrop/tagpass rules
@ -110,9 +122,13 @@ type Filter struct {
IsActive bool IsActive bool
} }
// PluginConfig containing a name, interval, and filter // InputConfig containing a name, interval, and filter
type PluginConfig struct { type InputConfig struct {
Name string Name string
NameOverride string
MeasurementPrefix string
MeasurementSuffix string
Tags map[string]string
Filter Filter Filter Filter
Interval time.Duration Interval time.Duration
} }
@ -142,12 +158,12 @@ func (ro *RunningOutput) FilterPoints(points []*client.Point) []*client.Point {
// ShouldPass returns true if the metric should pass, false if should drop // ShouldPass returns true if the metric should pass, false if should drop
// based on the drop/pass filter parameters // based on the drop/pass filter parameters
func (f Filter) ShouldPass(measurement string) bool { func (f Filter) ShouldPass(fieldkey string) bool {
if f.Pass != nil { if f.Pass != nil {
for _, pat := range f.Pass { for _, pat := range f.Pass {
// TODO remove HasPrefix check, leaving it for now for legacy support. // TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07 // Cam, 2015-12-07
if strings.HasPrefix(measurement, pat) || internal.Glob(pat, measurement) { if strings.HasPrefix(fieldkey, pat) || internal.Glob(pat, fieldkey) {
return true return true
} }
} }
@ -158,7 +174,7 @@ func (f Filter) ShouldPass(measurement string) bool {
for _, pat := range f.Drop { for _, pat := range f.Drop {
// TODO remove HasPrefix check, leaving it for now for legacy support. // TODO remove HasPrefix check, leaving it for now for legacy support.
// Cam, 2015-12-07 // Cam, 2015-12-07
if strings.HasPrefix(measurement, pat) || internal.Glob(pat, measurement) { if strings.HasPrefix(fieldkey, pat) || internal.Glob(pat, fieldkey) {
return false return false
} }
} }
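// Illustrative: with Pass = ["cpu_*"], ShouldPass("cpu_usage") is true via
// the glob match; a field key matching no Pass pattern is dropped.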
@ -200,16 +216,16 @@ func (f Filter) ShouldTagsPass(tags map[string]string) bool {
return true return true
} }
// Plugins returns a list of strings of the configured plugins. // Inputs returns a list of strings of the configured inputs.
func (c *Config) PluginNames() []string { func (c *Config) InputNames() []string {
var name []string var name []string
for _, plugin := range c.Plugins { for _, input := range c.Inputs {
name = append(name, plugin.Name) name = append(name, input.Name)
} }
return name return name
} }
// Outputs returns a list of strings of the configured plugins. // Outputs returns a list of strings of the configured inputs.
func (c *Config) OutputNames() []string { func (c *Config) OutputNames() []string {
var name []string var name []string
for _, output := range c.Outputs { for _, output := range c.Outputs {
@ -235,7 +251,7 @@ func (c *Config) ListTags() string {
var header = `# Telegraf configuration var header = `# Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the # Telegraf is entirely plugin driven. All metrics are gathered from the
# declared plugins. # declared inputs.
# Even if a plugin has no configuration, it must be declared in here # Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name # to be active. Declaring a plugin means just specifying the name
@ -259,11 +275,16 @@ var header = `# Telegraf configuration
# Configuration for telegraf agent # Configuration for telegraf agent
[agent] [agent]
# Default data collection interval for all plugins # Default data collection interval for all inputs
interval = "10s" interval = "10s"
# Rounds collection interval to 'interval' # Rounds collection interval to 'interval'
# ie, if interval="10s" then always collect on :00, :10, :20, etc. # ie, if interval="10s" then always collect on :00, :10, :20, etc.
round_interval = true round_interval = true
# Collection jitter is used to jitter the collection by a random amount.
# Each plugin will sleep for a random time within jitter before collecting.
# This can be used to avoid many plugins querying things like sysfs at the
# same time, which can have a measurable effect on the system.
collection_jitter = "0s"
# Default data flushing interval for all outputs. You should not set this below # Default data flushing interval for all outputs. You should not set this below
# interval. Maximum flush_interval will be flush_interval + flush_jitter # interval. Maximum flush_interval will be flush_interval + flush_jitter
@ -275,6 +296,8 @@ var header = `# Telegraf configuration
# Run telegraf in debug mode # Run telegraf in debug mode
debug = false debug = false
# Run telegraf in quiet mode
quiet = false
# Override default hostname, if empty use os.Hostname() # Override default hostname, if empty use os.Hostname()
hostname = "" hostname = ""
@ -283,22 +306,20 @@ var header = `# Telegraf configuration
# OUTPUTS # # OUTPUTS #
############################################################################### ###############################################################################
[outputs]
` `
var pluginHeader = ` var pluginHeader = `
############################################################################### ###############################################################################
# PLUGINS # # INPUTS #
############################################################################### ###############################################################################
[plugins]
` `
var servicePluginHeader = ` var serviceInputHeader = `
############################################################################### ###############################################################################
# SERVICE PLUGINS # # SERVICE INPUTS #
############################################################################### ###############################################################################
` `
@ -322,35 +343,35 @@ func PrintSampleConfig(pluginFilters []string, outputFilters []string) {
printConfig(oname, output, "outputs") printConfig(oname, output, "outputs")
} }
// Filter plugins // Filter inputs
var pnames []string var pnames []string
for pname := range plugins.Plugins { for pname := range inputs.Inputs {
if len(pluginFilters) == 0 || sliceContains(pname, pluginFilters) { if len(pluginFilters) == 0 || sliceContains(pname, pluginFilters) {
pnames = append(pnames, pname) pnames = append(pnames, pname)
} }
} }
sort.Strings(pnames) sort.Strings(pnames)
// Print Plugins // Print Inputs
fmt.Printf(pluginHeader) fmt.Printf(pluginHeader)
servPlugins := make(map[string]plugins.ServicePlugin) servInputs := make(map[string]inputs.ServiceInput)
for _, pname := range pnames { for _, pname := range pnames {
creator := plugins.Plugins[pname] creator := inputs.Inputs[pname]
plugin := creator() input := creator()
switch p := plugin.(type) { switch p := input.(type) {
case plugins.ServicePlugin: case inputs.ServiceInput:
servPlugins[pname] = p servInputs[pname] = p
continue continue
} }
printConfig(pname, plugin, "plugins") printConfig(pname, input, "inputs")
} }
// Print Service Plugins // Print Service Inputs
fmt.Printf(servicePluginHeader) fmt.Printf(serviceInputHeader)
for name, plugin := range servPlugins { for name, input := range servInputs {
printConfig(name, plugin, "plugins") printConfig(name, input, "inputs")
} }
} }
@ -378,12 +399,12 @@ func sliceContains(name string, list []string) bool {
return false return false
} }
// PrintPluginConfig prints the config usage of a single plugin. // PrintInputConfig prints the config usage of a single input.
func PrintPluginConfig(name string) error { func PrintInputConfig(name string) error {
if creator, ok := plugins.Plugins[name]; ok { if creator, ok := inputs.Inputs[name]; ok {
printConfig(name, creator(), "plugins") printConfig(name, creator(), "inputs")
} else { } else {
return errors.New(fmt.Sprintf("Plugin %s not found", name)) return errors.New(fmt.Sprintf("Input %s not found", name))
} }
return nil return nil
} }
@ -449,33 +470,15 @@ func (c *Config) LoadConfig(path string) error {
return err return err
} }
case "outputs": case "outputs":
for outputName, outputVal := range subTable.Fields {
switch outputSubTable := outputVal.(type) {
case *ast.Table:
if err = c.addOutput(outputName, outputSubTable); err != nil {
return err
}
case []*ast.Table:
for _, t := range outputSubTable {
if err = c.addOutput(outputName, t); err != nil {
return err
}
}
default:
return fmt.Errorf("Unsupported config format: %s",
outputName)
}
}
case "plugins":
for pluginName, pluginVal := range subTable.Fields { for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) { switch pluginSubTable := pluginVal.(type) {
case *ast.Table: case *ast.Table:
if err = c.addPlugin(pluginName, pluginSubTable); err != nil { if err = c.addOutput(pluginName, pluginSubTable); err != nil {
return err return err
} }
case []*ast.Table: case []*ast.Table:
for _, t := range pluginSubTable { for _, t := range pluginSubTable {
if err = c.addPlugin(pluginName, t); err != nil { if err = c.addOutput(pluginName, t); err != nil {
return err return err
} }
} }
@ -484,10 +487,28 @@ func (c *Config) LoadConfig(path string) error {
pluginName) pluginName)
} }
} }
// Assume it's a plugin for legacy config file support if no other case "inputs", "plugins":
for pluginName, pluginVal := range subTable.Fields {
switch pluginSubTable := pluginVal.(type) {
case *ast.Table:
if err = c.addInput(pluginName, pluginSubTable); err != nil {
return err
}
case []*ast.Table:
for _, t := range pluginSubTable {
if err = c.addInput(pluginName, t); err != nil {
return err
}
}
default:
return fmt.Errorf("Unsupported config format: %s",
pluginName)
}
}
// Assume it's an input for legacy config file support if no other
// identifiers are present // identifiers are present
default: default:
if err = c.addPlugin(name, subTable); err != nil { if err = c.addInput(name, subTable); err != nil {
return err return err
} }
} }
@ -523,36 +544,41 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
return nil return nil
} }
func (c *Config) addPlugin(name string, table *ast.Table) error { func (c *Config) addInput(name string, table *ast.Table) error {
if len(c.PluginFilters) > 0 && !sliceContains(name, c.PluginFilters) { if len(c.InputFilters) > 0 && !sliceContains(name, c.InputFilters) {
return nil return nil
} }
creator, ok := plugins.Plugins[name] // Legacy support renaming io input to diskio
if !ok { if name == "io" {
return fmt.Errorf("Undefined but requested plugin: %s", name) name = "diskio"
} }
plugin := creator()
pluginConfig, err := buildPlugin(name, table) creator, ok := inputs.Inputs[name]
if !ok {
return fmt.Errorf("Undefined but requested input: %s", name)
}
input := creator()
pluginConfig, err := buildInput(name, table)
if err != nil { if err != nil {
return err return err
} }
if err := toml.UnmarshalTable(table, plugin); err != nil { if err := toml.UnmarshalTable(table, input); err != nil {
return err return err
} }
rp := &RunningPlugin{ rp := &RunningInput{
Name: name, Name: name,
Plugin: plugin, Input: input,
Config: pluginConfig, Config: pluginConfig,
} }
c.Plugins = append(c.Plugins, rp) c.Inputs = append(c.Inputs, rp)
return nil return nil
} }
// buildFilter builds a Filter (tagpass/tagdrop/pass/drop) to // buildFilter builds a Filter (tagpass/tagdrop/pass/drop) to
// be inserted into the OutputConfig/PluginConfig to be used for prefix // be inserted into the OutputConfig/InputConfig to be used for prefix
// filtering on tags and measurements // filtering on tags and measurements
func buildFilter(tbl *ast.Table) Filter { func buildFilter(tbl *ast.Table) Filter {
f := Filter{} f := Filter{}
@ -628,10 +654,11 @@ func buildFilter(tbl *ast.Table) Filter {
return f return f
} }
// buildPlugin parses plugin specific items from the ast.Table, builds the filter and returns a // buildInput parses input specific items from the ast.Table,
// PluginConfig to be inserted into RunningPlugin // builds the filter and returns a
func buildPlugin(name string, tbl *ast.Table) (*PluginConfig, error) { // InputConfig to be inserted into RunningInput
cp := &PluginConfig{Name: name} func buildInput(name string, tbl *ast.Table) (*InputConfig, error) {
cp := &InputConfig{Name: name}
if node, ok := tbl.Fields["interval"]; ok { if node, ok := tbl.Fields["interval"]; ok {
if kv, ok := node.(*ast.KeyValue); ok { if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok { if str, ok := kv.Value.(*ast.String); ok {
@ -644,14 +671,51 @@ func buildPlugin(name string, tbl *ast.Table) (*PluginConfig, error) {
} }
} }
} }
if node, ok := tbl.Fields["name_prefix"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
cp.MeasurementPrefix = str.Value
}
}
}
if node, ok := tbl.Fields["name_suffix"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
cp.MeasurementSuffix = str.Value
}
}
}
if node, ok := tbl.Fields["name_override"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
cp.NameOverride = str.Value
}
}
}
cp.Tags = make(map[string]string)
if node, ok := tbl.Fields["tags"]; ok {
if subtbl, ok := node.(*ast.Table); ok {
if err := toml.UnmarshalTable(subtbl, cp.Tags); err != nil {
log.Printf("Could not parse tags for input %s\n", name)
}
}
}
delete(tbl.Fields, "name_prefix")
delete(tbl.Fields, "name_suffix")
delete(tbl.Fields, "name_override")
delete(tbl.Fields, "interval") delete(tbl.Fields, "interval")
delete(tbl.Fields, "tags")
cp.Filter = buildFilter(tbl) cp.Filter = buildFilter(tbl)
return cp, nil return cp, nil
} }
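// Illustrative input table consumed by buildInput (values assumed):
//   [[inputs.mem]]
//     name_override = "memory"
//     name_prefix = "telegraf_"
//     interval = "5s"
//     [inputs.mem.tags]
//       region = "us-east-1"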
// buildOutput parses output specific items from the ast.Table, builds the filter and returns an // buildOutput parses output specific items from the ast.Table, builds the filter and returns an
// OutputConfig to be inserted into RunningPlugin // OutputConfig to be inserted into RunningInput
// Note: error exists in the return for future calls that might require error // Note: error exists in the return for future calls that might require error
func buildOutput(name string, tbl *ast.Table) (*OutputConfig, error) { func buildOutput(name string, tbl *ast.Table) (*OutputConfig, error) {
oc := &OutputConfig{ oc := &OutputConfig{
@ -659,5 +723,4 @@ func buildOutput(name string, tbl *ast.Table) (*OutputConfig, error) {
Filter: buildFilter(tbl), Filter: buildFilter(tbl),
} }
return oc, nil return oc, nil
} }

View File

@ -4,21 +4,21 @@ import (
"testing" "testing"
"time" "time"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdb/telegraf/plugins/exec" "github.com/influxdata/telegraf/plugins/inputs/exec"
"github.com/influxdb/telegraf/plugins/memcached" "github.com/influxdata/telegraf/plugins/inputs/memcached"
"github.com/influxdb/telegraf/plugins/procstat" "github.com/influxdata/telegraf/plugins/inputs/procstat"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )
func TestConfig_LoadSinglePlugin(t *testing.T) { func TestConfig_LoadSingleInput(t *testing.T) {
c := NewConfig() c := NewConfig()
c.LoadConfig("./testdata/single_plugin.toml") c.LoadConfig("./testdata/single_plugin.toml")
memcached := plugins.Plugins["memcached"]().(*memcached.Memcached) memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"} memcached.Servers = []string{"localhost"}
mConfig := &PluginConfig{ mConfig := &InputConfig{
Name: "memcached", Name: "memcached",
Filter: Filter{ Filter: Filter{
Drop: []string{"other", "stuff"}, Drop: []string{"other", "stuff"},
@ -39,10 +39,11 @@ func TestConfig_LoadSinglePlugin(t *testing.T) {
}, },
Interval: 5 * time.Second, Interval: 5 * time.Second,
} }
mConfig.Tags = make(map[string]string)
assert.Equal(t, memcached, c.Plugins[0].Plugin, assert.Equal(t, memcached, c.Inputs[0].Input,
"Testdata did not produce a correct memcached struct.") "Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Plugins[0].Config, assert.Equal(t, mConfig, c.Inputs[0].Config,
"Testdata did not produce correct memcached metadata.") "Testdata did not produce correct memcached metadata.")
} }
@ -57,10 +58,10 @@ func TestConfig_LoadDirectory(t *testing.T) {
t.Error(err) t.Error(err)
} }
memcached := plugins.Plugins["memcached"]().(*memcached.Memcached) memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
memcached.Servers = []string{"localhost"} memcached.Servers = []string{"localhost"}
mConfig := &PluginConfig{ mConfig := &InputConfig{
Name: "memcached", Name: "memcached",
Filter: Filter{ Filter: Filter{
Drop: []string{"other", "stuff"}, Drop: []string{"other", "stuff"},
@ -81,45 +82,40 @@ func TestConfig_LoadDirectory(t *testing.T) {
}, },
Interval: 5 * time.Second, Interval: 5 * time.Second,
} }
assert.Equal(t, memcached, c.Plugins[0].Plugin, mConfig.Tags = make(map[string]string)
assert.Equal(t, memcached, c.Inputs[0].Input,
"Testdata did not produce a correct memcached struct.") "Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Plugins[0].Config, assert.Equal(t, mConfig, c.Inputs[0].Config,
"Testdata did not produce correct memcached metadata.") "Testdata did not produce correct memcached metadata.")
ex := plugins.Plugins["exec"]().(*exec.Exec) ex := inputs.Inputs["exec"]().(*exec.Exec)
ex.Commands = []*exec.Command{ ex.Command = "/usr/bin/myothercollector --foo=bar"
&exec.Command{ eConfig := &InputConfig{
Command: "/usr/bin/myothercollector --foo=bar", Name: "exec",
Name: "myothercollector", MeasurementSuffix: "_myothercollector",
},
} }
eConfig := &PluginConfig{Name: "exec"} eConfig.Tags = make(map[string]string)
assert.Equal(t, ex, c.Plugins[1].Plugin, assert.Equal(t, ex, c.Inputs[1].Input,
"Merged Testdata did not produce a correct exec struct.") "Merged Testdata did not produce a correct exec struct.")
assert.Equal(t, eConfig, c.Plugins[1].Config, assert.Equal(t, eConfig, c.Inputs[1].Config,
"Merged Testdata did not produce correct exec metadata.") "Merged Testdata did not produce correct exec metadata.")
memcached.Servers = []string{"192.168.1.1"} memcached.Servers = []string{"192.168.1.1"}
assert.Equal(t, memcached, c.Plugins[2].Plugin, assert.Equal(t, memcached, c.Inputs[2].Input,
"Testdata did not produce a correct memcached struct.") "Testdata did not produce a correct memcached struct.")
assert.Equal(t, mConfig, c.Plugins[2].Config, assert.Equal(t, mConfig, c.Inputs[2].Config,
"Testdata did not produce correct memcached metadata.") "Testdata did not produce correct memcached metadata.")
pstat := plugins.Plugins["procstat"]().(*procstat.Procstat) pstat := inputs.Inputs["procstat"]().(*procstat.Procstat)
pstat.Specifications = []*procstat.Specification{ pstat.PidFile = "/var/run/grafana-server.pid"
&procstat.Specification{
PidFile: "/var/run/grafana-server.pid",
},
&procstat.Specification{
PidFile: "/var/run/influxdb/influxd.pid",
},
}
pConfig := &PluginConfig{Name: "procstat"} pConfig := &InputConfig{Name: "procstat"}
pConfig.Tags = make(map[string]string)
assert.Equal(t, pstat, c.Plugins[3].Plugin, assert.Equal(t, pstat, c.Inputs[3].Input,
"Merged Testdata did not produce a correct procstat struct.") "Merged Testdata did not produce a correct procstat struct.")
assert.Equal(t, pConfig, c.Plugins[3].Config, assert.Equal(t, pConfig, c.Inputs[3].Config,
"Merged Testdata did not produce correct procstat metadata.") "Merged Testdata did not produce correct procstat metadata.")
} }

View File

@ -1,9 +1,9 @@
[[plugins.memcached]] [[inputs.memcached]]
servers = ["localhost"] servers = ["localhost"]
pass = ["some", "strings"] pass = ["some", "strings"]
drop = ["other", "stuff"] drop = ["other", "stuff"]
interval = "5s" interval = "5s"
[plugins.memcached.tagpass] [inputs.memcached.tagpass]
goodtag = ["mytag"] goodtag = ["mytag"]
[plugins.memcached.tagdrop] [inputs.memcached.tagdrop]
badtag = ["othertag"] badtag = ["othertag"]

View File

@ -1,8 +1,4 @@
[[plugins.exec]] [[inputs.exec]]
# specify commands via an array of tables
[[plugins.exec.commands]]
# the command to run # the command to run
command = "/usr/bin/myothercollector --foo=bar" command = "/usr/bin/myothercollector --foo=bar"
name_suffix = "_myothercollector"
# name of the command (used as a prefix for measurements)
name = "myothercollector"

View File

@ -1,9 +1,9 @@
[[plugins.memcached]] [[inputs.memcached]]
servers = ["192.168.1.1"] servers = ["192.168.1.1"]
pass = ["some", "strings"] pass = ["some", "strings"]
drop = ["other", "stuff"] drop = ["other", "stuff"]
interval = "5s" interval = "5s"
[plugins.memcached.tagpass] [inputs.memcached.tagpass]
goodtag = ["mytag"] goodtag = ["mytag"]
[plugins.memcached.tagdrop] [inputs.memcached.tagdrop]
badtag = ["othertag"] badtag = ["othertag"]

View File

@ -1,5 +1,2 @@
[[plugins.procstat]] [[inputs.procstat]]
[[plugins.procstat.specifications]]
pid_file = "/var/run/grafana-server.pid" pid_file = "/var/run/grafana-server.pid"
[[plugins.procstat.specifications]]
pid_file = "/var/run/influxdb/influxd.pid"

View File

@ -1,7 +1,7 @@
# Telegraf configuration # Telegraf configuration
# Telegraf is entirely plugin driven. All metrics are gathered from the # Telegraf is entirely plugin driven. All metrics are gathered from the
# declared plugins. # declared inputs.
# Even if a plugin has no configuration, it must be declared in here # Even if a plugin has no configuration, it must be declared in here
# to be active. Declaring a plugin means just specifying the name # to be active. Declaring a plugin means just specifying the name
@ -21,20 +21,13 @@
# Tags can also be specified via a normal map, but only one form at a time: # Tags can also be specified via a normal map, but only one form at a time:
[tags] [tags]
# dc = "us-east-1" dc = "us-east-1"
# Configuration for telegraf agent # Configuration for telegraf agent
[agent] [agent]
# Default data collection interval for all plugins # Default data collection interval for all plugins
interval = "10s" interval = "10s"
# If utc = false, uses local time (utc is highly recommended)
utc = true
# Precision of writes, valid values are n, u, ms, s, m, and h
# note: using second precision greatly helps InfluxDB compression
precision = "s"
# run telegraf in debug mode # run telegraf in debug mode
debug = false debug = false
@ -46,8 +39,6 @@
# OUTPUTS # # OUTPUTS #
############################################################################### ###############################################################################
[outputs]
# Configuration for influxdb server to send metrics to # Configuration for influxdb server to send metrics to
[[outputs.influxdb]] [[outputs.influxdb]]
# The full HTTP endpoint URL for your InfluxDB instance # The full HTTP endpoint URL for your InfluxDB instance
@ -58,17 +49,6 @@
# The target database for metrics. This database must already exist # The target database for metrics. This database must already exist
database = "telegraf" # required. database = "telegraf" # required.
# Connection timeout (for the connection with InfluxDB), formatted as a string.
# Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
# If not provided, will default to 0 (no timeout)
# timeout = "5s"
# username = "telegraf"
# password = "metricsmetricsmetricsmetrics"
# Set the user agent for the POSTs (can be useful for log differentiation)
# user_agent = "telegraf"
[[outputs.influxdb]] [[outputs.influxdb]]
urls = ["udp://localhost:8089"] urls = ["udp://localhost:8089"]
database = "udp-telegraf" database = "udp-telegraf"
@ -88,15 +68,13 @@
# PLUGINS # # PLUGINS #
############################################################################### ###############################################################################
[plugins]
# Read Apache status information (mod_status) # Read Apache status information (mod_status)
[[plugins.apache]] [[inputs.apache]]
# An array of Apache status URI to gather stats. # An array of Apache status URI to gather stats.
urls = ["http://localhost/server-status?auto"] urls = ["http://localhost/server-status?auto"]
# Read metrics about cpu usage # Read metrics about cpu usage
[[plugins.cpu]] [[inputs.cpu]]
# Whether to report per-cpu stats or not # Whether to report per-cpu stats or not
percpu = true percpu = true
# Whether to report total system cpu stats or not # Whether to report total system cpu stats or not
@ -105,11 +83,11 @@ urls = ["http://localhost/server-status?auto"]
drop = ["cpu_time"] drop = ["cpu_time"]
# Read metrics about disk usage by mount point # Read metrics about disk usage by mount point
[[plugins.disk]] [[inputs.diskio]]
# no configuration # no configuration
# Read metrics from one or many disque servers # Read metrics from one or many disque servers
[[plugins.disque]] [[inputs.disque]]
# An array of URI to gather stats about. Specify an ip or hostname # An array of URI to gather stats about. Specify an ip or hostname
# with optional port and password. ie disque://localhost, disque://10.10.3.33:18832, # with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
# 10.0.0.1:10000, etc. # 10.0.0.1:10000, etc.
@ -118,7 +96,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["localhost"] servers = ["localhost"]
# Read stats from one or more Elasticsearch servers or clusters # Read stats from one or more Elasticsearch servers or clusters
[[plugins.elasticsearch]] [[inputs.elasticsearch]]
# specify a list of one or more Elasticsearch servers # specify a list of one or more Elasticsearch servers
servers = ["http://localhost:9200"] servers = ["http://localhost:9200"]
@ -127,17 +105,13 @@ urls = ["http://localhost/server-status?auto"]
local = true local = true
# Read flattened metrics from one or more commands that output JSON to stdout # Read flattened metrics from one or more commands that output JSON to stdout
[[plugins.exec]] [[inputs.exec]]
# specify commands via an array of tables
[[exec.commands]]
# the command to run # the command to run
command = "/usr/bin/mycollector --foo=bar" command = "/usr/bin/mycollector --foo=bar"
name_suffix = "_mycollector"
# name of the command (used as a prefix for measurements)
name = "mycollector"
# Read metrics of haproxy, via socket or csv stats page # Read metrics of haproxy, via socket or csv stats page
[[plugins.haproxy]] [[inputs.haproxy]]
# An array of address to gather stats about. Specify an ip on hostname # An array of address to gather stats about. Specify an ip on hostname
# with optional port. ie localhost, 10.10.3.33:1936, etc. # with optional port. ie localhost, 10.10.3.33:1936, etc.
# #
@ -147,10 +121,7 @@ urls = ["http://localhost/server-status?auto"]
# servers = ["socket:/run/haproxy/admin.sock"] # servers = ["socket:/run/haproxy/admin.sock"]
# Read flattened metrics from one or more JSON HTTP endpoints # Read flattened metrics from one or more JSON HTTP endpoints
[[plugins.httpjson]] [[inputs.httpjson]]
# Specify services via an array of tables
[[httpjson.services]]
# a name for the service being polled # a name for the service being polled
name = "webserver_stats" name = "webserver_stats"
@ -164,16 +135,16 @@ urls = ["http://localhost/server-status?auto"]
method = "GET" method = "GET"
# HTTP parameters (all values must be strings) # HTTP parameters (all values must be strings)
[httpjson.services.parameters] [httpjson.parameters]
event_type = "cpu_spike" event_type = "cpu_spike"
threshold = "0.75" threshold = "0.75"
# Read metrics about disk IO by device # Read metrics about disk IO by device
[[plugins.io]] [[inputs.diskio]]
# no configuration # no configuration
# read metrics from a Kafka topic # read metrics from a Kafka topic
[[plugins.kafka_consumer]] [[inputs.kafka_consumer]]
# topic(s) to consume # topic(s) to consume
topics = ["telegraf"] topics = ["telegraf"]
# an array of Zookeeper connection strings # an array of Zookeeper connection strings
@ -186,7 +157,7 @@ urls = ["http://localhost/server-status?auto"]
offset = "oldest" offset = "oldest"
# Read metrics from a LeoFS Server via SNMP # Read metrics from a LeoFS Server via SNMP
[[plugins.leofs]] [[inputs.leofs]]
# An array of URI to gather stats about LeoFS. # An array of URI to gather stats about LeoFS.
# Specify an ip or hostname with port. ie 127.0.0.1:4020 # Specify an ip or hostname with port. ie 127.0.0.1:4020
# #
@ -194,7 +165,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["127.0.0.1:4021"] servers = ["127.0.0.1:4021"]
# Read metrics from local Lustre service on OST, MDS # Read metrics from local Lustre service on OST, MDS
[[plugins.lustre2]] [[inputs.lustre2]]
# An array of /proc globs to search for Lustre stats # An array of /proc globs to search for Lustre stats
# If not specified, the default will work on Lustre 2.5.x # If not specified, the default will work on Lustre 2.5.x
# #
@ -202,11 +173,11 @@ urls = ["http://localhost/server-status?auto"]
# mds_procfiles = ["/proc/fs/lustre/mdt/*/md_stats"] # mds_procfiles = ["/proc/fs/lustre/mdt/*/md_stats"]
# Read metrics about memory usage # Read metrics about memory usage
[[plugins.mem]] [[inputs.mem]]
# no configuration # no configuration
# Read metrics from one or many memcached servers # Read metrics from one or many memcached servers
[[plugins.memcached]] [[inputs.memcached]]
# An array of address to gather stats about. Specify an ip on hostname # An array of address to gather stats about. Specify an ip on hostname
# with optional port. ie localhost, 10.0.0.1:11211, etc. # with optional port. ie localhost, 10.0.0.1:11211, etc.
# #
@ -214,7 +185,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["localhost"] servers = ["localhost"]
# Read metrics from one or many MongoDB servers # Read metrics from one or many MongoDB servers
[[plugins.mongodb]] [[inputs.mongodb]]
# An array of URI to gather stats about. Specify an ip or hostname # An array of URI to gather stats about. Specify an ip or hostname
# with optional port add password. ie mongodb://user:auth_key@10.10.3.30:27017, # with optional port add password. ie mongodb://user:auth_key@10.10.3.30:27017,
# mongodb://10.10.3.33:18832, 10.0.0.1:10000, etc. # mongodb://10.10.3.33:18832, 10.0.0.1:10000, etc.
@ -223,7 +194,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["127.0.0.1:27017"] servers = ["127.0.0.1:27017"]
# Read metrics from one or many mysql servers # Read metrics from one or many mysql servers
[[plugins.mysql]] [[inputs.mysql]]
# specify servers via a url matching: # specify servers via a url matching:
# [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]] # [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
# e.g. # e.g.
@ -234,7 +205,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["localhost"] servers = ["localhost"]
# Read metrics about network interface usage # Read metrics about network interface usage
[[plugins.net]] [[inputs.net]]
# By default, telegraf gathers stats from any up interface (excluding loopback) # By default, telegraf gathers stats from any up interface (excluding loopback)
# Setting interfaces will tell it to gather these explicit interfaces, # Setting interfaces will tell it to gather these explicit interfaces,
# regardless of status. # regardless of status.
@ -242,12 +213,12 @@ urls = ["http://localhost/server-status?auto"]
# interfaces = ["eth0", ... ] # interfaces = ["eth0", ... ]
# Read Nginx's basic status information (ngx_http_stub_status_module) # Read Nginx's basic status information (ngx_http_stub_status_module)
[[plugins.nginx]] [[inputs.nginx]]
# An array of Nginx stub_status URI to gather stats. # An array of Nginx stub_status URI to gather stats.
urls = ["http://localhost/status"] urls = ["http://localhost/status"]
# Ping given url(s) and return statistics # Ping given url(s) and return statistics
[[plugins.ping]] [[inputs.ping]]
# urls to ping # urls to ping
urls = ["www.google.com"] # required urls = ["www.google.com"] # required
# number of pings to send (ping -c <COUNT>) # number of pings to send (ping -c <COUNT>)
@ -260,10 +231,7 @@ urls = ["http://localhost/server-status?auto"]
interface = "" interface = ""
# Read metrics from one or many postgresql servers # Read metrics from one or many postgresql servers
[[plugins.postgresql]] [[inputs.postgresql]]
# specify servers via an array of tables
[[postgresql.servers]]
# specify address via a url matching: # specify address via a url matching:
# postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full] # postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
# or a simple string: # or a simple string:
@ -290,14 +258,13 @@ urls = ["http://localhost/server-status?auto"]
# address = "influx@remoteserver" # address = "influx@remoteserver"
# Read metrics from one or many prometheus clients # Read metrics from one or many prometheus clients
[[plugins.prometheus]] [[inputs.prometheus]]
# An array of urls to scrape metrics from. # An array of urls to scrape metrics from.
urls = ["http://localhost:9100/metrics"] urls = ["http://localhost:9100/metrics"]
# Read metrics from one or many RabbitMQ servers via the management API # Read metrics from one or many RabbitMQ servers via the management API
[[plugins.rabbitmq]] [[inputs.rabbitmq]]
# Specify servers via an array of tables # Specify servers via an array of tables
[[rabbitmq.servers]]
# name = "rmq-server-1" # optional tag # name = "rmq-server-1" # optional tag
# url = "http://localhost:15672" # url = "http://localhost:15672"
# username = "guest" # username = "guest"
@ -308,7 +275,7 @@ urls = ["http://localhost/server-status?auto"]
# nodes = ["rabbit@node1", "rabbit@node2"] # nodes = ["rabbit@node1", "rabbit@node2"]
# Read metrics from one or many redis servers # Read metrics from one or many redis servers
[[plugins.redis]] [[inputs.redis]]
# An array of URI to gather stats about. Specify an ip or hostname # An array of URI to gather stats about. Specify an ip or hostname
# with optional port add password. ie redis://localhost, redis://10.10.3.33:18832, # with optional port add password. ie redis://localhost, redis://10.10.3.33:18832,
# 10.0.0.1:10000, etc. # 10.0.0.1:10000, etc.
@ -317,7 +284,7 @@ urls = ["http://localhost/server-status?auto"]
servers = ["localhost"] servers = ["localhost"]
# Read metrics from one or many RethinkDB servers # Read metrics from one or many RethinkDB servers
[[plugins.rethinkdb]] [[inputs.rethinkdb]]
# An array of URI to gather stats about. Specify an ip or hostname # An array of URI to gather stats about. Specify an ip or hostname
# with optional port add password. ie rethinkdb://user:auth_key@10.10.3.30:28105, # with optional port add password. ie rethinkdb://user:auth_key@10.10.3.30:28105,
# rethinkdb://10.10.3.33:18832, 10.0.0.1:10000, etc. # rethinkdb://10.10.3.33:18832, 10.0.0.1:10000, etc.
@ -326,9 +293,9 @@ urls = ["http://localhost/server-status?auto"]
servers = ["127.0.0.1:28015"] servers = ["127.0.0.1:28015"]
# Read metrics about swap memory usage # Read metrics about swap memory usage
[[plugins.swap]] [[inputs.swap]]
# no configuration # no configuration
# Read metrics about system load & uptime # Read metrics about system load & uptime
[[plugins.system]] [[inputs.system]]
# no configuration # no configuration

View File

@ -3,7 +3,9 @@ package internal
import ( import (
"bufio" "bufio"
"errors" "errors"
"fmt"
"os" "os"
"strconv"
"strings" "strings"
"time" "time"
) )
@ -27,6 +29,47 @@ func (d *Duration) UnmarshalTOML(b []byte) error {
var NotImplementedError = errors.New("not implemented yet") var NotImplementedError = errors.New("not implemented yet")
type JSONFlattener struct {
Fields map[string]interface{}
}
// FlattenJSON flattens nested maps/interfaces into a fields map
func (f *JSONFlattener) FlattenJSON(
fieldname string,
v interface{},
) error {
if f.Fields == nil {
f.Fields = make(map[string]interface{})
}
fieldname = strings.Trim(fieldname, "_")
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return err
}
}
case []interface{}:
for i, v := range t {
k := strconv.Itoa(i)
err := f.FlattenJSON(fieldname+"_"+k+"_", v)
if err != nil {
return err
}
}
case float64:
f.Fields[fieldname] = t
case bool, string, nil:
// ignored types
return nil
default:
return fmt.Errorf("JSON Flattener: got unexpected type %T with value %v (%s)",
t, t, fieldname)
}
return nil
}
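// Example (illustrative): flattening the encoding/json-decoded value of
//   {"a": {"b": 4.5}, "c": [1, 2]}
// via (&JSONFlattener{}).FlattenJSON("", v) yields Fields:
//   a_b = 4.5, c_0 = 1, c_1 = 2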
// ReadLines reads contents from a file and splits them by new lines. // ReadLines reads contents from a file and splits them by new lines.
// A convenience wrapper to ReadLinesOffsetN(filename, 0, -1). // A convenience wrapper to ReadLinesOffsetN(filename, 0, -1).
func ReadLines(filename string) ([]string, error) { func ReadLines(filename string) ([]string, error) {

View File

@ -1,16 +0,0 @@
package all
import (
_ "github.com/influxdb/telegraf/outputs/amon"
_ "github.com/influxdb/telegraf/outputs/amqp"
_ "github.com/influxdb/telegraf/outputs/datadog"
_ "github.com/influxdb/telegraf/outputs/influxdb"
_ "github.com/influxdb/telegraf/outputs/kafka"
_ "github.com/influxdb/telegraf/outputs/kinesis"
_ "github.com/influxdb/telegraf/outputs/librato"
_ "github.com/influxdb/telegraf/outputs/mqtt"
_ "github.com/influxdb/telegraf/outputs/nsq"
_ "github.com/influxdb/telegraf/outputs/opentsdb"
_ "github.com/influxdb/telegraf/outputs/prometheus_client"
_ "github.com/influxdb/telegraf/outputs/riemann"
)

View File

@ -1,85 +0,0 @@
package kafka
import (
"errors"
"fmt"
"github.com/Shopify/sarama"
"github.com/influxdb/influxdb/client/v2"
"github.com/influxdb/telegraf/outputs"
)
type Kafka struct {
// Kafka brokers to send metrics to
Brokers []string
// Kafka topic
Topic string
// Routing Key Tag
RoutingTag string `toml:"routing_tag"`
producer sarama.SyncProducer
}
var sampleConfig = `
# URLs of kafka brokers
brokers = ["localhost:9092"]
# Kafka topic for producer messages
topic = "telegraf"
# Telegraf tag to use as a routing key
# i.e., if this tag exists, its value will be used as the routing key
routing_tag = "host"
`
func (k *Kafka) Connect() error {
producer, err := sarama.NewSyncProducer(k.Brokers, nil)
if err != nil {
return err
}
k.producer = producer
return nil
}
func (k *Kafka) Close() error {
return k.producer.Close()
}
func (k *Kafka) SampleConfig() string {
return sampleConfig
}
func (k *Kafka) Description() string {
return "Configuration for the Kafka server to send metrics to"
}
func (k *Kafka) Write(points []*client.Point) error {
if len(points) == 0 {
return nil
}
for _, p := range points {
// Combine tags from Point and BatchPoints and grab the resulting
// line-protocol output string to write to Kafka
value := p.String()
m := &sarama.ProducerMessage{
Topic: k.Topic,
Value: sarama.StringEncoder(value),
}
if h, ok := p.Tags()[k.RoutingTag]; ok {
m.Key = sarama.StringEncoder(h)
}
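// Illustrative note (comment added editorially): with routing_tag = "host",
// a point tagged host=serverA is sent with message key "serverA", so sarama's
// default hash partitioner keeps all points from one host in one partition.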
_, _, err := k.producer.SendMessage(m)
if err != nil {
return fmt.Errorf("FAILED to send kafka message: %s\n", err)
}
}
return nil
}
func init() {
outputs.Add("kafka", func() outputs.Output {
return &Kafka{}
})
}

View File

@ -1,37 +0,0 @@
package all
import (
_ "github.com/influxdb/telegraf/plugins/aerospike"
_ "github.com/influxdb/telegraf/plugins/apache"
_ "github.com/influxdb/telegraf/plugins/bcache"
_ "github.com/influxdb/telegraf/plugins/disque"
_ "github.com/influxdb/telegraf/plugins/elasticsearch"
_ "github.com/influxdb/telegraf/plugins/exec"
_ "github.com/influxdb/telegraf/plugins/haproxy"
_ "github.com/influxdb/telegraf/plugins/httpjson"
_ "github.com/influxdb/telegraf/plugins/influxdb"
_ "github.com/influxdb/telegraf/plugins/jolokia"
_ "github.com/influxdb/telegraf/plugins/kafka_consumer"
_ "github.com/influxdb/telegraf/plugins/leofs"
_ "github.com/influxdb/telegraf/plugins/lustre2"
_ "github.com/influxdb/telegraf/plugins/mailchimp"
_ "github.com/influxdb/telegraf/plugins/memcached"
_ "github.com/influxdb/telegraf/plugins/mongodb"
_ "github.com/influxdb/telegraf/plugins/mysql"
_ "github.com/influxdb/telegraf/plugins/nginx"
_ "github.com/influxdb/telegraf/plugins/phpfpm"
_ "github.com/influxdb/telegraf/plugins/ping"
_ "github.com/influxdb/telegraf/plugins/postgresql"
_ "github.com/influxdb/telegraf/plugins/procstat"
_ "github.com/influxdb/telegraf/plugins/prometheus"
_ "github.com/influxdb/telegraf/plugins/puppetagent"
_ "github.com/influxdb/telegraf/plugins/rabbitmq"
_ "github.com/influxdb/telegraf/plugins/redis"
_ "github.com/influxdb/telegraf/plugins/rethinkdb"
_ "github.com/influxdb/telegraf/plugins/statsd"
_ "github.com/influxdb/telegraf/plugins/system"
_ "github.com/influxdb/telegraf/plugins/trig"
_ "github.com/influxdb/telegraf/plugins/twemproxy"
_ "github.com/influxdb/telegraf/plugins/zfs"
_ "github.com/influxdb/telegraf/plugins/zookeeper"
)

View File

@ -1,759 +0,0 @@
package elasticsearch
const clusterResponse = `
{
"cluster_name": "elasticsearch_telegraf",
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"indices": {
"v1": {
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0
},
"v2": {
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20
}
}
}
`
var clusterHealthExpected = map[string]interface{}{
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v1IndexExpected = map[string]interface{}{
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v2IndexExpected = map[string]interface{}{
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20,
}
const statsResponse = `
{
"cluster_name": "es-testcluster",
"nodes": {
"SDFsfSDFsdfFSDSDfSFDSDF": {
"timestamp": 1436365550135,
"name": "test.host.com",
"transport_address": "inet[/127.0.0.1:9300]",
"host": "test",
"ip": [
"inet[/127.0.0.1:9300]",
"NONE"
],
"attributes": {
"master": "true"
},
"indices": {
"docs": {
"count": 29652,
"deleted": 5229
},
"store": {
"size_in_bytes": 37715234,
"throttle_time_in_millis": 215
},
"indexing": {
"index_total": 84790,
"index_time_in_millis": 29680,
"index_current": 0,
"delete_total": 13879,
"delete_time_in_millis": 1139,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 1,
"time_in_millis": 2,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 1,
"missing_time_in_millis": 2,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 1452,
"query_time_in_millis": 5695,
"query_current": 0,
"fetch_total": 414,
"fetch_time_in_millis": 146,
"fetch_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 133,
"total_time_in_millis": 21060,
"total_docs": 203672,
"total_size_in_bytes": 142900226
},
"refresh": {
"total": 1076,
"total_time_in_millis": 20078
},
"flush": {
"total": 115,
"total_time_in_millis": 2401
},
"warmer": {
"current": 0,
"total": 2319,
"total_time_in_millis": 448
},
"filter_cache": {
"memory_size_in_bytes": 7384,
"evictions": 0
},
"id_cache": {
"memory_size_in_bytes": 0
},
"fielddata": {
"memory_size_in_bytes": 12996,
"evictions": 0
},
"percolate": {
"total": 0,
"time_in_millis": 0,
"current": 0,
"memory_size_in_bytes": -1,
"memory_size": "-1b",
"queries": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 134,
"memory_in_bytes": 1285212,
"index_writer_memory_in_bytes": 0,
"index_writer_max_memory_in_bytes": 172368955,
"version_map_memory_in_bytes": 611844,
"fixed_bit_set_memory_in_bytes": 0
},
"translog": {
"operations": 17702,
"size_in_bytes": 17
},
"suggest": {
"total": 0,
"time_in_millis": 0,
"current": 0
},
"query_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
},
"os": {
"timestamp": 1436460392944,
"load_average": [
0.01,
0.04,
0.05
],
"mem": {
"free_in_bytes": 477761536,
"used_in_bytes": 1621868544,
"free_percent": 74,
"used_percent": 25,
"actual_free_in_bytes": 1565470720,
"actual_used_in_bytes": 534159360
},
"swap": {
"used_in_bytes": 0,
"free_in_bytes": 487997440
}
},
"process": {
"timestamp": 1436460392945,
"open_file_descriptors": 160,
"cpu": {
"percent": 2,
"sys_in_millis": 1870,
"user_in_millis": 13610,
"total_in_millis": 15480
},
"mem": {
"total_virtual_in_bytes": 4747890688
}
},
"jvm": {
"timestamp": 1436460392945,
"uptime_in_millis": 202245,
"mem": {
"heap_used_in_bytes": 52709568,
"heap_used_percent": 5,
"heap_committed_in_bytes": 259522560,
"heap_max_in_bytes": 1038876672,
"non_heap_used_in_bytes": 39634576,
"non_heap_committed_in_bytes": 40841216,
"pools": {
"young": {
"used_in_bytes": 32685760,
"max_in_bytes": 279183360,
"peak_used_in_bytes": 71630848,
"peak_max_in_bytes": 279183360
},
"survivor": {
"used_in_bytes": 8912880,
"max_in_bytes": 34865152,
"peak_used_in_bytes": 8912888,
"peak_max_in_bytes": 34865152
},
"old": {
"used_in_bytes": 11110928,
"max_in_bytes": 724828160,
"peak_used_in_bytes": 14354608,
"peak_max_in_bytes": 724828160
}
}
},
"threads": {
"count": 44,
"peak_count": 45
},
"gc": {
"collectors": {
"young": {
"collection_count": 2,
"collection_time_in_millis": 98
},
"old": {
"collection_count": 1,
"collection_time_in_millis": 24
}
}
},
"buffer_pools": {
"direct": {
"count": 40,
"used_in_bytes": 6304239,
"total_capacity_in_bytes": 6304239
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
}
},
"thread_pool": {
"percolate": {
"threads": 123,
"queue": 23,
"active": 13,
"rejected": 235,
"largest": 23,
"completed": 33
},
"fetch_shard_started": {
"threads": 3,
"queue": 1,
"active": 5,
"rejected": 6,
"largest": 4,
"completed": 54
},
"listener": {
"threads": 1,
"queue": 2,
"active": 4,
"rejected": 8,
"largest": 1,
"completed": 1
},
"index": {
"threads": 6,
"queue": 8,
"active": 4,
"rejected": 2,
"largest": 3,
"completed": 6
},
"refresh": {
"threads": 23,
"queue": 7,
"active": 3,
"rejected": 4,
"largest": 8,
"completed": 3
},
"suggest": {
"threads": 2,
"queue": 7,
"active": 2,
"rejected": 1,
"largest": 8,
"completed": 3
},
"generic": {
"threads": 1,
"queue": 4,
"active": 6,
"rejected": 3,
"largest": 2,
"completed": 27
},
"warmer": {
"threads": 2,
"queue": 7,
"active": 3,
"rejected": 2,
"largest": 3,
"completed": 1
},
"search": {
"threads": 5,
"queue": 7,
"active": 2,
"rejected": 7,
"largest": 2,
"completed": 4
},
"flush": {
"threads": 3,
"queue": 8,
"active": 0,
"rejected": 1,
"largest": 5,
"completed": 3
},
"optimize": {
"threads": 3,
"queue": 4,
"active": 1,
"rejected": 2,
"largest": 7,
"completed": 3
},
"fetch_shard_store": {
"threads": 1,
"queue": 7,
"active": 4,
"rejected": 2,
"largest": 4,
"completed": 1
},
"management": {
"threads": 2,
"queue": 3,
"active": 1,
"rejected": 6,
"largest": 2,
"completed": 22
},
"get": {
"threads": 1,
"queue": 8,
"active": 4,
"rejected": 3,
"largest": 2,
"completed": 1
},
"merge": {
"threads": 6,
"queue": 4,
"active": 5,
"rejected": 2,
"largest": 5,
"completed": 1
},
"bulk": {
"threads": 4,
"queue": 5,
"active": 7,
"rejected": 3,
"largest": 1,
"completed": 4
},
"snapshot": {
"threads": 8,
"queue": 5,
"active": 6,
"rejected": 2,
"largest": 1,
"completed": 0
}
},
"fs": {
"timestamp": 1436460392946,
"total": {
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
},
"data": [
{
"path": "/usr/share/elasticsearch/data/elasticsearch/nodes/0",
"mount": "/usr/share/elasticsearch/data",
"type": "ext4",
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
}
]
},
"transport": {
"server_open": 13,
"rx_count": 6,
"rx_size_in_bytes": 1380,
"tx_count": 6,
"tx_size_in_bytes": 1380
},
"http": {
"current_open": 3,
"total_opened": 3
},
"breakers": {
"fielddata": {
"limit_size_in_bytes": 623326003,
"limit_size": "594.4mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
},
"request": {
"limit_size_in_bytes": 415550668,
"limit_size": "396.2mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
},
"parent": {
"limit_size_in_bytes": 727213670,
"limit_size": "693.5mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
}
}
}
}
}
`
var indicesExpected = map[string]float64{
"indices_id_cache_memory_size_in_bytes": 0,
"indices_completion_size_in_bytes": 0,
"indices_suggest_total": 0,
"indices_suggest_time_in_millis": 0,
"indices_suggest_current": 0,
"indices_query_cache_memory_size_in_bytes": 0,
"indices_query_cache_evictions": 0,
"indices_query_cache_hit_count": 0,
"indices_query_cache_miss_count": 0,
"indices_store_size_in_bytes": 37715234,
"indices_store_throttle_time_in_millis": 215,
"indices_merges_current_docs": 0,
"indices_merges_current_size_in_bytes": 0,
"indices_merges_total": 133,
"indices_merges_total_time_in_millis": 21060,
"indices_merges_total_docs": 203672,
"indices_merges_total_size_in_bytes": 142900226,
"indices_merges_current": 0,
"indices_filter_cache_memory_size_in_bytes": 7384,
"indices_filter_cache_evictions": 0,
"indices_indexing_index_total": 84790,
"indices_indexing_index_time_in_millis": 29680,
"indices_indexing_index_current": 0,
"indices_indexing_noop_update_total": 0,
"indices_indexing_throttle_time_in_millis": 0,
"indices_indexing_delete_total": 13879,
"indices_indexing_delete_time_in_millis": 1139,
"indices_indexing_delete_current": 0,
"indices_get_exists_time_in_millis": 0,
"indices_get_missing_total": 1,
"indices_get_missing_time_in_millis": 2,
"indices_get_current": 0,
"indices_get_total": 1,
"indices_get_time_in_millis": 2,
"indices_get_exists_total": 0,
"indices_refresh_total": 1076,
"indices_refresh_total_time_in_millis": 20078,
"indices_percolate_current": 0,
"indices_percolate_memory_size_in_bytes": -1,
"indices_percolate_queries": 0,
"indices_percolate_total": 0,
"indices_percolate_time_in_millis": 0,
"indices_translog_operations": 17702,
"indices_translog_size_in_bytes": 17,
"indices_recovery_current_as_source": 0,
"indices_recovery_current_as_target": 0,
"indices_recovery_throttle_time_in_millis": 0,
"indices_docs_count": 29652,
"indices_docs_deleted": 5229,
"indices_flush_total_time_in_millis": 2401,
"indices_flush_total": 115,
"indices_fielddata_memory_size_in_bytes": 12996,
"indices_fielddata_evictions": 0,
"indices_search_fetch_current": 0,
"indices_search_open_contexts": 0,
"indices_search_query_total": 1452,
"indices_search_query_time_in_millis": 5695,
"indices_search_query_current": 0,
"indices_search_fetch_total": 414,
"indices_search_fetch_time_in_millis": 146,
"indices_warmer_current": 0,
"indices_warmer_total": 2319,
"indices_warmer_total_time_in_millis": 448,
"indices_segments_count": 134,
"indices_segments_memory_in_bytes": 1285212,
"indices_segments_index_writer_memory_in_bytes": 0,
"indices_segments_index_writer_max_memory_in_bytes": 172368955,
"indices_segments_version_map_memory_in_bytes": 611844,
"indices_segments_fixed_bit_set_memory_in_bytes": 0,
}
var osExpected = map[string]float64{
"os_swap_used_in_bytes": 0,
"os_swap_free_in_bytes": 487997440,
"os_timestamp": 1436460392944,
"os_mem_free_percent": 74,
"os_mem_used_percent": 25,
"os_mem_actual_free_in_bytes": 1565470720,
"os_mem_actual_used_in_bytes": 534159360,
"os_mem_free_in_bytes": 477761536,
"os_mem_used_in_bytes": 1621868544,
}
var processExpected = map[string]float64{
"process_mem_total_virtual_in_bytes": 4747890688,
"process_timestamp": 1436460392945,
"process_open_file_descriptors": 160,
"process_cpu_total_in_millis": 15480,
"process_cpu_percent": 2,
"process_cpu_sys_in_millis": 1870,
"process_cpu_user_in_millis": 13610,
}
var jvmExpected = map[string]float64{
"jvm_timestamp": 1436460392945,
"jvm_uptime_in_millis": 202245,
"jvm_mem_non_heap_used_in_bytes": 39634576,
"jvm_mem_non_heap_committed_in_bytes": 40841216,
"jvm_mem_pools_young_max_in_bytes": 279183360,
"jvm_mem_pools_young_peak_used_in_bytes": 71630848,
"jvm_mem_pools_young_peak_max_in_bytes": 279183360,
"jvm_mem_pools_young_used_in_bytes": 32685760,
"jvm_mem_pools_survivor_peak_used_in_bytes": 8912888,
"jvm_mem_pools_survivor_peak_max_in_bytes": 34865152,
"jvm_mem_pools_survivor_used_in_bytes": 8912880,
"jvm_mem_pools_survivor_max_in_bytes": 34865152,
"jvm_mem_pools_old_peak_max_in_bytes": 724828160,
"jvm_mem_pools_old_used_in_bytes": 11110928,
"jvm_mem_pools_old_max_in_bytes": 724828160,
"jvm_mem_pools_old_peak_used_in_bytes": 14354608,
"jvm_mem_heap_used_in_bytes": 52709568,
"jvm_mem_heap_used_percent": 5,
"jvm_mem_heap_committed_in_bytes": 259522560,
"jvm_mem_heap_max_in_bytes": 1038876672,
"jvm_threads_peak_count": 45,
"jvm_threads_count": 44,
"jvm_gc_collectors_young_collection_count": 2,
"jvm_gc_collectors_young_collection_time_in_millis": 98,
"jvm_gc_collectors_old_collection_count": 1,
"jvm_gc_collectors_old_collection_time_in_millis": 24,
"jvm_buffer_pools_direct_count": 40,
"jvm_buffer_pools_direct_used_in_bytes": 6304239,
"jvm_buffer_pools_direct_total_capacity_in_bytes": 6304239,
"jvm_buffer_pools_mapped_count": 0,
"jvm_buffer_pools_mapped_used_in_bytes": 0,
"jvm_buffer_pools_mapped_total_capacity_in_bytes": 0,
}
var threadPoolExpected = map[string]float64{
"thread_pool_merge_threads": 6,
"thread_pool_merge_queue": 4,
"thread_pool_merge_active": 5,
"thread_pool_merge_rejected": 2,
"thread_pool_merge_largest": 5,
"thread_pool_merge_completed": 1,
"thread_pool_bulk_threads": 4,
"thread_pool_bulk_queue": 5,
"thread_pool_bulk_active": 7,
"thread_pool_bulk_rejected": 3,
"thread_pool_bulk_largest": 1,
"thread_pool_bulk_completed": 4,
"thread_pool_warmer_threads": 2,
"thread_pool_warmer_queue": 7,
"thread_pool_warmer_active": 3,
"thread_pool_warmer_rejected": 2,
"thread_pool_warmer_largest": 3,
"thread_pool_warmer_completed": 1,
"thread_pool_get_largest": 2,
"thread_pool_get_completed": 1,
"thread_pool_get_threads": 1,
"thread_pool_get_queue": 8,
"thread_pool_get_active": 4,
"thread_pool_get_rejected": 3,
"thread_pool_index_threads": 6,
"thread_pool_index_queue": 8,
"thread_pool_index_active": 4,
"thread_pool_index_rejected": 2,
"thread_pool_index_largest": 3,
"thread_pool_index_completed": 6,
"thread_pool_suggest_threads": 2,
"thread_pool_suggest_queue": 7,
"thread_pool_suggest_active": 2,
"thread_pool_suggest_rejected": 1,
"thread_pool_suggest_largest": 8,
"thread_pool_suggest_completed": 3,
"thread_pool_fetch_shard_store_queue": 7,
"thread_pool_fetch_shard_store_active": 4,
"thread_pool_fetch_shard_store_rejected": 2,
"thread_pool_fetch_shard_store_largest": 4,
"thread_pool_fetch_shard_store_completed": 1,
"thread_pool_fetch_shard_store_threads": 1,
"thread_pool_management_threads": 2,
"thread_pool_management_queue": 3,
"thread_pool_management_active": 1,
"thread_pool_management_rejected": 6,
"thread_pool_management_largest": 2,
"thread_pool_management_completed": 22,
"thread_pool_percolate_queue": 23,
"thread_pool_percolate_active": 13,
"thread_pool_percolate_rejected": 235,
"thread_pool_percolate_largest": 23,
"thread_pool_percolate_completed": 33,
"thread_pool_percolate_threads": 123,
"thread_pool_listener_active": 4,
"thread_pool_listener_rejected": 8,
"thread_pool_listener_largest": 1,
"thread_pool_listener_completed": 1,
"thread_pool_listener_threads": 1,
"thread_pool_listener_queue": 2,
"thread_pool_search_rejected": 7,
"thread_pool_search_largest": 2,
"thread_pool_search_completed": 4,
"thread_pool_search_threads": 5,
"thread_pool_search_queue": 7,
"thread_pool_search_active": 2,
"thread_pool_fetch_shard_started_threads": 3,
"thread_pool_fetch_shard_started_queue": 1,
"thread_pool_fetch_shard_started_active": 5,
"thread_pool_fetch_shard_started_rejected": 6,
"thread_pool_fetch_shard_started_largest": 4,
"thread_pool_fetch_shard_started_completed": 54,
"thread_pool_refresh_rejected": 4,
"thread_pool_refresh_largest": 8,
"thread_pool_refresh_completed": 3,
"thread_pool_refresh_threads": 23,
"thread_pool_refresh_queue": 7,
"thread_pool_refresh_active": 3,
"thread_pool_optimize_threads": 3,
"thread_pool_optimize_queue": 4,
"thread_pool_optimize_active": 1,
"thread_pool_optimize_rejected": 2,
"thread_pool_optimize_largest": 7,
"thread_pool_optimize_completed": 3,
"thread_pool_snapshot_largest": 1,
"thread_pool_snapshot_completed": 0,
"thread_pool_snapshot_threads": 8,
"thread_pool_snapshot_queue": 5,
"thread_pool_snapshot_active": 6,
"thread_pool_snapshot_rejected": 2,
"thread_pool_generic_threads": 1,
"thread_pool_generic_queue": 4,
"thread_pool_generic_active": 6,
"thread_pool_generic_rejected": 3,
"thread_pool_generic_largest": 2,
"thread_pool_generic_completed": 27,
"thread_pool_flush_threads": 3,
"thread_pool_flush_queue": 8,
"thread_pool_flush_active": 0,
"thread_pool_flush_rejected": 1,
"thread_pool_flush_largest": 5,
"thread_pool_flush_completed": 3,
}
var fsExpected = map[string]float64{
"fs_timestamp": 1436460392946,
"fs_total_free_in_bytes": 16909316096,
"fs_total_available_in_bytes": 15894814720,
"fs_total_total_in_bytes": 19507089408,
}
var transportExpected = map[string]float64{
"transport_server_open": 13,
"transport_rx_count": 6,
"transport_rx_size_in_bytes": 1380,
"transport_tx_count": 6,
"transport_tx_size_in_bytes": 1380,
}
var httpExpected = map[string]float64{
"http_current_open": 3,
"http_total_opened": 3,
}
var breakersExpected = map[string]float64{
"breakers_fielddata_estimated_size_in_bytes": 0,
"breakers_fielddata_overhead": 1.03,
"breakers_fielddata_tripped": 0,
"breakers_fielddata_limit_size_in_bytes": 623326003,
"breakers_request_estimated_size_in_bytes": 0,
"breakers_request_overhead": 1.0,
"breakers_request_tripped": 0,
"breakers_request_limit_size_in_bytes": 415550668,
"breakers_parent_overhead": 1.0,
"breakers_parent_tripped": 0,
"breakers_parent_limit_size_in_bytes": 727213670,
"breakers_parent_estimated_size_in_bytes": 0,
}

View File

@ -1,42 +0,0 @@
# Exec Plugin
The exec plugin executes arbitrary commands that output JSON, then flattens the
JSON and collects all numeric values, treating them as floats.
For example, if you have a JSON-returning command called mycollector, you could
set up the exec plugin with:
```
[[exec.commands]]
command = "/usr/bin/mycollector --output=json"
name = "mycollector"
interval = 10
```
The name is used as a prefix for the measurements.
The interval determines how often a particular command is run: on each pass,
the exec plugin runs a command only if at least `interval` seconds have elapsed
since it last ran that command.
# Sample
Let's say that we have a command named "mycollector", which gives the following output:
```json
{
"a": 0.5,
"b": {
"c": "some text",
"d": 0.1,
"e": 5
}
}
```
The collected metrics will be:
```
exec_mycollector_a value=0.5
exec_mycollector_b_d value=0.1
exec_mycollector_b_e value=5
```

View File

@ -1,162 +0,0 @@
package exec
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"github.com/gonuts/go-shellquote"
"github.com/influxdb/telegraf/plugins"
"math"
"os/exec"
"strings"
"sync"
"time"
)
const sampleConfig = `
# specify commands via an array of tables
[[plugins.exec.commands]]
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# name of the command (used as a prefix for measurements)
name = "mycollector"
# Only run this command if it has been at least this many
# seconds since it last ran
interval = 10
`
type Exec struct {
Commands []*Command
runner Runner
clock Clock
}
type Command struct {
Command string
Name string
Interval int
lastRunAt time.Time
}
type Runner interface {
Run(*Command) ([]byte, error)
}
type Clock interface {
Now() time.Time
}
type CommandRunner struct{}
type RealClock struct{}
func (c CommandRunner) Run(command *Command) ([]byte, error) {
command.lastRunAt = time.Now()
split_cmd, err := shellquote.Split(command.Command)
if err != nil || len(split_cmd) == 0 {
return nil, fmt.Errorf("exec: unable to parse command, %s", err)
}
cmd := exec.Command(split_cmd[0], split_cmd[1:]...)
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("exec: %s for command '%s'", err, command.Command)
}
return out.Bytes(), nil
}
func (c RealClock) Now() time.Time {
return time.Now()
}
func NewExec() *Exec {
return &Exec{runner: CommandRunner{}, clock: RealClock{}}
}
func (e *Exec) SampleConfig() string {
return sampleConfig
}
func (e *Exec) Description() string {
return "Read flattened metrics from one or more commands that output JSON to stdout"
}
func (e *Exec) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
errorChannel := make(chan error, len(e.Commands))
for _, c := range e.Commands {
wg.Add(1)
go func(c *Command, acc plugins.Accumulator) {
defer wg.Done()
err := e.gatherCommand(c, acc)
if err != nil {
errorChannel <- err
}
}(c, acc)
}
wg.Wait()
close(errorChannel)
// Get all errors and return them as one giant error
errorStrings := []string{}
for err := range errorChannel {
errorStrings = append(errorStrings, err.Error())
}
if len(errorStrings) == 0 {
return nil
}
return errors.New(strings.Join(errorStrings, "\n"))
}
func (e *Exec) gatherCommand(c *Command, acc plugins.Accumulator) error {
secondsSinceLastRun := 0.0
if c.lastRunAt.Unix() == 0 { // means time is uninitialized
secondsSinceLastRun = math.Inf(1)
} else {
secondsSinceLastRun = (e.clock.Now().Sub(c.lastRunAt)).Seconds()
}
if secondsSinceLastRun >= float64(c.Interval) {
out, err := e.runner.Run(c)
if err != nil {
return err
}
var jsonOut interface{}
err = json.Unmarshal(out, &jsonOut)
if err != nil {
return fmt.Errorf("exec: unable to parse output of '%s' as JSON, %s", c.Command, err)
}
processResponse(acc, c.Name, map[string]string{}, jsonOut)
}
return nil
}
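// processResponse recursively walks the decoded JSON and emits one measurement
// per numeric leaf (descriptive comment added editorially). For example, with
// prefix "exec_mycollector" and input {"a": 0.5, "b": {"d": 0.1}}, it emits
// exec_mycollector_a=0.5 and exec_mycollector_b_d=0.1; strings, bools, nulls,
// and arrays are ignored.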
func processResponse(acc plugins.Accumulator, prefix string, tags map[string]string, v interface{}) {
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
processResponse(acc, prefix+"_"+k, tags, v)
}
case float64:
acc.Add(prefix, v, tags)
}
}
func init() {
plugins.Add("exec", func() plugins.Plugin {
return NewExec()
})
}

View File

@ -1,262 +0,0 @@
package exec
import (
"fmt"
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"math"
"testing"
"time"
)
// Midnight 9/22/2015
const baseTimeSeconds = 1442905200
const validJson = `
{
"status": "green",
"num_processes": 82,
"cpu": {
"status": "red",
"nil_status": null,
"used": 8234,
"free": 32
},
"percent": 0.81,
"users": [0, 1, 2, 3]
}`
const malformedJson = `
{
"status": "green",
`
type runnerMock struct {
out []byte
err error
}
type clockMock struct {
now time.Time
}
func newRunnerMock(out []byte, err error) Runner {
return &runnerMock{
out: out,
err: err,
}
}
func (r runnerMock) Run(command *Command) ([]byte, error) {
if r.err != nil {
return nil, r.err
}
return r.out, nil
}
func newClockMock(now time.Time) Clock {
return &clockMock{now: now}
}
func (c clockMock) Now() time.Time {
return c.now
}
func TestExec(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "non-numeric measurements should be ignored")
}
func TestExecMalformed(t *testing.T) {
runner := newRunnerMock([]byte(malformedJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "badcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.Error(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestCommandError(t *testing.T) {
runner := newRunnerMock(nil, fmt.Errorf("exit status code 1"))
clock := newClockMock(time.Unix(baseTimeSeconds+20, 0))
command := Command{
Command: "badcommand",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.Error(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestExecNotEnoughTime(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+5, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
assert.Equal(t, deltaPoints, 0, "No new points should have been added")
}
func TestExecUninitializedLastRunAt(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds, 0))
command := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: math.MaxInt32,
// Uninitialized lastRunAt should default to time.Unix(0, 0), so this should
// run no matter what the interval is
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&command},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "non-numeric measurements should be ignored")
}
func TestExecOneNotEnoughTimeAndOneEnoughTime(t *testing.T) {
runner := newRunnerMock([]byte(validJson), nil)
clock := newClockMock(time.Unix(baseTimeSeconds+5, 0))
notEnoughTimeCommand := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 10,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
enoughTimeCommand := Command{
Command: "testcommand arg1",
Name: "mycollector",
Interval: 3,
lastRunAt: time.Unix(baseTimeSeconds, 0),
}
e := &Exec{
runner: runner,
clock: clock,
Commands: []*Command{&notEnoughTimeCommand, &enoughTimeCommand},
}
var acc testutil.Accumulator
initialPoints := len(acc.Points)
err := e.Gather(&acc)
deltaPoints := len(acc.Points) - initialPoints
require.NoError(t, err)
checkFloat := []struct {
name string
value float64
}{
{"mycollector_num_processes", 82},
{"mycollector_cpu_used", 8234},
{"mycollector_cpu_free", 32},
{"mycollector_percent", 0.81},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
}
assert.Equal(t, deltaPoints, 4, "Only one command should have been run")
}

View File

@ -4,7 +4,7 @@ import (
"bytes" "bytes"
"encoding/binary" "encoding/binary"
"fmt" "fmt"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
"net" "net"
"strconv" "strconv"
"strings" "strings"
@ -119,7 +119,7 @@ func (a *Aerospike) Description() string {
return "Read stats from an aerospike server" return "Read stats from an aerospike server"
} }
func (a *Aerospike) Gather(acc plugins.Accumulator) error { func (a *Aerospike) Gather(acc inputs.Accumulator) error {
if len(a.Servers) == 0 { if len(a.Servers) == 0 {
return a.gatherServer("127.0.0.1:3000", acc) return a.gatherServer("127.0.0.1:3000", acc)
} }
@ -140,7 +140,7 @@ func (a *Aerospike) Gather(acc plugins.Accumulator) error {
return outerr return outerr
} }
func (a *Aerospike) gatherServer(host string, acc plugins.Accumulator) error { func (a *Aerospike) gatherServer(host string, acc inputs.Accumulator) error {
aerospikeInfo, err := getMap(STATISTICS_COMMAND, host) aerospikeInfo, err := getMap(STATISTICS_COMMAND, host)
if err != nil { if err != nil {
return fmt.Errorf("Aerospike info failed: %s", err) return fmt.Errorf("Aerospike info failed: %s", err)
@ -247,8 +247,13 @@ func get(key []byte, host string) (map[string]string, error) {
return data, err return data, err
} }
func readAerospikeStats(stats map[string]string, acc plugins.Accumulator, host, namespace string) { func readAerospikeStats(
for key, value := range stats { stats map[string]string,
acc inputs.Accumulator,
host string,
namespace string,
) {
fields := make(map[string]interface{})
tags := map[string]string{ tags := map[string]string{
"aerospike_host": host, "aerospike_host": host,
"namespace": "_service", "namespace": "_service",
@ -257,16 +262,17 @@ func readAerospikeStats(stats map[string]string, acc plugins.Accumulator, host,
if namespace != "" { if namespace != "" {
tags["namespace"] = namespace tags["namespace"] = namespace
} }
for key, value := range stats {
// We are going to ignore all string based keys // We are going to ignore all string based keys
val, err := strconv.ParseInt(value, 10, 64) val, err := strconv.ParseInt(value, 10, 64)
if err == nil { if err == nil {
if strings.Contains(key, "-") { if strings.Contains(key, "-") {
key = strings.Replace(key, "-", "_", -1) key = strings.Replace(key, "-", "_", -1)
} }
acc.Add(key, val, tags) fields[key] = val
} }
} }
acc.AddFields("aerospike", fields, tags)
} }
func unmarshalMapInfo(infoMap map[string]string, key string) (map[string]string, error) { func unmarshalMapInfo(infoMap map[string]string, key string) (map[string]string, error) {
@ -330,7 +336,7 @@ func msgLenFromBytes(buf [6]byte) int64 {
} }
func init() { func init() {
plugins.Add("aerospike", func() plugins.Plugin { inputs.Add("aerospike", func() inputs.Input {
return &Aerospike{} return &Aerospike{}
}) })
} }

View File

@ -1,11 +1,12 @@
package aerospike package aerospike
import ( import (
"github.com/influxdb/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"reflect" "reflect"
"testing" "testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
func TestAerospikeStatistics(t *testing.T) { func TestAerospikeStatistics(t *testing.T) {
@ -31,7 +32,7 @@ func TestAerospikeStatistics(t *testing.T) {
} }
for _, metric := range asMetrics { for _, metric := range asMetrics {
assert.True(t, acc.HasIntValue(metric), metric) assert.True(t, acc.HasIntField("aerospike", metric), metric)
} }
} }
@ -49,13 +50,16 @@ func TestReadAerospikeStatsNoNamespace(t *testing.T) {
"stat_read_reqs": "12345", "stat_read_reqs": "12345",
} }
readAerospikeStats(stats, &acc, "host1", "") readAerospikeStats(stats, &acc, "host1", "")
for k := range stats {
if k == "stat-write-errs" { fields := map[string]interface{}{
k = "stat_write_errs" "stat_write_errs": int64(12345),
"stat_read_reqs": int64(12345),
} }
assert.True(t, acc.HasMeasurement(k)) tags := map[string]string{
assert.True(t, acc.CheckValue(k, int64(12345))) "aerospike_host": "host1",
"namespace": "_service",
} }
acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
} }
func TestReadAerospikeStatsNamespace(t *testing.T) { func TestReadAerospikeStatsNamespace(t *testing.T) {
@ -66,13 +70,15 @@ func TestReadAerospikeStatsNamespace(t *testing.T) {
} }
readAerospikeStats(stats, &acc, "host1", "test") readAerospikeStats(stats, &acc, "host1", "test")
fields := map[string]interface{}{
"stat_write_errs": int64(12345),
"stat_read_reqs": int64(12345),
}
tags := map[string]string{ tags := map[string]string{
"aerospike_host": "host1", "aerospike_host": "host1",
"namespace": "test", "namespace": "test",
} }
for k := range stats { acc.AssertContainsTaggedFields(t, "aerospike", fields, tags)
assert.True(t, acc.ValidateTaggedValue(k, int64(12345), tags) == nil)
}
} }
func TestAerospikeUnmarshalList(t *testing.T) { func TestAerospikeUnmarshalList(t *testing.T) {

41
plugins/inputs/all/all.go Normal file
View File

@ -0,0 +1,41 @@
package all
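// The blank imports below are for side effects only: each plugin package
// registers itself with the input registry by calling inputs.Add in its
// init() function (descriptive comment added editorially).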
import (
_ "github.com/influxdata/telegraf/plugins/inputs/aerospike"
_ "github.com/influxdata/telegraf/plugins/inputs/apache"
_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
_ "github.com/influxdata/telegraf/plugins/inputs/disque"
_ "github.com/influxdata/telegraf/plugins/inputs/docker"
_ "github.com/influxdata/telegraf/plugins/inputs/elasticsearch"
_ "github.com/influxdata/telegraf/plugins/inputs/exec"
_ "github.com/influxdata/telegraf/plugins/inputs/haproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
_ "github.com/influxdata/telegraf/plugins/inputs/influxdb"
_ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/leofs"
_ "github.com/influxdata/telegraf/plugins/inputs/lustre2"
_ "github.com/influxdata/telegraf/plugins/inputs/mailchimp"
_ "github.com/influxdata/telegraf/plugins/inputs/memcached"
_ "github.com/influxdata/telegraf/plugins/inputs/mongodb"
_ "github.com/influxdata/telegraf/plugins/inputs/mysql"
_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
_ "github.com/influxdata/telegraf/plugins/inputs/phpfpm"
_ "github.com/influxdata/telegraf/plugins/inputs/ping"
_ "github.com/influxdata/telegraf/plugins/inputs/postgresql"
_ "github.com/influxdata/telegraf/plugins/inputs/procstat"
_ "github.com/influxdata/telegraf/plugins/inputs/prometheus"
_ "github.com/influxdata/telegraf/plugins/inputs/puppetagent"
_ "github.com/influxdata/telegraf/plugins/inputs/rabbitmq"
_ "github.com/influxdata/telegraf/plugins/inputs/redis"
_ "github.com/influxdata/telegraf/plugins/inputs/rethinkdb"
_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
_ "github.com/influxdata/telegraf/plugins/inputs/system"
_ "github.com/influxdata/telegraf/plugins/inputs/trig"
_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"
)

View File

@ -11,7 +11,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
) )
type Apache struct { type Apache struct {
@ -31,7 +31,7 @@ func (n *Apache) Description() string {
return "Read Apache status information (mod_status)" return "Read Apache status information (mod_status)"
} }
func (n *Apache) Gather(acc plugins.Accumulator) error { func (n *Apache) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup var wg sync.WaitGroup
var outerr error var outerr error
@ -59,7 +59,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr} var client = &http.Client{Transport: tr}
func (n *Apache) gatherUrl(addr *url.URL, acc plugins.Accumulator) error { func (n *Apache) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
resp, err := client.Get(addr.String()) resp, err := client.Get(addr.String())
if err != nil { if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err) return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@ -72,32 +72,33 @@ func (n *Apache) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
tags := getTags(addr) tags := getTags(addr)
sc := bufio.NewScanner(resp.Body) sc := bufio.NewScanner(resp.Body)
fields := make(map[string]interface{})
for sc.Scan() { for sc.Scan() {
line := sc.Text() line := sc.Text()
if strings.Contains(line, ":") { if strings.Contains(line, ":") {
parts := strings.SplitN(line, ":", 2) parts := strings.SplitN(line, ":", 2)
key, part := strings.Replace(parts[0], " ", "", -1), strings.TrimSpace(parts[1]) key, part := strings.Replace(parts[0], " ", "", -1), strings.TrimSpace(parts[1])
switch key { switch key {
case "Scoreboard": case "Scoreboard":
n.gatherScores(part, acc, tags) for field, value := range n.gatherScores(part) {
fields[field] = value
}
default: default:
value, err := strconv.ParseFloat(part, 64) value, err := strconv.ParseFloat(part, 64)
if err != nil { if err != nil {
continue continue
} }
acc.Add(key, value, tags) fields[key] = value
} }
} }
} }
acc.AddFields("apache", fields, tags)
return nil return nil
} }
func (n *Apache) gatherScores(data string, acc plugins.Accumulator, tags map[string]string) { func (n *Apache) gatherScores(data string) map[string]interface{} {
var waiting, open int = 0, 0 var waiting, open int = 0, 0
var S, R, W, K, D, C, L, G, I int = 0, 0, 0, 0, 0, 0, 0, 0, 0 var S, R, W, K, D, C, L, G, I int = 0, 0, 0, 0, 0, 0, 0, 0, 0
@ -129,17 +130,20 @@ func (n *Apache) gatherScores(data string, acc plugins.Accumulator, tags map[str
} }
} }
acc.Add("scboard_waiting", float64(waiting), tags) fields := map[string]interface{}{
acc.Add("scboard_starting", float64(S), tags) "scboard_waiting": float64(waiting),
acc.Add("scboard_reading", float64(R), tags) "scboard_starting": float64(S),
acc.Add("scboard_sending", float64(W), tags) "scboard_reading": float64(R),
acc.Add("scboard_keepalive", float64(K), tags) "scboard_sending": float64(W),
acc.Add("scboard_dnslookup", float64(D), tags) "scboard_keepalive": float64(K),
acc.Add("scboard_closing", float64(C), tags) "scboard_dnslookup": float64(D),
acc.Add("scboard_logging", float64(L), tags) "scboard_closing": float64(C),
acc.Add("scboard_finishing", float64(G), tags) "scboard_logging": float64(L),
acc.Add("scboard_idle_cleanup", float64(I), tags) "scboard_finishing": float64(G),
acc.Add("scboard_open", float64(open), tags) "scboard_idle_cleanup": float64(I),
"scboard_open": float64(open),
}
return fields
} }
// Get tag(s) for the apache plugin // Get tag(s) for the apache plugin
@ -160,7 +164,7 @@ func getTags(addr *url.URL) map[string]string {
} }
func init() { func init() {
plugins.Add("apache", func() plugins.Plugin { inputs.Add("apache", func() inputs.Input {
return &Apache{} return &Apache{}
}) })
} }

View File

@ -6,9 +6,8 @@ import (
"net/http/httptest" "net/http/httptest"
"testing" "testing"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -44,37 +43,31 @@ func TestHTTPApache(t *testing.T) {
err := a.Gather(&acc) err := a.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
testInt := []struct { fields := map[string]interface{}{
measurement string "TotalAccesses": float64(1.29811861e+08),
value float64 "TotalkBytes": float64(5.213701865e+09),
}{ "CPULoad": float64(6.51929),
{"TotalAccesses", 1.29811861e+08}, "Uptime": float64(941553),
{"TotalkBytes", 5.213701865e+09}, "ReqPerSec": float64(137.87),
{"CPULoad", 6.51929}, "BytesPerSec": float64(5.67024e+06),
{"Uptime", 941553}, "BytesPerReq": float64(41127.4),
{"ReqPerSec", 137.87}, "BusyWorkers": float64(270),
{"BytesPerSec", 5.67024e+06}, "IdleWorkers": float64(630),
{"BytesPerReq", 41127.4}, "ConnsTotal": float64(1451),
{"BusyWorkers", 270}, "ConnsAsyncWriting": float64(32),
{"IdleWorkers", 630}, "ConnsAsyncKeepAlive": float64(945),
{"ConnsTotal", 1451}, "ConnsAsyncClosing": float64(205),
{"ConnsAsyncWriting", 32}, "scboard_waiting": float64(630),
{"ConnsAsyncKeepAlive", 945}, "scboard_starting": float64(0),
{"ConnsAsyncClosing", 205}, "scboard_reading": float64(157),
{"scboard_waiting", 630}, "scboard_sending": float64(113),
{"scboard_starting", 0}, "scboard_keepalive": float64(0),
{"scboard_reading", 157}, "scboard_dnslookup": float64(0),
{"scboard_sending", 113}, "scboard_closing": float64(0),
{"scboard_keepalive", 0}, "scboard_logging": float64(0),
{"scboard_dnslookup", 0}, "scboard_finishing": float64(0),
{"scboard_closing", 0}, "scboard_idle_cleanup": float64(0),
{"scboard_logging", 0}, "scboard_open": float64(2850),
{"scboard_finishing", 0},
{"scboard_idle_cleanup", 0},
{"scboard_open", 2850},
}
for _, test := range testInt {
assert.True(t, acc.CheckValue(test.measurement, test.value))
} }
acc.AssertContainsFields(t, "apache", fields)
} }

View File

@ -70,7 +70,7 @@ Using this configuration:
When run with: When run with:
``` ```
./telegraf -config telegraf.conf -filter bcache -test ./telegraf -config telegraf.conf -input-filter bcache -test
``` ```
It produces: It produces:

View File

@ -8,7 +8,7 @@ import (
"strconv" "strconv"
"strings" "strings"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
) )
type Bcache struct { type Bcache struct {
@ -69,7 +69,7 @@ func prettyToBytes(v string) uint64 {
return uint64(result) return uint64(result)
} }
func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error { func (b *Bcache) gatherBcache(bdev string, acc inputs.Accumulator) error {
tags := getTags(bdev) tags := getTags(bdev)
metrics, err := filepath.Glob(bdev + "/stats_total/*") metrics, err := filepath.Glob(bdev + "/stats_total/*")
if len(metrics) < 0 { if len(metrics) < 0 {
@ -81,7 +81,9 @@ func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error {
} }
rawValue := strings.TrimSpace(string(file)) rawValue := strings.TrimSpace(string(file))
value := prettyToBytes(rawValue) value := prettyToBytes(rawValue)
acc.Add("dirty_data", value, tags)
fields := make(map[string]interface{})
fields["dirty_data"] = value
for _, path := range metrics { for _, path := range metrics {
key := filepath.Base(path) key := filepath.Base(path)
@ -92,16 +94,17 @@ func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error {
} }
if key == "bypassed" { if key == "bypassed" {
value := prettyToBytes(rawValue) value := prettyToBytes(rawValue)
acc.Add(key, value, tags) fields[key] = value
} else { } else {
value, _ := strconv.ParseUint(rawValue, 10, 64) value, _ := strconv.ParseUint(rawValue, 10, 64)
acc.Add(key, value, tags) fields[key] = value
} }
} }
acc.AddFields("bcache", fields, tags)
return nil return nil
} }
func (b *Bcache) Gather(acc plugins.Accumulator) error { func (b *Bcache) Gather(acc inputs.Accumulator) error {
bcacheDevsChecked := make(map[string]bool) bcacheDevsChecked := make(map[string]bool)
var restrictDevs bool var restrictDevs bool
if len(b.BcacheDevs) != 0 { if len(b.BcacheDevs) != 0 {
@ -117,7 +120,7 @@ func (b *Bcache) Gather(acc plugins.Accumulator) error {
} }
bdevs, _ := filepath.Glob(bcachePath + "/*/bdev*") bdevs, _ := filepath.Glob(bcachePath + "/*/bdev*")
if len(bdevs) < 1 { if len(bdevs) < 1 {
return errors.New("Can't found any bcache device") return errors.New("Can't find any bcache device")
} }
for _, bdev := range bdevs { for _, bdev := range bdevs {
if restrictDevs { if restrictDevs {
@ -132,7 +135,7 @@ func (b *Bcache) Gather(acc plugins.Accumulator) error {
} }
func init() { func init() {
plugins.Add("bcache", func() plugins.Plugin { inputs.Add("bcache", func() inputs.Input {
return &Bcache{} return &Bcache{}
}) })
} }

View File

@ -5,8 +5,7 @@ import (
"os" "os"
"testing" "testing"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -29,11 +28,6 @@ var (
testBcacheBackingDevPath = os.TempDir() + "/telegraf/sys/devices/virtual/block/md10" testBcacheBackingDevPath = os.TempDir() + "/telegraf/sys/devices/virtual/block/md10"
) )
type metrics struct {
name string
value uint64
}
func TestBcacheGeneratesMetrics(t *testing.T) { func TestBcacheGeneratesMetrics(t *testing.T) {
err := os.MkdirAll(testBcacheUuidPath, 0755) err := os.MkdirAll(testBcacheUuidPath, 0755)
require.NoError(t, err) require.NoError(t, err)
@ -53,70 +47,52 @@ func TestBcacheGeneratesMetrics(t *testing.T) {
err = os.MkdirAll(testBcacheUuidPath+"/bdev0/stats_total", 0755) err = os.MkdirAll(testBcacheUuidPath+"/bdev0/stats_total", 0755)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/dirty_data", []byte(dirty_data), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/dirty_data",
[]byte(dirty_data), 0644)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/bypassed", []byte(bypassed), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/bypassed",
[]byte(bypassed), 0644)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_hits", []byte(cache_bypass_hits), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_hits",
[]byte(cache_bypass_hits), 0644)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_misses", []byte(cache_bypass_misses), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_bypass_misses",
[]byte(cache_bypass_misses), 0644)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hit_ratio", []byte(cache_hit_ratio), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hit_ratio",
[]byte(cache_hit_ratio), 0644)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hits", []byte(cache_hits), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_hits",
[]byte(cache_hits), 0644)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_miss_collisions", []byte(cache_miss_collisions), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_miss_collisions",
[]byte(cache_miss_collisions), 0644)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_misses", []byte(cache_misses), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_misses",
[]byte(cache_misses), 0644)
require.NoError(t, err) require.NoError(t, err)
err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_readaheads", []byte(cache_readaheads), 0644) err = ioutil.WriteFile(testBcacheUuidPath+"/bdev0/stats_total/cache_readaheads",
[]byte(cache_readaheads), 0644)
require.NoError(t, err) require.NoError(t, err)
intMetrics := []*metrics{ fields := map[string]interface{}{
{ "dirty_data": uint64(1610612736),
name: "dirty_data", "bypassed": uint64(5167704440832),
value: 1610612736, "cache_bypass_hits": uint64(146155333),
}, "cache_bypass_misses": uint64(0),
{ "cache_hit_ratio": uint64(90),
name: "bypassed", "cache_hits": uint64(511469583),
value: 5167704440832, "cache_miss_collisions": uint64(157567),
}, "cache_misses": uint64(50616331),
{ "cache_readaheads": uint64(2),
name: "cache_bypass_hits",
value: 146155333,
},
{
name: "cache_bypass_misses",
value: 0,
},
{
name: "cache_hit_ratio",
value: 90,
},
{
name: "cache_hits",
value: 511469583,
},
{
name: "cache_miss_collisions",
value: 157567,
},
{
name: "cache_misses",
value: 50616331,
},
{
name: "cache_readaheads",
value: 2,
},
} }
tags := map[string]string{ tags := map[string]string{
@ -126,27 +102,19 @@ func TestBcacheGeneratesMetrics(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
//all devs // all devs
b := &Bcache{BcachePath: testBcachePath} b := &Bcache{BcachePath: testBcachePath}
err = b.Gather(&acc) err = b.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
acc.AssertContainsTaggedFields(t, "bcache", fields, tags)
for _, metric := range intMetrics { // one exist dev
assert.True(t, acc.HasUIntValue(metric.name), metric.name)
assert.True(t, acc.CheckTaggedValue(metric.name, metric.value, tags))
}
//one exist dev
b = &Bcache{BcachePath: testBcachePath, BcacheDevs: []string{"bcache0"}} b = &Bcache{BcachePath: testBcachePath, BcacheDevs: []string{"bcache0"}}
err = b.Gather(&acc) err = b.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
acc.AssertContainsTaggedFields(t, "bcache", fields, tags)
for _, metric := range intMetrics {
assert.True(t, acc.HasUIntValue(metric.name), metric.name)
assert.True(t, acc.CheckTaggedValue(metric.name, metric.value, tags))
}
err = os.RemoveAll(os.TempDir() + "/telegraf") err = os.RemoveAll(os.TempDir() + "/telegraf")
require.NoError(t, err) require.NoError(t, err)

View File

@ -10,7 +10,7 @@ import (
"strings" "strings"
"sync" "sync"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
) )
type Disque struct { type Disque struct {
@ -61,7 +61,7 @@ var ErrProtocolError = errors.New("disque protocol error")
// Reads stats from all configured servers accumulates stats. // Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any). // Returns one of the errors encountered while gather stats (if any).
func (g *Disque) Gather(acc plugins.Accumulator) error { func (g *Disque) Gather(acc inputs.Accumulator) error {
if len(g.Servers) == 0 { if len(g.Servers) == 0 {
url := &url.URL{ url := &url.URL{
Host: ":7711", Host: ":7711",
@ -98,7 +98,7 @@ func (g *Disque) Gather(acc plugins.Accumulator) error {
const defaultPort = "7711" const defaultPort = "7711"
func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error { func (g *Disque) gatherServer(addr *url.URL, acc inputs.Accumulator) error {
if g.c == nil { if g.c == nil {
_, _, err := net.SplitHostPort(addr.Host) _, _, err := net.SplitHostPort(addr.Host)
@ -155,6 +155,8 @@ func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
var read int var read int
fields := make(map[string]interface{})
tags := map[string]string{"host": addr.String()}
for read < sz { for read < sz {
line, err := r.ReadString('\n') line, err := r.ReadString('\n')
if err != nil { if err != nil {
@ -176,12 +178,11 @@ func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
continue continue
} }
tags := map[string]string{"host": addr.String()}
val := strings.TrimSpace(parts[1]) val := strings.TrimSpace(parts[1])
ival, err := strconv.ParseUint(val, 10, 64) ival, err := strconv.ParseUint(val, 10, 64)
if err == nil { if err == nil {
acc.Add(metric, ival, tags) fields[metric] = ival
continue continue
} }
@ -190,14 +191,14 @@ func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
return err return err
} }
acc.Add(metric, fval, tags) fields[metric] = fval
} }
acc.AddFields("disque", fields, tags)
return nil return nil
} }
func init() { func init() {
plugins.Add("disque", func() plugins.Plugin { inputs.Add("disque", func() inputs.Input {
return &Disque{} return &Disque{}
}) })
} }

View File

@ -6,8 +6,7 @@ import (
"net" "net"
"testing" "testing"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -55,42 +54,26 @@ func TestDisqueGeneratesMetrics(t *testing.T) {
err = r.Gather(&acc) err = r.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
checkInt := []struct { fields := map[string]interface{}{
name string "uptime": uint64(1452705),
value uint64 "clients": uint64(31),
}{ "blocked_clients": uint64(13),
{"uptime", 1452705}, "used_memory": uint64(1840104),
{"clients", 31}, "used_memory_rss": uint64(3227648),
{"blocked_clients", 13}, "used_memory_peak": uint64(89603656),
{"used_memory", 1840104}, "total_connections_received": uint64(5062777),
{"used_memory_rss", 3227648}, "total_commands_processed": uint64(12308396),
{"used_memory_peak", 89603656}, "instantaneous_ops_per_sec": uint64(18),
{"total_connections_received", 5062777}, "latest_fork_usec": uint64(1644),
{"total_commands_processed", 12308396}, "registered_jobs": uint64(360),
{"instantaneous_ops_per_sec", 18}, "registered_queues": uint64(12),
{"latest_fork_usec", 1644}, "mem_fragmentation_ratio": float64(1.75),
{"registered_jobs", 360}, "used_cpu_sys": float64(19585.73),
{"registered_queues", 12}, "used_cpu_user": float64(11255.96),
} "used_cpu_sys_children": float64(1.75),
"used_cpu_user_children": float64(1.91),
for _, c := range checkInt {
assert.True(t, acc.CheckValue(c.name, c.value))
}
checkFloat := []struct {
name string
value float64
}{
{"mem_fragmentation_ratio", 1.75},
{"used_cpu_sys", 19585.73},
{"used_cpu_user", 11255.96},
{"used_cpu_sys_children", 1.75},
{"used_cpu_user_children", 1.91},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
} }
acc.AssertContainsFields(t, "disque", fields)
} }
func TestDisqueCanPullStatsFromMultipleServers(t *testing.T) { func TestDisqueCanPullStatsFromMultipleServers(t *testing.T) {
@ -137,42 +120,26 @@ func TestDisqueCanPullStatsFromMultipleServers(t *testing.T) {
err = r.Gather(&acc) err = r.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
checkInt := []struct { fields := map[string]interface{}{
name string "uptime": uint64(1452705),
value uint64 "clients": uint64(31),
}{ "blocked_clients": uint64(13),
{"uptime", 1452705}, "used_memory": uint64(1840104),
{"clients", 31}, "used_memory_rss": uint64(3227648),
{"blocked_clients", 13}, "used_memory_peak": uint64(89603656),
{"used_memory", 1840104}, "total_connections_received": uint64(5062777),
{"used_memory_rss", 3227648}, "total_commands_processed": uint64(12308396),
{"used_memory_peak", 89603656}, "instantaneous_ops_per_sec": uint64(18),
{"total_connections_received", 5062777}, "latest_fork_usec": uint64(1644),
{"total_commands_processed", 12308396}, "registered_jobs": uint64(360),
{"instantaneous_ops_per_sec", 18}, "registered_queues": uint64(12),
{"latest_fork_usec", 1644}, "mem_fragmentation_ratio": float64(1.75),
{"registered_jobs", 360}, "used_cpu_sys": float64(19585.73),
{"registered_queues", 12}, "used_cpu_user": float64(11255.96),
} "used_cpu_sys_children": float64(1.75),
"used_cpu_user_children": float64(1.91),
for _, c := range checkInt {
assert.True(t, acc.CheckValue(c.name, c.value))
}
checkFloat := []struct {
name string
value float64
}{
{"mem_fragmentation_ratio", 1.75},
{"used_cpu_sys", 19585.73},
{"used_cpu_user", 11255.96},
{"used_cpu_sys_children", 1.75},
{"used_cpu_user_children", 1.91},
}
for _, c := range checkFloat {
assert.True(t, acc.CheckValue(c.name, c.value))
} }
acc.AssertContainsFields(t, "disque", fields)
} }
const testOutput = `# Server const testOutput = `# Server

View File

@ -0,0 +1,148 @@
# Docker Input Plugin
The docker plugin uses the docker remote API to gather metrics on running
docker containers. You can read Docker's documentation for their remote API
[here](https://docs.docker.com/engine/reference/api/docker_remote_api_v1.20/#get-container-stats-based-on-resource-usage).
The docker plugin uses the excellent
[fsouza go-dockerclient](https://github.com/fsouza/go-dockerclient) library to
gather stats. Documentation for the library can be found
[here](https://godoc.org/github.com/fsouza/go-dockerclient) and documentation
for the stat structure can be found
[here](https://godoc.org/github.com/fsouza/go-dockerclient#Stats).
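For orientation, a minimal sketch of talking to the daemon with go-dockerclient; the endpoint used here is an assumption (the plugin's own configuration is shown below):

```go
package main

import (
	"fmt"
	"log"

	"github.com/fsouza/go-dockerclient"
)

func main() {
	// Assumes the default unix socket; docker.NewClientFromEnv() could
	// be used instead to honor DOCKER_HOST (e.g. for docker-machine).
	client, err := docker.NewClient("unix:///var/run/docker.sock")
	if err != nil {
		log.Fatal(err)
	}
	containers, err := client.ListContainers(docker.ListContainersOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		fmt.Println(c.ID, c.Image, c.Names)
	}
}
```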
### Configuration:
```
# Read metrics about docker containers
[[inputs.docker]]
# Docker Endpoint
# To use TCP, set endpoint = "tcp://[ip]:[port]"
# To use environment variables (i.e., docker-machine), set endpoint = "ENV"
endpoint = "unix:///var/run/docker.sock"
# Only collect metrics for these containers, collect all if empty
container_names = []
```
### Measurements & Fields:
Every effort was made to preserve the names based on the JSON response from the
docker API.
Note that the docker_cpu metric may appear multiple times per collection, based
on the availability of per-cpu stats on your system.
- docker_mem
- total_pgmafault
- cache
- mapped_file
- total_inactive_file
- pgpgout
- rss
- total_mapped_file
- writeback
- unevictable
- pgpgin
- total_unevictable
- pgmajfault
- total_rss
- total_rss_huge
- total_writeback
- total_inactive_anon
- rss_huge
- hierarchical_memory_limit
- total_pgfault
- total_active_file
- active_anon
- total_active_anon
- total_pgpgout
- total_cache
- inactive_anon
- active_file
- pgfault
- inactive_file
- total_pgpgin
- max_usage
- usage
- fail_count
- limit
- docker_cpu
- throttling_periods
- throttling_throttled_periods
- throttling_throttled_time
- usage_in_kernelmode
- usage_in_usermode
- usage_system
- usage_total
- docker_net
- rx_dropped
- rx_bytes
- rx_errors
- tx_packets
- tx_dropped
- rx_packets
- tx_errors
- tx_bytes
- docker_blkio
- io_service_bytes_recursive_async
- io_service_bytes_recursive_read
- io_service_bytes_recursive_sync
- io_service_bytes_recursive_total
- io_service_bytes_recursive_write
- io_serviced_recursive_async
- io_serviced_recursive_read
- io_serviced_recursive_sync
- io_serviced_recursive_total
- io_serviced_recursive_write
### Tags:
- All stats have the following tags:
- cont_id (container ID)
- cont_image (container image)
- cont_name (container name)
- docker_cpu specific:
- cpu
- docker_net specific:
- network
- docker_blkio specific:
- device
### Example Output:
```
% ./telegraf -config ~/ws/telegraf.conf -input-filter docker -test
* Plugin: docker, Collection 1
> docker_mem,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
cont_image=spotify/kafka,cont_name=kafka \
active_anon=52568064i,active_file=6926336i,cache=12038144i,fail_count=0i,\
hierarchical_memory_limit=9223372036854771712i,inactive_anon=52707328i,\
inactive_file=5111808i,limit=1044578304i,mapped_file=10301440i,\
max_usage=140656640i,pgfault=63762i,pgmajfault=2837i,pgpgin=73355i,\
pgpgout=45736i,rss=105275392i,rss_huge=4194304i,total_active_anon=52568064i,\
total_active_file=6926336i,total_cache=12038144i,total_inactive_anon=52707328i,\
total_inactive_file=5111808i,total_mapped_file=10301440i,total_pgfault=63762i,\
total_pgmafault=0i,total_pgpgin=73355i,total_pgpgout=45736i,\
total_rss=105275392i,total_rss_huge=4194304i,total_unevictable=0i,\
total_writeback=0i,unevictable=0i,usage=117440512i,writeback=0i 1453409536840126713
> docker_cpu,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
cont_image=spotify/kafka,cont_name=kafka,cpu=cpu-total \
throttling_periods=0i,throttling_throttled_periods=0i,\
throttling_throttled_time=0i,usage_in_kernelmode=440000000i,\
usage_in_usermode=2290000000i,usage_system=84795360000000i,\
usage_total=6628208865i 1453409536840126713
> docker_cpu,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
cont_image=spotify/kafka,cont_name=kafka,cpu=cpu0 \
usage_total=6628208865i 1453409536840126713
> docker_net,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
cont_image=spotify/kafka,cont_name=kafka,network=eth0 \
rx_bytes=7468i,rx_dropped=0i,rx_errors=0i,rx_packets=94i,tx_bytes=946i,\
tx_dropped=0i,tx_errors=0i,tx_packets=13i 1453409536840126713
> docker_blkio,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
cont_image=spotify/kafka,cont_name=kafka,device=8:0 \
io_service_bytes_recursive_async=80216064i,io_service_bytes_recursive_read=79925248i,\
io_service_bytes_recursive_sync=77824i,io_service_bytes_recursive_total=80293888i,\
io_service_bytes_recursive_write=368640i,io_serviced_recursive_async=6562i,\
io_serviced_recursive_read=6492i,io_serviced_recursive_sync=37i,\
io_serviced_recursive_total=6599i,io_serviced_recursive_write=107i 1453409536840126713
```

View File

@ -0,0 +1,312 @@
package system
import (
"fmt"
"strings"
"sync"
"time"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/fsouza/go-dockerclient"
)
type Docker struct {
Endpoint string
ContainerNames []string
client *docker.Client
}
var sampleConfig = `
# Docker Endpoint
# To use TCP, set endpoint = "tcp://[ip]:[port]"
# To use environment variables (i.e., docker-machine), set endpoint = "ENV"
endpoint = "unix:///var/run/docker.sock"
# Only collect metrics for these containers, collect all if empty
container_names = []
`
func (d *Docker) Description() string {
return "Read metrics about docker containers"
}
func (d *Docker) SampleConfig() string { return sampleConfig }
func (d *Docker) Gather(acc inputs.Accumulator) error {
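// The client is created lazily on the first Gather so the configured
// endpoint ("ENV", the default unix socket when empty, or an explicit
// address) is resolved at collection time.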
if d.client == nil {
var c *docker.Client
var err error
if d.Endpoint == "ENV" {
c, err = docker.NewClientFromEnv()
if err != nil {
return err
}
} else if d.Endpoint == "" {
c, err = docker.NewClient("unix:///var/run/docker.sock")
if err != nil {
return err
}
} else {
c, err = docker.NewClient(d.Endpoint)
if err != nil {
return err
}
}
d.client = c
}
opts := docker.ListContainersOptions{}
containers, err := d.client.ListContainers(opts)
if err != nil {
return err
}
var wg sync.WaitGroup
wg.Add(len(containers))
for _, container := range containers {
go func(c docker.APIContainers) {
defer wg.Done()
err := d.gatherContainer(c, acc)
if err != nil {
fmt.Println(err.Error())
}
}(container)
}
wg.Wait()
return nil
}
func (d *Docker) gatherContainer(
container docker.APIContainers,
acc inputs.Accumulator,
) error {
// Parse container name
cname := "unknown"
if len(container.Names) > 0 {
// Not sure what to do with other names, just take the first.
cname = strings.TrimPrefix(container.Names[0], "/")
}
tags := map[string]string{
"cont_id": container.ID,
"cont_name": cname,
"cont_image": container.Image,
}
if len(d.ContainerNames) > 0 {
if !sliceContains(cname, d.ContainerNames) {
return nil
}
}
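// With Stream set to false below, the docker client delivers a single
// *docker.Stats on statChan before Stats() returns, so one receive
// from the channel is sufficient.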
statChan := make(chan *docker.Stats)
done := make(chan bool)
statOpts := docker.StatsOptions{
Stream: false,
ID: container.ID,
Stats: statChan,
Done: done,
Timeout: 5 * time.Second,
}
var err error
go func() {
err = d.client.Stats(statOpts)
}()
stat := <-statChan
if err != nil {
return err
}
// Add labels to tags
for k, v := range container.Labels {
tags[k] = v
}
gatherContainerStats(stat, acc, tags)
return nil
}
func gatherContainerStats(
stat *docker.Stats,
acc inputs.Accumulator,
tags map[string]string,
) {
now := stat.Read
memfields := map[string]interface{}{
"max_usage": stat.MemoryStats.MaxUsage,
"usage": stat.MemoryStats.Usage,
"fail_count": stat.MemoryStats.Failcnt,
"limit": stat.MemoryStats.Limit,
"total_pgmafault": stat.MemoryStats.Stats.TotalPgmafault,
"cache": stat.MemoryStats.Stats.Cache,
"mapped_file": stat.MemoryStats.Stats.MappedFile,
"total_inactive_file": stat.MemoryStats.Stats.TotalInactiveFile,
"pgpgout": stat.MemoryStats.Stats.Pgpgout,
"rss": stat.MemoryStats.Stats.Rss,
"total_mapped_file": stat.MemoryStats.Stats.TotalMappedFile,
"writeback": stat.MemoryStats.Stats.Writeback,
"unevictable": stat.MemoryStats.Stats.Unevictable,
"pgpgin": stat.MemoryStats.Stats.Pgpgin,
"total_unevictable": stat.MemoryStats.Stats.TotalUnevictable,
"pgmajfault": stat.MemoryStats.Stats.Pgmajfault,
"total_rss": stat.MemoryStats.Stats.TotalRss,
"total_rss_huge": stat.MemoryStats.Stats.TotalRssHuge,
"total_writeback": stat.MemoryStats.Stats.TotalWriteback,
"total_inactive_anon": stat.MemoryStats.Stats.TotalInactiveAnon,
"rss_huge": stat.MemoryStats.Stats.RssHuge,
"hierarchical_memory_limit": stat.MemoryStats.Stats.HierarchicalMemoryLimit,
"total_pgfault": stat.MemoryStats.Stats.TotalPgfault,
"total_active_file": stat.MemoryStats.Stats.TotalActiveFile,
"active_anon": stat.MemoryStats.Stats.ActiveAnon,
"total_active_anon": stat.MemoryStats.Stats.TotalActiveAnon,
"total_pgpgout": stat.MemoryStats.Stats.TotalPgpgout,
"total_cache": stat.MemoryStats.Stats.TotalCache,
"inactive_anon": stat.MemoryStats.Stats.InactiveAnon,
"active_file": stat.MemoryStats.Stats.ActiveFile,
"pgfault": stat.MemoryStats.Stats.Pgfault,
"inactive_file": stat.MemoryStats.Stats.InactiveFile,
"total_pgpgin": stat.MemoryStats.Stats.TotalPgpgin,
}
acc.AddFields("docker_mem", memfields, tags, now)
cpufields := map[string]interface{}{
"usage_total": stat.CPUStats.CPUUsage.TotalUsage,
"usage_in_usermode": stat.CPUStats.CPUUsage.UsageInUsermode,
"usage_in_kernelmode": stat.CPUStats.CPUUsage.UsageInKernelmode,
"usage_system": stat.CPUStats.SystemCPUUsage,
"throttling_periods": stat.CPUStats.ThrottlingData.Periods,
"throttling_throttled_periods": stat.CPUStats.ThrottlingData.ThrottledPeriods,
"throttling_throttled_time": stat.CPUStats.ThrottlingData.ThrottledTime,
}
cputags := copyTags(tags)
cputags["cpu"] = "cpu-total"
acc.AddFields("docker_cpu", cpufields, cputags, now)
for i, percpu := range stat.CPUStats.CPUUsage.PercpuUsage {
percputags := copyTags(tags)
percputags["cpu"] = fmt.Sprintf("cpu%d", i)
acc.AddFields("docker_cpu", map[string]interface{}{"usage_total": percpu}, percputags, now)
}
for network, netstats := range stat.Networks {
netfields := map[string]interface{}{
"rx_dropped": netstats.RxDropped,
"rx_bytes": netstats.RxBytes,
"rx_errors": netstats.RxErrors,
"tx_packets": netstats.TxPackets,
"tx_dropped": netstats.TxDropped,
"rx_packets": netstats.RxPackets,
"tx_errors": netstats.TxErrors,
"tx_bytes": netstats.TxBytes,
}
// Create a new network tag dictionary for the "network" tag
nettags := copyTags(tags)
nettags["network"] = network
acc.AddFields("docker_net", netfields, nettags, now)
}
gatherBlockIOMetrics(stat, acc, tags, now)
}
func gatherBlockIOMetrics(
stat *docker.Stats,
acc inputs.Accumulator,
tags map[string]string,
now time.Time,
) {
blkioStats := stat.BlkioStats
// Make a map of devices to their block io stats
deviceStatMap := make(map[string]map[string]interface{})
for _, metric := range blkioStats.IOServiceBytesRecursive {
device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
_, ok := deviceStatMap[device]
if !ok {
deviceStatMap[device] = make(map[string]interface{})
}
field := fmt.Sprintf("io_service_bytes_recursive_%s", strings.ToLower(metric.Op))
deviceStatMap[device][field] = metric.Value
}
for _, metric := range blkioStats.IOServicedRecursive {
device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
_, ok := deviceStatMap[device]
if !ok {
deviceStatMap[device] = make(map[string]interface{})
}
field := fmt.Sprintf("io_serviced_recursive_%s", strings.ToLower(metric.Op))
deviceStatMap[device][field] = metric.Value
}
// A device can first appear in any of the lists below; initialize its
// entry before assigning, since writing into a nil map panics.
for _, metric := range blkioStats.IOQueueRecursive {
device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
if _, ok := deviceStatMap[device]; !ok {
deviceStatMap[device] = make(map[string]interface{})
}
field := fmt.Sprintf("io_queue_recursive_%s", strings.ToLower(metric.Op))
deviceStatMap[device][field] = metric.Value
}
for _, metric := range blkioStats.IOServiceTimeRecursive {
device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
if _, ok := deviceStatMap[device]; !ok {
deviceStatMap[device] = make(map[string]interface{})
}
field := fmt.Sprintf("io_service_time_recursive_%s", strings.ToLower(metric.Op))
deviceStatMap[device][field] = metric.Value
}
for _, metric := range blkioStats.IOWaitTimeRecursive {
device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
if _, ok := deviceStatMap[device]; !ok {
deviceStatMap[device] = make(map[string]interface{})
}
field := fmt.Sprintf("io_wait_time_%s", strings.ToLower(metric.Op))
deviceStatMap[device][field] = metric.Value
}
for _, metric := range blkioStats.IOMergedRecursive {
device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
if _, ok := deviceStatMap[device]; !ok {
deviceStatMap[device] = make(map[string]interface{})
}
field := fmt.Sprintf("io_merged_recursive_%s", strings.ToLower(metric.Op))
deviceStatMap[device][field] = metric.Value
}
for _, metric := range blkioStats.IOTimeRecursive {
device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
if _, ok := deviceStatMap[device]; !ok {
deviceStatMap[device] = make(map[string]interface{})
}
field := fmt.Sprintf("io_time_recursive_%s", strings.ToLower(metric.Op))
deviceStatMap[device][field] = metric.Value
}
for _, metric := range blkioStats.SectorsRecursive {
device := fmt.Sprintf("%d:%d", metric.Major, metric.Minor)
if _, ok := deviceStatMap[device]; !ok {
deviceStatMap[device] = make(map[string]interface{})
}
field := fmt.Sprintf("sectors_recursive_%s", strings.ToLower(metric.Op))
deviceStatMap[device][field] = metric.Value
}
for device, fields := range deviceStatMap {
iotags := copyTags(tags)
iotags["device"] = device
acc.AddFields("docker_blkio", fields, iotags, now)
}
}
func copyTags(in map[string]string) map[string]string {
out := make(map[string]string)
for k, v := range in {
out[k] = v
}
return out
}
func sliceContains(in string, sl []string) bool {
for _, str := range sl {
if str == in {
return true
}
}
return false
}
func init() {
inputs.Add("docker", func() inputs.Input {
return &Docker{}
})
}

View File

@ -0,0 +1,190 @@
package system
import (
"testing"
"time"
"github.com/influxdata/telegraf/testutil"
"github.com/fsouza/go-dockerclient"
)
func TestDockerGatherContainerStats(t *testing.T) {
var acc testutil.Accumulator
stats := testStats()
tags := map[string]string{
"cont_id": "foobarbaz",
"cont_name": "redis",
"cont_image": "redis/image",
}
gatherContainerStats(stats, &acc, tags)
// test docker_net measurement
netfields := map[string]interface{}{
"rx_dropped": uint64(1),
"rx_bytes": uint64(2),
"rx_errors": uint64(3),
"tx_packets": uint64(4),
"tx_dropped": uint64(1),
"rx_packets": uint64(2),
"tx_errors": uint64(3),
"tx_bytes": uint64(4),
}
nettags := copyTags(tags)
nettags["network"] = "eth0"
acc.AssertContainsTaggedFields(t, "docker_net", netfields, nettags)
// test docker_blkio measurement
blkiotags := copyTags(tags)
blkiotags["device"] = "6:0"
blkiofields := map[string]interface{}{
"io_service_bytes_recursive_read": uint64(100),
"io_serviced_recursive_write": uint64(101),
}
acc.AssertContainsTaggedFields(t, "docker_blkio", blkiofields, blkiotags)
// test docker_mem measurement
memfields := map[string]interface{}{
"max_usage": uint64(1001),
"usage": uint64(1111),
"fail_count": uint64(1),
"limit": uint64(20),
"total_pgmafault": uint64(0),
"cache": uint64(0),
"mapped_file": uint64(0),
"total_inactive_file": uint64(0),
"pgpgout": uint64(0),
"rss": uint64(0),
"total_mapped_file": uint64(0),
"writeback": uint64(0),
"unevictable": uint64(0),
"pgpgin": uint64(0),
"total_unevictable": uint64(0),
"pgmajfault": uint64(0),
"total_rss": uint64(44),
"total_rss_huge": uint64(444),
"total_writeback": uint64(55),
"total_inactive_anon": uint64(0),
"rss_huge": uint64(0),
"hierarchical_memory_limit": uint64(0),
"total_pgfault": uint64(0),
"total_active_file": uint64(0),
"active_anon": uint64(0),
"total_active_anon": uint64(0),
"total_pgpgout": uint64(0),
"total_cache": uint64(0),
"inactive_anon": uint64(0),
"active_file": uint64(1),
"pgfault": uint64(2),
"inactive_file": uint64(3),
"total_pgpgin": uint64(4),
}
acc.AssertContainsTaggedFields(t, "docker_mem", memfields, tags)
// test docker_cpu measurement
cputags := copyTags(tags)
cputags["cpu"] = "cpu-total"
cpufields := map[string]interface{}{
"usage_total": uint64(500),
"usage_in_usermode": uint64(100),
"usage_in_kernelmode": uint64(200),
"usage_system": uint64(100),
"throttling_periods": uint64(1),
"throttling_throttled_periods": uint64(0),
"throttling_throttled_time": uint64(0),
}
acc.AssertContainsTaggedFields(t, "docker_cpu", cpufields, cputags)
cputags["cpu"] = "cpu0"
cpu0fields := map[string]interface{}{
"usage_total": uint64(1),
}
acc.AssertContainsTaggedFields(t, "docker_cpu", cpu0fields, cputags)
cputags["cpu"] = "cpu1"
cpu1fields := map[string]interface{}{
"usage_total": uint64(1002),
}
acc.AssertContainsTaggedFields(t, "docker_cpu", cpu1fields, cputags)
}
func testStats() *docker.Stats {
stats := &docker.Stats{
Read: time.Now(),
Networks: make(map[string]docker.NetworkStats),
}
stats.CPUStats.CPUUsage.PercpuUsage = []uint64{1, 1002}
stats.CPUStats.CPUUsage.UsageInUsermode = 100
stats.CPUStats.CPUUsage.TotalUsage = 500
stats.CPUStats.CPUUsage.UsageInKernelmode = 200
stats.CPUStats.SystemCPUUsage = 100
stats.CPUStats.ThrottlingData.Periods = 1
stats.MemoryStats.Stats.TotalPgmafault = 0
stats.MemoryStats.Stats.Cache = 0
stats.MemoryStats.Stats.MappedFile = 0
stats.MemoryStats.Stats.TotalInactiveFile = 0
stats.MemoryStats.Stats.Pgpgout = 0
stats.MemoryStats.Stats.Rss = 0
stats.MemoryStats.Stats.TotalMappedFile = 0
stats.MemoryStats.Stats.Writeback = 0
stats.MemoryStats.Stats.Unevictable = 0
stats.MemoryStats.Stats.Pgpgin = 0
stats.MemoryStats.Stats.TotalUnevictable = 0
stats.MemoryStats.Stats.Pgmajfault = 0
stats.MemoryStats.Stats.TotalRss = 44
stats.MemoryStats.Stats.TotalRssHuge = 444
stats.MemoryStats.Stats.TotalWriteback = 55
stats.MemoryStats.Stats.TotalInactiveAnon = 0
stats.MemoryStats.Stats.RssHuge = 0
stats.MemoryStats.Stats.HierarchicalMemoryLimit = 0
stats.MemoryStats.Stats.TotalPgfault = 0
stats.MemoryStats.Stats.TotalActiveFile = 0
stats.MemoryStats.Stats.ActiveAnon = 0
stats.MemoryStats.Stats.TotalActiveAnon = 0
stats.MemoryStats.Stats.TotalPgpgout = 0
stats.MemoryStats.Stats.TotalCache = 0
stats.MemoryStats.Stats.InactiveAnon = 0
stats.MemoryStats.Stats.ActiveFile = 1
stats.MemoryStats.Stats.Pgfault = 2
stats.MemoryStats.Stats.InactiveFile = 3
stats.MemoryStats.Stats.TotalPgpgin = 4
stats.MemoryStats.MaxUsage = 1001
stats.MemoryStats.Usage = 1111
stats.MemoryStats.Failcnt = 1
stats.MemoryStats.Limit = 20
stats.Networks["eth0"] = docker.NetworkStats{
RxDropped: 1,
RxBytes: 2,
RxErrors: 3,
TxPackets: 4,
TxDropped: 1,
RxPackets: 2,
TxErrors: 3,
TxBytes: 4,
}
sbr := docker.BlkioStatsEntry{
Major: 6,
Minor: 0,
Op: "read",
Value: 100,
}
sr := docker.BlkioStatsEntry{
Major: 6,
Minor: 0,
Op: "write",
Value: 101,
}
stats.BlkioStats.IOServiceBytesRecursive = append(
stats.BlkioStats.IOServiceBytesRecursive, sbr)
stats.BlkioStats.IOServicedRecursive = append(
stats.BlkioStats.IOServicedRecursive, sr)
return stats
}

View File

@ -31,8 +31,9 @@ contains `status`, `timed_out`, `number_of_nodes`, `number_of_data_nodes`,
`initializing_shards`, `unassigned_shards` fields `initializing_shards`, `unassigned_shards` fields
- elasticsearch_cluster_health - elasticsearch_cluster_health
contains `status`, `number_of_shards`, `number_of_replicas`, `active_primary_shards`, contains `status`, `number_of_shards`, `number_of_replicas`,
`active_shards`, `relocating_shards`, `initializing_shards`, `unassigned_shards` fields `active_primary_shards`, `active_shards`, `relocating_shards`,
`initializing_shards`, `unassigned_shards` fields
- elasticsearch_indices - elasticsearch_indices
#### node measurements: #### node measurements:

View File

@ -2,11 +2,15 @@ package elasticsearch
import ( import (
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"net/http" "net/http"
"strings"
"sync"
"time" "time"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
) )
const statsPath = "/_nodes/stats" const statsPath = "/_nodes/stats"
@ -91,25 +95,45 @@ func (e *Elasticsearch) Description() string {
// Gather reads the stats from Elasticsearch and writes it to the // Gather reads the stats from Elasticsearch and writes it to the
// Accumulator. // Accumulator.
func (e *Elasticsearch) Gather(acc plugins.Accumulator) error { func (e *Elasticsearch) Gather(acc inputs.Accumulator) error {
errChan := make(chan error, len(e.Servers))
var wg sync.WaitGroup
wg.Add(len(e.Servers))
for _, serv := range e.Servers { for _, serv := range e.Servers {
go func(s string, acc inputs.Accumulator) {
defer wg.Done()
var url string var url string
if e.Local { if e.Local {
url = serv + statsPathLocal url = s + statsPathLocal
} else { } else {
url = serv + statsPath url = s + statsPath
} }
if err := e.gatherNodeStats(url, acc); err != nil { if err := e.gatherNodeStats(url, acc); err != nil {
return err errChan <- err
return
} }
if e.ClusterHealth { if e.ClusterHealth {
e.gatherClusterStats(fmt.Sprintf("%s/_cluster/health?level=indices", serv), acc) e.gatherClusterStats(fmt.Sprintf("%s/_cluster/health?level=indices", s), acc)
} }
}(serv, acc)
} }
wg.Wait()
close(errChan)
// Get all errors and return them as one giant error
errStrings := []string{}
for err := range errChan {
errStrings = append(errStrings, err.Error())
}
if len(errStrings) == 0 {
return nil return nil
}
return errors.New(strings.Join(errStrings, "\n"))
} }
func (e *Elasticsearch) gatherNodeStats(url string, acc plugins.Accumulator) error { func (e *Elasticsearch) gatherNodeStats(url string, acc inputs.Accumulator) error {
nodeStats := &struct { nodeStats := &struct {
ClusterName string `json:"cluster_name"` ClusterName string `json:"cluster_name"`
Nodes map[string]*node `json:"nodes"` Nodes map[string]*node `json:"nodes"`
@ -141,16 +165,20 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc plugins.Accumulator) err
"breakers": n.Breakers, "breakers": n.Breakers,
} }
now := time.Now()
for p, s := range stats { for p, s := range stats {
if err := e.parseInterface(acc, p, tags, s); err != nil { f := internal.JSONFlattener{}
err := f.FlattenJSON("", s)
if err != nil {
return err return err
} }
acc.AddFields("elasticsearch_"+p, f.Fields, tags, now)
} }
} }
return nil return nil
} }
func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator) error { func (e *Elasticsearch) gatherClusterStats(url string, acc inputs.Accumulator) error {
clusterStats := &clusterHealth{} clusterStats := &clusterHealth{}
if err := e.gatherData(url, clusterStats); err != nil { if err := e.gatherData(url, clusterStats); err != nil {
return err return err
@ -168,7 +196,7 @@ func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator)
"unassigned_shards": clusterStats.UnassignedShards, "unassigned_shards": clusterStats.UnassignedShards,
} }
acc.AddFields( acc.AddFields(
"cluster_health", "elasticsearch_cluster_health",
clusterFields, clusterFields,
map[string]string{"name": clusterStats.ClusterName}, map[string]string{"name": clusterStats.ClusterName},
measurementTime, measurementTime,
@ -186,7 +214,7 @@ func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator)
"unassigned_shards": health.UnassignedShards, "unassigned_shards": health.UnassignedShards,
} }
acc.AddFields( acc.AddFields(
"indices", "elasticsearch_indices",
indexFields, indexFields,
map[string]string{"index": name}, map[string]string{"index": name},
measurementTime, measurementTime,
@ -205,7 +233,8 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
// NOTE: we are not going to read/discard r.Body under the assumption we'd prefer // NOTE: we are not going to read/discard r.Body under the assumption we'd prefer
// to let the underlying transport close the connection and re-establish a new one for // to let the underlying transport close the connection and re-establish a new one for
// future calls. // future calls.
return fmt.Errorf("elasticsearch: API responded with status-code %d, expected %d", r.StatusCode, http.StatusOK) return fmt.Errorf("elasticsearch: API responded with status-code %d, expected %d",
r.StatusCode, http.StatusOK)
} }
if err = json.NewDecoder(r.Body).Decode(v); err != nil { if err = json.NewDecoder(r.Body).Decode(v); err != nil {
return err return err
@ -213,27 +242,8 @@ func (e *Elasticsearch) gatherData(url string, v interface{}) error {
return nil return nil
} }
func (e *Elasticsearch) parseInterface(acc plugins.Accumulator, prefix string, tags map[string]string, v interface{}) error {
switch t := v.(type) {
case map[string]interface{}:
for k, v := range t {
if err := e.parseInterface(acc, prefix+"_"+k, tags, v); err != nil {
return err
}
}
case float64:
acc.Add(prefix, t, tags)
case bool, string, []interface{}:
// ignored types
return nil
default:
return fmt.Errorf("elasticsearch: got unexpected type %T with value %v (%s)", t, t, prefix)
}
return nil
}
func init() { func init() {
plugins.Add("elasticsearch", func() plugins.Plugin { inputs.Add("elasticsearch", func() inputs.Input {
return NewElasticsearch() return NewElasticsearch()
}) })
} }
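The new Gather above fans one goroutine out per configured server and folds any failures into a single error once the WaitGroup completes. Distilled into a standalone sketch (names illustrative, not the plugin's actual API):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"sync"
)

// gatherAll mirrors the fan-out in the new Gather: one goroutine per
// server, errors funneled through a buffered channel, then joined into
// one error after wg.Wait().
func gatherAll(servers []string, gather func(string) error) error {
	errChan := make(chan error, len(servers))
	var wg sync.WaitGroup
	wg.Add(len(servers))
	for _, s := range servers {
		go func(s string) {
			defer wg.Done()
			if err := gather(s); err != nil {
				errChan <- err
			}
		}(s)
	}
	wg.Wait()
	close(errChan)
	var msgs []string
	for err := range errChan {
		msgs = append(msgs, err.Error())
	}
	if len(msgs) == 0 {
		return nil
	}
	return errors.New(strings.Join(msgs, "\n"))
}

func main() {
	err := gatherAll([]string{"http://localhost:9200"}, func(s string) error {
		return fmt.Errorf("unreachable: %s", s)
	})
	fmt.Println(err)
}
```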

View File

@ -6,8 +6,8 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -52,23 +52,15 @@ func TestElasticsearch(t *testing.T) {
"node_host": "test", "node_host": "test",
} }
testTables := []map[string]float64{ acc.AssertContainsTaggedFields(t, "elasticsearch_indices", indicesExpected, tags)
indicesExpected, acc.AssertContainsTaggedFields(t, "elasticsearch_os", osExpected, tags)
osExpected, acc.AssertContainsTaggedFields(t, "elasticsearch_process", processExpected, tags)
processExpected, acc.AssertContainsTaggedFields(t, "elasticsearch_jvm", jvmExpected, tags)
jvmExpected, acc.AssertContainsTaggedFields(t, "elasticsearch_thread_pool", threadPoolExpected, tags)
threadPoolExpected, acc.AssertContainsTaggedFields(t, "elasticsearch_fs", fsExpected, tags)
fsExpected, acc.AssertContainsTaggedFields(t, "elasticsearch_transport", transportExpected, tags)
transportExpected, acc.AssertContainsTaggedFields(t, "elasticsearch_http", httpExpected, tags)
httpExpected, acc.AssertContainsTaggedFields(t, "elasticsearch_breakers", breakersExpected, tags)
breakersExpected,
}
for _, testTable := range testTables {
for k, v := range testTable {
assert.NoError(t, acc.ValidateTaggedValue(k, v, tags))
}
}
} }
func TestGatherClusterStats(t *testing.T) { func TestGatherClusterStats(t *testing.T) {
@ -80,29 +72,15 @@ func TestGatherClusterStats(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
require.NoError(t, es.Gather(&acc)) require.NoError(t, es.Gather(&acc))
var clusterHealthTests = []struct { acc.AssertContainsTaggedFields(t, "elasticsearch_cluster_health",
measurement string
fields map[string]interface{}
tags map[string]string
}{
{
"cluster_health",
clusterHealthExpected, clusterHealthExpected,
map[string]string{"name": "elasticsearch_telegraf"}, map[string]string{"name": "elasticsearch_telegraf"})
},
{
"indices",
v1IndexExpected,
map[string]string{"index": "v1"},
},
{
"indices",
v2IndexExpected,
map[string]string{"index": "v2"},
},
}
for _, exp := range clusterHealthTests { acc.AssertContainsTaggedFields(t, "elasticsearch_indices",
assert.NoError(t, acc.ValidateTaggedFields(exp.measurement, exp.fields, exp.tags)) v1IndexExpected,
} map[string]string{"index": "v1"})
acc.AssertContainsTaggedFields(t, "elasticsearch_indices",
v2IndexExpected,
map[string]string{"index": "v2"})
} }

View File

@ -0,0 +1,765 @@
package elasticsearch
const clusterResponse = `
{
"cluster_name": "elasticsearch_telegraf",
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
"indices": {
"v1": {
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0
},
"v2": {
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20
}
}
}
`
var clusterHealthExpected = map[string]interface{}{
"status": "green",
"timed_out": false,
"number_of_nodes": 3,
"number_of_data_nodes": 3,
"active_primary_shards": 5,
"active_shards": 15,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v1IndexExpected = map[string]interface{}{
"status": "green",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 10,
"active_shards": 20,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 0,
}
var v2IndexExpected = map[string]interface{}{
"status": "red",
"number_of_shards": 10,
"number_of_replicas": 1,
"active_primary_shards": 0,
"active_shards": 0,
"relocating_shards": 0,
"initializing_shards": 0,
"unassigned_shards": 20,
}
const statsResponse = `
{
"cluster_name": "es-testcluster",
"nodes": {
"SDFsfSDFsdfFSDSDfSFDSDF": {
"timestamp": 1436365550135,
"name": "test.host.com",
"transport_address": "inet[/127.0.0.1:9300]",
"host": "test",
"ip": [
"inet[/127.0.0.1:9300]",
"NONE"
],
"attributes": {
"master": "true"
},
"indices": {
"docs": {
"count": 29652,
"deleted": 5229
},
"store": {
"size_in_bytes": 37715234,
"throttle_time_in_millis": 215
},
"indexing": {
"index_total": 84790,
"index_time_in_millis": 29680,
"index_current": 0,
"delete_total": 13879,
"delete_time_in_millis": 1139,
"delete_current": 0,
"noop_update_total": 0,
"is_throttled": false,
"throttle_time_in_millis": 0
},
"get": {
"total": 1,
"time_in_millis": 2,
"exists_total": 0,
"exists_time_in_millis": 0,
"missing_total": 1,
"missing_time_in_millis": 2,
"current": 0
},
"search": {
"open_contexts": 0,
"query_total": 1452,
"query_time_in_millis": 5695,
"query_current": 0,
"fetch_total": 414,
"fetch_time_in_millis": 146,
"fetch_current": 0
},
"merges": {
"current": 0,
"current_docs": 0,
"current_size_in_bytes": 0,
"total": 133,
"total_time_in_millis": 21060,
"total_docs": 203672,
"total_size_in_bytes": 142900226
},
"refresh": {
"total": 1076,
"total_time_in_millis": 20078
},
"flush": {
"total": 115,
"total_time_in_millis": 2401
},
"warmer": {
"current": 0,
"total": 2319,
"total_time_in_millis": 448
},
"filter_cache": {
"memory_size_in_bytes": 7384,
"evictions": 0
},
"id_cache": {
"memory_size_in_bytes": 0
},
"fielddata": {
"memory_size_in_bytes": 12996,
"evictions": 0
},
"percolate": {
"total": 0,
"time_in_millis": 0,
"current": 0,
"memory_size_in_bytes": -1,
"memory_size": "-1b",
"queries": 0
},
"completion": {
"size_in_bytes": 0
},
"segments": {
"count": 134,
"memory_in_bytes": 1285212,
"index_writer_memory_in_bytes": 0,
"index_writer_max_memory_in_bytes": 172368955,
"version_map_memory_in_bytes": 611844,
"fixed_bit_set_memory_in_bytes": 0
},
"translog": {
"operations": 17702,
"size_in_bytes": 17
},
"suggest": {
"total": 0,
"time_in_millis": 0,
"current": 0
},
"query_cache": {
"memory_size_in_bytes": 0,
"evictions": 0,
"hit_count": 0,
"miss_count": 0
},
"recovery": {
"current_as_source": 0,
"current_as_target": 0,
"throttle_time_in_millis": 0
}
},
"os": {
"timestamp": 1436460392944,
"load_average": [
0.01,
0.04,
0.05
],
"mem": {
"free_in_bytes": 477761536,
"used_in_bytes": 1621868544,
"free_percent": 74,
"used_percent": 25,
"actual_free_in_bytes": 1565470720,
"actual_used_in_bytes": 534159360
},
"swap": {
"used_in_bytes": 0,
"free_in_bytes": 487997440
}
},
"process": {
"timestamp": 1436460392945,
"open_file_descriptors": 160,
"cpu": {
"percent": 2,
"sys_in_millis": 1870,
"user_in_millis": 13610,
"total_in_millis": 15480
},
"mem": {
"total_virtual_in_bytes": 4747890688
}
},
"jvm": {
"timestamp": 1436460392945,
"uptime_in_millis": 202245,
"mem": {
"heap_used_in_bytes": 52709568,
"heap_used_percent": 5,
"heap_committed_in_bytes": 259522560,
"heap_max_in_bytes": 1038876672,
"non_heap_used_in_bytes": 39634576,
"non_heap_committed_in_bytes": 40841216,
"pools": {
"young": {
"used_in_bytes": 32685760,
"max_in_bytes": 279183360,
"peak_used_in_bytes": 71630848,
"peak_max_in_bytes": 279183360
},
"survivor": {
"used_in_bytes": 8912880,
"max_in_bytes": 34865152,
"peak_used_in_bytes": 8912888,
"peak_max_in_bytes": 34865152
},
"old": {
"used_in_bytes": 11110928,
"max_in_bytes": 724828160,
"peak_used_in_bytes": 14354608,
"peak_max_in_bytes": 724828160
}
}
},
"threads": {
"count": 44,
"peak_count": 45
},
"gc": {
"collectors": {
"young": {
"collection_count": 2,
"collection_time_in_millis": 98
},
"old": {
"collection_count": 1,
"collection_time_in_millis": 24
}
}
},
"buffer_pools": {
"direct": {
"count": 40,
"used_in_bytes": 6304239,
"total_capacity_in_bytes": 6304239
},
"mapped": {
"count": 0,
"used_in_bytes": 0,
"total_capacity_in_bytes": 0
}
}
},
"thread_pool": {
"percolate": {
"threads": 123,
"queue": 23,
"active": 13,
"rejected": 235,
"largest": 23,
"completed": 33
},
"fetch_shard_started": {
"threads": 3,
"queue": 1,
"active": 5,
"rejected": 6,
"largest": 4,
"completed": 54
},
"listener": {
"threads": 1,
"queue": 2,
"active": 4,
"rejected": 8,
"largest": 1,
"completed": 1
},
"index": {
"threads": 6,
"queue": 8,
"active": 4,
"rejected": 2,
"largest": 3,
"completed": 6
},
"refresh": {
"threads": 23,
"queue": 7,
"active": 3,
"rejected": 4,
"largest": 8,
"completed": 3
},
"suggest": {
"threads": 2,
"queue": 7,
"active": 2,
"rejected": 1,
"largest": 8,
"completed": 3
},
"generic": {
"threads": 1,
"queue": 4,
"active": 6,
"rejected": 3,
"largest": 2,
"completed": 27
},
"warmer": {
"threads": 2,
"queue": 7,
"active": 3,
"rejected": 2,
"largest": 3,
"completed": 1
},
"search": {
"threads": 5,
"queue": 7,
"active": 2,
"rejected": 7,
"largest": 2,
"completed": 4
},
"flush": {
"threads": 3,
"queue": 8,
"active": 0,
"rejected": 1,
"largest": 5,
"completed": 3
},
"optimize": {
"threads": 3,
"queue": 4,
"active": 1,
"rejected": 2,
"largest": 7,
"completed": 3
},
"fetch_shard_store": {
"threads": 1,
"queue": 7,
"active": 4,
"rejected": 2,
"largest": 4,
"completed": 1
},
"management": {
"threads": 2,
"queue": 3,
"active": 1,
"rejected": 6,
"largest": 2,
"completed": 22
},
"get": {
"threads": 1,
"queue": 8,
"active": 4,
"rejected": 3,
"largest": 2,
"completed": 1
},
"merge": {
"threads": 6,
"queue": 4,
"active": 5,
"rejected": 2,
"largest": 5,
"completed": 1
},
"bulk": {
"threads": 4,
"queue": 5,
"active": 7,
"rejected": 3,
"largest": 1,
"completed": 4
},
"snapshot": {
"threads": 8,
"queue": 5,
"active": 6,
"rejected": 2,
"largest": 1,
"completed": 0
}
},
"fs": {
"timestamp": 1436460392946,
"total": {
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
},
"data": [
{
"path": "/usr/share/elasticsearch/data/elasticsearch/nodes/0",
"mount": "/usr/share/elasticsearch/data",
"type": "ext4",
"total_in_bytes": 19507089408,
"free_in_bytes": 16909316096,
"available_in_bytes": 15894814720
}
]
},
"transport": {
"server_open": 13,
"rx_count": 6,
"rx_size_in_bytes": 1380,
"tx_count": 6,
"tx_size_in_bytes": 1380
},
"http": {
"current_open": 3,
"total_opened": 3
},
"breakers": {
"fielddata": {
"limit_size_in_bytes": 623326003,
"limit_size": "594.4mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.03,
"tripped": 0
},
"request": {
"limit_size_in_bytes": 415550668,
"limit_size": "396.2mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
},
"parent": {
"limit_size_in_bytes": 727213670,
"limit_size": "693.5mb",
"estimated_size_in_bytes": 0,
"estimated_size": "0b",
"overhead": 1.0,
"tripped": 0
}
}
}
}
}
`
var indicesExpected = map[string]interface{}{
"id_cache_memory_size_in_bytes": float64(0),
"completion_size_in_bytes": float64(0),
"suggest_total": float64(0),
"suggest_time_in_millis": float64(0),
"suggest_current": float64(0),
"query_cache_memory_size_in_bytes": float64(0),
"query_cache_evictions": float64(0),
"query_cache_hit_count": float64(0),
"query_cache_miss_count": float64(0),
"store_size_in_bytes": float64(37715234),
"store_throttle_time_in_millis": float64(215),
"merges_current_docs": float64(0),
"merges_current_size_in_bytes": float64(0),
"merges_total": float64(133),
"merges_total_time_in_millis": float64(21060),
"merges_total_docs": float64(203672),
"merges_total_size_in_bytes": float64(142900226),
"merges_current": float64(0),
"filter_cache_memory_size_in_bytes": float64(7384),
"filter_cache_evictions": float64(0),
"indexing_index_total": float64(84790),
"indexing_index_time_in_millis": float64(29680),
"indexing_index_current": float64(0),
"indexing_noop_update_total": float64(0),
"indexing_throttle_time_in_millis": float64(0),
"indexing_delete_total": float64(13879),
"indexing_delete_time_in_millis": float64(1139),
"indexing_delete_current": float64(0),
"get_exists_time_in_millis": float64(0),
"get_missing_total": float64(1),
"get_missing_time_in_millis": float64(2),
"get_current": float64(0),
"get_total": float64(1),
"get_time_in_millis": float64(2),
"get_exists_total": float64(0),
"refresh_total": float64(1076),
"refresh_total_time_in_millis": float64(20078),
"percolate_current": float64(0),
"percolate_memory_size_in_bytes": float64(-1),
"percolate_queries": float64(0),
"percolate_total": float64(0),
"percolate_time_in_millis": float64(0),
"translog_operations": float64(17702),
"translog_size_in_bytes": float64(17),
"recovery_current_as_source": float64(0),
"recovery_current_as_target": float64(0),
"recovery_throttle_time_in_millis": float64(0),
"docs_count": float64(29652),
"docs_deleted": float64(5229),
"flush_total_time_in_millis": float64(2401),
"flush_total": float64(115),
"fielddata_memory_size_in_bytes": float64(12996),
"fielddata_evictions": float64(0),
"search_fetch_current": float64(0),
"search_open_contexts": float64(0),
"search_query_total": float64(1452),
"search_query_time_in_millis": float64(5695),
"search_query_current": float64(0),
"search_fetch_total": float64(414),
"search_fetch_time_in_millis": float64(146),
"warmer_current": float64(0),
"warmer_total": float64(2319),
"warmer_total_time_in_millis": float64(448),
"segments_count": float64(134),
"segments_memory_in_bytes": float64(1285212),
"segments_index_writer_memory_in_bytes": float64(0),
"segments_index_writer_max_memory_in_bytes": float64(172368955),
"segments_version_map_memory_in_bytes": float64(611844),
"segments_fixed_bit_set_memory_in_bytes": float64(0),
}
var osExpected = map[string]interface{}{
"load_average_0": float64(0.01),
"load_average_1": float64(0.04),
"load_average_2": float64(0.05),
"swap_used_in_bytes": float64(0),
"swap_free_in_bytes": float64(487997440),
"timestamp": float64(1436460392944),
"mem_free_percent": float64(74),
"mem_used_percent": float64(25),
"mem_actual_free_in_bytes": float64(1565470720),
"mem_actual_used_in_bytes": float64(534159360),
"mem_free_in_bytes": float64(477761536),
"mem_used_in_bytes": float64(1621868544),
}
var processExpected = map[string]interface{}{
"mem_total_virtual_in_bytes": float64(4747890688),
"timestamp": float64(1436460392945),
"open_file_descriptors": float64(160),
"cpu_total_in_millis": float64(15480),
"cpu_percent": float64(2),
"cpu_sys_in_millis": float64(1870),
"cpu_user_in_millis": float64(13610),
}
var jvmExpected = map[string]interface{}{
"timestamp": float64(1436460392945),
"uptime_in_millis": float64(202245),
"mem_non_heap_used_in_bytes": float64(39634576),
"mem_non_heap_committed_in_bytes": float64(40841216),
"mem_pools_young_max_in_bytes": float64(279183360),
"mem_pools_young_peak_used_in_bytes": float64(71630848),
"mem_pools_young_peak_max_in_bytes": float64(279183360),
"mem_pools_young_used_in_bytes": float64(32685760),
"mem_pools_survivor_peak_used_in_bytes": float64(8912888),
"mem_pools_survivor_peak_max_in_bytes": float64(34865152),
"mem_pools_survivor_used_in_bytes": float64(8912880),
"mem_pools_survivor_max_in_bytes": float64(34865152),
"mem_pools_old_peak_max_in_bytes": float64(724828160),
"mem_pools_old_used_in_bytes": float64(11110928),
"mem_pools_old_max_in_bytes": float64(724828160),
"mem_pools_old_peak_used_in_bytes": float64(14354608),
"mem_heap_used_in_bytes": float64(52709568),
"mem_heap_used_percent": float64(5),
"mem_heap_committed_in_bytes": float64(259522560),
"mem_heap_max_in_bytes": float64(1038876672),
"threads_peak_count": float64(45),
"threads_count": float64(44),
"gc_collectors_young_collection_count": float64(2),
"gc_collectors_young_collection_time_in_millis": float64(98),
"gc_collectors_old_collection_count": float64(1),
"gc_collectors_old_collection_time_in_millis": float64(24),
"buffer_pools_direct_count": float64(40),
"buffer_pools_direct_used_in_bytes": float64(6304239),
"buffer_pools_direct_total_capacity_in_bytes": float64(6304239),
"buffer_pools_mapped_count": float64(0),
"buffer_pools_mapped_used_in_bytes": float64(0),
"buffer_pools_mapped_total_capacity_in_bytes": float64(0),
}
var threadPoolExpected = map[string]interface{}{
"merge_threads": float64(6),
"merge_queue": float64(4),
"merge_active": float64(5),
"merge_rejected": float64(2),
"merge_largest": float64(5),
"merge_completed": float64(1),
"bulk_threads": float64(4),
"bulk_queue": float64(5),
"bulk_active": float64(7),
"bulk_rejected": float64(3),
"bulk_largest": float64(1),
"bulk_completed": float64(4),
"warmer_threads": float64(2),
"warmer_queue": float64(7),
"warmer_active": float64(3),
"warmer_rejected": float64(2),
"warmer_largest": float64(3),
"warmer_completed": float64(1),
"get_largest": float64(2),
"get_completed": float64(1),
"get_threads": float64(1),
"get_queue": float64(8),
"get_active": float64(4),
"get_rejected": float64(3),
"index_threads": float64(6),
"index_queue": float64(8),
"index_active": float64(4),
"index_rejected": float64(2),
"index_largest": float64(3),
"index_completed": float64(6),
"suggest_threads": float64(2),
"suggest_queue": float64(7),
"suggest_active": float64(2),
"suggest_rejected": float64(1),
"suggest_largest": float64(8),
"suggest_completed": float64(3),
"fetch_shard_store_queue": float64(7),
"fetch_shard_store_active": float64(4),
"fetch_shard_store_rejected": float64(2),
"fetch_shard_store_largest": float64(4),
"fetch_shard_store_completed": float64(1),
"fetch_shard_store_threads": float64(1),
"management_threads": float64(2),
"management_queue": float64(3),
"management_active": float64(1),
"management_rejected": float64(6),
"management_largest": float64(2),
"management_completed": float64(22),
"percolate_queue": float64(23),
"percolate_active": float64(13),
"percolate_rejected": float64(235),
"percolate_largest": float64(23),
"percolate_completed": float64(33),
"percolate_threads": float64(123),
"listener_active": float64(4),
"listener_rejected": float64(8),
"listener_largest": float64(1),
"listener_completed": float64(1),
"listener_threads": float64(1),
"listener_queue": float64(2),
"search_rejected": float64(7),
"search_largest": float64(2),
"search_completed": float64(4),
"search_threads": float64(5),
"search_queue": float64(7),
"search_active": float64(2),
"fetch_shard_started_threads": float64(3),
"fetch_shard_started_queue": float64(1),
"fetch_shard_started_active": float64(5),
"fetch_shard_started_rejected": float64(6),
"fetch_shard_started_largest": float64(4),
"fetch_shard_started_completed": float64(54),
"refresh_rejected": float64(4),
"refresh_largest": float64(8),
"refresh_completed": float64(3),
"refresh_threads": float64(23),
"refresh_queue": float64(7),
"refresh_active": float64(3),
"optimize_threads": float64(3),
"optimize_queue": float64(4),
"optimize_active": float64(1),
"optimize_rejected": float64(2),
"optimize_largest": float64(7),
"optimize_completed": float64(3),
"snapshot_largest": float64(1),
"snapshot_completed": float64(0),
"snapshot_threads": float64(8),
"snapshot_queue": float64(5),
"snapshot_active": float64(6),
"snapshot_rejected": float64(2),
"generic_threads": float64(1),
"generic_queue": float64(4),
"generic_active": float64(6),
"generic_rejected": float64(3),
"generic_largest": float64(2),
"generic_completed": float64(27),
"flush_threads": float64(3),
"flush_queue": float64(8),
"flush_active": float64(0),
"flush_rejected": float64(1),
"flush_largest": float64(5),
"flush_completed": float64(3),
}
var fsExpected = map[string]interface{}{
"data_0_total_in_bytes": float64(19507089408),
"data_0_free_in_bytes": float64(16909316096),
"data_0_available_in_bytes": float64(15894814720),
"timestamp": float64(1436460392946),
"total_free_in_bytes": float64(16909316096),
"total_available_in_bytes": float64(15894814720),
"total_total_in_bytes": float64(19507089408),
}
var transportExpected = map[string]interface{}{
"server_open": float64(13),
"rx_count": float64(6),
"rx_size_in_bytes": float64(1380),
"tx_count": float64(6),
"tx_size_in_bytes": float64(1380),
}
var httpExpected = map[string]interface{}{
"current_open": float64(3),
"total_opened": float64(3),
}
var breakersExpected = map[string]interface{}{
"fielddata_estimated_size_in_bytes": float64(0),
"fielddata_overhead": float64(1.03),
"fielddata_tripped": float64(0),
"fielddata_limit_size_in_bytes": float64(623326003),
"request_estimated_size_in_bytes": float64(0),
"request_overhead": float64(1.0),
"request_tripped": float64(0),
"request_limit_size_in_bytes": float64(415550668),
"parent_overhead": float64(1.0),
"parent_tripped": float64(0),
"parent_limit_size_in_bytes": float64(727213670),
"parent_estimated_size_in_bytes": float64(0),
}

View File

@ -0,0 +1,45 @@
# Exec Plugin
The exec plugin executes arbitrary commands that output JSON, then flattens
the JSON and collects all numeric values, treating them as floats.
For example, if you have a JSON-returning command called mycollector, you could
set up the exec plugin with:
```
[[inputs.exec]]
command = "/usr/bin/mycollector --output=json"
name_suffix = "_mycollector"
interval = "10s"
```
The name suffix is appended to the measurement name "exec" (here giving "exec_mycollector") to identify the input stream.
The interval determines how often a particular command should be run. On each
collection, the exec plugin runs a command only if at least `interval` has
elapsed since it last ran that command.
# Sample
Let's say that we have a command with the name_suffix "_mycollector", which gives the following output:
```json
{
"a": 0.5,
"b": {
"c": 0.1,
"d": 5
}
}
```
The collected metrics will be stored as field values under the measurement "exec_mycollector" (a sketch of the flattening logic follows the output):
```
exec_mycollector a=0.5,b_c=0.1,b_d=5 1452815002357578567
```
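The flattening rule can be sketched as follows; this is a simplified stand-in for Telegraf's internal JSONFlattener, not the actual implementation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// flatten walks decoded JSON, joining nested keys with "_" and keeping
// only numeric values (encoding/json decodes all JSON numbers as
// float64). Array elements get their index appended, e.g. "users_0".
func flatten(prefix string, v interface{}, out map[string]float64) {
	switch t := v.(type) {
	case map[string]interface{}:
		for k, val := range t {
			if prefix != "" {
				k = prefix + "_" + k
			}
			flatten(k, val, out)
		}
	case []interface{}:
		for i, val := range t {
			flatten(fmt.Sprintf("%s_%d", prefix, i), val, out)
		}
	case float64:
		out[prefix] = t
	}
}

func main() {
	var parsed interface{}
	if err := json.Unmarshal([]byte(`{"a": 0.5, "b": {"c": 0.1, "d": 5}}`), &parsed); err != nil {
		panic(err)
	}
	fields := map[string]float64{}
	flatten("", parsed, fields)
	fmt.Println(fields) // map[a:0.5 b_c:0.1 b_d:5]
}
```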
Other options for modifying the measurement names are:
```
name_override = "newname"
name_prefix = "prefix_"
```

View File

@ -0,0 +1,91 @@
package exec
import (
"bytes"
"encoding/json"
"fmt"
"os/exec"
"github.com/gonuts/go-shellquote"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
const sampleConfig = `
# the command to run
command = "/usr/bin/mycollector --foo=bar"
# measurement name suffix (for separating different commands)
name_suffix = "_mycollector"
`
type Exec struct {
Command string
runner Runner
}
type Runner interface {
Run(*Exec) ([]byte, error)
}
type CommandRunner struct{}
func (c CommandRunner) Run(e *Exec) ([]byte, error) {
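// shellquote.Split tokenizes the command string using shell quoting
// rules, so quoted arguments are preserved without spawning a shell.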
split_cmd, err := shellquote.Split(e.Command)
if err != nil || len(split_cmd) == 0 {
return nil, fmt.Errorf("exec: unable to parse command, %s", err)
}
cmd := exec.Command(split_cmd[0], split_cmd[1:]...)
var out bytes.Buffer
cmd.Stdout = &out
if err := cmd.Run(); err != nil {
return nil, fmt.Errorf("exec: %s for command '%s'", err, e.Command)
}
return out.Bytes(), nil
}
func NewExec() *Exec {
return &Exec{runner: CommandRunner{}}
}
func (e *Exec) SampleConfig() string {
return sampleConfig
}
func (e *Exec) Description() string {
return "Read flattened metrics from one or more commands that output JSON to stdout"
}
func (e *Exec) Gather(acc inputs.Accumulator) error {
out, err := e.runner.Run(e)
if err != nil {
return err
}
var jsonOut interface{}
err = json.Unmarshal(out, &jsonOut)
if err != nil {
return fmt.Errorf("exec: unable to parse output of '%s' as JSON, %s",
e.Command, err)
}
f := internal.JSONFlattener{}
err = f.FlattenJSON("", jsonOut)
if err != nil {
return err
}
acc.AddFields("exec", f.Fields, nil)
return nil
}
func init() {
inputs.Add("exec", func() inputs.Input {
return NewExec()
})
}

View File

@ -0,0 +1,99 @@
package exec
import (
"fmt"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// Midnight 9/22/2015
const baseTimeSeconds = 1442905200
const validJson = `
{
"status": "green",
"num_processes": 82,
"cpu": {
"status": "red",
"nil_status": null,
"used": 8234,
"free": 32
},
"percent": 0.81,
"users": [0, 1, 2, 3]
}`
const malformedJson = `
{
"status": "green",
`
type runnerMock struct {
out []byte
err error
}
func newRunnerMock(out []byte, err error) Runner {
return &runnerMock{
out: out,
err: err,
}
}
func (r runnerMock) Run(e *Exec) ([]byte, error) {
if r.err != nil {
return nil, r.err
}
return r.out, nil
}
func TestExec(t *testing.T) {
e := &Exec{
runner: newRunnerMock([]byte(validJson), nil),
Command: "testcommand arg1",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, acc.NFields(), 8, "non-numeric measurements should be ignored")
fields := map[string]interface{}{
"num_processes": float64(82),
"cpu_used": float64(8234),
"cpu_free": float64(32),
"percent": float64(0.81),
"users_0": float64(0),
"users_1": float64(1),
"users_2": float64(2),
"users_3": float64(3),
}
acc.AssertContainsFields(t, "exec", fields)
}
func TestExecMalformed(t *testing.T) {
e := &Exec{
runner: newRunnerMock([]byte(malformedJson), nil),
Command: "badcommand arg1",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.Error(t, err)
assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
}
func TestCommandError(t *testing.T) {
e := &Exec{
runner: newRunnerMock(nil, fmt.Errorf("exit status code 1")),
Command: "badcommand",
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.Error(t, err)
assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
}

View File

@ -3,12 +3,13 @@ package haproxy
import ( import (
"encoding/csv" "encoding/csv"
"fmt" "fmt"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
"io" "io"
"net/http" "net/http"
"net/url" "net/url"
"strconv" "strconv"
"sync" "sync"
"time"
) )
//CSV format: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.1 //CSV format: https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#9.1
@ -90,7 +91,7 @@ var sampleConfig = `
# If no servers are specified, then default to 127.0.0.1:1936 # If no servers are specified, then default to 127.0.0.1:1936
servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"] servers = ["http://myhaproxy.com:1936", "http://anotherhaproxy.com:1936"]
# Or you can also use a local socket (not working yet) # Or you can also use a local socket (not working yet)
# servers = ["socket:/run/haproxy/admin.sock"] # servers = ["socket://run/haproxy/admin.sock"]
` `
func (r *haproxy) SampleConfig() string { func (r *haproxy) SampleConfig() string {
@ -103,7 +104,7 @@ func (r *haproxy) Description() string {
// Reads stats from all configured servers accumulates stats. // Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any). // Returns one of the errors encountered while gather stats (if any).
func (g *haproxy) Gather(acc plugins.Accumulator) error { func (g *haproxy) Gather(acc inputs.Accumulator) error {
if len(g.Servers) == 0 { if len(g.Servers) == 0 {
return g.gatherServer("http://127.0.0.1:1936", acc) return g.gatherServer("http://127.0.0.1:1936", acc)
} }
@ -125,7 +126,7 @@ func (g *haproxy) Gather(acc plugins.Accumulator) error {
return outerr return outerr
} }
func (g *haproxy) gatherServer(addr string, acc plugins.Accumulator) error { func (g *haproxy) gatherServer(addr string, acc inputs.Accumulator) error {
if g.client == nil { if g.client == nil {
client := &http.Client{} client := &http.Client{}
@ -152,214 +153,212 @@ func (g *haproxy) gatherServer(addr string, acc plugins.Accumulator) error {
return fmt.Errorf("Unable to get valid stat result from '%s': %s", addr, err) return fmt.Errorf("Unable to get valid stat result from '%s': %s", addr, err)
} }
importCsvResult(res.Body, acc, u.Host) return importCsvResult(res.Body, acc, u.Host)
return nil
} }
func importCsvResult(r io.Reader, acc plugins.Accumulator, host string) ([][]string, error) { func importCsvResult(r io.Reader, acc inputs.Accumulator, host string) error {
csv := csv.NewReader(r) csv := csv.NewReader(r)
 	result, err := csv.ReadAll()
+	now := time.Now()

 	for _, row := range result {
-		for field, v := range row {
-			tags := map[string]string{
-				"server": host,
-				"proxy":  row[HF_PXNAME],
-				"sv":     row[HF_SVNAME],
-			}
+		fields := make(map[string]interface{})
+		tags := map[string]string{
+			"server": host,
+			"proxy":  row[HF_PXNAME],
+			"sv":     row[HF_SVNAME],
+		}
+		for field, v := range row {
 			switch field {
 			case HF_QCUR:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("qcur", ival, tags)
+					fields["qcur"] = ival
 				}
 			case HF_QMAX:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("qmax", ival, tags)
+					fields["qmax"] = ival
 				}
 			case HF_SCUR:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("scur", ival, tags)
+					fields["scur"] = ival
 				}
 			case HF_SMAX:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("smax", ival, tags)
+					fields["smax"] = ival
 				}
 			case HF_STOT:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("stot", ival, tags)
+					fields["stot"] = ival
 				}
 			case HF_BIN:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("bin", ival, tags)
+					fields["bin"] = ival
 				}
 			case HF_BOUT:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("bout", ival, tags)
+					fields["bout"] = ival
 				}
 			case HF_DREQ:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("dreq", ival, tags)
+					fields["dreq"] = ival
 				}
 			case HF_DRESP:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("dresp", ival, tags)
+					fields["dresp"] = ival
 				}
 			case HF_EREQ:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("ereq", ival, tags)
+					fields["ereq"] = ival
 				}
 			case HF_ECON:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("econ", ival, tags)
+					fields["econ"] = ival
 				}
 			case HF_ERESP:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("eresp", ival, tags)
+					fields["eresp"] = ival
 				}
 			case HF_WRETR:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("wretr", ival, tags)
+					fields["wretr"] = ival
 				}
 			case HF_WREDIS:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("wredis", ival, tags)
+					fields["wredis"] = ival
 				}
 			case HF_ACT:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("active_servers", ival, tags)
+					fields["active_servers"] = ival
 				}
 			case HF_BCK:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("backup_servers", ival, tags)
+					fields["backup_servers"] = ival
 				}
 			case HF_DOWNTIME:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("downtime", ival, tags)
+					fields["downtime"] = ival
 				}
 			case HF_THROTTLE:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("throttle", ival, tags)
+					fields["throttle"] = ival
 				}
 			case HF_LBTOT:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("lbtot", ival, tags)
+					fields["lbtot"] = ival
 				}
 			case HF_RATE:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("rate", ival, tags)
+					fields["rate"] = ival
 				}
 			case HF_RATE_MAX:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("rate_max", ival, tags)
+					fields["rate_max"] = ival
 				}
 			case HF_CHECK_DURATION:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("check_duration", ival, tags)
+					fields["check_duration"] = ival
 				}
 			case HF_HRSP_1xx:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("http_response.1xx", ival, tags)
+					fields["http_response.1xx"] = ival
 				}
 			case HF_HRSP_2xx:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("http_response.2xx", ival, tags)
+					fields["http_response.2xx"] = ival
 				}
 			case HF_HRSP_3xx:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("http_response.3xx", ival, tags)
+					fields["http_response.3xx"] = ival
 				}
 			case HF_HRSP_4xx:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("http_response.4xx", ival, tags)
+					fields["http_response.4xx"] = ival
 				}
 			case HF_HRSP_5xx:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("http_response.5xx", ival, tags)
+					fields["http_response.5xx"] = ival
 				}
 			case HF_REQ_RATE:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("req_rate", ival, tags)
+					fields["req_rate"] = ival
 				}
 			case HF_REQ_RATE_MAX:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("req_rate_max", ival, tags)
+					fields["req_rate_max"] = ival
 				}
 			case HF_REQ_TOT:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("req_tot", ival, tags)
+					fields["req_tot"] = ival
 				}
 			case HF_CLI_ABRT:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("cli_abort", ival, tags)
+					fields["cli_abort"] = ival
 				}
 			case HF_SRV_ABRT:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("srv_abort", ival, tags)
+					fields["srv_abort"] = ival
 				}
 			case HF_QTIME:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("qtime", ival, tags)
+					fields["qtime"] = ival
 				}
 			case HF_CTIME:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("ctime", ival, tags)
+					fields["ctime"] = ival
 				}
 			case HF_RTIME:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("rtime", ival, tags)
+					fields["rtime"] = ival
 				}
 			case HF_TTIME:
 				ival, err := strconv.ParseUint(v, 10, 64)
 				if err == nil {
-					acc.Add("ttime", ival, tags)
+					fields["ttime"] = ival
 				}
 			}
 		}
+		acc.AddFields("haproxy", fields, tags, now)
 	}
-	return result, err
+	return err
 }

 func init() {
-	plugins.Add("haproxy", func() plugins.Plugin {
+	inputs.Add("haproxy", func() inputs.Input {
 		return &haproxy{}
 	})
 }
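The rewritten loop collects every parsed column into a single fields map and emits one point per CSV row with `acc.AddFields("haproxy", fields, tags, now)`, instead of one `acc.Add` call per column. Since the thirty-odd `case`/`ParseUint` blocks all share the same shape, the same result could be had from a lookup table keyed by column index. A minimal, self-contained sketch of that idea — `fieldNames` and `rowToFields` are illustrative, not part of the plugin, and only three stand-in HF_* indices are shown:

```go
package main

import (
	"fmt"
	"strconv"
)

// Illustrative column indices standing in for three of the HF_* constants.
const (
	HF_QCUR = 2
	HF_QMAX = 3
	HF_SCUR = 4
)

// fieldNames maps a CSV column index to the field name it is stored under,
// mirroring the shape of the switch in the haproxy Gather rewrite.
var fieldNames = map[int]string{
	HF_QCUR: "qcur",
	HF_QMAX: "qmax",
	HF_SCUR: "scur",
}

// rowToFields converts one haproxy CSV row into a fields map. Unknown
// columns and values that fail to parse are skipped, as in the switch.
func rowToFields(row []string) map[string]interface{} {
	fields := make(map[string]interface{})
	for i, v := range row {
		name, ok := fieldNames[i]
		if !ok {
			continue
		}
		if ival, err := strconv.ParseUint(v, 10, 64); err == nil {
			fields[name] = ival
		}
	}
	return fields
}

func main() {
	row := []string{"www", "backend1", "0", "81", "288"}
	fmt.Println(rowToFields(row)) // map[qcur:0 qmax:81 scur:288]
}
```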
View File
@ -5,7 +5,7 @@ import (
 	"strings"
 	"testing"

-	"github.com/influxdb/telegraf/testutil"
+	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"net/http"
@ -47,52 +47,39 @@ func TestHaproxyGeneratesMetricsWithAuthentication(t *testing.T) {
 		"sv":     "host0",
 	}
-	assert.NoError(t, acc.ValidateTaggedValue("stot", uint64(171014), tags))
-
-	checkInt := []struct {
-		name  string
-		value uint64
-	}{
-		{"qmax", 81},
-		{"scur", 288},
-		{"smax", 713},
-		{"bin", 5557055817},
-		{"bout", 24096715169},
-		{"dreq", 1102},
-		{"dresp", 80},
-		{"ereq", 95740},
-		{"econ", 0},
-		{"eresp", 0},
-		{"wretr", 17},
-		{"wredis", 19},
-		{"active_servers", 1},
-		{"backup_servers", 0},
-		{"downtime", 0},
-		{"throttle", 13},
-		{"lbtot", 114},
-		{"rate", 18},
-		{"rate_max", 102},
-		{"check_duration", 1},
-		{"http_response.1xx", 0},
-		{"http_response.2xx", 1314093},
-		{"http_response.3xx", 537036},
-		{"http_response.4xx", 123452},
-		{"http_response.5xx", 11966},
-		{"req_rate", 35},
-		{"req_rate_max", 140},
-		{"req_tot", 1987928},
-		{"cli_abort", 0},
-		{"srv_abort", 0},
-		{"qtime", 0},
-		{"ctime", 2},
-		{"rtime", 23},
-		{"ttime", 545},
-	}
-
-	for _, c := range checkInt {
-		assert.Equal(t, true, acc.CheckValue(c.name, c.value))
-	}
+	fields := map[string]interface{}{
+		"active_servers":    uint64(1),
+		"backup_servers":    uint64(0),
+		"bin":               uint64(510913516),
+		"bout":              uint64(2193856571),
+		"check_duration":    uint64(10),
+		"cli_abort":         uint64(73),
+		"ctime":             uint64(2),
+		"downtime":          uint64(0),
+		"dresp":             uint64(0),
+		"econ":              uint64(0),
+		"eresp":             uint64(1),
+		"http_response.1xx": uint64(0),
+		"http_response.2xx": uint64(119534),
+		"http_response.3xx": uint64(48051),
+		"http_response.4xx": uint64(2345),
+		"http_response.5xx": uint64(1056),
+		"lbtot":             uint64(171013),
+		"qcur":              uint64(0),
+		"qmax":              uint64(0),
+		"qtime":             uint64(0),
+		"rate":              uint64(3),
+		"rate_max":          uint64(12),
+		"rtime":             uint64(312),
+		"scur":              uint64(1),
+		"smax":              uint64(32),
+		"srv_abort":         uint64(1),
+		"stot":              uint64(171014),
+		"ttime":             uint64(2341),
+		"wredis":            uint64(0),
+		"wretr":             uint64(1),
+	}
+	acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)

 	//Here, we should get error because we don't pass authentication data
 	r = &haproxy{

@ -124,10 +111,39 @@ func TestHaproxyGeneratesMetricsWithoutAuthentication(t *testing.T) {
 		"sv":     "host0",
 	}
-	assert.NoError(t, acc.ValidateTaggedValue("stot", uint64(171014), tags))
-	assert.NoError(t, acc.ValidateTaggedValue("scur", uint64(1), tags))
-	assert.NoError(t, acc.ValidateTaggedValue("rate", uint64(3), tags))
-	assert.Equal(t, true, acc.CheckValue("bin", uint64(5557055817)))
+	fields := map[string]interface{}{
+		"active_servers":    uint64(1),
+		"backup_servers":    uint64(0),
+		"bin":               uint64(510913516),
+		"bout":              uint64(2193856571),
+		"check_duration":    uint64(10),
+		"cli_abort":         uint64(73),
+		"ctime":             uint64(2),
+		"downtime":          uint64(0),
+		"dresp":             uint64(0),
+		"econ":              uint64(0),
+		"eresp":             uint64(1),
+		"http_response.1xx": uint64(0),
+		"http_response.2xx": uint64(119534),
+		"http_response.3xx": uint64(48051),
+		"http_response.4xx": uint64(2345),
+		"http_response.5xx": uint64(1056),
+		"lbtot":             uint64(171013),
+		"qcur":              uint64(0),
+		"qmax":              uint64(0),
+		"qtime":             uint64(0),
+		"rate":              uint64(3),
+		"rate_max":          uint64(12),
+		"rtime":             uint64(312),
+		"scur":              uint64(1),
+		"smax":              uint64(32),
+		"srv_abort":         uint64(1),
+		"stot":              uint64(171014),
+		"ttime":             uint64(2341),
+		"wredis":            uint64(0),
+		"wretr":             uint64(1),
+	}
+	acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
 }

 //When not passing server config, we default to localhost
View File
@ -9,21 +9,19 @@ import (
 	"net/url"
 	"strings"
 	"sync"
+	"time"

-	"github.com/influxdb/telegraf/plugins"
+	"github.com/influxdata/telegraf/internal"
+	"github.com/influxdata/telegraf/plugins/inputs"
 )

 type HttpJson struct {
-	Services []Service
-	client   HTTPClient
-}
-
-type Service struct {
 	Name       string
 	Servers    []string
 	Method     string
 	TagKeys    []string
 	Parameters map[string]string
+	client     HTTPClient
 }

 type HTTPClient interface {

@ -47,9 +45,6 @@ func (c RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
 }

 var sampleConfig = `
-  # Specify services via an array of tables
-  [[plugins.httpjson.services]]
-
   # a name for the service being polled
   name = "webserver_stats"

@ -69,7 +64,7 @@
   # ]

   # HTTP parameters (all values must be strings)
-  [plugins.httpjson.services.parameters]
+  [inputs.httpjson.parameters]
   event_type = "cpu_spike"
   threshold = "0.75"
 `

@ -83,25 +78,19 @@ func (h *HttpJson) Description() string {
 }

 // Gathers data for all servers.
-func (h *HttpJson) Gather(acc plugins.Accumulator) error {
+func (h *HttpJson) Gather(acc inputs.Accumulator) error {
 	var wg sync.WaitGroup

-	totalServers := 0
-	for _, service := range h.Services {
-		totalServers += len(service.Servers)
-	}
-	errorChannel := make(chan error, totalServers)
+	errorChannel := make(chan error, len(h.Servers))

-	for _, service := range h.Services {
-		for _, server := range service.Servers {
-			wg.Add(1)
-			go func(service Service, server string) {
-				defer wg.Done()
-				if err := h.gatherServer(acc, service, server); err != nil {
-					errorChannel <- err
-				}
-			}(service, server)
-		}
+	for _, server := range h.Servers {
+		wg.Add(1)
+		go func(server string) {
+			defer wg.Done()
+			if err := h.gatherServer(acc, server); err != nil {
+				errorChannel <- err
+			}
+		}(server)
 	}

 	wg.Wait()

@ -128,11 +117,11 @@ func (h *HttpJson) Gather(acc inputs.Accumulator) error {
 // Returns:
 //     error: Any error that may have occurred
 func (h *HttpJson) gatherServer(
-	acc plugins.Accumulator,
-	service Service,
+	acc inputs.Accumulator,
 	serverURL string,
 ) error {
-	resp, err := h.sendRequest(service, serverURL)
+	resp, responseTime, err := h.sendRequest(serverURL)
 	if err != nil {
 		return err
 	}

@ -146,7 +135,7 @@ func (h *HttpJson) gatherServer(
 		"server": serverURL,
 	}

-	for _, tag := range service.TagKeys {
+	for _, tag := range h.TagKeys {
 		switch v := jsonOut[tag].(type) {
 		case string:
 			tags[tag] = v

@ -154,7 +143,22 @@ func (h *HttpJson) gatherServer(
 		delete(jsonOut, tag)
 	}

-	processResponse(acc, service.Name, tags, jsonOut)
+	if responseTime >= 0 {
+		jsonOut["response_time"] = responseTime
+	}
+	f := internal.JSONFlattener{}
+	err = f.FlattenJSON("", jsonOut)
+	if err != nil {
+		return err
+	}
+
+	var msrmnt_name string
+	if h.Name == "" {
+		msrmnt_name = "httpjson"
+	} else {
+		msrmnt_name = "httpjson_" + h.Name
+	}
+	acc.AddFields(msrmnt_name, f.Fields, tags)
 	return nil
 }

@ -165,34 +169,37 @@ func (h *HttpJson) gatherServer(
 // Returns:
 //     string: body of the response
 //     error : Any error that may have occurred
-func (h *HttpJson) sendRequest(service Service, serverURL string) (string, error) {
+func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
 	// Prepare URL
 	requestURL, err := url.Parse(serverURL)
 	if err != nil {
-		return "", fmt.Errorf("Invalid server URL \"%s\"", serverURL)
+		return "", -1, fmt.Errorf("Invalid server URL \"%s\"", serverURL)
 	}

 	params := url.Values{}
-	for k, v := range service.Parameters {
+	for k, v := range h.Parameters {
 		params.Add(k, v)
 	}
 	requestURL.RawQuery = params.Encode()

 	// Create + send request
-	req, err := http.NewRequest(service.Method, requestURL.String(), nil)
+	req, err := http.NewRequest(h.Method, requestURL.String(), nil)
 	if err != nil {
-		return "", err
+		return "", -1, err
 	}

+	start := time.Now()
 	resp, err := h.client.MakeRequest(req)
 	if err != nil {
-		return "", err
+		return "", -1, err
 	}

 	defer resp.Body.Close()
+	responseTime := time.Since(start).Seconds()
 	body, err := ioutil.ReadAll(resp.Body)
 	if err != nil {
-		return string(body), err
+		return string(body), responseTime, err
 	}

 	// Process response

@ -203,31 +210,14 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
 			http.StatusText(resp.StatusCode),
 			http.StatusOK,
 			http.StatusText(http.StatusOK))
-		return string(body), err
+		return string(body), responseTime, err
 	}

-	return string(body), err
-}
-
-// Flattens the map generated from the JSON object and stores its float values using a
-// plugins.Accumulator. It ignores any non-float values.
-// Parameters:
-//     acc: the Accumulator to use
-//     prefix: What the name of the measurement name should be prefixed by.
-//     tags: telegraf tags to
-func processResponse(acc plugins.Accumulator, prefix string, tags map[string]string, v interface{}) {
-	switch t := v.(type) {
-	case map[string]interface{}:
-		for k, v := range t {
-			processResponse(acc, prefix+"_"+k, tags, v)
-		}
-	case float64:
-		acc.Add(prefix, v, tags)
-	}
+	return string(body), responseTime, err
 }

 func init() {
-	plugins.Add("httpjson", func() plugins.Plugin {
+	inputs.Add("httpjson", func() inputs.Input {
 		return &HttpJson{client: RealHTTPClient{client: &http.Client{}}}
 	})
 }
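`sendRequest` now returns a third value: the round-trip time in seconds, which `gatherServer` stores as a `response_time` field. The timer starts right before the request goes out and is sampled as soon as the response arrives, before the body is read. A minimal sketch of the same timing pattern against a throwaway test server — `timedGet` is an illustrative helper, not the plugin's API:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"net/http/httptest"
	"time"
)

// timedGet mirrors the pattern sendRequest uses: sample a timer around the
// request itself, so response_time covers the round trip but not parsing.
func timedGet(url string) (string, float64, error) {
	start := time.Now()
	resp, err := http.Get(url)
	if err != nil {
		return "", -1, err
	}
	defer resp.Body.Close()
	responseTime := time.Since(start).Seconds()

	body, err := ioutil.ReadAll(resp.Body)
	return string(body), responseTime, err
}

func main() {
	srv := httptest.NewServer(http.HandlerFunc(
		func(w http.ResponseWriter, r *http.Request) {
			fmt.Fprint(w, `{"value": 15}`)
		}))
	defer srv.Close()

	body, rt, err := timedGet(srv.URL)
	fmt.Println(body, rt, err)
}
```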
View File
@ -1,13 +1,12 @@
package httpjson package httpjson
import ( import (
"fmt"
"io/ioutil" "io/ioutil"
"net/http" "net/http"
"strings" "strings"
"testing" "testing"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -15,17 +14,17 @@ import (
const validJSON = ` const validJSON = `
{ {
"parent": { "parent": {
"child": 3, "child": 3.0,
"ignored_child": "hi" "ignored_child": "hi"
}, },
"ignored_null": null, "ignored_null": null,
"integer": 4, "integer": 4,
"ignored_list": [3, 4], "list": [3, 4],
"ignored_parent": { "ignored_parent": {
"another_ignored_list": [4],
"another_ignored_null": null, "another_ignored_null": null,
"ignored_string": "hello, world!" "ignored_string": "hello, world!"
} },
"another_list": [4]
}` }`
const validJSONTags = ` const validJSONTags = `
@ -35,6 +34,14 @@ const validJSONTags = `
"build": "123" "build": "123"
}` }`
var expectedFields = map[string]interface{}{
"parent_child": float64(3),
"list_0": float64(3),
"list_1": float64(4),
"another_list_0": float64(4),
"integer": float64(4),
}
const invalidJSON = "I don't think this is JSON" const invalidJSON = "I don't think this is JSON"
const empty = "" const empty = ""
@ -76,11 +83,10 @@ func (c mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
// //
// Returns: // Returns:
// *HttpJson: Pointer to an HttpJson object that uses the generated mock HTTP client // *HttpJson: Pointer to an HttpJson object that uses the generated mock HTTP client
func genMockHttpJson(response string, statusCode int) *HttpJson { func genMockHttpJson(response string, statusCode int) []*HttpJson {
return &HttpJson{ return []*HttpJson{
&HttpJson{
client: mockHTTPClient{responseBody: response, statusCode: statusCode}, client: mockHTTPClient{responseBody: response, statusCode: statusCode},
Services: []Service{
Service{
Servers: []string{ Servers: []string{
"http://server1.example.com/metrics/", "http://server1.example.com/metrics/",
"http://server2.example.com/metrics/", "http://server2.example.com/metrics/",
@ -92,7 +98,8 @@ func genMockHttpJson(response string, statusCode int) *HttpJson {
"httpParam2": "the second parameter", "httpParam2": "the second parameter",
}, },
}, },
Service{ &HttpJson{
client: mockHTTPClient{responseBody: response, statusCode: statusCode},
Servers: []string{ Servers: []string{
"http://server3.example.com/metrics/", "http://server3.example.com/metrics/",
"http://server4.example.com/metrics/", "http://server4.example.com/metrics/",
@ -108,7 +115,6 @@ func genMockHttpJson(response string, statusCode int) *HttpJson {
"build", "build",
}, },
}, },
},
} }
} }
@ -116,28 +122,21 @@ func genMockHttpJson(response string, statusCode int) *HttpJson {
func TestHttpJson200(t *testing.T) { func TestHttpJson200(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 200) httpjson := genMockHttpJson(validJSON, 200)
for _, service := range httpjson {
var acc testutil.Accumulator var acc testutil.Accumulator
err := httpjson.Gather(&acc) err := service.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
assert.Equal(t, 12, acc.NFields())
// Set responsetime
for _, p := range acc.Points {
p.Fields["response_time"] = 1.0
}
assert.Equal(t, 8, len(acc.Points))
for _, service := range httpjson.Services {
for _, srv := range service.Servers { for _, srv := range service.Servers {
require.NoError(t, tags := map[string]string{"server": srv}
acc.ValidateTaggedValue( mname := "httpjson_" + service.Name
fmt.Sprintf("%s_parent_child", service.Name), expectedFields["response_time"] = 1.0
3.0, acc.AssertContainsTaggedFields(t, mname, expectedFields, tags)
map[string]string{"server": srv},
),
)
require.NoError(t,
acc.ValidateTaggedValue(
fmt.Sprintf("%s_integer", service.Name),
4.0,
map[string]string{"server": srv},
),
)
} }
} }
} }
@ -147,28 +146,22 @@ func TestHttpJson500(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 500) httpjson := genMockHttpJson(validJSON, 500)
var acc testutil.Accumulator var acc testutil.Accumulator
err := httpjson.Gather(&acc) err := httpjson[0].Gather(&acc)
assert.NotNil(t, err) assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services) assert.Equal(t, 0, acc.NFields())
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
} }
// Test response to HTTP 405 // Test response to HTTP 405
func TestHttpJsonBadMethod(t *testing.T) { func TestHttpJsonBadMethod(t *testing.T) {
httpjson := genMockHttpJson(validJSON, 200) httpjson := genMockHttpJson(validJSON, 200)
httpjson.Services[0].Method = "NOT_A_REAL_METHOD" httpjson[0].Method = "NOT_A_REAL_METHOD"
var acc testutil.Accumulator var acc testutil.Accumulator
err := httpjson.Gather(&acc) err := httpjson[0].Gather(&acc)
assert.NotNil(t, err) assert.NotNil(t, err)
// 2 error lines for (2 urls) * (1 falied service) assert.Equal(t, 0, acc.NFields())
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 2)
// (2 measurements) * (2 servers) * (1 successful service)
assert.Equal(t, 4, len(acc.Points))
} }
// Test response to malformed JSON // Test response to malformed JSON
@ -176,12 +169,10 @@ func TestHttpJsonBadJson(t *testing.T) {
httpjson := genMockHttpJson(invalidJSON, 200) httpjson := genMockHttpJson(invalidJSON, 200)
var acc testutil.Accumulator var acc testutil.Accumulator
err := httpjson.Gather(&acc) err := httpjson[0].Gather(&acc)
assert.NotNil(t, err) assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services) assert.Equal(t, 0, acc.NFields())
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
} }
// Test response to empty string as response objectgT // Test response to empty string as response objectgT
@ -189,34 +180,31 @@ func TestHttpJsonEmptyResponse(t *testing.T) {
httpjson := genMockHttpJson(empty, 200) httpjson := genMockHttpJson(empty, 200)
var acc testutil.Accumulator var acc testutil.Accumulator
err := httpjson.Gather(&acc) err := httpjson[0].Gather(&acc)
assert.NotNil(t, err) assert.NotNil(t, err)
// 4 error lines for (2 urls) * (2 services) assert.Equal(t, 0, acc.NFields())
assert.Equal(t, len(strings.Split(err.Error(), "\n")), 4)
assert.Equal(t, 0, len(acc.Points))
} }
// Test that the proper values are ignored or collected // Test that the proper values are ignored or collected
func TestHttpJson200Tags(t *testing.T) { func TestHttpJson200Tags(t *testing.T) {
httpjson := genMockHttpJson(validJSONTags, 200) httpjson := genMockHttpJson(validJSONTags, 200)
var acc testutil.Accumulator for _, service := range httpjson {
err := httpjson.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, 4, len(acc.Points))
for _, service := range httpjson.Services {
if service.Name == "other_webapp" { if service.Name == "other_webapp" {
var acc testutil.Accumulator
err := service.Gather(&acc)
// Set responsetime
for _, p := range acc.Points {
p.Fields["response_time"] = 1.0
}
require.NoError(t, err)
assert.Equal(t, 4, acc.NFields())
for _, srv := range service.Servers { for _, srv := range service.Servers {
require.NoError(t, tags := map[string]string{"server": srv, "role": "master", "build": "123"}
acc.ValidateTaggedValue( fields := map[string]interface{}{"value": float64(15), "response_time": float64(1)}
fmt.Sprintf("%s_value", service.Name), mname := "httpjson_" + service.Name
15.0, acc.AssertContainsTaggedFields(t, mname, fields, tags)
map[string]string{"server": srv, "role": "master", "build": "123"},
),
)
} }
} }
} }
View File
@ -5,7 +5,7 @@ The influxdb plugin collects InfluxDB-formatted data from JSON endpoints.
 With a configuration of:

 ```toml
-[[plugins.influxdb]]
+[[inputs.influxdb]]
   urls = [
     "http://127.0.0.1:8086/debug/vars",
     "http://192.168.2.1:8086/debug/vars"
View File
@ -8,7 +8,7 @@ import (
 	"strings"
 	"sync"

-	"github.com/influxdb/telegraf/plugins"
+	"github.com/influxdata/telegraf/plugins/inputs"
 )

 type InfluxDB struct {

@ -32,7 +32,7 @@ func (*InfluxDB) SampleConfig() string {
 `
 }

-func (i *InfluxDB) Gather(acc plugins.Accumulator) error {
+func (i *InfluxDB) Gather(acc inputs.Accumulator) error {
 	errorChannel := make(chan error, len(i.URLs))

 	var wg sync.WaitGroup

@ -77,7 +77,7 @@ type point struct {
 // Returns:
 //     error: Any error that may have occurred
 func (i *InfluxDB) gatherURL(
-	acc plugins.Accumulator,
+	acc inputs.Accumulator,
 	url string,
 ) error {
 	resp, err := http.Get(url)

@ -140,7 +140,7 @@ func (i *InfluxDB) gatherURL(
 }

 func init() {
-	plugins.Add("influxdb", func() plugins.Plugin {
+	inputs.Add("influxdb", func() inputs.Input {
 		return &InfluxDB{}
 	})
 }
View File
@ -5,8 +5,8 @@ import (
 	"net/http/httptest"
 	"testing"

-	"github.com/influxdb/telegraf/plugins/influxdb"
-	"github.com/influxdb/telegraf/testutil"
+	"github.com/influxdata/telegraf/plugins/inputs/influxdb"
+	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/require"
 )

@ -72,29 +72,26 @@ func TestBasic(t *testing.T) {
 	require.NoError(t, plugin.Gather(&acc))

 	require.Len(t, acc.Points, 2)
-	require.NoError(t, acc.ValidateTaggedFieldsValue(
-		"foo",
-		map[string]interface{}{
-			// JSON will truncate floats to integer representations.
-			// Since there's no distinction in JSON, we can't assume it's an int.
-			"i": -1.0,
-			"f": 0.5,
-			"b": true,
-			"s": "string",
-		},
-		map[string]string{
-			"id":  "ex1",
-			"url": fakeServer.URL + "/endpoint",
-		},
-	))
-	require.NoError(t, acc.ValidateTaggedFieldsValue(
-		"bar",
-		map[string]interface{}{
-			"x": "x",
-		},
-		map[string]string{
-			"id":  "ex2",
-			"url": fakeServer.URL + "/endpoint",
-		},
-	))
+	fields := map[string]interface{}{
+		// JSON will truncate floats to integer representations.
+		// Since there's no distinction in JSON, we can't assume it's an int.
+		"i": -1.0,
+		"f": 0.5,
+		"b": true,
+		"s": "string",
+	}
+	tags := map[string]string{
+		"id":  "ex1",
+		"url": fakeServer.URL + "/endpoint",
+	}
+	acc.AssertContainsTaggedFields(t, "foo", fields, tags)
+
+	fields = map[string]interface{}{
+		"x": "x",
+	}
+	tags = map[string]string{
+		"id":  "ex2",
+		"url": fakeServer.URL + "/endpoint",
+	}
+	acc.AssertContainsTaggedFields(t, "bar", fields, tags)
 }
View File
@ -7,9 +7,8 @@ import (
 	"io/ioutil"
 	"net/http"
 	"net/url"
-	"strings"

-	"github.com/influxdb/telegraf/plugins"
+	"github.com/influxdata/telegraf/plugins/inputs"
 )

 type Server struct {

@ -23,8 +22,6 @@ type Server struct {
 type Metric struct {
 	Name string
 	Jmx  string
-	Pass []string
-	Drop []string
 }

 type JolokiaClient interface {

@ -44,7 +41,6 @@ type Jolokia struct {
 	Context string
 	Servers []Server
 	Metrics []Metric
-	Tags    map[string]string
 }

 func (j *Jolokia) SampleConfig() string {

@ -52,12 +48,8 @@ func (j *Jolokia) SampleConfig() string {
   # This is the context root used to compose the jolokia url
   context = "/jolokia/read"

-  # Tags added to each measurements
-  [jolokia.tags]
-    group = "as"
-
   # List of servers exposing jolokia read service
-  [[plugins.jolokia.servers]]
+  [[inputs.jolokia.servers]]
     name = "stable"
     host = "192.168.103.2"
     port = "8180"

@ -67,26 +59,9 @@
   # List of metrics collected on above servers
   # Each metric consists in a name, a jmx path and either a pass or drop slice attributes
   # This collect all heap memory usage metrics
-  [[plugins.jolokia.metrics]]
+  [[inputs.jolokia.metrics]]
     name = "heap_memory_usage"
     jmx  = "/java.lang:type=Memory/HeapMemoryUsage"
-
-  # This drops the 'committed' value from Eden space measurement
-  [[plugins.jolokia.metrics]]
-    name = "memory_eden"
-    jmx  = "/java.lang:type=MemoryPool,name=PS Eden Space/Usage"
-    drop = [ "committed" ]
-
-  # This passes only DaemonThreadCount and ThreadCount
-  [[plugins.jolokia.metrics]]
-    name = "heap_threads"
-    jmx  = "/java.lang:type=Threading"
-    pass = [
-      "DaemonThreadCount",
-      "ThreadCount"
-    ]
 `
 }

@ -102,10 +77,6 @@ func (j *Jolokia) getAttr(requestUrl *url.URL) (map[string]interface{}, error) {
 	}

 	resp, err := j.jClient.MakeRequest(req)
-	if err != nil {
-		return nil, err
-	}
-
 	if err != nil {
 		return nil, err
 	}

@ -137,65 +108,22 @@ func (j *Jolokia) getAttr(requestUrl *url.URL) (map[string]interface{}, error) {
 	return jsonOut, nil
 }

-func (m *Metric) shouldPass(field string) bool {
-	if m.Pass != nil {
-		for _, pass := range m.Pass {
-			if strings.HasPrefix(field, pass) {
-				return true
-			}
-		}
-		return false
-	}
-	if m.Drop != nil {
-		for _, drop := range m.Drop {
-			if strings.HasPrefix(field, drop) {
-				return false
-			}
-		}
-		return true
-	}
-	return true
-}
-
-func (m *Metric) filterFields(fields map[string]interface{}) map[string]interface{} {
-	for field, _ := range fields {
-		if !m.shouldPass(field) {
-			delete(fields, field)
-		}
-	}
-	return fields
-}
-
-func (j *Jolokia) Gather(acc plugins.Accumulator) error {
+func (j *Jolokia) Gather(acc inputs.Accumulator) error {
 	context := j.Context //"/jolokia/read"
 	servers := j.Servers
 	metrics := j.Metrics
-	tags := j.Tags

-	if tags == nil {
-		tags = map[string]string{}
-	}
+	tags := make(map[string]string)

 	for _, server := range servers {
+		tags["server"] = server.Name
+		tags["port"] = server.Port
+		tags["host"] = server.Host
+		fields := make(map[string]interface{})
 		for _, metric := range metrics {
 			measurement := metric.Name
 			jmxPath := metric.Jmx

-			tags["server"] = server.Name
-			tags["port"] = server.Port
-			tags["host"] = server.Host
-
 			// Prepare URL
 			requestUrl, err := url.Parse("http://" + server.Host + ":" +
 				server.Port + context + jmxPath)

@ -209,23 +137,27 @@ func (j *Jolokia) Gather(acc inputs.Accumulator) error {
 			out, _ := j.getAttr(requestUrl)

 			if values, ok := out["value"]; ok {
-				switch values.(type) {
+				switch t := values.(type) {
 				case map[string]interface{}:
-					acc.AddFields(measurement, metric.filterFields(values.(map[string]interface{})), tags)
+					for k, v := range t {
+						fields[measurement+"_"+k] = v
+					}
 				case interface{}:
-					acc.Add(measurement, values.(interface{}), tags)
+					fields[measurement] = t
 				}
 			} else {
-				fmt.Printf("Missing key 'value' in '%s' output response\n", requestUrl.String())
+				fmt.Printf("Missing key 'value' in '%s' output response\n",
+					requestUrl.String())
 			}
 		}
+		acc.AddFields("jolokia", fields, tags)
 	}

 	return nil
 }

 func init() {
-	plugins.Add("jolokia", func() plugins.Plugin {
+	inputs.Add("jolokia", func() inputs.Input {
 		return &Jolokia{jClient: &JolokiaClientImpl{client: &http.Client{}}}
 	})
 }
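The rewritten `Gather` no longer emits one measurement per JMX metric: it flattens every metric's `value` into a shared fields map, prefixing map keys with the metric name, and emits a single `jolokia` point per server. A small sketch of just the flattening step — `flattenValue` is an illustrative helper, not part of the plugin:

```go
package main

import "fmt"

// flattenValue mirrors the switch in the rewritten Gather: a map response
// becomes one field per key, prefixed with the metric name, while a scalar
// response becomes a single field named after the metric.
func flattenValue(measurement string, value interface{}, fields map[string]interface{}) {
	switch t := value.(type) {
	case map[string]interface{}:
		for k, v := range t {
			fields[measurement+"_"+k] = v
		}
	default:
		fields[measurement] = t
	}
}

func main() {
	fields := make(map[string]interface{})
	// Shape of a typical Jolokia read response "value" for HeapMemoryUsage.
	flattenValue("heap_memory_usage", map[string]interface{}{
		"init": 67108864.0,
		"used": 203288528.0,
	}, fields)
	flattenValue("thread_count", 42.0, fields)
	fmt.Println(fields)
}
```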
View File
@ -7,7 +7,7 @@ import (
 	"strings"
 	"testing"

-	"github.com/influxdb/telegraf/testutil"
+	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/assert"
 	_ "github.com/stretchr/testify/require"
 )

@ -48,7 +48,7 @@ const empty = ""

 var Servers = []Server{Server{Name: "as1", Host: "127.0.0.1", Port: "8080"}}
 var HeapMetric = Metric{Name: "heap_memory_usage", Jmx: "/java.lang:type=Memory/HeapMemoryUsage"}
-var UsedHeapMetric = Metric{Name: "heap_memory_usage", Jmx: "/java.lang:type=Memory/HeapMemoryUsage", Pass: []string{"used"}}
+var UsedHeapMetric = Metric{Name: "heap_memory_usage", Jmx: "/java.lang:type=Memory/HeapMemoryUsage"}

 type jolokiaClientStub struct {
 	responseBody string

@ -79,7 +79,6 @@ func genJolokiaClientStub(response string, statusCode int, servers []Server, met
 // Test that the proper values are ignored or collected
 func TestHttpJsonMultiValue(t *testing.T) {
-
 	jolokia := genJolokiaClientStub(validMultiValueJSON, 200, Servers, []Metric{HeapMetric})

 	var acc testutil.Accumulator

@ -88,58 +87,28 @@ func TestHttpJsonMultiValue(t *testing.T) {
 	assert.Nil(t, err)
 	assert.Equal(t, 1, len(acc.Points))

-	assert.True(t, acc.CheckFieldsValue("heap_memory_usage", map[string]interface{}{"init": 67108864.0,
-		"committed": 456130560.0,
-		"max":       477626368.0,
-		"used":      203288528.0}))
-}
-
-// Test that the proper values are ignored or collected
-func TestHttpJsonMultiValueWithPass(t *testing.T) {
-
-	jolokia := genJolokiaClientStub(validMultiValueJSON, 200, Servers, []Metric{UsedHeapMetric})
-
-	var acc testutil.Accumulator
-	err := jolokia.Gather(&acc)
-
-	assert.Nil(t, err)
-	assert.Equal(t, 1, len(acc.Points))
-
-	assert.True(t, acc.CheckFieldsValue("heap_memory_usage", map[string]interface{}{"used": 203288528.0}))
-}
-
-// Test that the proper values are ignored or collected
-func TestHttpJsonMultiValueTags(t *testing.T) {
-
-	jolokia := genJolokiaClientStub(validMultiValueJSON, 200, Servers, []Metric{UsedHeapMetric})
-
-	var acc testutil.Accumulator
-	err := jolokia.Gather(&acc)
-
-	assert.Nil(t, err)
-	assert.Equal(t, 1, len(acc.Points))
-	assert.NoError(t, acc.ValidateTaggedFieldsValue("heap_memory_usage", map[string]interface{}{"used": 203288528.0}, map[string]string{"host": "127.0.0.1", "port": "8080", "server": "as1"}))
-}
-
-// Test that the proper values are ignored or collected
-func TestHttpJsonSingleValueTags(t *testing.T) {
-
-	jolokia := genJolokiaClientStub(validSingleValueJSON, 200, Servers, []Metric{UsedHeapMetric})
-
-	var acc testutil.Accumulator
-	err := jolokia.Gather(&acc)
-
-	assert.Nil(t, err)
-	assert.Equal(t, 1, len(acc.Points))
-	assert.NoError(t, acc.ValidateTaggedFieldsValue("heap_memory_usage", map[string]interface{}{"value": 209274376.0}, map[string]string{"host": "127.0.0.1", "port": "8080", "server": "as1"}))
+	fields := map[string]interface{}{
+		"heap_memory_usage_init":      67108864.0,
+		"heap_memory_usage_committed": 456130560.0,
+		"heap_memory_usage_max":       477626368.0,
+		"heap_memory_usage_used":      203288528.0,
+	}
+	tags := map[string]string{
+		"host":   "127.0.0.1",
+		"port":   "8080",
+		"server": "as1",
+	}
+	acc.AssertContainsTaggedFields(t, "jolokia", fields, tags)
 }

 // Test that the proper values are ignored or collected
 func TestHttpJsonOn404(t *testing.T) {

-	jolokia := genJolokiaClientStub(validMultiValueJSON, 404, Servers, []Metric{UsedHeapMetric})
+	jolokia := genJolokiaClientStub(validMultiValueJSON, 404, Servers,
+		[]Metric{UsedHeapMetric})

 	var acc testutil.Accumulator
+	acc.SetDebug(true)
 	err := jolokia.Gather(&acc)

 	assert.Nil(t, err)
View File
@ -5,8 +5,8 @@ import (
 	"strings"
 	"sync"

-	"github.com/influxdb/influxdb/models"
-	"github.com/influxdb/telegraf/plugins"
+	"github.com/influxdata/influxdb/models"
+	"github.com/influxdata/telegraf/plugins/inputs"

 	"github.com/Shopify/sarama"
 	"github.com/wvanbergen/kafka/consumergroup"

@ -148,7 +148,7 @@ func (k *Kafka) Stop() {
 	}
 }

-func (k *Kafka) Gather(acc plugins.Accumulator) error {
+func (k *Kafka) Gather(acc inputs.Accumulator) error {
 	k.Lock()
 	defer k.Unlock()
 	npoints := len(k.pointChan)

@ -160,7 +160,7 @@ func (k *Kafka) Gather(acc inputs.Accumulator) error {
 }

 func init() {
-	plugins.Add("kafka_consumer", func() plugins.Plugin {
+	inputs.Add("kafka_consumer", func() inputs.Input {
 		return &Kafka{}
 	})
 }
View File
@ -6,7 +6,7 @@ import (
 	"time"

 	"github.com/Shopify/sarama"
-	"github.com/influxdb/telegraf/testutil"
+	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )
View File
@ -4,8 +4,8 @@ import (
 	"testing"
 	"time"

-	"github.com/influxdb/influxdb/models"
-	"github.com/influxdb/telegraf/testutil"
+	"github.com/influxdata/influxdb/models"
+	"github.com/influxdata/telegraf/testutil"

 	"github.com/Shopify/sarama"
 	"github.com/stretchr/testify/assert"

@ -85,7 +85,8 @@ func TestRunParserAndGather(t *testing.T) {
 	k.Gather(&acc)

 	assert.Equal(t, len(acc.Points), 1)
-	assert.True(t, acc.CheckValue("cpu_load_short", 23422.0))
+	acc.AssertContainsFields(t, "cpu_load_short",
+		map[string]interface{}{"value": float64(23422)})
 }

 func saramaMsg(val string) *sarama.ConsumerMessage {
View File
@ -3,7 +3,7 @@ package leofs
 import (
 	"bufio"
 	"fmt"
-	"github.com/influxdb/telegraf/plugins"
+	"github.com/influxdata/telegraf/plugins/inputs"
 	"net/url"
 	"os/exec"
 	"strconv"

@ -146,7 +146,7 @@ func (l *LeoFS) Description() string {
 	return "Read metrics from a LeoFS Server via SNMP"
 }

-func (l *LeoFS) Gather(acc plugins.Accumulator) error {
+func (l *LeoFS) Gather(acc inputs.Accumulator) error {
 	if len(l.Servers) == 0 {
 		l.gatherServer(defaultEndpoint, ServerTypeManagerMaster, acc)
 		return nil

@ -176,7 +176,7 @@ func (l *LeoFS) Gather(acc inputs.Accumulator) error {
 	return outerr
 }

-func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc plugins.Accumulator) error {
+func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc inputs.Accumulator) error {
 	cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid)
 	stdout, err := cmd.StdoutPipe()
 	if err != nil {

@ -197,6 +197,8 @@ func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc inputs
 		"node": nodeNameTrimmed,
 	}
 	i := 0
+
+	fields := make(map[string]interface{})
 	for scanner.Scan() {
 		key := KeyMapping[serverType][i]
 		val, err := retrieveTokenAfterColon(scanner.Text())

@ -207,9 +209,10 @@ func (l *LeoFS) gatherServer(endpoint string, serverType ServerType, acc inputs
 		if err != nil {
 			return fmt.Errorf("Unable to parse the value:%s, err:%s", val, err)
 		}
-		acc.Add(key, fVal, tags)
+		fields[key] = fVal
 		i++
 	}
+	acc.AddFields("leofs", fields, tags)
 	return nil
 }

@ -222,7 +225,7 @@ func retrieveTokenAfterColon(line string) (string, error) {
 }

 func init() {
-	plugins.Add("leofs", func() plugins.Plugin {
+	inputs.Add("leofs", func() inputs.Input {
 		return &LeoFS{}
 	})
 }
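Each snmpwalk output line is reduced to the token after its colon, parsed as a float, and stored in the new `fields` map under `KeyMapping[serverType][i]`, so one `leofs` point per node is emitted at the end. A simplified stand-in for the token-extraction step, assuming typical `oid = Type: value` snmpwalk output — `tokenAfterColon` is illustrative and the real `retrieveTokenAfterColon` may differ in details:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// tokenAfterColon is a simplified stand-in for retrieveTokenAfterColon:
// snmpwalk prints lines shaped like "<oid> = Type: value", and the metric
// is the token after the last colon.
func tokenAfterColon(line string) (string, error) {
	idx := strings.LastIndex(line, ":")
	if idx < 0 {
		return "", fmt.Errorf("no colon in line: %q", line)
	}
	return strings.TrimSpace(line[idx+1:]), nil
}

func main() {
	line := `SNMPv2-SMI::enterprises.35450.15.1.0 = Gauge32: 4886`
	tok, err := tokenAfterColon(line)
	if err != nil {
		panic(err)
	}
	fVal, _ := strconv.ParseFloat(tok, 64)
	fmt.Println(fVal) // the value that would be stored into fields[key]
}
```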
View File
@ -1,7 +1,7 @@
 package leofs

 import (
-	"github.com/influxdb/telegraf/testutil"
+	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 	"io/ioutil"

@ -129,7 +129,6 @@ func buildFakeSNMPCmd(src string) {
 }

 func testMain(t *testing.T, code string, endpoint string, serverType ServerType) {
-
 	// Build the fake snmpwalk for test
 	src := makeFakeSNMPSrc(code)
 	defer os.Remove(src)

@ -145,6 +144,7 @@ func testMain(t *testing.T, code string, endpoint string, serverType ServerType)
 	}

 	var acc testutil.Accumulator
+	acc.SetDebug(true)

 	err := l.Gather(&acc)
 	require.NoError(t, err)

@ -152,7 +152,7 @@ func testMain(t *testing.T, code string, endpoint string, serverType ServerType)
 	floatMetrics := KeyMapping[serverType]

 	for _, metric := range floatMetrics {
-		assert.True(t, acc.HasFloatValue(metric), metric)
+		assert.True(t, acc.HasFloatField("leofs", metric), metric)
 	}
 }
View File
@ -13,8 +13,8 @@ import (
 	"strconv"
 	"strings"

-	"github.com/influxdb/telegraf/internal"
-	"github.com/influxdb/telegraf/plugins"
+	"github.com/influxdata/telegraf/internal"
+	"github.com/influxdata/telegraf/plugins/inputs"
 )

 // Lustre proc files can change between versions, so we want to future-proof

@ -22,6 +22,9 @@
 type Lustre2 struct {
 	Ost_procfiles []string
 	Mds_procfiles []string
+
+	// allFields maps and OST name to the metric fields associated with that OST
+	allFields map[string]map[string]interface{}
 }

 var sampleConfig = `

@ -126,7 +129,7 @@ var wanted_mds_fields = []*mapping{
 	},
 }

-func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc plugins.Accumulator) error {
+func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc inputs.Accumulator) error {
 	files, err := filepath.Glob(fileglob)
 	if err != nil {
 		return err

@ -140,8 +143,11 @@ func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping,
 		 */
 		path := strings.Split(file, "/")
 		name := path[len(path)-2]
-		tags := map[string]string{
-			"name": name,
+
+		var fields map[string]interface{}
+		fields, ok := l.allFields[name]
+		if !ok {
+			fields = make(map[string]interface{})
+			l.allFields[name] = fields
 		}

 		lines, err := internal.ReadLines(file)

@ -150,18 +156,17 @@ func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping,
 		}

 		for _, line := range lines {
-			fields := strings.Fields(line)
+			parts := strings.Fields(line)
 			for _, wanted := range wanted_fields {
 				var data uint64
-				if fields[0] == wanted.inProc {
+				if parts[0] == wanted.inProc {
 					wanted_field := wanted.field
 					// if not set, assume field[1]. Shouldn't be field[0], as
 					// that's a string
 					if wanted_field == 0 {
 						wanted_field = 1
 					}
-					data, err = strconv.ParseUint((fields[wanted_field]), 10, 64)
+					data, err = strconv.ParseUint((parts[wanted_field]), 10, 64)
 					if err != nil {
 						return err
 					}

@ -169,8 +174,7 @@ func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping,
 					if wanted.reportAs != "" {
 						report_name = wanted.reportAs
 					}
-					acc.Add(report_name, data, tags)
+					fields[report_name] = data
 				}
 			}
 		}

@ -189,16 +193,19 @@ func (l *Lustre2) Description() string {
 }

 // Gather reads stats from all lustre targets
-func (l *Lustre2) Gather(acc plugins.Accumulator) error {
+func (l *Lustre2) Gather(acc inputs.Accumulator) error {
+	l.allFields = make(map[string]map[string]interface{})
+
 	if len(l.Ost_procfiles) == 0 {
 		// read/write bytes are in obdfilter/<ost_name>/stats
-		err := l.GetLustreProcStats("/proc/fs/lustre/obdfilter/*/stats", wanted_ost_fields, acc)
+		err := l.GetLustreProcStats("/proc/fs/lustre/obdfilter/*/stats",
+			wanted_ost_fields, acc)
 		if err != nil {
 			return err
 		}
 		// cache counters are in osd-ldiskfs/<ost_name>/stats
-		err = l.GetLustreProcStats("/proc/fs/lustre/osd-ldiskfs/*/stats", wanted_ost_fields, acc)
+		err = l.GetLustreProcStats("/proc/fs/lustre/osd-ldiskfs/*/stats",
+			wanted_ost_fields, acc)
 		if err != nil {
 			return err
 		}

@ -206,7 +213,8 @@ func (l *Lustre2) Gather(acc inputs.Accumulator) error {
 	if len(l.Mds_procfiles) == 0 {
 		// Metadata server stats
-		err := l.GetLustreProcStats("/proc/fs/lustre/mdt/*/md_stats", wanted_mds_fields, acc)
+		err := l.GetLustreProcStats("/proc/fs/lustre/mdt/*/md_stats",
+			wanted_mds_fields, acc)
 		if err != nil {
 			return err
 		}

@ -225,11 +233,18 @@ func (l *Lustre2) Gather(acc inputs.Accumulator) error {
 		}
 	}

+	for name, fields := range l.allFields {
+		tags := map[string]string{
+			"name": name,
+		}
+		acc.AddFields("lustre2", fields, tags)
+	}
+
 	return nil
 }

 func init() {
-	plugins.Add("lustre2", func() plugins.Plugin {
+	inputs.Add("lustre2", func() inputs.Input {
 		return &Lustre2{}
 	})
 }
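The key change here is the `allFields` map: stats for the same target name can come from several proc globs (`obdfilter`, `osd-ldiskfs`, `mdt`), so `GetLustreProcStats` merges them into one per-name fields map and `Gather` emits a single `lustre2` point per name at the end. A minimal sketch of that merge bookkeeping — `mergeFields` is an illustrative helper, not part of the plugin:

```go
package main

import "fmt"

// mergeFields mirrors the allFields bookkeeping: stats parsed from different
// proc files for the same target name land in one shared map, so Gather can
// emit a single point per target once every glob has been read.
func mergeFields(all map[string]map[string]interface{}, name, field string, value uint64) {
	fields, ok := all[name]
	if !ok {
		fields = make(map[string]interface{})
		all[name] = fields
	}
	fields[field] = value
}

func main() {
	all := make(map[string]map[string]interface{})
	// Two different proc files contributing to the same OST.
	mergeFields(all, "OST0001", "read_bytes", 78026117632000)
	mergeFields(all, "OST0001", "cache_hit", 7393729777)

	for name, fields := range all {
		fmt.Printf("lustre2 name=%s fields=%v\n", name, fields)
	}
}
```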
View File
@ -5,8 +5,7 @@ import (
 	"os"
 	"testing"

-	"github.com/influxdb/telegraf/testutil"
-	"github.com/stretchr/testify/assert"
+	"github.com/influxdata/telegraf/testutil"
 	"github.com/stretchr/testify/require"
 )

@ -58,11 +57,6 @@ samedir_rename            259625 samples [reqs]
 crossdir_rename           369571 samples [reqs]
 `

-type metrics struct {
-	name  string
-	value uint64
-}
-
 func TestLustre2GeneratesMetrics(t *testing.T) {

 	tempdir := os.TempDir() + "/telegraf/proc/fs/lustre/"

@ -103,41 +97,33 @@ func TestLustre2GeneratesMetrics(t *testing.T) {
 		"name": ost_name,
 	}

-	intMetrics := []*metrics{
-		{
-			name:  "write_bytes",
-			value: 15201500833981,
-		},
-		{
-			name:  "read_bytes",
-			value: 78026117632000,
-		},
-		{
-			name:  "write_calls",
-			value: 71893382,
-		},
-		{
-			name:  "read_calls",
-			value: 203238095,
-		},
-		{
-			name:  "cache_hit",
-			value: 7393729777,
-		},
-		{
-			name:  "cache_access",
-			value: 19047063027,
-		},
-		{
-			name:  "cache_miss",
-			value: 11653333250,
-		},
-	}
+	fields := map[string]interface{}{
+		"cache_access":    uint64(19047063027),
+		"cache_hit":       uint64(7393729777),
+		"cache_miss":      uint64(11653333250),
+		"close":           uint64(873243496),
+		"crossdir_rename": uint64(369571),
+		"getattr":         uint64(1503663097),
+		"getxattr":        uint64(6145349681),
+		"link":            uint64(445),
+		"mkdir":           uint64(705499),
+		"mknod":           uint64(349042),
+		"open":            uint64(1024577037),
+		"read_bytes":      uint64(78026117632000),
+		"read_calls":      uint64(203238095),
+		"rename":          uint64(629196),
+		"rmdir":           uint64(227434),
+		"samedir_rename":  uint64(259625),
+		"setattr":         uint64(1898364),
+		"setxattr":        uint64(83969),
+		"statfs":          uint64(2916320),
+		"sync":            uint64(434081),
+		"unlink":          uint64(3549417),
+		"write_bytes":     uint64(15201500833981),
+		"write_calls":     uint64(71893382),
+	}

-	for _, metric := range intMetrics {
-		assert.True(t, acc.HasUIntValue(metric.name), metric.name)
-		assert.True(t, acc.CheckTaggedValue(metric.name, metric.value, tags))
-	}
+	acc.AssertContainsTaggedFields(t, "lustre2", fields, tags)

 	err = os.RemoveAll(os.TempDir() + "/telegraf")
 	require.NoError(t, err)
View File
@ -0,0 +1,116 @@
package mailchimp
import (
"fmt"
"time"
"github.com/influxdata/telegraf/plugins/inputs"
)
type MailChimp struct {
api *ChimpAPI
ApiKey string
DaysOld int
CampaignId string
}
var sampleConfig = `
# MailChimp API key
# get from https://admin.mailchimp.com/account/api/
api_key = "" # required
# Reports for campaigns sent more than days_old ago will not be collected.
# 0 means collect all.
days_old = 0
# Campaign ID to get, if empty gets all campaigns, this option overrides days_old
# campaign_id = ""
`
func (m *MailChimp) SampleConfig() string {
return sampleConfig
}
func (m *MailChimp) Description() string {
return "Gathers metrics from the /3.0/reports MailChimp API"
}
func (m *MailChimp) Gather(acc inputs.Accumulator) error {
if m.api == nil {
m.api = NewChimpAPI(m.ApiKey)
}
m.api.Debug = false
if m.CampaignId == "" {
since := ""
if m.DaysOld > 0 {
now := time.Now()
d, _ := time.ParseDuration(fmt.Sprintf("%dh", 24*m.DaysOld))
since = now.Add(-d).Format(time.RFC3339)
}
reports, err := m.api.GetReports(ReportsParams{
SinceSendTime: since,
})
if err != nil {
return err
}
now := time.Now()
for _, report := range reports.Reports {
gatherReport(acc, report, now)
}
} else {
report, err := m.api.GetReport(m.CampaignId)
if err != nil {
return err
}
now := time.Now()
gatherReport(acc, report, now)
}
return nil
}
func gatherReport(acc inputs.Accumulator, report Report, now time.Time) {
tags := make(map[string]string)
tags["id"] = report.ID
tags["campaign_title"] = report.CampaignTitle
fields := map[string]interface{}{
"emails_sent": report.EmailsSent,
"abuse_reports": report.AbuseReports,
"unsubscribed": report.Unsubscribed,
"hard_bounces": report.Bounces.HardBounces,
"soft_bounces": report.Bounces.SoftBounces,
"syntax_errors": report.Bounces.SyntaxErrors,
"forwards_count": report.Forwards.ForwardsCount,
"forwards_opens": report.Forwards.ForwardsOpens,
"opens_total": report.Opens.OpensTotal,
"unique_opens": report.Opens.UniqueOpens,
"open_rate": report.Opens.OpenRate,
"clicks_total": report.Clicks.ClicksTotal,
"unique_clicks": report.Clicks.UniqueClicks,
"unique_subscriber_clicks": report.Clicks.UniqueSubscriberClicks,
"click_rate": report.Clicks.ClickRate,
"facebook_recipient_likes": report.FacebookLikes.RecipientLikes,
"facebook_unique_likes": report.FacebookLikes.UniqueLikes,
"facebook_likes": report.FacebookLikes.FacebookLikes,
"industry_type": report.IndustryStats.Type,
"industry_open_rate": report.IndustryStats.OpenRate,
"industry_click_rate": report.IndustryStats.ClickRate,
"industry_bounce_rate": report.IndustryStats.BounceRate,
"industry_unopen_rate": report.IndustryStats.UnopenRate,
"industry_unsub_rate": report.IndustryStats.UnsubRate,
"industry_abuse_rate": report.IndustryStats.AbuseRate,
"list_stats_sub_rate": report.ListStats.SubRate,
"list_stats_unsub_rate": report.ListStats.UnsubRate,
"list_stats_open_rate": report.ListStats.OpenRate,
"list_stats_click_rate": report.ListStats.ClickRate,
}
acc.AddFields("mailchimp", fields, tags, now)
}
func init() {
inputs.Add("mailchimp", func() inputs.Input {
return &MailChimp{}
})
}
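In the new mailchimp input above, `days_old` is turned into a send-time filter by subtracting `24h * days_old` from the current time and formatting the result as RFC3339 for the reports request. A small sketch of that arithmetic — `sinceTimestamp` is an illustrative helper, not part of the plugin:

```go
package main

import (
	"fmt"
	"time"
)

// sinceTimestamp reproduces the days_old arithmetic: 24h * daysOld is
// subtracted from now and formatted as RFC3339 for the since-send-time
// filter; zero (or negative) means no filter, i.e. collect all campaigns.
func sinceTimestamp(now time.Time, daysOld int) string {
	if daysOld <= 0 {
		return ""
	}
	d := time.Duration(daysOld) * 24 * time.Hour
	return now.Add(-d).Format(time.RFC3339)
}

func main() {
	now := time.Date(2016, 1, 22, 11, 25, 46, 0, time.UTC)
	fmt.Println(sinceTimestamp(now, 7)) // 2016-01-15T11:25:46Z
}
```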
View File
@ -7,9 +7,8 @@ import (
 	"net/url"
 	"testing"

-	"github.com/influxdb/telegraf/testutil"
+	"github.com/influxdata/telegraf/testutil"

-	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
 )

@ -42,67 +41,38 @@ func TestMailChimpGatherReports(t *testing.T) {
 	tags["id"] = "42694e9e57"
 	tags["campaign_title"] = "Freddie's Jokes Vol. 1"

-	testInts := []struct {
-		measurement string
-		value       int
-	}{
-		{"emails_sent", 200},
-		{"abuse_reports", 0},
-		{"unsubscribed", 2},
-		{"hard_bounces", 0},
-		{"soft_bounces", 2},
-		{"syntax_errors", 0},
-		{"forwards_count", 0},
-		{"forwards_opens", 0},
-		{"opens_total", 186},
-		{"unique_opens", 100},
-		{"clicks_total", 42},
-		{"unique_clicks", 400},
-		{"unique_subscriber_clicks", 42},
-		{"facebook_recipient_likes", 5},
-		{"facebook_unique_likes", 8},
-		{"facebook_likes", 42},
-	}
-	for _, test := range testInts {
-		assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
-			fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
-				test.measurement, test.value, tags))
-	}
-
-	testFloats := []struct {
-		measurement string
-		value       float64
-	}{
-		{"open_rate", 42},
-		{"click_rate", 42},
-		{"industry_open_rate", 0.17076777144396},
-		{"industry_click_rate", 0.027431311866951},
-		{"industry_bounce_rate", 0.0063767751251474},
-		{"industry_unopen_rate", 0.82285545343089},
-		{"industry_unsub_rate", 0.001436957032815},
-		{"industry_abuse_rate", 0.00021111996110887},
-		{"list_stats_sub_rate", 10},
-		{"list_stats_unsub_rate", 20},
-		{"list_stats_open_rate", 42},
-		{"list_stats_click_rate", 42},
-	}
-	for _, test := range testFloats {
-		assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
-			fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
-				test.measurement, test.value, tags))
-	}
-
-	testStrings := []struct {
-		measurement string
-		value       string
-	}{
-		{"industry_type", "Social Networks and Online Communities"},
-	}
-	for _, test := range testStrings {
-		assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
-			fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
-				test.measurement, test.value, tags))
-	}
+	fields := map[string]interface{}{
+		"emails_sent":              int(200),
+		"abuse_reports":            int(0),
+		"unsubscribed":             int(2),
+		"hard_bounces":             int(0),
+		"soft_bounces":             int(2),
+		"syntax_errors":            int(0),
+		"forwards_count":           int(0),
+		"forwards_opens":           int(0),
+		"opens_total":              int(186),
+		"unique_opens":             int(100),
+		"clicks_total":             int(42),
+		"unique_clicks":            int(400),
+		"unique_subscriber_clicks": int(42),
+		"facebook_recipient_likes": int(5),
+		"facebook_unique_likes":    int(8),
+		"facebook_likes":           int(42),
+		"open_rate":                float64(42),
+		"click_rate":               float64(42),
+		"industry_open_rate":       float64(0.17076777144396),
+		"industry_click_rate":      float64(0.027431311866951),
+		"industry_bounce_rate":     float64(0.0063767751251474),
+		"industry_unopen_rate":     float64(0.82285545343089),
+		"industry_unsub_rate":      float64(0.001436957032815),
+		"industry_abuse_rate":      float64(0.00021111996110887),
+		"list_stats_sub_rate":      float64(10),
+		"list_stats_unsub_rate":    float64(20),
+		"list_stats_open_rate":     float64(42),
+		"list_stats_click_rate":    float64(42),
+		"industry_type":            "Social Networks and Online Communities",
+	}
+	acc.AssertContainsTaggedFields(t, "mailchimp", fields, tags)
 }

 func TestMailChimpGatherReport(t *testing.T) {

@ -135,67 +105,39 @@ func TestMailChimpGatherReport(t *testing.T) {
 	tags["id"] = "42694e9e57"
 	tags["campaign_title"] = "Freddie's Jokes Vol. 1"

-	testInts := []struct {
-		measurement string
-		value       int
-	}{
-		{"emails_sent", 200},
-		{"abuse_reports", 0},
-		{"unsubscribed", 2},
-		{"hard_bounces", 0},
-		{"soft_bounces", 2},
-		{"syntax_errors", 0},
-		{"forwards_count", 0},
-		{"forwards_opens", 0},
-		{"opens_total", 186},
-		{"unique_opens", 100},
-		{"clicks_total", 42},
-		{"unique_clicks", 400},
-		{"unique_subscriber_clicks", 42},
-		{"facebook_recipient_likes", 5},
-		{"facebook_unique_likes", 8},
-		{"facebook_likes", 42},
-	}
-	for _, test := range testInts {
-		assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
-			fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
-				test.measurement, test.value, tags))
-	}
-
-	testFloats := []struct {
-		measurement string
-		value       float64
-	}{
-		{"open_rate", 42},
-		{"click_rate", 42},
-		{"industry_open_rate", 0.17076777144396},
-		{"industry_click_rate", 0.027431311866951},
-		{"industry_bounce_rate", 0.0063767751251474},
-		{"industry_unopen_rate", 0.82285545343089},
-		{"industry_unsub_rate", 0.001436957032815},
-		{"industry_abuse_rate", 0.00021111996110887},
-		{"list_stats_sub_rate", 10},
-		{"list_stats_unsub_rate", 20},
-		{"list_stats_open_rate", 42},
-		{"list_stats_click_rate", 42},
-	}
-	for _, test := range testFloats {
-		assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
-			fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
-				test.measurement, test.value, tags))
-	}
-
-	testStrings := []struct {
-		measurement string
-		value       string
-	}{
-		{"industry_type", "Social Networks and Online Communities"},
-	}
-	for _, test := range testStrings {
-		assert.True(t, acc.CheckTaggedValue(test.measurement, test.value, tags),
-			fmt.Sprintf("Measurement: %v, value: %v, tags: %v not found",
-				test.measurement, test.value, tags))
-	}
+	fields := map[string]interface{}{
+		"emails_sent":              int(200),
+		"abuse_reports":            int(0),
+		"unsubscribed":             int(2),
+		"hard_bounces":             int(0),
+		"soft_bounces":             int(2),
+		"syntax_errors":            int(0),
+		"forwards_count":           int(0),
+		"forwards_opens":           int(0),
+		"opens_total":              int(186),
+		"unique_opens":             int(100),
+		"clicks_total":             int(42),
+		"unique_clicks":            int(400),
+		"unique_subscriber_clicks": int(42),
+		"facebook_recipient_likes": int(5),
+		"facebook_unique_likes":    int(8),
+		"facebook_likes":           int(42),
+		"open_rate":                float64(42),
+		"click_rate":               float64(42),
+		"industry_open_rate":       float64(0.17076777144396),
+		"industry_click_rate":      float64(0.027431311866951),
+		"industry_bounce_rate":     float64(0.0063767751251474),
+		"industry_unopen_rate":     float64(0.82285545343089),
+		"industry_unsub_rate":      float64(0.001436957032815),
+		"industry_abuse_rate":      float64(0.00021111996110887),
+		"list_stats_sub_rate":      float64(10),
+		"list_stats_unsub_rate":    float64(20),
+		"list_stats_open_rate":     float64(42),
+		"list_stats_click_rate":    float64(42),
+		"industry_type":            "Social Networks and Online Communities",
+	}
+	acc.AssertContainsTaggedFields(t, "mailchimp", fields, tags)
 }

 func TestMailChimpGatherError(t *testing.T) {

View File

@ -8,7 +8,7 @@ import (
"strconv" "strconv"
"time" "time"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
) )
// Memcached is a memcached plugin // Memcached is a memcached plugin
@ -69,7 +69,7 @@ func (m *Memcached) Description() string {
} }
// Gather reads stats from all configured servers accumulates stats // Gather reads stats from all configured servers accumulates stats
func (m *Memcached) Gather(acc plugins.Accumulator) error { func (m *Memcached) Gather(acc inputs.Accumulator) error {
if len(m.Servers) == 0 && len(m.UnixSockets) == 0 { if len(m.Servers) == 0 && len(m.UnixSockets) == 0 {
return m.gatherServer(":11211", false, acc) return m.gatherServer(":11211", false, acc)
} }
@ -92,7 +92,7 @@ func (m *Memcached) Gather(acc plugins.Accumulator) error {
func (m *Memcached) gatherServer( func (m *Memcached) gatherServer(
address string, address string,
unix bool, unix bool,
acc plugins.Accumulator, acc inputs.Accumulator,
) error { ) error {
var conn net.Conn var conn net.Conn
if unix { if unix {
@ -137,16 +137,18 @@ func (m *Memcached) gatherServer(
tags := map[string]string{"server": address} tags := map[string]string{"server": address}
// Process values // Process values
fields := make(map[string]interface{})
for _, key := range sendMetrics { for _, key := range sendMetrics {
if value, ok := values[key]; ok { if value, ok := values[key]; ok {
// Mostly it is the number // Mostly it is the number
if iValue, errParse := strconv.ParseInt(value, 10, 64); errParse != nil { if iValue, errParse := strconv.ParseInt(value, 10, 64); errParse == nil {
acc.Add(key, value, tags) fields[key] = iValue
} else { } else {
acc.Add(key, iValue, tags) fields[key] = value
} }
} }
} }
acc.AddFields("memcached", fields, tags)
return nil return nil
} }
@ -176,7 +178,7 @@ func parseResponse(r *bufio.Reader) (map[string]string, error) {
} }
func init() { func init() {
plugins.Add("memcached", func() plugins.Plugin { inputs.Add("memcached", func() inputs.Input {
return &Memcached{} return &Memcached{}
}) })
} }

View File

@ -5,7 +5,7 @@ import (
"strings" "strings"
"testing" "testing"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -32,7 +32,7 @@ func TestMemcachedGeneratesMetrics(t *testing.T) {
"bytes_read", "bytes_written", "threads", "conn_yields"} "bytes_read", "bytes_written", "threads", "conn_yields"}
for _, metric := range intMetrics { for _, metric := range intMetrics {
assert.True(t, acc.HasIntValue(metric), metric) assert.True(t, acc.HasIntField("memcached", metric), metric)
} }
} }

View File

@ -1,4 +1,4 @@
package plugins package inputs
import "github.com/stretchr/testify/mock" import "github.com/stretchr/testify/mock"

View File

@ -9,7 +9,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
"gopkg.in/mgo.v2" "gopkg.in/mgo.v2"
) )
@ -45,7 +45,7 @@ var localhost = &url.URL{Host: "127.0.0.1:27017"}
// Reads stats from all configured servers accumulates stats. // Reads stats from all configured servers accumulates stats.
// Returns one of the errors encountered while gather stats (if any). // Returns one of the errors encountered while gather stats (if any).
func (m *MongoDB) Gather(acc plugins.Accumulator) error { func (m *MongoDB) Gather(acc inputs.Accumulator) error {
if len(m.Servers) == 0 { if len(m.Servers) == 0 {
m.gatherServer(m.getMongoServer(localhost), acc) m.gatherServer(m.getMongoServer(localhost), acc)
return nil return nil
@ -88,7 +88,7 @@ func (m *MongoDB) getMongoServer(url *url.URL) *Server {
return m.mongos[url.Host] return m.mongos[url.Host]
} }
func (m *MongoDB) gatherServer(server *Server, acc plugins.Accumulator) error { func (m *MongoDB) gatherServer(server *Server, acc inputs.Accumulator) error {
if server.Session == nil { if server.Session == nil {
var dialAddrs []string var dialAddrs []string
if server.Url.User != nil { if server.Url.User != nil {
@ -98,7 +98,8 @@ func (m *MongoDB) gatherServer(server *Server, acc plugins.Accumulator) error {
} }
dialInfo, err := mgo.ParseURL(dialAddrs[0]) dialInfo, err := mgo.ParseURL(dialAddrs[0])
if err != nil { if err != nil {
return fmt.Errorf("Unable to parse URL (%s), %s\n", dialAddrs[0], err.Error()) return fmt.Errorf("Unable to parse URL (%s), %s\n",
dialAddrs[0], err.Error())
} }
dialInfo.Direct = true dialInfo.Direct = true
dialInfo.Timeout = time.Duration(10) * time.Second dialInfo.Timeout = time.Duration(10) * time.Second
@ -137,7 +138,7 @@ func (m *MongoDB) gatherServer(server *Server, acc plugins.Accumulator) error {
} }
func init() { func init() {
plugins.Add("mongodb", func() plugins.Plugin { inputs.Add("mongodb", func() inputs.Input {
return &MongoDB{ return &MongoDB{
mongos: make(map[string]*Server), mongos: make(map[string]*Server),
} }

View File

@ -5,11 +5,12 @@ import (
"reflect" "reflect"
"strconv" "strconv"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
) )
type MongodbData struct { type MongodbData struct {
StatLine *StatLine StatLine *StatLine
Fields map[string]interface{}
Tags map[string]string Tags map[string]string
} }
@ -20,6 +21,7 @@ func NewMongodbData(statLine *StatLine, tags map[string]string) *MongodbData {
return &MongodbData{ return &MongodbData{
StatLine: statLine, StatLine: statLine,
Tags: tags, Tags: tags,
Fields: make(map[string]interface{}),
} }
} }
@ -63,38 +65,44 @@ var WiredTigerStats = map[string]string{
"percent_cache_used": "CacheUsedPercent", "percent_cache_used": "CacheUsedPercent",
} }
func (d *MongodbData) AddDefaultStats(acc plugins.Accumulator) { func (d *MongodbData) AddDefaultStats() {
statLine := reflect.ValueOf(d.StatLine).Elem() statLine := reflect.ValueOf(d.StatLine).Elem()
d.addStat(acc, statLine, DefaultStats) d.addStat(statLine, DefaultStats)
if d.StatLine.NodeType != "" { if d.StatLine.NodeType != "" {
d.addStat(acc, statLine, DefaultReplStats) d.addStat(statLine, DefaultReplStats)
} }
if d.StatLine.StorageEngine == "mmapv1" { if d.StatLine.StorageEngine == "mmapv1" {
d.addStat(acc, statLine, MmapStats) d.addStat(statLine, MmapStats)
} else if d.StatLine.StorageEngine == "wiredTiger" { } else if d.StatLine.StorageEngine == "wiredTiger" {
for key, value := range WiredTigerStats { for key, value := range WiredTigerStats {
val := statLine.FieldByName(value).Interface() val := statLine.FieldByName(value).Interface()
percentVal := fmt.Sprintf("%.1f", val.(float64)*100) percentVal := fmt.Sprintf("%.1f", val.(float64)*100)
floatVal, _ := strconv.ParseFloat(percentVal, 64) floatVal, _ := strconv.ParseFloat(percentVal, 64)
d.add(acc, key, floatVal) d.add(key, floatVal)
} }
} }
} }
func (d *MongodbData) addStat(acc plugins.Accumulator, statLine reflect.Value, stats map[string]string) { func (d *MongodbData) addStat(
statLine reflect.Value,
stats map[string]string,
) {
for key, value := range stats { for key, value := range stats {
val := statLine.FieldByName(value).Interface() val := statLine.FieldByName(value).Interface()
d.add(acc, key, val) d.add(key, val)
} }
} }
func (d *MongodbData) add(acc plugins.Accumulator, key string, val interface{}) { func (d *MongodbData) add(key string, val interface{}) {
d.Fields[key] = val
}
func (d *MongodbData) flush(acc inputs.Accumulator) {
acc.AddFields( acc.AddFields(
key, "mongodb",
map[string]interface{}{ d.Fields,
"value": val,
},
d.Tags, d.Tags,
d.StatLine.Time, d.StatLine.Time,
) )
d.Fields = make(map[string]interface{})
} }

View File

@ -4,9 +4,8 @@ import (
"testing" "testing"
"time" "time"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
) )
var tags = make(map[string]string) var tags = make(map[string]string)
@ -37,10 +36,11 @@ func TestAddNonReplStats(t *testing.T) {
) )
var acc testutil.Accumulator var acc testutil.Accumulator
d.AddDefaultStats(&acc) d.AddDefaultStats()
d.flush(&acc)
for key, _ := range DefaultStats { for key, _ := range DefaultStats {
assert.True(t, acc.HasIntValue(key)) assert.True(t, acc.HasIntField("mongodb", key))
} }
} }
@ -57,10 +57,11 @@ func TestAddReplStats(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
d.AddDefaultStats(&acc) d.AddDefaultStats()
d.flush(&acc)
for key, _ := range MmapStats { for key, _ := range MmapStats {
assert.True(t, acc.HasIntValue(key)) assert.True(t, acc.HasIntField("mongodb", key))
} }
} }
@ -76,10 +77,11 @@ func TestAddWiredTigerStats(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
d.AddDefaultStats(&acc) d.AddDefaultStats()
d.flush(&acc)
for key, _ := range WiredTigerStats { for key, _ := range WiredTigerStats {
assert.True(t, acc.HasFloatValue(key)) assert.True(t, acc.HasFloatField("mongodb", key))
} }
} }
@ -95,17 +97,37 @@ func TestStateTag(t *testing.T) {
tags, tags,
) )
stats := []string{"inserts_per_sec", "queries_per_sec"}
stateTags := make(map[string]string) stateTags := make(map[string]string)
stateTags["state"] = "PRI" stateTags["state"] = "PRI"
var acc testutil.Accumulator var acc testutil.Accumulator
d.AddDefaultStats(&acc) d.AddDefaultStats()
d.flush(&acc)
for _, key := range stats { fields := map[string]interface{}{
err := acc.ValidateTaggedValue(key, int64(0), stateTags) "active_reads": int64(0),
require.NoError(t, err) "active_writes": int64(0),
"commands_per_sec": int64(0),
"deletes_per_sec": int64(0),
"flushes_per_sec": int64(0),
"getmores_per_sec": int64(0),
"inserts_per_sec": int64(0),
"member_status": "PRI",
"net_in_bytes": int64(0),
"net_out_bytes": int64(0),
"open_connections": int64(0),
"queries_per_sec": int64(0),
"queued_reads": int64(0),
"queued_writes": int64(0),
"repl_commands_per_sec": int64(0),
"repl_deletes_per_sec": int64(0),
"repl_getmores_per_sec": int64(0),
"repl_inserts_per_sec": int64(0),
"repl_queries_per_sec": int64(0),
"repl_updates_per_sec": int64(0),
"resident_megabytes": int64(0),
"updates_per_sec": int64(0),
"vsize_megabytes": int64(0),
} }
acc.AssertContainsTaggedFields(t, "mongodb", fields, stateTags)
} }

View File

@ -4,7 +4,7 @@ import (
"net/url" "net/url"
"time" "time"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
"gopkg.in/mgo.v2" "gopkg.in/mgo.v2"
"gopkg.in/mgo.v2/bson" "gopkg.in/mgo.v2/bson"
) )
@ -21,7 +21,7 @@ func (s *Server) getDefaultTags() map[string]string {
return tags return tags
} }
func (s *Server) gatherData(acc plugins.Accumulator) error { func (s *Server) gatherData(acc inputs.Accumulator) error {
s.Session.SetMode(mgo.Eventual, true) s.Session.SetMode(mgo.Eventual, true)
s.Session.SetSocketTimeout(0) s.Session.SetSocketTimeout(0)
result := &ServerStatus{} result := &ServerStatus{}
@ -44,7 +44,8 @@ func (s *Server) gatherData(acc plugins.Accumulator) error {
NewStatLine(*s.lastResult, *result, s.Url.Host, true, durationInSeconds), NewStatLine(*s.lastResult, *result, s.Url.Host, true, durationInSeconds),
s.getDefaultTags(), s.getDefaultTags(),
) )
data.AddDefaultStats(acc) data.AddDefaultStats()
data.flush(acc)
} }
return nil return nil
} }

View File

@ -6,7 +6,7 @@ import (
"testing" "testing"
"time" "time"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )

View File

@ -6,7 +6,7 @@ import (
"strings" "strings"
_ "github.com/go-sql-driver/mysql" _ "github.com/go-sql-driver/mysql"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
) )
type Mysql struct { type Mysql struct {
@ -35,7 +35,7 @@ func (m *Mysql) Description() string {
var localhost = "" var localhost = ""
func (m *Mysql) Gather(acc plugins.Accumulator) error { func (m *Mysql) Gather(acc inputs.Accumulator) error {
if len(m.Servers) == 0 { if len(m.Servers) == 0 {
// if we can't get stats in this case, thats fine, don't report // if we can't get stats in this case, thats fine, don't report
// an error. // an error.
@ -113,7 +113,7 @@ var mappings = []*mapping{
}, },
} }
func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error { func (m *Mysql) gatherServer(serv string, acc inputs.Accumulator) error {
// If user forgot the '/', add it // If user forgot the '/', add it
if strings.HasSuffix(serv, ")") { if strings.HasSuffix(serv, ")") {
serv = serv + "/" serv = serv + "/"
@ -138,6 +138,8 @@ func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
if err != nil { if err != nil {
servtag = "localhost" servtag = "localhost"
} }
tags := map[string]string{"server": servtag}
fields := make(map[string]interface{})
for rows.Next() { for rows.Next() {
var name string var name string
var val interface{} var val interface{}
@ -149,12 +151,10 @@ func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
var found bool var found bool
tags := map[string]string{"server": servtag}
for _, mapped := range mappings { for _, mapped := range mappings {
if strings.HasPrefix(name, mapped.onServer) { if strings.HasPrefix(name, mapped.onServer) {
i, _ := strconv.Atoi(string(val.([]byte))) i, _ := strconv.Atoi(string(val.([]byte)))
acc.Add(mapped.inExport+name[len(mapped.onServer):], i, tags) fields[mapped.inExport+name[len(mapped.onServer):]] = i
found = true found = true
} }
} }
@ -170,16 +170,17 @@ func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
return err return err
} }
acc.Add("queries", i, tags) fields["queries"] = i
case "Slow_queries": case "Slow_queries":
i, err := strconv.ParseInt(string(val.([]byte)), 10, 64) i, err := strconv.ParseInt(string(val.([]byte)), 10, 64)
if err != nil { if err != nil {
return err return err
} }
acc.Add("slow_queries", i, tags) fields["slow_queries"] = i
} }
} }
acc.AddFields("mysql", fields, tags)
conn_rows, err := db.Query("SELECT user, sum(1) FROM INFORMATION_SCHEMA.PROCESSLIST GROUP BY user") conn_rows, err := db.Query("SELECT user, sum(1) FROM INFORMATION_SCHEMA.PROCESSLIST GROUP BY user")
@ -193,18 +194,20 @@ func (m *Mysql) gatherServer(serv string, acc plugins.Accumulator) error {
} }
tags := map[string]string{"server": servtag, "user": user} tags := map[string]string{"server": servtag, "user": user}
fields := make(map[string]interface{})
if err != nil { if err != nil {
return err return err
} }
acc.Add("connections", connections, tags) fields["connections"] = connections
acc.AddFields("mysql_users", fields, tags)
} }
return nil return nil
} }
func init() { func init() {
plugins.Add("mysql", func() plugins.Plugin { inputs.Add("mysql", func() inputs.Input {
return &Mysql{} return &Mysql{}
}) })
} }

View File

@ -2,72 +2,13 @@ package mysql
import ( import (
"fmt" "fmt"
"strings"
"testing" "testing"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
func TestMysqlGeneratesMetrics(t *testing.T) {
if testing.Short() {
t.Skip("Skipping integration test in short mode")
}
m := &Mysql{
Servers: []string{fmt.Sprintf("root@tcp(%s:3306)/", testutil.GetLocalHost())},
}
var acc testutil.Accumulator
err := m.Gather(&acc)
require.NoError(t, err)
prefixes := []struct {
prefix string
count int
}{
{"commands", 139},
{"handler", 16},
{"bytes", 2},
{"innodb", 46},
{"threads", 4},
{"aborted", 2},
{"created", 3},
{"key", 7},
{"open", 7},
{"opened", 3},
{"qcache", 8},
{"table", 1},
}
intMetrics := []string{
"queries",
"slow_queries",
"connections",
}
for _, prefix := range prefixes {
var count int
for _, p := range acc.Points {
if strings.HasPrefix(p.Measurement, prefix.prefix) {
count++
}
}
if prefix.count > count {
t.Errorf("Expected less than %d measurements with prefix %s, got %d",
count, prefix.prefix, prefix.count)
}
}
for _, metric := range intMetrics {
assert.True(t, acc.HasIntValue(metric))
}
}
func TestMysqlDefaultsToLocal(t *testing.T) { func TestMysqlDefaultsToLocal(t *testing.T) {
if testing.Short() { if testing.Short() {
t.Skip("Skipping integration test in short mode") t.Skip("Skipping integration test in short mode")
@ -82,7 +23,7 @@ func TestMysqlDefaultsToLocal(t *testing.T) {
err := m.Gather(&acc) err := m.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
assert.True(t, len(acc.Points) > 0) assert.True(t, acc.HasMeasurement("mysql"))
} }
func TestMysqlParseDSN(t *testing.T) { func TestMysqlParseDSN(t *testing.T) {

View File

@ -11,7 +11,7 @@ import (
"sync" "sync"
"time" "time"
"github.com/influxdb/telegraf/plugins" "github.com/influxdata/telegraf/plugins/inputs"
) )
type Nginx struct { type Nginx struct {
@ -31,7 +31,7 @@ func (n *Nginx) Description() string {
return "Read Nginx's basic status information (ngx_http_stub_status_module)" return "Read Nginx's basic status information (ngx_http_stub_status_module)"
} }
func (n *Nginx) Gather(acc plugins.Accumulator) error { func (n *Nginx) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup var wg sync.WaitGroup
var outerr error var outerr error
@ -59,7 +59,7 @@ var tr = &http.Transport{
var client = &http.Client{Transport: tr} var client = &http.Client{Transport: tr}
func (n *Nginx) gatherUrl(addr *url.URL, acc plugins.Accumulator) error { func (n *Nginx) gatherUrl(addr *url.URL, acc inputs.Accumulator) error {
resp, err := client.Get(addr.String()) resp, err := client.Get(addr.String())
if err != nil { if err != nil {
return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err) return fmt.Errorf("error making HTTP request to %s: %s", addr.String(), err)
@ -127,14 +127,16 @@ func (n *Nginx) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
} }
tags := getTags(addr) tags := getTags(addr)
fields := map[string]interface{}{
acc.Add("active", active, tags) "active": active,
acc.Add("accepts", accepts, tags) "accepts": accepts,
acc.Add("handled", handled, tags) "handled": handled,
acc.Add("requests", requests, tags) "requests": requests,
acc.Add("reading", reading, tags) "reading": reading,
acc.Add("writing", writing, tags) "writing": writing,
acc.Add("waiting", waiting, tags) "waiting": waiting,
}
acc.AddFields("nginx", fields, tags)
return nil return nil
} }
@ -157,7 +159,7 @@ func getTags(addr *url.URL) map[string]string {
} }
func init() { func init() {
plugins.Add("nginx", func() plugins.Plugin { inputs.Add("nginx", func() inputs.Input {
return &Nginx{} return &Nginx{}
}) })
} }

View File

@ -8,7 +8,7 @@ import (
"net/url" "net/url"
"testing" "testing"
"github.com/influxdb/telegraf/testutil" "github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require" "github.com/stretchr/testify/require"
) )
@ -54,17 +54,14 @@ func TestNginxGeneratesMetrics(t *testing.T) {
err := n.Gather(&acc) err := n.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
metrics := []struct { fields := map[string]interface{}{
name string "active": uint64(585),
value uint64 "accepts": uint64(85340),
}{ "handled": uint64(85340),
{"active", 585}, "requests": uint64(35085),
{"accepts", 85340}, "reading": uint64(4),
{"handled", 85340}, "writing": uint64(135),
{"requests", 35085}, "waiting": uint64(446),
{"reading", 4},
{"writing", 135},
{"waiting", 446},
} }
addr, err := url.Parse(ts.URL) addr, err := url.Parse(ts.URL)
if err != nil { if err != nil {
@ -84,8 +81,5 @@ func TestNginxGeneratesMetrics(t *testing.T) {
} }
tags := map[string]string{"server": host, "port": port} tags := map[string]string{"server": host, "port": port}
acc.AssertContainsTaggedFields(t, "nginx", fields, tags)
for _, m := range metrics {
assert.NoError(t, acc.ValidateTaggedValue(m.name, m.value, tags))
}
} }

271
plugins/inputs/nsq/nsq.go Normal file
View File

@ -0,0 +1,271 @@
// The MIT License (MIT)
//
// Copyright (c) 2015 Jeff Nickoloff (jeff@allingeek.com)
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// SOFTWARE.
package nsq
import (
"encoding/json"
"fmt"
"net/http"
"net/url"
"strconv"
"sync"
"time"
"github.com/influxdata/telegraf/plugins/inputs"
)
// Might add Lookupd endpoints for cluster discovery
type NSQ struct {
Endpoints []string
}
var sampleConfig = `
# An array of NSQD HTTP API endpoints
endpoints = ["http://localhost:4151"]
`
const (
requestPattern = `%s/stats?format=json`
)
func init() {
inputs.Add("nsq", func() inputs.Input {
return &NSQ{}
})
}
func (n *NSQ) SampleConfig() string {
return sampleConfig
}
func (n *NSQ) Description() string {
return "Read NSQ topic and channel statistics."
}
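// Gather polls every configured nsqd endpoint concurrently. If more than
// one endpoint fails, only one of the errors is returned.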
func (n *NSQ) Gather(acc inputs.Accumulator) error {
var wg sync.WaitGroup
var outerr error
for _, e := range n.Endpoints {
wg.Add(1)
go func(e string) {
defer wg.Done()
outerr = n.gatherEndpoint(e, acc)
}(e)
}
wg.Wait()
return outerr
}
var tr = &http.Transport{
ResponseHeaderTimeout: time.Duration(3 * time.Second),
}
var client = &http.Client{Transport: tr}
func (n *NSQ) gatherEndpoint(e string, acc inputs.Accumulator) error {
u, err := buildURL(e)
if err != nil {
return err
}
r, err := client.Get(u.String())
if err != nil {
return fmt.Errorf("Error while polling %s: %s", u.String(), err)
}
defer r.Body.Close()
if r.StatusCode != http.StatusOK {
return fmt.Errorf("%s returned HTTP status %s", u.String(), r.Status)
}
s := &NSQStats{}
err = json.NewDecoder(r.Body).Decode(s)
if err != nil {
return fmt.Errorf(`Error parsing response: %s`, err)
}
tags := map[string]string{
`server_host`: u.Host,
`server_version`: s.Data.Version,
}
fields := make(map[string]interface{})
if s.Data.Health == `OK` {
fields["server_count"] = int64(1)
} else {
fields["server_count"] = int64(0)
}
fields["topic_count"] = int64(len(s.Data.Topics))
acc.AddFields("nsq_server", fields, tags)
for _, t := range s.Data.Topics {
topicStats(t, acc, u.Host, s.Data.Version)
}
return nil
}
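// buildURL appends the stats path (/stats?format=json) to an endpoint and
// parses the result into a URL.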
func buildURL(e string) (*url.URL, error) {
u := fmt.Sprintf(requestPattern, e)
addr, err := url.Parse(u)
if err != nil {
return nil, fmt.Errorf("Unable to parse address '%s': %s", u, err)
}
return addr, nil
}
func topicStats(t TopicStats, acc inputs.Accumulator, host, version string) {
// per-topic overall stats (tags: server host/version, topic name; the
// channel count is emitted as a field), then per-channel stats
tags := map[string]string{
"server_host": host,
"server_version": version,
"topic": t.Name,
}
fields := map[string]interface{}{
"depth": t.Depth,
"backend_depth": t.BackendDepth,
"message_count": t.MessageCount,
"channel_count": int64(len(t.Channels)),
}
acc.AddFields("nsq_topic", fields, tags)
for _, c := range t.Channels {
channelStats(c, acc, host, version, t.Name)
}
}
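// channelStats emits per-channel metrics and then descends into the stats
// of each connected client.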
func channelStats(c ChannelStats, acc inputs.Accumulator, host, version, topic string) {
tags := map[string]string{
"server_host": host,
"server_version": version,
"topic": topic,
"channel": c.Name,
}
fields := map[string]interface{}{
"depth": c.Depth,
"backend_depth": c.BackendDepth,
"inflight_count": c.InFlightCount,
"deferred_count": c.DeferredCount,
"message_count": c.MessageCount,
"requeue_count": c.RequeueCount,
"timeout_count": c.TimeoutCount,
"client_count": int64(len(c.Clients)),
}
acc.AddFields("nsq_channel", fields, tags)
for _, cl := range c.Clients {
clientStats(cl, acc, host, version, topic, c.Name)
}
}
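// clientStats emits per-client metrics, exposing connection attributes
// (TLS, compression, user agent, address) as tags.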
func clientStats(c ClientStats, acc inputs.Accumulator, host, version, topic, channel string) {
tags := map[string]string{
"server_host": host,
"server_version": version,
"topic": topic,
"channel": channel,
"client_name": c.Name,
"client_id": c.ID,
"client_hostname": c.Hostname,
"client_version": c.Version,
"client_address": c.RemoteAddress,
"client_user_agent": c.UserAgent,
"client_tls": strconv.FormatBool(c.TLS),
"client_snappy": strconv.FormatBool(c.Snappy),
"client_deflate": strconv.FormatBool(c.Deflate),
}
fields := map[string]interface{}{
"ready_count": c.ReadyCount,
"inflight_count": c.InFlightCount,
"message_count": c.MessageCount,
"finish_count": c.FinishCount,
"requeue_count": c.RequeueCount,
}
acc.AddFields("nsq_client", fields, tags)
}
type NSQStats struct {
Code int64 `json:"status_code"`
Txt string `json:"status_txt"`
Data NSQStatsData `json:"data"`
}
type NSQStatsData struct {
Version string `json:"version"`
Health string `json:"health"`
StartTime int64 `json:"start_time"`
Topics []TopicStats `json:"topics"`
}
// e2e_processing_latency is not modeled
type TopicStats struct {
Name string `json:"topic_name"`
Depth int64 `json:"depth"`
BackendDepth int64 `json:"backend_depth"`
MessageCount int64 `json:"message_count"`
Paused bool `json:"paused"`
Channels []ChannelStats `json:"channels"`
}
// e2e_processing_latency is not modeled
type ChannelStats struct {
Name string `json:"channel_name"`
Depth int64 `json:"depth"`
BackendDepth int64 `json:"backend_depth"`
InFlightCount int64 `json:"in_flight_count"`
DeferredCount int64 `json:"deferred_count"`
MessageCount int64 `json:"message_count"`
RequeueCount int64 `json:"requeue_count"`
TimeoutCount int64 `json:"timeout_count"`
Paused bool `json:"paused"`
Clients []ClientStats `json:"clients"`
}
type ClientStats struct {
Name string `json:"name"`
ID string `json:"client_id"`
Hostname string `json:"hostname"`
Version string `json:"version"`
RemoteAddress string `json:"remote_address"`
State int64 `json:"state"`
ReadyCount int64 `json:"ready_count"`
InFlightCount int64 `json:"in_flight_count"`
MessageCount int64 `json:"message_count"`
FinishCount int64 `json:"finish_count"`
RequeueCount int64 `json:"requeue_count"`
ConnectTime int64 `json:"connect_ts"`
SampleRate int64 `json:"sample_rate"`
Deflate bool `json:"deflate"`
Snappy bool `json:"snappy"`
UserAgent string `json:"user_agent"`
TLS bool `json:"tls"`
TLSCipherSuite string `json:"tls_cipher_suite"`
TLSVersion string `json:"tls_version"`
TLSNegotiatedProtocol string `json:"tls_negotiated_protocol"`
TLSNegotiatedProtocolIsMutual bool `json:"tls_negotiated_protocol_is_mutual"`
}

View File

@ -0,0 +1,273 @@
package nsq
import (
"fmt"
"net/http"
"net/http/httptest"
"net/url"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/require"
)
func TestNSQStats(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
fmt.Fprintln(w, response)
}))
defer ts.Close()
n := &NSQ{
Endpoints: []string{ts.URL},
}
var acc testutil.Accumulator
err := n.Gather(&acc)
require.NoError(t, err)
u, err := url.Parse(ts.URL)
require.NoError(t, err)
host := u.Host
// validate the gathered metrics against the expected values
tests := []struct {
m string
f map[string]interface{}
g map[string]string
}{
{
"nsq_server",
map[string]interface{}{
"server_count": int64(1),
"topic_count": int64(2),
},
map[string]string{
"server_host": host,
"server_version": "0.3.6",
},
},
{
"nsq_topic",
map[string]interface{}{
"depth": int64(12),
"backend_depth": int64(13),
"message_count": int64(14),
"channel_count": int64(1),
},
map[string]string{
"server_host": host,
"server_version": "0.3.6",
"topic": "t1"},
},
{
"nsq_channel",
map[string]interface{}{
"depth": int64(0),
"backend_depth": int64(1),
"inflight_count": int64(2),
"deferred_count": int64(3),
"message_count": int64(4),
"requeue_count": int64(5),
"timeout_count": int64(6),
"client_count": int64(1),
},
map[string]string{
"server_host": host,
"server_version": "0.3.6",
"topic": "t1",
"channel": "c1",
},
},
{
"nsq_client",
map[string]interface{}{
"ready_count": int64(200),
"inflight_count": int64(7),
"message_count": int64(8),
"finish_count": int64(9),
"requeue_count": int64(10),
},
map[string]string{"server_host": host, "server_version": "0.3.6",
"topic": "t1", "channel": "c1", "client_name": "373a715cd990",
"client_id": "373a715cd990", "client_hostname": "373a715cd990",
"client_version": "V2", "client_address": "172.17.0.11:35560",
"client_tls": "false", "client_snappy": "false",
"client_deflate": "false",
"client_user_agent": "nsq_to_nsq/0.3.6 go-nsq/1.0.5"},
},
{
"nsq_topic",
map[string]interface{}{
"depth": int64(28),
"backend_depth": int64(29),
"message_count": int64(30),
"channel_count": int64(1),
},
map[string]string{
"server_host": host,
"server_version": "0.3.6",
"topic": "t2"},
},
{
"nsq_channel",
map[string]interface{}{
"depth": int64(15),
"backend_depth": int64(16),
"inflight_count": int64(17),
"deferred_count": int64(18),
"message_count": int64(19),
"requeue_count": int64(20),
"timeout_count": int64(21),
"client_count": int64(1),
},
map[string]string{
"server_host": host,
"server_version": "0.3.6",
"topic": "t2",
"channel": "c2",
},
},
{
"nsq_client",
map[string]interface{}{
"ready_count": int64(22),
"inflight_count": int64(23),
"message_count": int64(24),
"finish_count": int64(25),
"requeue_count": int64(26),
},
map[string]string{"server_host": host, "server_version": "0.3.6",
"topic": "t2", "channel": "c2", "client_name": "377569bd462b",
"client_id": "377569bd462b", "client_hostname": "377569bd462b",
"client_version": "V2", "client_address": "172.17.0.8:48145",
"client_user_agent": "go-nsq/1.0.5", "client_tls": "true",
"client_snappy": "true", "client_deflate": "true"},
},
}
for _, test := range tests {
acc.AssertContainsTaggedFields(t, test.m, test.f, test.g)
}
}
var response = `
{
"status_code": 200,
"status_txt": "OK",
"data": {
"version": "0.3.6",
"health": "OK",
"start_time": 1452021674,
"topics": [
{
"topic_name": "t1",
"channels": [
{
"channel_name": "c1",
"depth": 0,
"backend_depth": 1,
"in_flight_count": 2,
"deferred_count": 3,
"message_count": 4,
"requeue_count": 5,
"timeout_count": 6,
"clients": [
{
"name": "373a715cd990",
"client_id": "373a715cd990",
"hostname": "373a715cd990",
"version": "V2",
"remote_address": "172.17.0.11:35560",
"state": 3,
"ready_count": 200,
"in_flight_count": 7,
"message_count": 8,
"finish_count": 9,
"requeue_count": 10,
"connect_ts": 1452021675,
"sample_rate": 11,
"deflate": false,
"snappy": false,
"user_agent": "nsq_to_nsq\/0.3.6 go-nsq\/1.0.5",
"tls": false,
"tls_cipher_suite": "",
"tls_version": "",
"tls_negotiated_protocol": "",
"tls_negotiated_protocol_is_mutual": false
}
],
"paused": false,
"e2e_processing_latency": {
"count": 0,
"percentiles": null
}
}
],
"depth": 12,
"backend_depth": 13,
"message_count": 14,
"paused": false,
"e2e_processing_latency": {
"count": 0,
"percentiles": null
}
},
{
"topic_name": "t2",
"channels": [
{
"channel_name": "c2",
"depth": 15,
"backend_depth": 16,
"in_flight_count": 17,
"deferred_count": 18,
"message_count": 19,
"requeue_count": 20,
"timeout_count": 21,
"clients": [
{
"name": "377569bd462b",
"client_id": "377569bd462b",
"hostname": "377569bd462b",
"version": "V2",
"remote_address": "172.17.0.8:48145",
"state": 3,
"ready_count": 22,
"in_flight_count": 23,
"message_count": 24,
"finish_count": 25,
"requeue_count": 26,
"connect_ts": 1452021678,
"sample_rate": 27,
"deflate": true,
"snappy": true,
"user_agent": "go-nsq\/1.0.5",
"tls": true,
"tls_cipher_suite": "",
"tls_version": "",
"tls_negotiated_protocol": "",
"tls_negotiated_protocol_is_mutual": false
}
],
"paused": false,
"e2e_processing_latency": {
"count": 0,
"percentiles": null
}
}
],
"depth": 28,
"backend_depth": 29,
"message_count": 30,
"paused": false,
"e2e_processing_latency": {
"count": 0,
"percentiles": null
}
}
]
}
}
`

View File

@ -0,0 +1,138 @@
# Telegraf plugin: passenger
Get Phusion Passenger stats using its command line utility
`passenger-status`.
# Measurements
Meta:
- tags:
* name
* passenger_version
* pid
* code_revision
Measurement names:
- passenger:
* Tags: `passenger_version`
* Fields:
- process_count
- max
- capacity_used
- get_wait_list_size
- passenger_supergroup:
* Tags: `name`
* Fields:
- get_wait_list_size
- capacity_used
- passenger_group:
* Tags:
- name
- app_root
- app_type
* Fields:
- get_wait_list_size
- capacity_used
- processes_being_spawned
- passenger_process:
* Tags:
- group_name
- app_root
- supergroup_name
- pid
- code_revision
- life_status
- process_group_id
* Fields:
- concurrency
- sessions
- busyness
- processed
- spawner_creation_time
- spawn_start_time
- spawn_end_time
- last_used
- uptime
- cpu
- rss
- pss
- private_dirty
- swap
- real_memory
- vmsize
# Example output
Using this configuration:
```
[[inputs.passenger]]
# Path of passenger-status.
#
# The plugin gathers metrics by parsing the XML output of passenger-status.
# More information about the tool:
# https://www.phusionpassenger.com/library/admin/apache/overall_status_report.html
#
#
# If no path is specified, the plugin simply executes passenger-status,
# assuming it can be found in your PATH
command = "passenger-status -v --show=xml"
```
When run with:
```
./telegraf -config telegraf.conf -test -input-filter passenger
```
It produces:
```
> passenger,passenger_version=5.0.17 capacity_used=23i,get_wait_list_size=0i,max=23i,process_count=23i 1452984112799414257
> passenger_supergroup,name=/var/app/current/public capacity_used=23i,get_wait_list_size=0i 1452984112799496977
> passenger_group,app_root=/var/app/current,app_type=rack,name=/var/app/current/public capacity_used=23i,get_wait_list_size=0i,processes_being_spawned=0i 1452984112799527021
> passenger_process,app_root=/var/app/current,code_revision=899ac7f,group_name=/var/app/current/public,life_status=ALIVE,pid=11553,process_group_id=13608,supergroup_name=/var/app/current/public busyness=0i,concurrency=1i,cpu=58i,last_used=1452747071764940i,private_dirty=314900i,processed=951i,pss=319391i,real_memory=314900i,rss=418548i,sessions=0i,spawn_end_time=1452746845013365i,spawn_start_time=1452746844946982i,spawner_creation_time=1452746835922747i,swap=0i,uptime=226i,vmsize=1563580i 1452984112799571490
> passenger_process,app_root=/var/app/current,code_revision=899ac7f,group_name=/var/app/current/public,life_status=ALIVE,pid=11563,process_group_id=13608,supergroup_name=/var/app/current/public busyness=2147483647i,concurrency=1i,cpu=47i,last_used=1452747071709179i,private_dirty=309240i,processed=756i,pss=314036i,real_memory=309240i,rss=418296i,sessions=1i,spawn_end_time=1452746845172460i,spawn_start_time=1452746845136882i,spawner_creation_time=1452746835922747i,swap=0i,uptime=226i,vmsize=1563608i 1452984112799638581
```
# Note
You have to ensure that you can run the `passenger-status` command as the
telegraf user. Depending on how you install and configure passenger, this
may be an issue for you. If you are using passenger standalone, or compiled
it yourself, it is straightforward. However, if you are using gem and
`rvm`, it may be harder to get this right.
For example, with `rvm` you can use this command:
```
~/.rvm/bin/rvm default do passenger-status -v --show=xml
```
You can use `&&` and `;` in the shell command to run more complicated
commands in order to get the passenger-status output, such as loading the
rvm shell and sourcing the path:
```
command = "source .rvm/scripts/rvm && passenger-status -v --show=xml"
```
In any case, just ensure that you can run the command as the `telegraf` user
and that it produces XML output; a quick check is shown below.
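A minimal sanity check (assuming telegraf runs as a user named `telegraf`):
```
sudo -u telegraf passenger-status -v --show=xml
```
If this prints an XML document, the plugin should be able to gather metrics
with the same `command` setting.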

View File

@ -0,0 +1,250 @@
package passenger
import (
"bytes"
"encoding/xml"
"fmt"
"os/exec"
"strconv"
"strings"
"github.com/influxdata/telegraf/plugins/inputs"
"golang.org/x/net/html/charset"
)
type passenger struct {
Command string
}
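// parseCommand splits the configured command string into the executable
// and its arguments.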
func (p *passenger) parseCommand() (string, []string) {
var arguments []string
if !strings.Contains(p.Command, " ") {
return p.Command, arguments
}
arguments = strings.Split(p.Command, " ")
return arguments[0], arguments[1:]
}
type info struct {
Passenger_version string `xml:"passenger_version"`
Process_count int `xml:"process_count"`
Capacity_used int `xml:"capacity_used"`
Get_wait_list_size int `xml:"get_wait_list_size"`
Max int `xml:"max"`
Supergroups struct {
Supergroup []struct {
Name string `xml:"name"`
Get_wait_list_size int `xml:"get_wait_list_size"`
Capacity_used int `xml:"capacity_used"`
Group []struct {
Name string `xml:"name"`
AppRoot string `xml:"app_root"`
AppType string `xml:"app_type"`
Enabled_process_count int `xml:"enabled_process_count"`
Disabling_process_count int `xml:"disabling_process_count"`
Disabled_process_count int `xml:"disabled_process_count"`
Capacity_used int `xml:"capacity_used"`
Get_wait_list_size int `xml:"get_wait_list_size"`
Processes_being_spawned int `xml:"processes_being_spawned"`
Processes struct {
Process []*process `xml:"process"`
} `xml:"processes"`
} `xml:"group"`
} `xml:"supergroup"`
} `xml:"supergroups"`
}
type process struct {
Pid int `xml:"pid"`
Concurrency int `xml:"concurrency"`
Sessions int `xml:"sessions"`
Busyness int `xml:"busyness"`
Processed int `xml:"processed"`
Spawner_creation_time int64 `xml:"spawner_creation_time"`
Spawn_start_time int64 `xml:"spawn_start_time"`
Spawn_end_time int64 `xml:"spawn_end_time"`
Last_used int64 `xml:"last_used"`
Uptime string `xml:"uptime"`
Code_revision string `xml:"code_revision"`
Life_status string `xml:"life_status"`
Enabled string `xml:"enabled"`
Has_metrics bool `xml:"has_metrics"`
Cpu int64 `xml:"cpu"`
Rss int64 `xml:"rss"`
Pss int64 `xml:"pss"`
Private_dirty int64 `xml:"private_dirty"`
Swap int64 `xml:"swap"`
Real_memory int64 `xml:"real_memory"`
Vmsize int64 `xml:"vmsize"`
Process_group_id string `xml:"process_group_id"`
}
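// getUptime converts passenger's human-readable uptime string
// (e.g. "3m 46s") into a number of seconds.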
func (p *process) getUptime() int64 {
if p.Uptime == "" {
return 0
}
timeSlice := strings.Split(p.Uptime, " ")
var uptime int64
uptime = 0
for _, v := range timeSlice {
switch {
case strings.HasSuffix(v, "d"):
iValue := strings.TrimSuffix(v, "d")
value, err := strconv.ParseInt(iValue, 10, 64)
if err == nil {
uptime += value * (24 * 60 * 60)
}
case strings.HasSuffix(v, "h"):
iValue := strings.TrimSuffix(v, "y")
value, err := strconv.ParseInt(iValue, 10, 64)
if err == nil {
uptime += value * (60 * 60)
}
case strings.HasSuffix(v, "m"):
iValue := strings.TrimSuffix(v, "m")
value, err := strconv.ParseInt(iValue, 10, 64)
if err == nil {
uptime += value * 60
}
case strings.HasSuffix(v, "s"):
iValue := strings.TrimSuffix(v, "s")
value, err := strconv.ParseInt(iValue, 10, 64)
if err == nil {
uptime += value
}
}
}
return uptime
}
var sampleConfig = `
# Path of passenger-status.
#
# The plugin gathers metrics by parsing the XML output of passenger-status.
# More information about the tool:
# https://www.phusionpassenger.com/library/admin/apache/overall_status_report.html
#
#
# If no path is specified, the plugin simply executes passenger-status,
# assuming it can be found in your PATH
command = "passenger-status -v --show=xml"
`
func (r *passenger) SampleConfig() string {
return sampleConfig
}
func (r *passenger) Description() string {
return "Read metrics of passenger using passenger-status"
}
func (g *passenger) Gather(acc inputs.Accumulator) error {
if g.Command == "" {
g.Command = "passenger-status -v --show=xml"
}
cmd, args := g.parseCommand()
out, err := exec.Command(cmd, args...).Output()
if err != nil {
return err
}
if err = importMetric(out, acc); err != nil {
return err
}
return nil
}
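// importMetric decodes the XML emitted by passenger-status and adds one
// measurement per instance, supergroup, group and process.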
func importMetric(stat []byte, acc inputs.Accumulator) error {
var p info
decoder := xml.NewDecoder(bytes.NewReader(stat))
decoder.CharsetReader = charset.NewReaderLabel
if err := decoder.Decode(&p); err != nil {
return fmt.Errorf("Cannot parse input with error: %v\n", err)
}
tags := map[string]string{
"passenger_version": p.Passenger_version,
}
fields := map[string]interface{}{
"process_count": p.Process_count,
"max": p.Max,
"capacity_used": p.Capacity_used,
"get_wait_list_size": p.Get_wait_list_size,
}
acc.AddFields("passenger", fields, tags)
for _, sg := range p.Supergroups.Supergroup {
tags := map[string]string{
"name": sg.Name,
}
fields := map[string]interface{}{
"get_wait_list_size": sg.Get_wait_list_size,
"capacity_used": sg.Capacity_used,
}
acc.AddFields("passenger_supergroup", fields, tags)
for _, group := range sg.Group {
tags := map[string]string{
"name": group.Name,
"app_root": group.AppRoot,
"app_type": group.AppType,
}
fields := map[string]interface{}{
"get_wait_list_size": group.Get_wait_list_size,
"capacity_used": group.Capacity_used,
"processes_being_spawned": group.Processes_being_spawned,
}
acc.AddFields("passenger_group", fields, tags)
for _, process := range group.Processes.Process {
tags := map[string]string{
"group_name": group.Name,
"app_root": group.AppRoot,
"supergroup_name": sg.Name,
"pid": fmt.Sprintf("%d", process.Pid),
"code_revision": process.Code_revision,
"life_status": process.Life_status,
"process_group_id": process.Process_group_id,
}
fields := map[string]interface{}{
"concurrency": process.Concurrency,
"sessions": process.Sessions,
"busyness": process.Busyness,
"processed": process.Processed,
"spawner_creation_time": process.Spawner_creation_time,
"spawn_start_time": process.Spawn_start_time,
"spawn_end_time": process.Spawn_end_time,
"last_used": process.Last_used,
"uptime": process.getUptime(),
"cpu": process.Cpu,
"rss": process.Rss,
"pss": process.Pss,
"private_dirty": process.Private_dirty,
"swap": process.Swap,
"real_memory": process.Real_memory,
"vmsize": process.Vmsize,
}
acc.AddFields("passenger_process", fields, tags)
}
}
}
return nil
}
func init() {
inputs.Add("passenger", func() inputs.Input {
return &passenger{}
})
}

View File

@ -0,0 +1,301 @@
package passenger
import (
"fmt"
"io/ioutil"
"os"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func fakePassengerStatus(stat string) {
content := fmt.Sprintf("#!/bin/sh\ncat << EOF\n%s\nEOF", stat)
ioutil.WriteFile("/tmp/passenger-status", []byte(content), 0700)
}
func teardown() {
os.Remove("/tmp/passenger-status")
}
func Test_Invalid_Passenger_Status_Cli(t *testing.T) {
r := &passenger{
Command: "an-invalid-command passenger-status",
}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.Error(t, err)
assert.Equal(t, err.Error(), `exec: "an-invalid-command": executable file not found in $PATH`)
}
func Test_Invalid_Xml(t *testing.T) {
fakePassengerStatus("invalid xml")
defer teardown()
r := &passenger{
Command: "/tmp/passenger-status",
}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.Error(t, err)
assert.Equal(t, err.Error(), "Cannot parse input with error: EOF\n")
}
// We test this by ensuring that the error message matches the default cli path
func Test_Default_Config_Load_Default_Command(t *testing.T) {
fakePassengerStatus("invalid xml")
defer teardown()
r := &passenger{}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.Error(t, err)
assert.Equal(t, err.Error(), "exec: \"passenger-status\": executable file not found in $PATH")
}
func TestPassengerGenerateMetric(t *testing.T) {
fakePassengerStatus(sampleStat)
defer teardown()
// Now test against the fake passenger-status script created above
r := &passenger{
Command: "/tmp/passenger-status",
}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"passenger_version": "5.0.17",
}
fields := map[string]interface{}{
"process_count": 23,
"max": 23,
"capacity_used": 23,
"get_wait_list_size": 3,
}
acc.AssertContainsTaggedFields(t, "passenger", fields, tags)
tags = map[string]string{
"name": "/var/app/current/public",
"app_root": "/var/app/current",
"app_type": "rack",
}
fields = map[string]interface{}{
"processes_being_spawned": 2,
"capacity_used": 23,
"get_wait_list_size": 3,
}
acc.AssertContainsTaggedFields(t, "passenger_group", fields, tags)
tags = map[string]string{
"name": "/var/app/current/public",
}
fields = map[string]interface{}{
"capacity_used": 23,
"get_wait_list_size": 3,
}
acc.AssertContainsTaggedFields(t, "passenger_supergroup", fields, tags)
tags = map[string]string{
"app_root": "/var/app/current",
"group_name": "/var/app/current/public",
"supergroup_name": "/var/app/current/public",
"pid": "11553",
"code_revision": "899ac7f",
"life_status": "ALIVE",
"process_group_id": "13608",
}
fields = map[string]interface{}{
"concurrency": 1,
"sessions": 0,
"busyness": 0,
"processed": 951,
"spawner_creation_time": int64(1452746835922747),
"spawn_start_time": int64(1452746844946982),
"spawn_end_time": int64(1452746845013365),
"last_used": int64(1452747071764940),
"uptime": int64(226), // in seconds of 3m 46s
"cpu": int64(58),
"rss": int64(418548),
"pss": int64(319391),
"private_dirty": int64(314900),
"swap": int64(0),
"real_memory": int64(314900),
"vmsize": int64(1563580),
}
acc.AssertContainsTaggedFields(t, "passenger_process", fields, tags)
}
var sampleStat = `
<?xml version="1.0" encoding="iso8859-1" ?>
<?xml version="1.0" encoding="UTF-8"?>
<info version="3">
<passenger_version>5.0.17</passenger_version>
<group_count>1</group_count>
<process_count>23</process_count>
<max>23</max>
<capacity_used>23</capacity_used>
<get_wait_list_size>3</get_wait_list_size>
<get_wait_list />
<supergroups>
<supergroup>
<name>/var/app/current/public</name>
<state>READY</state>
<get_wait_list_size>3</get_wait_list_size>
<capacity_used>23</capacity_used>
<secret>foo</secret>
<group default="true">
<name>/var/app/current/public</name>
<component_name>/var/app/current/public</component_name>
<app_root>/var/app/current</app_root>
<app_type>rack</app_type>
<environment>production</environment>
<uuid>QQUrbCVYxbJYpfgyDOwJ</uuid>
<enabled_process_count>23</enabled_process_count>
<disabling_process_count>0</disabling_process_count>
<disabled_process_count>0</disabled_process_count>
<capacity_used>23</capacity_used>
<get_wait_list_size>3</get_wait_list_size>
<disable_wait_list_size>0</disable_wait_list_size>
<processes_being_spawned>2</processes_being_spawned>
<secret>foo</secret>
<api_key>foo</api_key>
<life_status>ALIVE</life_status>
<user>axcoto</user>
<uid>1001</uid>
<group>axcoto</group>
<gid>1001</gid>
<options>
<app_root>/var/app/current</app_root>
<app_group_name>/var/app/current/public</app_group_name>
<app_type>rack</app_type>
<start_command>/var/app/.rvm/gems/ruby-2.2.0-p645/gems/passenger-5.0.17/helper-scripts/rack-loader.rb</start_command>
<startup_file>config.ru</startup_file>
<process_title>Passenger RubyApp</process_title>
<log_level>3</log_level>
<start_timeout>90000</start_timeout>
<environment>production</environment>
<base_uri>/</base_uri>
<spawn_method>smart</spawn_method>
<default_user>nobody</default_user>
<default_group>nogroup</default_group>
<ruby>/var/app/.rvm/gems/ruby-2.2.0-p645/wrappers/ruby</ruby>
<python>python</python>
<nodejs>node</nodejs>
<ust_router_address>unix:/tmp/passenger.eKFdvdC/agents.s/ust_router</ust_router_address>
<ust_router_username>logging</ust_router_username>
<ust_router_password>foo</ust_router_password>
<debugger>false</debugger>
<analytics>false</analytics>
<api_key>foo</api_key>
<min_processes>22</min_processes>
<max_processes>0</max_processes>
<max_preloader_idle_time>300</max_preloader_idle_time>
<max_out_of_band_work_instances>1</max_out_of_band_work_instances>
</options>
<processes>
<process>
<pid>11553</pid>
<sticky_session_id>378579907</sticky_session_id>
<gupid>17173df-PoNT3J9HCf</gupid>
<concurrency>1</concurrency>
<sessions>0</sessions>
<busyness>0</busyness>
<processed>951</processed>
<spawner_creation_time>1452746835922747</spawner_creation_time>
<spawn_start_time>1452746844946982</spawn_start_time>
<spawn_end_time>1452746845013365</spawn_end_time>
<last_used>1452747071764940</last_used>
<last_used_desc>0s ago</last_used_desc>
<uptime>3m 46s</uptime>
<code_revision>899ac7f</code_revision>
<life_status>ALIVE</life_status>
<enabled>ENABLED</enabled>
<has_metrics>true</has_metrics>
<cpu>58</cpu>
<rss>418548</rss>
<pss>319391</pss>
<private_dirty>314900</private_dirty>
<swap>0</swap>
<real_memory>314900</real_memory>
<vmsize>1563580</vmsize>
<process_group_id>13608</process_group_id>
<command>Passenger RubyApp: /var/app/current/public</command>
<sockets>
<socket>
<name>main</name>
<address>unix:/tmp/passenger.eKFdvdC/apps.s/ruby.UWF6zkRJ71aoMXPxpknpWVfC1POFqgWZzbEsdz5v0G46cSSMxJ3GHLFhJaUrK2I</address>
<protocol>session</protocol>
<concurrency>1</concurrency>
<sessions>0</sessions>
</socket>
<socket>
<name>http</name>
<address>tcp://127.0.0.1:49888</address>
<protocol>http</protocol>
<concurrency>1</concurrency>
<sessions>0</sessions>
</socket>
</sockets>
</process>
<process>
<pid>11563</pid>
<sticky_session_id>1549681201</sticky_session_id>
<gupid>17173df-pX5iJOipd8</gupid>
<concurrency>1</concurrency>
<sessions>1</sessions>
<busyness>2147483647</busyness>
<processed>756</processed>
<spawner_creation_time>1452746835922747</spawner_creation_time>
<spawn_start_time>1452746845136882</spawn_start_time>
<spawn_end_time>1452746845172460</spawn_end_time>
<last_used>1452747071709179</last_used>
<last_used_desc>0s ago</last_used_desc>
<uptime>3m 46s</uptime>
<code_revision>899ac7f</code_revision>
<life_status>ALIVE</life_status>
<enabled>ENABLED</enabled>
<has_metrics>true</has_metrics>
<cpu>47</cpu>
<rss>418296</rss>
<pss>314036</pss>
<private_dirty>309240</private_dirty>
<swap>0</swap>
<real_memory>309240</real_memory>
<vmsize>1563608</vmsize>
<process_group_id>13608</process_group_id>
<command>Passenger RubyApp: /var/app/current/public</command>
<sockets>
<socket>
<name>main</name>
<address>unix:/tmp/passenger.eKFdvdC/apps.s/ruby.PVCh7TmvCi9knqhba2vG5qXrlHGEIwhGrxnUvRbIAD6SPz9m0G7YlJ8HEsREHY3</address>
<protocol>session</protocol>
<concurrency>1</concurrency>
<sessions>1</sessions>
</socket>
<socket>
<name>http</name>
<address>tcp://127.0.0.1:52783</address>
<protocol>http</protocol>
<concurrency>1</concurrency>
<sessions>0</sessions>
</socket>
</sockets>
</process>
</processes>
</group>
</supergroup>
</supergroups>
</info>`

View File

@ -0,0 +1,65 @@
# Telegraf plugin: phpfpm
Get phpfpm stats using either the HTTP status page or the fpm socket.
# Measurements
Meta:
- tags: `pool=poolname`
Measurement names:
- phpfpm
Measurement fields:
- accepted_conn
- listen_queue
- max_listen_queue
- listen_queue_len
- idle_processes
- active_processes
- total_processes
- max_active_processes
- max_children_reached
- slow_requests
# Example output
Using this configuration:
```
[[inputs.phpfpm]]
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port and path, e.g. localhost, 10.10.3.33/server-status, etc.
#
# We can configure in three modes:
# - unixsocket: the string is the path to the fpm socket, like
#     /var/run/php5-fpm.sock
# - http: the URL has to start with http:// or https://
# - fcgi: the URL has to start with fcgi:// or cgi://, and the port must
#     be present
#
# If no servers are specified, then default to http://127.0.0.1/status
urls = ["http://localhost/status", "10.0.0.12:/var/run/php5-fpm-www2.sock", "fcgi://10.0.0.12:9000/status"]
```
When run with:
```
./telegraf -config telegraf.conf -input-filter phpfpm -test
```
It produces:
```
* Plugin: phpfpm, Collection 1
> phpfpm,pool=www accepted_conn=13i,active_processes=2i,idle_processes=1i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083331187
> phpfpm,pool=www2 accepted_conn=12i,active_processes=1i,idle_processes=2i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083691422
> phpfpm,pool=www3 accepted_conn=11i,active_processes=1i,idle_processes=2i,listen_queue=0i,listen_queue_len=0i,max_active_processes=2i,max_children_reached=0i,max_listen_queue=0i,slow_requests=0i,total_processes=3i 1453011293083691658
```
## Note
When using `unixsocket`, you have to ensure that telegraf runs on the same
host and that the socket path is readable by the telegraf user; a quick
check is shown below.
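A quick manual check of socket access (a sketch, assuming the `cgi-fcgi`
tool from the libfcgi package is installed and that the socket path matches
your setup):
```
SCRIPT_NAME=/status SCRIPT_FILENAME=/status REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /var/run/php5-fpm.sock
```
If this prints the pool status, telegraf can read the same data over the
socket.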

View File

@ -0,0 +1,245 @@
package phpfpm
import (
"bufio"
"bytes"
"fmt"
"io"
"net/http"
"net/url"
"os"
"strconv"
"strings"
"sync"
"github.com/influxdata/telegraf/plugins/inputs"
)
const (
PF_POOL = "pool"
PF_PROCESS_MANAGER = "process manager"
PF_ACCEPTED_CONN = "accepted conn"
PF_LISTEN_QUEUE = "listen queue"
PF_MAX_LISTEN_QUEUE = "max listen queue"
PF_LISTEN_QUEUE_LEN = "listen queue len"
PF_IDLE_PROCESSES = "idle processes"
PF_ACTIVE_PROCESSES = "active processes"
PF_TOTAL_PROCESSES = "total processes"
PF_MAX_ACTIVE_PROCESSES = "max active processes"
PF_MAX_CHILDREN_REACHED = "max children reached"
PF_SLOW_REQUESTS = "slow requests"
)
type metric map[string]int64
type poolStat map[string]metric
type phpfpm struct {
Urls []string
client *http.Client
}
var sampleConfig = `
# An array of addresses to gather stats about. Specify an ip or hostname
# with optional port and path
#
# Plugin can be configured in three modes (any of them can be used):
# - http: the URL must start with http:// or https://, ie:
# "http://localhost/status"
# "http://192.168.130.1/status?full"
#
# - unixsocket: path to fpm socket, ie:
# "/var/run/php5-fpm.sock"
# or using a custom fpm status path:
# "/var/run/php5-fpm.sock:fpm-custom-status-path"
#
# - fcgi: the URL must start with fcgi:// or cgi://, and port must be present, ie:
# "fcgi://10.0.0.12:9000/status"
# "cgi://10.0.10.12:9001/status"
#
# Example of gathering from both a remote host and a local socket
# urls = ["http://192.168.1.20/status", "/tmp/fpm.sock"]
# If no servers are specified, then default to http://127.0.0.1/status
urls = ["http://localhost/status"]
`
func (r *phpfpm) SampleConfig() string {
return sampleConfig
}
func (r *phpfpm) Description() string {
return "Read metrics of phpfpm, via HTTP status page or socket"
}
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (g *phpfpm) Gather(acc inputs.Accumulator) error {
if len(g.Urls) == 0 {
return g.gatherServer("http://127.0.0.1/status", acc)
}
var wg sync.WaitGroup
var outerr error
for _, serv := range g.Urls {
wg.Add(1)
go func(serv string) {
defer wg.Done()
outerr = g.gatherServer(serv, acc)
}(serv)
}
wg.Wait()
return outerr
}
// Request status page to get stat raw data and import it
func (g *phpfpm) gatherServer(addr string, acc inputs.Accumulator) error {
if g.client == nil {
client := &http.Client{}
g.client = client
}
if strings.HasPrefix(addr, "http://") || strings.HasPrefix(addr, "https://") {
return g.gatherHttp(addr, acc)
}
var (
fcgi *conn
socketPath string
statusPath string
)
if strings.HasPrefix(addr, "fcgi://") || strings.HasPrefix(addr, "cgi://") {
u, err := url.Parse(addr)
if err != nil {
return fmt.Errorf("Unable parse server address '%s': %s", addr, err)
}
socketAddr := strings.Split(u.Host, ":")
fcgiIp := socketAddr[0]
fcgiPort, _ := strconv.Atoi(socketAddr[1])
fcgi, _ = NewClient(fcgiIp, fcgiPort)
} else {
socketAddr := strings.Split(addr, ":")
if len(socketAddr) >= 2 {
socketPath = socketAddr[0]
statusPath = socketAddr[1]
} else {
socketPath = socketAddr[0]
statusPath = "status"
}
if _, err := os.Stat(socketPath); os.IsNotExist(err) {
return fmt.Errorf("Socket doesn't exist '%s': %s", socketPath, err)
}
fcgi, _ = NewClient("unix", socketPath)
}
return g.gatherFcgi(fcgi, statusPath, acc)
}
// Gather stat using fcgi protocol
func (g *phpfpm) gatherFcgi(fcgi *conn, statusPath string, acc inputs.Accumulator) error {
fpmOutput, fpmErr, err := fcgi.Request(map[string]string{
"SCRIPT_NAME": "/" + statusPath,
"SCRIPT_FILENAME": statusPath,
"REQUEST_METHOD": "GET",
"CONTENT_LENGTH": "0",
"SERVER_PROTOCOL": "HTTP/1.0",
"SERVER_SOFTWARE": "go / fcgiclient ",
"REMOTE_ADDR": "127.0.0.1",
}, "/"+statusPath)
if len(fpmErr) == 0 && err == nil {
importMetric(bytes.NewReader(fpmOutput), acc)
return nil
} else {
return fmt.Errorf("Unable parse phpfpm status. Error: %v %v", string(fpmErr), err)
}
}
// Gather stat using http protocol
func (g *phpfpm) gatherHttp(addr string, acc inputs.Accumulator) error {
u, err := url.Parse(addr)
if err != nil {
return fmt.Errorf("Unable parse server address '%s': %s", addr, err)
}
req, err := http.NewRequest("GET", fmt.Sprintf("%s://%s%s", u.Scheme,
u.Host, u.Path), nil)
if err != nil {
return fmt.Errorf("Unable to create request to phpfpm status page '%s': %v",
addr, err)
}
res, err := g.client.Do(req)
if err != nil {
return fmt.Errorf("Unable to connect to phpfpm status page '%s': %v",
addr, err)
}
defer res.Body.Close()
if res.StatusCode != 200 {
return fmt.Errorf("Unable to get valid stat result from '%s', http status: %d",
addr, res.StatusCode)
}
importMetric(res.Body, acc)
return nil
}
// Import stat data into Telegraf system
func importMetric(r io.Reader, acc inputs.Accumulator) (poolStat, error) {
stats := make(poolStat)
var currentPool string
scanner := bufio.NewScanner(r)
for scanner.Scan() {
statLine := scanner.Text()
keyvalue := strings.Split(statLine, ":")
if len(keyvalue) < 2 {
continue
}
fieldName := strings.Trim(keyvalue[0], " ")
// We start to gather data for a new pool here
if fieldName == PF_POOL {
currentPool = strings.Trim(keyvalue[1], " ")
stats[currentPool] = make(metric)
continue
}
// Start to parse metric for current pool
switch fieldName {
case PF_ACCEPTED_CONN,
PF_LISTEN_QUEUE,
PF_MAX_LISTEN_QUEUE,
PF_LISTEN_QUEUE_LEN,
PF_IDLE_PROCESSES,
PF_ACTIVE_PROCESSES,
PF_TOTAL_PROCESSES,
PF_MAX_ACTIVE_PROCESSES,
PF_MAX_CHILDREN_REACHED,
PF_SLOW_REQUESTS:
fieldValue, err := strconv.ParseInt(strings.Trim(keyvalue[1], " "), 10, 64)
if err == nil {
stats[currentPool][fieldName] = fieldValue
}
}
}
// Finally, we push the pool metric
for pool := range stats {
tags := map[string]string{
"pool": pool,
}
fields := make(map[string]interface{})
for k, v := range stats[pool] {
fields[strings.Replace(k, " ", "_", -1)] = v
}
acc.AddFields("phpfpm", fields, tags)
}
return stats, nil
}
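As a worked example of the parser above, a two-line status page yields a single pool, and field names are snake_cased on export. A sketch with a made-up input (testutil.Accumulator again borrowed from the tests):

var acc testutil.Accumulator
stats, err := importMetric(strings.NewReader("pool: www\naccepted conn: 3\n"), &acc)
fmt.Println(stats, err)
// prints map[www:map[accepted conn:3]] <nil>; the accumulator now holds one
// "phpfpm" point tagged pool=www with the field accepted_conn=3.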
func init() {
inputs.Add("phpfpm", func() inputs.Input {
return &phpfpm{}
})
}
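The init hook registers a constructor under the name "phpfpm" so the agent can build the plugin from configuration. A sketch of the lookup, assuming the inputs package exposes its registry as a name-to-constructor map called Inputs:

creator := inputs.Inputs["phpfpm"] // assumed registry map
plugin := creator()                // fresh &phpfpm{}, ready for Gather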

View File

@ -1,13 +1,14 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Package fcgi implements the FastCGI protocol.
// Currently only the responder role is supported.
// The protocol is defined at http://www.fastcgi.com/drupal/node/6?q=node/22

package phpfpm

// This file defines the raw protocol and some utilities used by the child and
// the host.

import (
    "bufio"
    "bytes"

@ -15,70 +16,84 @@ import (
    "encoding/binary"
    "errors"
    "io"
    "net"
    "strconv"
    "strings"
    "sync"
)

// recType is a record type, as defined by
// http://www.fastcgi.com/devkit/doc/fcgi-spec.html#S8
type recType uint8

const (
    typeBeginRequest    recType = 1
    typeAbortRequest    recType = 2
    typeEndRequest      recType = 3
    typeParams          recType = 4
    typeStdin           recType = 5
    typeStdout          recType = 6
    typeStderr          recType = 7
    typeData            recType = 8
    typeGetValues       recType = 9
    typeGetValuesResult recType = 10
    typeUnknownType     recType = 11
)

// keep the connection between web-server and responder open after request
const flagKeepConn = 1

const (
    maxWrite = 65535 // maximum record body
    maxPad   = 255
)

const (
    roleResponder = iota + 1 // only Responders are implemented.
    roleAuthorizer
    roleFilter
)

const (
    statusRequestComplete = iota
    statusCantMultiplex
    statusOverloaded
    statusUnknownRole
)

const headerLen = 8

type header struct {
    Version       uint8
    Type          recType
    Id            uint16
    ContentLength uint16
    PaddingLength uint8
    Reserved      uint8
}

type beginRequest struct {
    role     uint16
    flags    uint8
    reserved [5]uint8
}

func (br *beginRequest) read(content []byte) error {
    if len(content) != 8 {
        return errors.New("fcgi: invalid begin request record")
    }
    br.role = binary.BigEndian.Uint16(content)
    br.flags = content[2]
    return nil
}

// for padding so we don't have to allocate all the time
// not synchronized because we don't care what the contents are
var pad [maxPad]byte

func (h *header) init(recType recType, reqId uint16, contentLength int) {
    h.Version = 1
    h.Type = recType
    h.Id = reqId

@ -86,6 +101,26 @@ func (h *header) init(recType uint8, reqId uint16, contentLength int) {
    h.PaddingLength = uint8(-contentLength & 7)
}

// conn sends records over rwc
type conn struct {
    mutex sync.Mutex
    rwc   io.ReadWriteCloser

    // to avoid allocations
    buf bytes.Buffer
    h   header
}

func newConn(rwc io.ReadWriteCloser) *conn {
    return &conn{rwc: rwc}
}

func (c *conn) Close() error {
    c.mutex.Lock()
    defer c.mutex.Unlock()
    return c.rwc.Close()
}

type record struct {
    h   header
    buf [maxWrite + maxPad]byte

@ -109,69 +144,39 @@ func (r *record) content() []byte {
    return r.buf[:r.h.ContentLength]
}

// writeRecord writes and sends a single record.
func (c *conn) writeRecord(recType recType, reqId uint16, b []byte) error {
    c.mutex.Lock()
    defer c.mutex.Unlock()
    c.buf.Reset()
    c.h.init(recType, reqId, len(b))
    if err := binary.Write(&c.buf, binary.BigEndian, c.h); err != nil {
        return err
    }
    if _, err := c.buf.Write(b); err != nil {
        return err
    }
    if _, err := c.buf.Write(pad[:c.h.PaddingLength]); err != nil {
        return err
    }
    _, err := c.rwc.Write(c.buf.Bytes())
    return err
}

func (c *conn) writeBeginRequest(reqId uint16, role uint16, flags uint8) error {
    b := [8]byte{byte(role >> 8), byte(role), flags}
    return c.writeRecord(typeBeginRequest, reqId, b[:])
}

func (c *conn) writeEndRequest(reqId uint16, appStatus int, protocolStatus uint8) error {
    b := make([]byte, 8)
    binary.BigEndian.PutUint32(b, uint32(appStatus))
    b[4] = protocolStatus
    return c.writeRecord(typeEndRequest, reqId, b)
}

func (c *conn) writePairs(recType recType, reqId uint16, pairs map[string]string) error {
    w := newWriter(c, recType, reqId)
    b := make([]byte, 8)
    for k, v := range pairs {
        n := encodeSize(b, uint32(len(k)))

@ -238,7 +243,7 @@ func (w *bufWriter) Close() error {
    return w.closer.Close()
}

func newWriter(c *conn, recType recType, reqId uint16) *bufWriter {
    s := &streamWriter{c: c, recType: recType, reqId: reqId}
    w := bufio.NewWriterSize(s, maxWrite)
    return &bufWriter{s, w}

@ -247,8 +252,8 @@ func newWriter(c *FCGIClient, recType uint8, reqId uint16) *bufWriter {
// streamWriter abstracts out the separation of a stream into discrete records.
// It only writes maxWrite bytes at a time.
type streamWriter struct {
    c       *conn
    recType recType
    reqId   uint16
}

@ -273,22 +278,44 @@ func (w *streamWriter) Close() error {
    return w.c.writeRecord(w.recType, w.reqId, nil)
}

func NewClient(h string, args ...interface{}) (fcgi *conn, err error) {
    var con net.Conn
    if len(args) != 1 {
        err = errors.New("fcgi: not enough params")
        return
    }
    switch args[0].(type) {
    case int:
        addr := h + ":" + strconv.FormatInt(int64(args[0].(int)), 10)
        con, err = net.Dial("tcp", addr)
    case string:
        laddr := net.UnixAddr{Name: args[0].(string), Net: h}
        con, err = net.DialUnix(h, nil, &laddr)
    default:
        err = errors.New("fcgi: we only accept int (port) or string (socket) params.")
    }
    fcgi = &conn{
        rwc: con,
    }
    return
}

func (client *conn) Request(env map[string]string, requestData string) (retout []byte, reterr []byte, err error) {
    defer client.rwc.Close()
    var reqId uint16 = 1

    err = client.writeBeginRequest(reqId, uint16(roleResponder), 0)
    if err != nil {
        return
    }

    err = client.writePairs(typeParams, reqId, env)
    if err != nil {
        return
    }

    if len(requestData) > 0 {
        if err = client.writeRecord(typeStdin, reqId, []byte(requestData)); err != nil {
            return
        }
    }

@ -297,23 +324,25 @@ func (client *FCGIClient) Request(env map[string]string, reqStr string) (retout
    var err1 error

    // receive until EOF or FCGI_END_REQUEST
READ_LOOP:
    for {
        err1 = rec.read(client.rwc)
        if err1 != nil && strings.Contains(err1.Error(), "use of closed network connection") {
            if err1 != io.EOF {
                err = err1
            }
            break
        }

        switch {
        case rec.h.Type == typeStdout:
            retout = append(retout, rec.content()...)
        case rec.h.Type == typeStderr:
            reterr = append(reterr, rec.content()...)
        case rec.h.Type == typeEndRequest:
            fallthrough
        default:
            break READ_LOOP
        }
    }
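One detail worth noting in header.init above: h.PaddingLength = uint8(-contentLength & 7) rounds every record body up to an 8-byte boundary. A standalone check of that arithmetic, runnable on its own:

package main

import "fmt"

func main() {
    for _, n := range []int{0, 1, 7, 8, 13, 65535} {
        // -n & 7 is the distance from n up to the next multiple of 8
        fmt.Printf("content=%d padding=%d\n", n, uint8(-n&7))
    }
    // prints padding 0, 7, 1, 0, 3, 1 respectively
}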

View File

@ -0,0 +1,241 @@
package phpfpm
import (
"crypto/rand"
"encoding/binary"
"fmt"
"net"
"net/http"
"net/http/fcgi"
"net/http/httptest"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
type statServer struct{}
// We create a fake server to return test data
func (s statServer) ServeHTTP(w http.ResponseWriter, r *http.Request) {
w.Header().Set("Content-Type", "text/plain")
w.Header().Set("Content-Length", fmt.Sprint(len(outputSample)))
fmt.Fprint(w, outputSample)
}
func TestPhpFpmGeneratesMetrics_From_Http(t *testing.T) {
sv := statServer{}
ts := httptest.NewServer(sv)
defer ts.Close()
r := &phpfpm{
Urls: []string{ts.URL},
}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"pool": "www",
}
fields := map[string]interface{}{
"accepted_conn": int64(3),
"listen_queue": int64(1),
"max_listen_queue": int64(0),
"listen_queue_len": int64(0),
"idle_processes": int64(1),
"active_processes": int64(1),
"total_processes": int64(2),
"max_active_processes": int64(1),
"max_children_reached": int64(2),
"slow_requests": int64(1),
}
acc.AssertContainsTaggedFields(t, "phpfpm", fields, tags)
}
func TestPhpFpmGeneratesMetrics_From_Fcgi(t *testing.T) {
// Let OS find an available port
tcp, err := net.Listen("tcp", "127.0.0.1:0")
if err != nil {
t.Fatal("Cannot initalize test server")
}
defer tcp.Close()
s := statServer{}
go fcgi.Serve(tcp, s)
// Now we test against the server above
r := &phpfpm{
Urls: []string{"fcgi://" + tcp.Addr().String() + "/status"},
}
var acc testutil.Accumulator
err = r.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"pool": "www",
}
fields := map[string]interface{}{
"accepted_conn": int64(3),
"listen_queue": int64(1),
"max_listen_queue": int64(0),
"listen_queue_len": int64(0),
"idle_processes": int64(1),
"active_processes": int64(1),
"total_processes": int64(2),
"max_active_processes": int64(1),
"max_children_reached": int64(2),
"slow_requests": int64(1),
}
acc.AssertContainsTaggedFields(t, "phpfpm", fields, tags)
}
func TestPhpFpmGeneratesMetrics_From_Socket(t *testing.T) {
    // Create the socket in /tmp because we always have write permission there,
    // and if removing it fails, /tmp is cleared on restart anyway, so we don't
    // leave junk files around.
    var randomNumber int64
    binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
    tcp, err := net.Listen("unix", fmt.Sprintf("/tmp/test-fpm%d.sock", randomNumber))
    if err != nil {
        t.Fatal("Cannot initialize test server on socket")
    }
defer tcp.Close()
s := statServer{}
go fcgi.Serve(tcp, s)
r := &phpfpm{
Urls: []string{tcp.Addr().String()},
}
var acc testutil.Accumulator
err = r.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"pool": "www",
}
fields := map[string]interface{}{
"accepted_conn": int64(3),
"listen_queue": int64(1),
"max_listen_queue": int64(0),
"listen_queue_len": int64(0),
"idle_processes": int64(1),
"active_processes": int64(1),
"total_processes": int64(2),
"max_active_processes": int64(1),
"max_children_reached": int64(2),
"slow_requests": int64(1),
}
acc.AssertContainsTaggedFields(t, "phpfpm", fields, tags)
}
func TestPhpFpmGeneratesMetrics_From_Socket_Custom_Status_Path(t *testing.T) {
    // Create the socket in /tmp because we always have write permission there,
    // and if removing it fails, /tmp is cleared on restart anyway, so we don't
    // leave junk files around.
    var randomNumber int64
    binary.Read(rand.Reader, binary.LittleEndian, &randomNumber)
    tcp, err := net.Listen("unix", fmt.Sprintf("/tmp/test-fpm%d.sock", randomNumber))
    if err != nil {
        t.Fatal("Cannot initialize test server on socket")
    }
defer tcp.Close()
s := statServer{}
go fcgi.Serve(tcp, s)
r := &phpfpm{
Urls: []string{tcp.Addr().String() + ":custom-status-path"},
}
var acc testutil.Accumulator
err = r.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"pool": "www",
}
fields := map[string]interface{}{
"accepted_conn": int64(3),
"listen_queue": int64(1),
"max_listen_queue": int64(0),
"listen_queue_len": int64(0),
"idle_processes": int64(1),
"active_processes": int64(1),
"total_processes": int64(2),
"max_active_processes": int64(1),
"max_children_reached": int64(2),
"slow_requests": int64(1),
}
acc.AssertContainsTaggedFields(t, "phpfpm", fields, tags)
}
// When no servers are configured, we default to localhost.
// We just want to make sure we requested stats from localhost.
func TestPhpFpmDefaultGetFromLocalhost(t *testing.T) {
r := &phpfpm{}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.Error(t, err)
assert.Contains(t, err.Error(), "127.0.0.1/status")
}
func TestPhpFpmGeneratesMetrics_Throw_Error_When_Fpm_Status_Is_Not_Responding(t *testing.T) {
r := &phpfpm{
Urls: []string{"http://aninvalidone"},
}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.Error(t, err)
assert.Contains(t, err.Error(), `Unable to connect to phpfpm status page 'http://aninvalidone': Get http://aninvalidone: dial tcp: lookup aninvalidone`)
}
func TestPhpFpmGeneratesMetrics_Throw_Error_When_Socket_Path_Is_Invalid(t *testing.T) {
r := &phpfpm{
Urls: []string{"/tmp/invalid.sock"},
}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.Error(t, err)
assert.Equal(t, `Socket doesn't exist '/tmp/invalid.sock': stat /tmp/invalid.sock: no such file or directory`, err.Error())
}
const outputSample = `
pool: www
process manager: dynamic
start time: 11/Oct/2015:23:38:51 +0000
start since: 1991
accepted conn: 3
listen queue: 1
max listen queue: 0
listen queue len: 0
idle processes: 1
active processes: 1
total processes: 2
max active processes: 1
max children reached: 2
slow requests: 1
`

View File

@ -7,7 +7,7 @@ import (
    "strings"
    "sync"

    "github.com/influxdata/telegraf/plugins/inputs"
)

// HostPinger is a function that runs the "ping" function using a list of

@ -56,7 +56,7 @@ func (_ *Ping) SampleConfig() string {
    return sampleConfig
}

func (p *Ping) Gather(acc inputs.Accumulator) error {
    var wg sync.WaitGroup
    errorChannel := make(chan error, len(p.Urls)*2)

@ -64,7 +64,7 @@ func (p *Ping) Gather(acc plugins.Accumulator) error {
    // Spin off a go routine for each url to ping
    for _, url := range p.Urls {
        wg.Add(1)
        go func(url string, acc inputs.Accumulator) {
            defer wg.Done()
            args := p.args(url)
            out, err := p.pingHost(args...)

@ -82,10 +82,15 @@ func (p *Ping) Gather(acc plugins.Accumulator) error {
            }

            // Calculate packet loss percentage
            loss := float64(trans-rec) / float64(trans) * 100.0

            fields := map[string]interface{}{
                "packets_transmitted": trans,
                "packets_received":    rec,
                "percent_packet_loss": loss,
            }
            if avg > 0 {
                fields["average_response_ms"] = avg
            }
            acc.AddFields("ping", fields, tags)
        }(url, acc)
    }

@ -171,7 +176,7 @@ func processPingOutput(out string) (int, int, float64, error) {
}

func init() {
    inputs.Add("ping", func() inputs.Input {
        return &Ping{pingHost: hostPinger}
    })
}
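Grouping the values through AddFields emits a single point per URL instead of four separate writes; roughly, the resulting measurement in line-protocol form (values borrowed from the tests below) is:

// ping,url=www.google.com packets_transmitted=5i,packets_received=5i,percent_packet_loss=0,average_response_ms=43.628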

View File

@ -6,7 +6,7 @@ import (
    "sort"
    "testing"

    "github.com/influxdata/telegraf/testutil"
    "github.com/stretchr/testify/assert"
)

@ -120,18 +120,16 @@ func TestPingGather(t *testing.T) {
    p.Gather(&acc)
    tags := map[string]string{"url": "www.google.com"}
    fields := map[string]interface{}{
        "packets_transmitted": 5,
        "packets_received":    5,
        "percent_packet_loss": 0.0,
        "average_response_ms": 43.628,
    }
    acc.AssertContainsTaggedFields(t, "ping", fields, tags)

    tags = map[string]string{"url": "www.reddit.com"}
    acc.AssertContainsTaggedFields(t, "ping", fields, tags)
}

var lossyPingOutput = `

@ -159,10 +157,13 @@ func TestLossyPingGather(t *testing.T) {
    p.Gather(&acc)
    tags := map[string]string{"url": "www.google.com"}
    fields := map[string]interface{}{
        "packets_transmitted": 5,
        "packets_received":    3,
        "percent_packet_loss": 40.0,
        "average_response_ms": 44.033,
    }
    acc.AssertContainsTaggedFields(t, "ping", fields, tags)
}

var errorPingOutput = `

@ -188,10 +189,12 @@ func TestBadPingGather(t *testing.T) {
    p.Gather(&acc)
    tags := map[string]string{"url": "www.amazon.com"}
    fields := map[string]interface{}{
        "packets_transmitted": 2,
        "packets_received":    0,
        "percent_packet_loss": 100.0,
    }
    acc.AssertContainsTaggedFields(t, "ping", fields, tags)
}

func mockFatalHostPinger(args ...string) (string, error) {

Some files were not shown because too many files have changed in this diff.