Compare commits

..

3 Commits

Author SHA1 Message Date
Cameron Sparr
f4e48f9909 Loading & namespacing external plugins 2017-02-06 18:03:01 +00:00
Cameron Sparr
2eee1b84fb break telegraf registry into separate package
this is for supporting external plugins.

external plugins will depend on a few telegraf interface types, as well
as a common telegraf registry.

this will allow external and internal plugins to both share this package
and make it easier to vendor/version the whole thing semantically, which
will make it easier to keep plugins supported across build and telegraf
versions.

see #1717
2017-02-06 11:16:29 +00:00
Cameron Sparr
9726ecfec3 Support for loading .so plugin files
closes #1717
2017-02-06 11:10:26 +00:00
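The "namespacing" mentioned in the first commit amounts to deriving a registry name from the shared library's path beneath the plugin directory (root prefix and file extension stripped, `external` prepended). A runnable sketch of that convention — the `pluginName` helper is ours for illustration, not part of Telegraf:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// pluginName derives a registry namespace from a shared-library path:
// strip the root directory and the file extension, then prefix the
// result with "external".
// e.g. /opt/telegraf-plugins/group1/foo.so -> "external/group1/foo"
func pluginName(rootDir, pth string) string {
	name := strings.TrimPrefix(strings.TrimPrefix(pth, rootDir), string(filepath.Separator))
	name = strings.TrimSuffix(name, filepath.Ext(name))
	return "external" + string(filepath.Separator) + name
}

func main() {
	fmt.Println(pluginName("/opt/telegraf-plugins", "/opt/telegraf-plugins/group1/foo.so"))
}
```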
305 changed files with 2508 additions and 8058 deletions


@@ -1,7 +1,7 @@
 ## Directions

 GitHub Issues are reserved for actionable bug reports and feature requests.
-General questions should be asked at the [InfluxData Community](https://community.influxdata.com) site.
+General questions should be sent to the [InfluxDB mailing list](https://groups.google.com/forum/#!forum/influxdb).

 Before opening an issue, search for similar bug reports or feature requests on GitHub Issues.
 If no similar issue can be found, fill out either the "Bug Report" or the "Feature Request" section below.


@@ -2,12 +2,6 @@
 ### Release Notes

-- Users of the windows `ping` plugin will need to drop or migrate their
-measurements in order to continue using the plugin. The reason for this is that
-the windows plugin was outputting a different type than the linux plugin. This
-made it impossible to use the `ping` plugin for both windows and linux
-machines.
 - Ceph: the `ceph_pgmap_state` metric content has been modified to use a unique field `count`, with each state expressed as a `state` tag.
 Telegraf < 1.3:
@@ -41,9 +35,6 @@ be deprecated eventually.
 ### Features

-- [#2721](https://github.com/influxdata/telegraf/pull/2721): Added SASL options for kafka output plugin.
-- [#2723](https://github.com/influxdata/telegraf/pull/2723): Added SSL configuration for input haproxy.
-- [#2494](https://github.com/influxdata/telegraf/pull/2494): Add interrupts input plugin.
 - [#2094](https://github.com/influxdata/telegraf/pull/2094): Add generic socket listener & writer.
 - [#2204](https://github.com/influxdata/telegraf/pull/2204): Extend http_response to support searching for a substring in response. Return 1 if found, else 0.
 - [#2137](https://github.com/influxdata/telegraf/pull/2137): Added userstats to mysql input plugin.
@@ -57,70 +48,18 @@ be deprecated eventually.
 - [#2201](https://github.com/influxdata/telegraf/pull/2201): Add lock option to the IPtables input plugin.
 - [#2244](https://github.com/influxdata/telegraf/pull/2244): Support ipmi_sensor plugin querying local ipmi sensors.
 - [#2339](https://github.com/influxdata/telegraf/pull/2339): Increment gather_errors for all errors emitted by inputs.
-- [#2071](https://github.com/influxdata/telegraf/issues/2071): Use official docker SDK.
-- [#1678](https://github.com/influxdata/telegraf/pull/1678): Add AMQP consumer input plugin
-- [#2512](https://github.com/influxdata/telegraf/pull/2512): Added pprof tool.
-- [#2501](https://github.com/influxdata/telegraf/pull/2501): Support DEAD(X) state in system input plugin.
-- [#2522](https://github.com/influxdata/telegraf/pull/2522): Add support for mongodb client certificates.
-- [#1948](https://github.com/influxdata/telegraf/pull/1948): Support adding SNMP table indexes as tags.
-- [#2332](https://github.com/influxdata/telegraf/pull/2332): Add Elasticsearch 5.x output
-- [#2587](https://github.com/influxdata/telegraf/pull/2587): Add json timestamp units configurability
-- [#2597](https://github.com/influxdata/telegraf/issues/2597): Add support for Linux sysctl-fs metrics.
-- [#2425](https://github.com/influxdata/telegraf/pull/2425): Support to include/exclude docker container labels as tags
-- [#1667](https://github.com/influxdata/telegraf/pull/1667): dmcache input plugin
-- [#2637](https://github.com/influxdata/telegraf/issues/2637): Add support for precision in http_listener
-- [#2636](https://github.com/influxdata/telegraf/pull/2636): Add `message_len_max` option to `kafka_consumer` input
-- [#1100](https://github.com/influxdata/telegraf/issues/1100): Add collectd parser
-- [#1820](https://github.com/influxdata/telegraf/issues/1820): easier plugin testing without outputs
-- [#2493](https://github.com/influxdata/telegraf/pull/2493): Check signature in the GitHub webhook plugin
-- [#2038](https://github.com/influxdata/telegraf/issues/2038): Add papertrail support to webhooks
-- [#2253](https://github.com/influxdata/telegraf/pull/2253): Change jolokia plugin to use bulk requests.
-- [#2575](https://github.com/influxdata/telegraf/issues/2575) Add diskio input for Darwin
-- [#2705](https://github.com/influxdata/telegraf/pull/2705): Kinesis output: add use_random_partitionkey option
-- [#2635](https://github.com/influxdata/telegraf/issues/2635): add tcp keep-alive to socket_listener & socket_writer
-- [#2031](https://github.com/influxdata/telegraf/pull/2031): Add Kapacitor input plugin
-- [#2732](https://github.com/influxdata/telegraf/pull/2732): Use go 1.8.1
-- [#2712](https://github.com/influxdata/telegraf/issues/2712): Documentation for rabbitmq input plugin

 ### Bugfixes

-- [#2633](https://github.com/influxdata/telegraf/pull/2633): ipmi_sensor: allow @ symbol in password
 - [#2077](https://github.com/influxdata/telegraf/issues/2077): SQL Server Input - Arithmetic overflow error converting numeric to data type int.
 - [#2262](https://github.com/influxdata/telegraf/issues/2262): Flush jitter can inhibit metric collection.
-- [#2287](https://github.com/influxdata/telegraf/issues/2287): Kubernetes input: Handle null startTime for stopped pods
-- [#1636](https://github.com/influxdata/telegraf/issues/1636): procstat - stop caching PIDs.
 - [#2318](https://github.com/influxdata/telegraf/issues/2318): haproxy input - Add missing fields.
 - [#2287](https://github.com/influxdata/telegraf/issues/2287): Kubernetes input: Handle null startTime for stopped pods.
 - [#2356](https://github.com/influxdata/telegraf/issues/2356): cpu input panic when /proc/stat is empty.
 - [#2341](https://github.com/influxdata/telegraf/issues/2341): telegraf swallowing panics in --test mode.
 - [#2358](https://github.com/influxdata/telegraf/pull/2358): Create pidfile with 644 permissions & defer file deletion.
-- [#2360](https://github.com/influxdata/telegraf/pull/2360): Fixed install/remove of telegraf on non-systemd Debian/Ubuntu systems
-- [#2282](https://github.com/influxdata/telegraf/issues/2282): Reloading telegraf freezes prometheus output.
-- [#2390](https://github.com/influxdata/telegraf/issues/2390): Empty tag value causes error on InfluxDB output.
-- [#2380](https://github.com/influxdata/telegraf/issues/2380): buffer_size field value is negative number from "internal" plugin.
-- [#2414](https://github.com/influxdata/telegraf/issues/2414): Missing error handling in the MySQL plugin leads to segmentation violation.
-- [#2462](https://github.com/influxdata/telegraf/pull/2462): Fix type conflict in windows ping plugin.
-- [#2178](https://github.com/influxdata/telegraf/issues/2178): logparser: regexp with lookahead.
-- [#2466](https://github.com/influxdata/telegraf/issues/2466): Telegraf can crash in LoadDirectory on 0600 files.
-- [#2215](https://github.com/influxdata/telegraf/issues/2215): Iptables input: document better that rules without a comment are ignored.
-- [#2483](https://github.com/influxdata/telegraf/pull/2483): Fix win_perf_counters capping values at 100.
-- [#2498](https://github.com/influxdata/telegraf/pull/2498): Exporting Ipmi.Path to be set by config.
-- [#2500](https://github.com/influxdata/telegraf/pull/2500): Remove warning if parse empty content
-- [#2520](https://github.com/influxdata/telegraf/pull/2520): Update default value for Cloudwatch rate limit
-- [#2513](https://github.com/influxdata/telegraf/issues/2513): create /etc/telegraf/telegraf.d directory in tarball.
-- [#2541](https://github.com/influxdata/telegraf/issues/2541): Return error on unsupported serializer data format.
-- [#1827](https://github.com/influxdata/telegraf/issues/1827): Fix Windows Performance Counters multi instance identifier
-- [#2576](https://github.com/influxdata/telegraf/pull/2576): Add write timeout to Riemann output
-- [#2596](https://github.com/influxdata/telegraf/pull/2596): fix timestamp parsing on prometheus plugin
-- [#2610](https://github.com/influxdata/telegraf/pull/2610): Fix deadlock when output cannot write
-- [#2410](https://github.com/influxdata/telegraf/issues/2410): Fix connection leak in postgresql.
-- [#2628](https://github.com/influxdata/telegraf/issues/2628): Set default measurement name for snmp input.
-- [#2649](https://github.com/influxdata/telegraf/pull/2649): Improve performance of diskio with many disks
-- [#2671](https://github.com/influxdata/telegraf/issues/2671): The internal input plugin uses the wrong units for `heap_objects`
-- [#2684](https://github.com/influxdata/telegraf/pull/2684): Fix ipmi_sensor config is shared between all plugin instances
-- [#2450](https://github.com/influxdata/telegraf/issues/2450): Network statistics not collected when system has alias interfaces
-- [#1911](https://github.com/influxdata/telegraf/issues/1911): Sysstat plugin needs LANG=C or similar locale
-- [#2528](https://github.com/influxdata/telegraf/issues/2528): File output closes standard streams on reload.
-- [#2603](https://github.com/influxdata/telegraf/issues/2603): AMQP output disconnect blocks all outputs
-- [#2706](https://github.com/influxdata/telegraf/issues/2706): Improve documentation for redis input plugin

 ## v1.2.1 [2017-02-01]


@@ -124,7 +124,7 @@ You should also add the following to your SampleConfig() return:
 ```toml
   ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
   data_format = "influx"
@@ -254,7 +254,7 @@ You should also add the following to your SampleConfig() return:
 ```toml
   ## Data format to output.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
   data_format = "influx"

Godeps

@@ -1,4 +1,3 @@
-collectd.org 2ce144541b8903101fb8f1483cc0497a68798122
 github.com/Shopify/sarama 574d3147eee384229bf96a5d12c207fe7b5234f3
 github.com/Sirupsen/logrus 61e43dc76f7ee59a82bdf3d71033dc12bea4c77d
 github.com/aerospike/aerospike-client-go 95e1ad7791bdbca44707fedbb29be42024900d9c
@@ -10,7 +9,10 @@ github.com/couchbase/go-couchbase bfe555a140d53dc1adf390f1a1d4b0fd4ceadb28
 github.com/couchbase/gomemcached 4a25d2f4e1dea9ea7dd76dfd943407abf9b07d29
 github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
 github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
-github.com/docker/docker b89aff1afa1f61993ab2ba18fd62d9375a195f5d
+github.com/docker/distribution fb0bebc4b64e3881cc52a2478d749845ed76d2a8
+github.com/docker/engine-api 4290f40c056686fcaa5c9caf02eac1dde9315adf
+github.com/docker/go-connections 9670439d95da2651d9dfc7acc5d2ed92d3f25ee6
+github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
 github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
 github.com/eapache/go-xerial-snappy bb955e01b9346ac19dc29eb16586c90ded99a98c
 github.com/eapache/queue 44cc805cf13205b55f69e14bcb69867d1ae92f98
@@ -22,10 +24,11 @@ github.com/golang/snappy 7db9049039a047d955fe8c19b83c8ff5abd765c7
 github.com/gorilla/mux 392c28fe23e1c45ddba891b0320b3b5df220beea
 github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
 github.com/hashicorp/consul 63d2fc68239b996096a1c55a0d4b400ea4c2583f
-github.com/influxdata/tail a395bf99fe07c233f41fba0735fa2b13b58588ea
-github.com/influxdata/toml 5d1d907f22ead1cd47adde17ceec5bda9cacaf8f
+github.com/hpcloud/tail 915e5feba042395f5fda4dbe9c0e99aeab3088b3
+github.com/influxdata/config 8ec4638a81500c20be24855812bc8498ebe2dc92
+github.com/influxdata/toml ad49a5c2936f96b8f5943c3fdba47630ccf45a0d
 github.com/influxdata/wlog 7c63b0a71ef8300adc255344d275e10e5c3a71ec
-github.com/jackc/pgx b84338d7d62598f75859b2b146d830b22f1b9ec8
+github.com/jackc/pgx c8080fc4a1bfa44bf90383ad0fdce2f68b7d313c
 github.com/kardianos/osext c2c54e542fb797ad986b31721e1baedf214ca413
 github.com/kardianos/service 6d3a0ee7d3425d9d835debc51a0ca1ffa28f4893
 github.com/kballard/go-shellquote d8ec1a69a250a17bb0e419c386eac1f3711dc142
@@ -45,12 +48,11 @@ github.com/prometheus/common dd2f054febf4a6c00f2343686efb775948a8bff4
 github.com/prometheus/procfs 1878d9fbb537119d24b21ca07effd591627cd160
 github.com/rcrowley/go-metrics 1f30fe9094a513ce4c700b9a54458bbb0c96996c
 github.com/samuel/go-zookeeper 1d7be4effb13d2d908342d349d71a284a7542693
-github.com/satori/go.uuid 5bf94b69c6b68ee1b541973bb8e1144db23a194b
-github.com/shirou/gopsutil 70693b6a3da51a8a686d31f1b346077bbc066062
+github.com/shirou/gopsutil 77b5d0080adb6f028e457906f1944d9fcca34442
 github.com/soniah/gosnmp 5ad50dc75ab389f8a1c9f8a67d3a1cd85f67ed15
 github.com/streadway/amqp 63795daa9a446c920826655f26ba31c81c860fd6
 github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
-github.com/vjeantet/grok d73e972b60935c7fec0b4ffbc904ed39ecaf7efe
+github.com/vjeantet/grok 83bfdfdfd1a8146795b28e547a8e3c8b28a466c2
 github.com/wvanbergen/kafka bc265fedb9ff5b5c5d3c0fdcef4a819b3523d3ee
 github.com/wvanbergen/kazoo-go 968957352185472eacb69215fa3dbfcfdbac1096
 github.com/yuin/gopher-lua 66c871e454fcf10251c61bf8eff02d0978cae75a
@@ -61,5 +63,4 @@ golang.org/x/text 506f9d5c962f284575e88337e7d9296d27e729d3
 gopkg.in/dancannon/gorethink.v1 edc7a6a68e2d8015f5ffe1b2560eed989f8a45be
 gopkg.in/fatih/pool.v2 6e328e67893eb46323ad06f0e92cb9536babbabc
 gopkg.in/mgo.v2 3f83fa5005286a7fe593b055f0d7771a7dce4655
-gopkg.in/olivere/elastic.v5 ee3ebceab960cf68ab9a89ee6d78c031ef5b4a4e
 gopkg.in/yaml.v2 4c78c975fe7c825c6d1466c42be594d1d6f3aba6


@@ -51,7 +51,6 @@ docker-run:
 		-e ADVERTISED_PORT=9092 \
 		-p "2181:2181" -p "9092:9092" \
 		-d spotify/kafka
-	docker run --name elasticsearch -p "9200:9200" -p "9300:9300" -d elasticsearch:5
 	docker run --name mysql -p "3306:3306" -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d mysql
 	docker run --name memcached -p "11211:11211" -d memcached
 	docker run --name postgres -p "5432:5432" -d postgres
@@ -70,7 +69,6 @@ docker-run-circle:
 		-e ADVERTISED_PORT=9092 \
 		-p "2181:2181" -p "9092:9092" \
 		-d spotify/kafka
-	docker run --name elasticsearch -p "9200:9200" -p "9300:9300" -d elasticsearch:5
 	docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
 	docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
 	docker run --name riemann -p "5555:5555" -d stealthly/docker-riemann
@@ -78,8 +76,8 @@ docker-run-circle:
 # Kill all docker containers, ignore errors
 docker-kill:
-	-docker kill nsq aerospike redis rabbitmq postgres memcached mysql kafka mqtt riemann nats elasticsearch
-	-docker rm nsq aerospike redis rabbitmq postgres memcached mysql kafka mqtt riemann nats elasticsearch
+	-docker kill nsq aerospike redis rabbitmq postgres memcached mysql kafka mqtt riemann nats
+	-docker rm nsq aerospike redis rabbitmq postgres memcached mysql kafka mqtt riemann nats
 # Run full unit tests using docker containers (includes setup and teardown)
 test: vet docker-kill docker-run


@@ -43,7 +43,7 @@ Ansible role: https://github.com/rossmcdonald/telegraf
 Telegraf manages dependencies via [gdm](https://github.com/sparrc/gdm),
 which gets installed via the Makefile
-if you don't have it already. You also must build with golang version 1.8+.
+if you don't have it already. You also must build with golang version 1.5+.

 1. [Install Go](https://golang.org/doc/install)
 2. [Setup your GOPATH](https://golang.org/doc/code.html#GOPATH)
@@ -97,21 +97,18 @@ configuration options.
 ## Input Plugins

-* [aerospike](./plugins/inputs/aerospike)
-* [amqp_consumer](./plugins/inputs/amqp_consumer) (rabbitmq)
-* [apache](./plugins/inputs/apache)
 * [aws cloudwatch](./plugins/inputs/cloudwatch)
+* [aerospike](./plugins/inputs/aerospike)
+* [apache](./plugins/inputs/apache)
 * [bcache](./plugins/inputs/bcache)
 * [cassandra](./plugins/inputs/cassandra)
 * [ceph](./plugins/inputs/ceph)
-* [cgroup](./plugins/inputs/cgroup)
 * [chrony](./plugins/inputs/chrony)
 * [consul](./plugins/inputs/consul)
 * [conntrack](./plugins/inputs/conntrack)
 * [couchbase](./plugins/inputs/couchbase)
 * [couchdb](./plugins/inputs/couchdb)
 * [disque](./plugins/inputs/disque)
-* [dmcache](./plugins/inputs/dmcache)
 * [dns query time](./plugins/inputs/dns_query)
 * [docker](./plugins/inputs/docker)
 * [dovecot](./plugins/inputs/dovecot)
@@ -124,11 +121,9 @@ configuration options.
 * [httpjson](./plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
 * [internal](./plugins/inputs/internal)
 * [influxdb](./plugins/inputs/influxdb)
-* [interrupts](./plugins/inputs/interrupts)
 * [ipmi_sensor](./plugins/inputs/ipmi_sensor)
 * [iptables](./plugins/inputs/iptables)
 * [jolokia](./plugins/inputs/jolokia)
-* [kapacitor](./plugins/inputs/kapacitor)
 * [kubernetes](./plugins/inputs/kubernetes)
 * [leofs](./plugins/inputs/leofs)
 * [lustre2](./plugins/inputs/lustre2)
@@ -177,7 +172,6 @@ configuration options.
 * processes
 * kernel (/proc/stat)
 * kernel (/proc/vmstat)
-* linux_sysctl_fs (/proc/sys/fs)

 Telegraf can also collect metrics via the following service plugins:
@@ -197,17 +191,6 @@ Telegraf can also collect metrics via the following service plugins:
 * [github](./plugins/inputs/webhooks/github)
 * [mandrill](./plugins/inputs/webhooks/mandrill)
 * [rollbar](./plugins/inputs/webhooks/rollbar)
-* [papertrail](./plugins/inputs/webhooks/papertrail)
-Telegraf is able to parse the following input data formats into metrics, these
-formats may be used with input plugins supporting the `data_format` option:
-* [InfluxDB Line Protocol](./docs/DATA_FORMATS_INPUT.md#influx)
-* [JSON](./docs/DATA_FORMATS_INPUT.md#json)
-* [Graphite](./docs/DATA_FORMATS_INPUT.md#graphite)
-* [Value](./docs/DATA_FORMATS_INPUT.md#value)
-* [Nagios](./docs/DATA_FORMATS_INPUT.md#nagios)
-* [Collectd](./docs/DATA_FORMATS_INPUT.md#collectd)

 ## Processor Plugins
@@ -226,7 +209,6 @@ formats may be used with input plugins supporting the `data_format` option:
 * [aws cloudwatch](./plugins/outputs/cloudwatch)
 * [datadog](./plugins/outputs/datadog)
 * [discard](./plugins/outputs/discard)
-* [elasticsearch](./plugins/outputs/elasticsearch)
 * [file](./plugins/outputs/file)
 * [graphite](./plugins/outputs/graphite)
 * [graylog](./plugins/outputs/graylog)


@@ -191,12 +191,6 @@ func (a *Agent) Test() error {
}() }()
for _, input := range a.Config.Inputs { for _, input := range a.Config.Inputs {
if _, ok := input.Input.(telegraf.ServiceInput); ok {
fmt.Printf("\nWARNING: skipping plugin [[%s]]: service inputs not supported in --test mode\n",
input.Name())
continue
}
acc := NewAccumulator(input, metricC) acc := NewAccumulator(input, metricC)
acc.SetPrecision(a.Config.Agent.Precision.Duration, acc.SetPrecision(a.Config.Agent.Precision.Duration,
a.Config.Agent.Interval.Duration) a.Config.Agent.Interval.Duration)
@@ -215,7 +209,7 @@ func (a *Agent) Test() error {
 		// Special instructions for some inputs. cpu, for example, needs to be
 		// run twice in order to return cpu usage percentages.
 		switch input.Name() {
-		case "inputs.cpu", "inputs.mongodb", "inputs.procstat":
+		case "cpu", "mongodb", "procstat":
 			time.Sleep(500 * time.Millisecond)
 			fmt.Printf("* Plugin: %s, Collection 2\n", input.Name())
 			if err := input.Input.Gather(acc); err != nil {
@@ -398,6 +392,5 @@ func (a *Agent) Run(shutdown chan struct{}) error {
 	}
 	wg.Wait()

-	a.Close()
 	return nil
 }


@@ -1,11 +1,13 @@
 machine:
-  go:
-    version: 1.8.1
   services:
     - docker
-    - memcached
-    - redis
-    - rabbitmq-server
+  post:
+    - sudo service zookeeper stop
+    - go version
+    - sudo rm -rf /usr/local/go
+    - wget https://storage.googleapis.com/golang/go1.8rc3.linux-amd64.tar.gz
+    - sudo tar -C /usr/local -xzf go1.8rc3.linux-amd64.tar.gz
+    - go version

 dependencies:
   override:


@@ -4,10 +4,11 @@ import (
 	"flag"
 	"fmt"
 	"log"
-	"net/http"
-	_ "net/http/pprof" // Comment this line to disable pprof endpoint.
 	"os"
 	"os/signal"
+	"path"
+	"path/filepath"
+	"plugin"
 	"runtime"
 	"strings"
 	"syscall"
@@ -15,19 +16,20 @@ import (
 	"github.com/influxdata/telegraf/agent"
 	"github.com/influxdata/telegraf/internal/config"
 	"github.com/influxdata/telegraf/logger"
+	"github.com/influxdata/telegraf/registry"
+	"github.com/influxdata/telegraf/registry/inputs"
+	"github.com/influxdata/telegraf/registry/outputs"
 	_ "github.com/influxdata/telegraf/plugins/aggregators/all"
-	"github.com/influxdata/telegraf/plugins/inputs"
 	_ "github.com/influxdata/telegraf/plugins/inputs/all"
-	"github.com/influxdata/telegraf/plugins/outputs"
 	_ "github.com/influxdata/telegraf/plugins/outputs/all"
 	_ "github.com/influxdata/telegraf/plugins/processors/all"
 	"github.com/kardianos/service"
 )

 var fDebug = flag.Bool("debug", false,
 	"turn on debug logging")
-var pprofAddr = flag.String("pprof-addr", "",
-	"pprof address to listen on, not activate pprof if empty")
 var fQuiet = flag.Bool("quiet", false,
 	"run in quiet mode")
 var fTest = flag.Bool("test", false, "gather metrics, print them out, and exit")
@@ -54,6 +56,8 @@ var fUsage = flag.String("usage", "",
 	"print usage for a plugin, ie, 'telegraf -usage mysql'")
 var fService = flag.String("service", "",
 	"operate on the service")
+var fPlugins = flag.String("plugins", "",
+	"path to directory containing external plugins")

 // Telegraf version, populated linker.
 // ie, -ldflags "-X main.version=`git describe --always --tags`"
@@ -91,7 +95,6 @@ The commands & flags are:
     --output-filter    filter the output plugins to enable, separator is :
     --usage            print usage for a plugin, ie, 'telegraf --usage mysql'
     --debug            print metrics as they're generated to stdout
-    --pprof-addr       pprof address to listen on, format: localhost:6060 or :6060
     --quiet            run in quiet mode

 Examples:
@@ -110,9 +113,6 @@ Examples:
   # run telegraf, enabling the cpu & memory input, and influxdb output plugins
   telegraf --config telegraf.conf --input-filter cpu:mem --output-filter influxdb

-  # run telegraf with pprof
-  telegraf --config telegraf.conf --pprof-addr localhost:6060
 `

 var stop chan struct{}
@@ -144,7 +144,7 @@ func reloadLoop(
 			log.Fatal("E! " + err.Error())
 		}
 	}
-	if !*fTest && len(c.Outputs) == 0 {
+	if len(c.Outputs) == 0 {
 		log.Fatalf("E! Error: no outputs found, did you provide a valid config file?")
 	}
 	if len(c.Inputs) == 0 {
@@ -254,11 +254,62 @@ func (p *program) Stop(s service.Service) error {
 	return nil
 }

+// loadExternalPlugins loads external plugins from shared libraries (.so, .dll, etc.)
+// in the specified directory.
+func loadExternalPlugins(rootDir string) error {
+	return filepath.Walk(rootDir, func(pth string, info os.FileInfo, err error) error {
+		// Stop if there was an error.
+		if err != nil {
+			return err
+		}
+
+		// Ignore directories.
+		if info.IsDir() {
+			return nil
+		}
+
+		// Ignore files that aren't shared libraries.
+		ext := strings.ToLower(path.Ext(pth))
+		if ext != ".so" && ext != ".dll" {
+			return nil
+		}
+
+		// name will be the path to the plugin file beginning at the root
+		// directory, minus the extension.
+		// ie, if the plugin file is /opt/telegraf-plugins/group1/foo.so, name
+		// will be "group1/foo"
+		name := strings.TrimPrefix(strings.TrimPrefix(pth, rootDir), string(os.PathSeparator))
+		name = strings.TrimSuffix(name, filepath.Ext(pth))
+		registry.SetName("external" + string(os.PathSeparator) + name)
+		defer registry.SetName("")
+
+		// Load plugin.
+		_, err = plugin.Open(pth)
+		if err != nil {
+			return fmt.Errorf("error loading [%s]: %s", pth, err)
+		}
+
+		return nil
+	})
+}
+
 func main() {
 	flag.Usage = func() { usageExit(0) }
 	flag.Parse()
 	args := flag.Args()

+	// Load external plugins, if requested.
+	if *fPlugins != "" {
+		pluginsDir, err := filepath.Abs(*fPlugins)
+		if err != nil {
+			log.Fatal("E! " + err.Error())
+		}
+
+		log.Printf("I! Loading external plugins from: %s\n", pluginsDir)
+		if err := loadExternalPlugins(*fPlugins); err != nil {
+			log.Fatal("E! " + err.Error())
+		}
+	}
+
 	inputFilters, outputFilters := []string{}, []string{}
 	if *fInputFilters != "" {
 		inputFilters = strings.Split(":"+strings.TrimSpace(*fInputFilters)+":", ":")
@@ -275,23 +326,6 @@ func main() {
processorFilters = strings.Split(":"+strings.TrimSpace(*fProcessorFilters)+":", ":") processorFilters = strings.Split(":"+strings.TrimSpace(*fProcessorFilters)+":", ":")
} }
if *pprofAddr != "" {
go func() {
pprofHostPort := *pprofAddr
parts := strings.Split(pprofHostPort, ":")
if len(parts) == 2 && parts[0] == "" {
pprofHostPort = fmt.Sprintf("localhost:%s", parts[1])
}
pprofHostPort = "http://" + pprofHostPort + "/debug/pprof"
log.Printf("I! Starting pprof HTTP server at: %s", pprofHostPort)
if err := http.ListenAndServe(*pprofAddr, nil); err != nil {
log.Fatal("E! " + err.Error())
}
}()
}
if len(args) > 0 { if len(args) > 0 {
switch args[0] { switch args[0] {
case "version": case "version":

@@ -24,16 +24,6 @@ Environment variables can be used anywhere in the config file, simply prepend
 them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
 for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)

-## Configuration file locations
-
-The location of the configuration file can be set via the `--config` command
-line flag. Telegraf will also pick up all files matching the pattern `*.conf` if
-the `-config-directory` command line flag is used.
-
-On most systems, the default locations are `/etc/telegraf/telegraf.conf` for
-the main configuration file and `/etc/telegraf/telegraf.d` for the directory of
-configuration files.
-
 # Global Tags

 Global tags can be specified in the `[global_tags]` section of the config file
@@ -70,7 +60,7 @@ ie, a jitter of 5s and flush_interval 10s means flushes will happen every 10-15s
 as the collection interval, with the maximum being 1s. Precision will NOT
 be used for service inputs, such as logparser and statsd. Valid values are
 "ns", "us" (or "µs"), "ms", "s".
-* **logfile**: Specify the log file name. The empty string means to log to stderr.
+* **logfile**: Specify the log file name. The empty string means to log to stdout.
 * **debug**: Run telegraf in debug mode.
 * **quiet**: Run telegraf in quiet mode (error messages only).
 * **hostname**: Override default hostname, if empty use os.Hostname().
@@ -124,40 +114,31 @@ is not specified then processor execution order will be random.
 Filters can be configured per input, output, processor, or aggregator,
 see below for examples.

-* **namepass**:
-An array of glob pattern strings. Only points whose measurement name matches
-a pattern in this list are emitted.
-* **namedrop**:
-The inverse of `namepass`. If a match is found the point is discarded. This
-is tested on points after they have passed the `namepass` test.
-* **fieldpass**:
-An array of glob pattern strings. Only fields whose field key matches a
-pattern in this list are emitted. Not available for outputs.
-* **fielddrop**:
-The inverse of `fieldpass`. Fields with a field key matching one of the
-patterns will be discarded from the point. Not available for outputs.
-* **tagpass**:
-A table mapping tag keys to arrays of glob pattern strings. Only points
-that contain a tag key in the table and a tag value matching one of its
-patterns is emitted.
-* **tagdrop**:
-The inverse of `tagpass`. If a match is found the point is discarded. This
-is tested on points after they have passed the `tagpass` test.
-* **taginclude**:
-An array of glob pattern strings. Only tags with a tag key matching one of
-the patterns are emitted. In contrast to `tagpass`, which will pass an entire
-point based on its tag, `taginclude` removes all non matching tags from the
-point. This filter can be used on both inputs & outputs, but it is
-_recommended_ to be used on inputs, as it is more efficient to filter out tags
-at the ingestion point.
-* **tagexclude**:
-The inverse of `taginclude`. Tags with a tag key matching one of the patterns
-will be discarded from the point.
+* **namepass**: An array of strings that is used to filter metrics generated by the
+current input. Each string in the array is tested as a glob match against
+measurement names and if it matches, the field is emitted.
+* **namedrop**: The inverse of pass, if a measurement name matches, it is not emitted.
+* **fieldpass**: An array of strings that is used to filter metrics generated by the
+current input. Each string in the array is tested as a glob match against field names
+and if it matches, the field is emitted. fieldpass is not available for outputs.
+* **fielddrop**: The inverse of pass, if a field name matches, it is not emitted.
+fielddrop is not available for outputs.
+* **tagpass**: tag names and arrays of strings that are used to filter
+measurements by the current input. Each string in the array is tested as a glob
+match against the tag name, and if it matches the measurement is emitted.
+* **tagdrop**: The inverse of tagpass. If a tag matches, the measurement is not
+emitted. This is tested on measurements that have passed the tagpass test.
+* **tagexclude**: tagexclude can be used to exclude a tag from measurement(s).
+As opposed to tagdrop, which will drop an entire measurement based on it's
+tags, tagexclude simply strips the given tag keys from the measurement. This
+can be used on inputs & outputs, but it is _recommended_ to be used on inputs,
+as it is more efficient to filter out tags at the ingestion point.
+* **taginclude**: taginclude is the inverse of tagexclude. It will only include
+the tag keys in the final measurement.

-**NOTE** Due to the way TOML is parsed, `tagpass` and `tagdrop` parameters
-must be defined at the _end_ of the plugin definition, otherwise subsequent
-plugin config options will be interpreted as part of the tagpass/tagdrop
-tables.
+**NOTE** `tagpass` and `tagdrop` parameters must be defined at the _end_ of
+the plugin definition, otherwise subsequent plugin config options will be
+interpreted as part of the tagpass/tagdrop map.

 #### Input Configuration Examples

@@ -7,7 +7,6 @@ Telegraf is able to parse the following input data formats into metrics:
 1. [Graphite](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#graphite)
 1. [Value](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#value), ie: 45 or "booyah"
 1. [Nagios](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#nagios) (exec input only)
-1. [Collectd](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#collectd)

 Telegraf metrics, like InfluxDB
 [points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
@@ -41,7 +40,7 @@ example, in the exec plugin:
   name_suffix = "_mycollector"

   ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
   data_format = "json"
@@ -68,7 +67,7 @@ metrics are parsed directly into Telegraf metrics.
   name_suffix = "_mycollector"

   ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
   data_format = "influx"
@@ -118,7 +117,7 @@ For example, if you had this configuration:
   name_suffix = "_mycollector"

   ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
   data_format = "json"
@@ -162,7 +161,7 @@ For example, if the following configuration:
   name_suffix = "_mycollector"

   ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
   data_format = "json"
@@ -233,7 +232,7 @@ name of the plugin.
   name_override = "entropy_available"

   ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
   data_format = "value"
@@ -391,7 +390,7 @@ There are many more options available,
   name_suffix = "_mycollector"

   ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
   data_format = "graphite"
@@ -428,54 +427,14 @@ Note: Nagios Input Data Formats is only supported in `exec` input plugin.
 ```toml
 [[inputs.exec]]
   ## Commands array
-  commands = ["/usr/lib/nagios/plugins/check_load -w 5,6,7 -c 7,8,9"]
+  commands = ["/usr/lib/nagios/plugins/check_load", "-w 5,6,7 -c 7,8,9"]

   ## measurement name suffix (for separating different commands)
   name_suffix = "_mycollector"

   ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
   data_format = "nagios"
 ```
-
-# Collectd:
-
-The collectd format parses the collectd binary network protocol.  Tags are
-created for host, instance, type, and type instance.  All collectd values are
-added as float64 fields.
-
-For more information about the binary network protocol see
-[here](https://collectd.org/wiki/index.php/Binary_protocol).
-
-You can control the cryptographic settings with parser options.  Create an
-authentication file and set `collectd_auth_file` to the path of the file, then
-set the desired security level in `collectd_security_level`.
-
-Additional information including client setup can be found
-[here](https://collectd.org/wiki/index.php/Networking_introduction#Cryptographic_setup).
-
-You can also change the path to the typesdb or add additional typesdb using
-`collectd_typesdb`.
-
-#### Collectd Configuration:
-
-```toml
-[[inputs.socket_listener]]
-  service_address = "udp://127.0.0.1:25826"
-  name_prefix = "collectd_"
-
-  ## Data format to consume.
-  ## Each data format has its own unique set of configuration options, read
-  ## more about them here:
-  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
-  data_format = "collectd"
-
-  ## Authentication file for cryptographic security levels
-  collectd_auth_file = "/etc/collectd/auth_file"
-  ## One of none (default), sign, or encrypt
-  collectd_security_level = "encrypt"
-  ## Path of to TypesDB specifications
-  collectd_typesdb = ["/usr/share/collectd/types.db"]
-```

@@ -36,7 +36,7 @@ config option, for example, in the `file` output plugin:
   files = ["stdout"]

   ## Data format to output.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
   data_format = "influx"
@@ -60,7 +60,7 @@ metrics are serialized directly into InfluxDB line-protocol.
   files = ["stdout", "/tmp/metrics.out"]

   ## Data format to output.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
   data_format = "influx"
@@ -104,7 +104,7 @@ tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
   files = ["stdout", "/tmp/metrics.out"]

   ## Data format to output.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
   data_format = "graphite"
@@ -143,18 +143,8 @@ The JSON data format serialized Telegraf metrics in json format. The format is:
   files = ["stdout", "/tmp/metrics.out"]

   ## Data format to output.
-  ## Each data format has its own unique set of configuration options, read
+  ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
   data_format = "json"
-  json_timestamp_units = "1ns"
 ```
-
-By default, the timestamp that is output in JSON data format serialized Telegraf
-metrics is in seconds. The precision of this timestamp can be adjusted for any output
-by adding the optional `json_timestamp_units` parameter to the configuration for
-that output. This parameter can be used to set the timestamp units to nanoseconds (`ns`),
-microseconds (`us` or `µs`), milliseconds (`ms`), or seconds (`s`). Note that this
-parameter will be truncated to the nearest power of 10 that, so if the `json_timestamp_units`
-are set to `15ms` the timestamps for the JSON format serialized Telegraf metrics will be
-output in hundredths of a second (`10ms`).

@@ -1,5 +1,4 @@
 # List
-- collectd.org [MIT LICENSE](https://github.com/collectd/go-collectd/blob/master/LICENSE)
 - github.com/Shopify/sarama [MIT LICENSE](https://github.com/Shopify/sarama/blob/master/MIT-LICENSE)
 - github.com/Sirupsen/logrus [MIT LICENSE](https://github.com/Sirupsen/logrus/blob/master/LICENSE)
 - github.com/armon/go-metrics [MIT LICENSE](https://github.com/armon/go-metrics/blob/master/LICENSE)
@@ -31,3 +30,4 @@
 - gopkg.in/dancannon/gorethink.v1 [APACHE LICENSE](https://github.com/dancannon/gorethink/blob/v1.1.2/LICENSE)
 - gopkg.in/mgo.v2 [BSD LICENSE](https://github.com/go-mgo/mgo/blob/v2/LICENSE)
 - golang.org/x/crypto/ [BSD LICENSE](https://github.com/golang/crypto/blob/master/LICENSE)

@@ -1,24 +0,0 @@
-# Telegraf profiling
-
-Telegraf uses the standard package `net/http/pprof`. This package serves via its HTTP server runtime profiling data in the format expected by the pprof visualization tool.
-
-By default, the profiling is turned off.
-
-To enable profiling you need to specify address to config parameter `pprof-addr`, for example:
-
-```
-telegraf --config telegraf.conf --pprof-addr localhost:6060
-```
-
-There are several paths to get different profiling information:
-
-To look at the heap profile:
-
-`go tool pprof http://localhost:6060/debug/pprof/heap`
-
-or to look at a 30-second CPU profile:
-
-`go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30`
-
-To view all available profiles, open `http://localhost:6060/debug/pprof/` in your browser.

@@ -55,13 +55,10 @@
   ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
   flush_jitter = "0s"

-  ## By default or when set to "0s", precision will be set to the same
-  ## timestamp order as the collection interval, with the maximum being 1s.
-  ##   ie, when interval = "10s", precision will be "1s"
-  ##       when interval = "250ms", precision will be "1ms"
-  ## Precision will NOT be used for service inputs. It is up to each individual
-  ## service input to set the timestamp at the appropriate precision.
-  ## Valid time units are "ns", "us" (or "µs"), "ms", "s".
+  ## By default, precision will be set to the same timestamp order as the
+  ## collection interval, with the maximum being 1s.
+  ## Precision will NOT be used for service inputs, such as logparser and statsd.
+  ## Valid values are "ns", "us" (or "µs"), "ms", "s".
   precision = ""

   ## Logging configuration:
@@ -84,10 +81,7 @@

 # Configuration for influxdb server to send metrics to
 [[outputs.influxdb]]
-  ## The HTTP or UDP URL for your InfluxDB instance.  Each item should be
-  ## of the form:
-  ##   scheme "://" host [ ":" port]
-  ##
+  ## The full HTTP or UDP endpoint URL for your InfluxDB instance.
   ## Multiple urls can be specified as part of the same cluster,
   ## this means that only ONE of the urls will be written to each interval.
   # urls = ["udp://localhost:8089"] # UDP endpoint example
@@ -95,8 +89,7 @@
   ## The target database for metrics (telegraf will create it if not exists).
   database = "telegraf" # required

-  ## Name of existing retention policy to write to.  Empty string writes to
-  ## the default retention policy.
+  ## Retention policy to write to. Empty string writes to the default rp.
   retention_policy = ""
   ## Write consistency (clusters only), can be: "any", "one", "quorum", "all"
   write_consistency = "any"
@@ -138,11 +131,9 @@
 #   ## AMQP exchange
 #   exchange = "telegraf"
 #   ## Auth method. PLAIN and EXTERNAL are supported
-#   ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
-#   ## described here: https://www.rabbitmq.com/plugins.html
 #   # auth_method = "PLAIN"
 #   ## Telegraf tag to use as a routing key
-#   ##  ie, if this tag exists, its value will be used as the routing key
+#   ##  ie, if this tag exists, it's value will be used as the routing key
 #   routing_tag = "host"
 #
 #   ## InfluxDB retention policy
@@ -150,10 +141,6 @@
 #   ## InfluxDB database
 #   # database = "telegraf"
 #
-#   ## Write timeout, formatted as a string.  If not provided, will default
-#   ## to 5s. 0s means no timeout (not recommended).
-#   # timeout = "5s"
-#
 #   ## Optional SSL Config
 #   # ssl_ca = "/etc/telegraf/ca.pem"
 #   # ssl_cert = "/etc/telegraf/cert.pem"
@@ -162,7 +149,7 @@
 #   # insecure_skip_verify = false
 #
 #   ## Data format to output.
-#   ## Each data format has its own unique set of configuration options, read
+#   ## Each data format has it's own unique set of configuration options, read
 #   ## more about them here:
 #   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
 #   data_format = "influx"
@@ -206,52 +193,13 @@
 #   # no configuration

-# # Configuration for Elasticsearch to send metrics to.
-# [[outputs.elasticsearch]]
-#   ## The full HTTP endpoint URL for your Elasticsearch instance
-#   ## Multiple urls can be specified as part of the same cluster,
-#   ## this means that only ONE of the urls will be written to each interval.
-#   urls = [ "http://node1.es.example.com:9200" ] # required.
-#   ## Elasticsearch client timeout, defaults to "5s" if not set.
-#   timeout = "5s"
-#   ## Set to true to ask Elasticsearch a list of all cluster nodes,
-#   ## thus it is not necessary to list all nodes in the urls config option.
-#   enable_sniffer = false
-#   ## Set the interval to check if the Elasticsearch nodes are available
-#   ## Setting to "0s" will disable the health check (not recommended in production)
-#   health_check_interval = "10s"
-#   ## HTTP basic authentication details (eg. when using Shield)
-#   # username = "telegraf"
-#   # password = "mypassword"
-#
-#   ## Index Config
-#   ## The target index for metrics (Elasticsearch will create if it not exists).
-#   ## You can use the date specifiers below to create indexes per time frame.
-#   ## The metric timestamp will be used to decide the destination index name
-#   # %Y - year (2016)
-#   # %y - last two digits of year (00..99)
-#   # %m - month (01..12)
-#   # %d - day of month (e.g., 01)
-#   # %H - hour (00..23)
-#   index_name = "telegraf-%Y.%m.%d" # required.
-#
-#   ## Template Config
-#   ## Set to true if you want telegraf to manage its index template.
-#   ## If enabled it will create a recommended index template for telegraf indexes
-#   manage_template = true
-#   ## The template name used for telegraf indexes
-#   template_name = "telegraf"
-#   ## Set to true if you want telegraf to overwrite an existing template
-#   overwrite_template = false

 # # Send telegraf metrics to file(s)
 # [[outputs.file]]
 #   ## Files to write to, "stdout" is a specially handled file.
 #   files = ["stdout", "/tmp/metrics.out"]
 #
 #   ## Data format to output.
-#   ## Each data format has its own unique set of configuration options, read
+#   ## Each data format has it's own unique set of configuration options, read
 #   ## more about them here:
 #   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
 #   data_format = "influx"
@@ -300,7 +248,7 @@
 #   ## Kafka topic for producer messages
 #   topic = "telegraf"
 #   ## Telegraf tag to use as a routing key
-#   ##  ie, if this tag exists, its value will be used as the routing key
+#   ##  ie, if this tag exists, it's value will be used as the routing key
 #   routing_tag = "host"
 #
 #   ## CompressionCodec represents the various compression codecs recognized by
@@ -336,12 +284,8 @@
 #   ## Use SSL but skip chain & host verification
 #   # insecure_skip_verify = false
 #
-#   ## Optional SASL Config
-#   # sasl_username = "kafka"
-#   # sasl_password = "secret"
-#
 #   ## Data format to output.
-#   ## Each data format has its own unique set of configuration options, read
+#   ## Each data format has it's own unique set of configuration options, read
 #   ## more about them here:
 #   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
 #   data_format = "influx"
@@ -371,14 +315,9 @@
 #   streamname = "StreamName"
 #   ## PartitionKey as used for sharding data.
 #   partitionkey = "PartitionKey"
-#   ## If set the paritionKey will be a random UUID on every put.
-#   ## This allows for scaling across multiple shards in a stream.
-#   ## This will cause issues with ordering.
-#   use_random_partitionkey = false
-#
 #
 #   ## Data format to output.
-#   ## Each data format has its own unique set of configuration options, read
+#   ## Each data format has it's own unique set of configuration options, read
 #   ## more about them here:
 #   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
 #   data_format = "influx"
@@ -430,7 +369,7 @@
 #   # insecure_skip_verify = false
 #
 #   ## Data format to output.
-#   ## Each data format has its own unique set of configuration options, read
+#   ## Each data format has it's own unique set of configuration options, read
 #   ## more about them here:
 #   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
 #   data_format = "influx"
@@ -454,7 +393,7 @@
 #   # insecure_skip_verify = false
 #
 #   ## Data format to output.
-#   ## Each data format has its own unique set of configuration options, read
+#   ## Each data format has it's own unique set of configuration options, read
 #   ## more about them here:
 #   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
 #   data_format = "influx"
@@ -468,7 +407,7 @@
 #   topic = "telegraf"
 #
 #   ## Data format to output.
-#   ## Each data format has its own unique set of configuration options, read
+#   ## Each data format has it's own unique set of configuration options, read
 #   ## more about them here:
 #   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
 #   data_format = "influx"
@@ -504,7 +443,7 @@
 #   # expiration_interval = "60s"

-# # Configuration for the Riemann server to send metrics to
+# # Configuration for Riemann server to send metrics to
 # [[outputs.riemann]]
 #   ## The full TCP or UDP URL of the Riemann server
 #   url = "tcp://localhost:5555"
@@ -533,12 +472,9 @@
 #
 #   ## Description for Riemann event
 #   # description_text = "metrics collected from telegraf"
-#
-#   ## Riemann client write timeout, defaults to "5s" if not set.
-#   # timeout = "5s"

-# # Configuration for the Riemann server to send metrics to
+# # Configuration for the legacy Riemann plugin
 # [[outputs.riemann_legacy]]
 #   ## URL of server
 #   url = "localhost:5555"
@@ -548,33 +484,6 @@
 #   separator = " "

-# # Generic socket writer capable of handling multiple socket types.
-# [[outputs.socket_writer]]
-#   ## URL to connect to
-#   # address = "tcp://127.0.0.1:8094"
-#   # address = "tcp://example.com:http"
-#   # address = "tcp4://127.0.0.1:8094"
-#   # address = "tcp6://127.0.0.1:8094"
-#   # address = "tcp6://[2001:db8::1]:8094"
-#   # address = "udp://127.0.0.1:8094"
-#   # address = "udp4://127.0.0.1:8094"
-#   # address = "udp6://127.0.0.1:8094"
-#   # address = "unix:///tmp/telegraf.sock"
-#   # address = "unixgram:///tmp/telegraf.sock"
-#
-#   ## Period between keep alive probes.
-#   ## Only applies to TCP sockets.
-#   ## 0 disables keep alive probes.
-#   ## Defaults to the OS configuration.
-#   # keep_alive_period = "5m"
-#
-#   ## Data format to generate.
-#   ## Each data format has its own unique set of configuration options, read
-#   ## more about them here:
-#   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
-#   # data_format = "influx"

 ###############################################################################
 #                            PROCESSOR PLUGINS                                #
@@ -622,7 +531,7 @@
   ## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
   ## present on /run, /var/run, /dev/shm or /dev).
-  ignore_fs = ["tmpfs", "devtmpfs", "devfs"]
+  ignore_fs = ["tmpfs", "devtmpfs"]

 # Read metrics about disk IO by device
@@ -633,23 +542,6 @@
   # devices = ["sda", "sdb"]
   ## Uncomment the following line if you need disk serial numbers.
   # skip_serial_number = false
-  #
-  ## On systems which support it, device metadata can be added in the form of
-  ## tags.
-  ## Currently only Linux is supported via udev properties. You can view
-  ## available properties for a device by running:
-  ## 'udevadm info -q property -n /dev/sda'
-  # device_tags = ["ID_FS_TYPE", "ID_FS_USAGE"]
-  #
-  ## Using the same metadata source as device_tags, you can also customize the
-  ## name of the device via templates.
-  ## The 'name_templates' parameter is a list of templates to try and apply to
-  ## the device. The template may contain variables in the form of '$PROPERTY' or
-  ## '${PROPERTY}'. The first template which does not contain any variables not
-  ## present for the device is used as the device name tag.
## The typical use case is for LVM volumes, to get the VG/LV name instead of
## the near-meaningless DM-0 name.
# name_templates = ["$ID_FS_LABEL","$DM_VG_NAME/$DM_LV_NAME"]
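The `name_templates` resolution described above can be sketched as: expand `$PROPERTY`/`${PROPERTY}` variables from a device's udev properties and pick the first template whose variables are all present. This is an illustration of the documented behaviour, not telegraf's actual implementation; the property values are made up.

```go
package main

import (
	"fmt"
	"os"
)

// resolveName expands '$PROPERTY' / '${PROPERTY}' variables in each template
// from the device's properties and returns the first template for which
// every referenced property exists.
func resolveName(templates []string, props map[string]string) (string, bool) {
	for _, tmpl := range templates {
		ok := true
		name := os.Expand(tmpl, func(key string) string {
			v, found := props[key]
			if !found {
				ok = false
			}
			return v
		})
		if ok {
			return name, true
		}
	}
	return "", false
}

func main() {
	// Hypothetical udev properties for an LVM volume.
	props := map[string]string{"DM_VG_NAME": "rootvg", "DM_LV_NAME": "home"}
	name, _ := resolveName([]string{"$ID_FS_LABEL", "$DM_VG_NAME/$DM_LV_NAME"}, props)
	fmt.Println(name) // the VG/LV name instead of the DM-0 name
}
```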
# Get kernel statistics from /proc/stat # Get kernel statistics from /proc/stat
@@ -766,7 +658,7 @@
# gather_admin_socket_stats = true # gather_admin_socket_stats = true
# #
# ## Whether to gather statistics via ceph commands # ## Whether to gather statistics via ceph commands
# gather_cluster_stats = false # gather_cluster_stats = true
# # Read specific statistics per cgroup # # Read specific statistics per cgroup
@@ -785,12 +677,6 @@
# # files = ["memory.*usage*", "memory.limit_in_bytes"] # # files = ["memory.*usage*", "memory.limit_in_bytes"]
# # Get standard chrony metrics, requires chronyc executable.
# [[inputs.chrony]]
# ## If true, chronyc tries to perform a DNS lookup for the time server.
# # dns_lookup = false
# # Pull Metric Statistics from Amazon CloudWatch # # Pull Metric Statistics from Amazon CloudWatch
# [[inputs.cloudwatch]] # [[inputs.cloudwatch]]
# ## Amazon Region # ## Amazon Region
@@ -836,10 +722,9 @@
# namespace = "AWS/ELB" # namespace = "AWS/ELB"
# #
# ## Maximum requests per second. Note that the global default AWS rate limit is # ## Maximum requests per second. Note that the global default AWS rate limit is
# ## 400 reqs/sec, so if you define multiple namespaces, these should add up to a # ## 10 reqs/sec, so if you define multiple namespaces, these should add up to a
# ## maximum of 400. Optional - default value is 200. # ## maximum of 10. Optional - default value is 10.
# ## See http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html # ratelimit = 10
# ratelimit = 200
# #
# ## Metrics to Pull (optional) # ## Metrics to Pull (optional)
# ## Defaults to all Metrics in Namespace if nothing is provided # ## Defaults to all Metrics in Namespace if nothing is provided
@@ -853,22 +738,6 @@
# # value = "p-example" # # value = "p-example"
# # Collects conntrack stats from the configured directories and files.
# [[inputs.conntrack]]
# ## The following defaults would work with multiple versions of conntrack.
# ## Note the nf_ and ip_ filename prefixes are mutually exclusive across
# ## kernel versions, as are the directory locations.
#
# ## Superset of filenames to look for within the conntrack dirs.
# ## Missing files will be ignored.
# files = ["ip_conntrack_count","ip_conntrack_max",
# "nf_conntrack_count","nf_conntrack_max"]
#
# ## Directories to search within for the conntrack files above.
# ## Missing directories will be ignored.
# dirs = ["/proc/sys/net/ipv4/netfilter","/proc/sys/net/netfilter"]
# # Gather health check statuses from services registered in Consul # # Gather health check statuses from services registered in Consul
# [[inputs.consul]] # [[inputs.consul]]
# ## Most of these values defaults to the one configured on a Consul's agent level. # ## Most of these values defaults to the one configured on a Consul's agent level.
@@ -916,12 +785,6 @@
# servers = ["localhost"] # servers = ["localhost"]
# # Provide a native collection for dmsetup based statistics for dm-cache
# [[inputs.dmcache]]
# ## Whether to report per-device stats or not
# per_device = true
# # Query given DNS server and gives statistics # # Query given DNS server and gives statistics
# [[inputs.dns_query]] # [[inputs.dns_query]]
# ## servers to query # ## servers to query
@@ -958,10 +821,6 @@
# ## Whether to report for each container total blkio and network stats or not # ## Whether to report for each container total blkio and network stats or not
# total = false # total = false
# #
# ## docker labels to include and exclude as tags. Globs accepted.
# ## Note that an empty array for both will include all labels as tags
# docker_label_include = []
# docker_label_exclude = []
# # Read statistics from one or many dovecot servers # # Read statistics from one or many dovecot servers
@@ -1025,7 +884,7 @@
# name_suffix = "_mycollector" # name_suffix = "_mycollector"
# #
# ## Data format to consume. # ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read # ## Each data format has it's own unique set of configuration options, read
# ## more about them here: # ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md # ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx" # data_format = "influx"
@@ -1090,39 +949,14 @@
# ## with optional port. ie localhost, 10.10.3.33:1936, etc. # ## with optional port. ie localhost, 10.10.3.33:1936, etc.
# ## Make sure you specify the complete path to the stats endpoint # ## Make sure you specify the complete path to the stats endpoint
# ## including the protocol, ie http://10.10.3.33:1936/haproxy?stats # ## including the protocol, ie http://10.10.3.33:1936/haproxy?stats
# # #
# ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats # ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
# servers = ["http://myhaproxy.com:1936/haproxy?stats"] # servers = ["http://myhaproxy.com:1936/haproxy?stats"]
# # ##
# ## You can also use local socket with standard wildcard globbing. # ## You can also use local socket with standard wildcard globbing.
# ## Server address not starting with 'http' will be treated as a possible # ## Server address not starting with 'http' will be treated as a possible
# ## socket, so both examples below are valid. # ## socket, so both examples below are valid.
# # servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"] # ## servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"]
#
# ## By default, some of the fields are renamed from what haproxy calls them.
# ## Setting this option to true results in the plugin keeping the original
# ## field names.
# # keep_field_names = true
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
# # Monitor disks' temperatures using hddtemp
# [[inputs.hddtemp]]
# ## By default, telegraf gathers temps data from all disks detected by the
# ## hddtemp.
# ##
# ## Only collect temps from the selected disks.
# ##
# ## A * as the device name will return the temperature values of all disks.
# ##
# # address = "127.0.0.1:7634"
# # devices = ["sda", "*"]
# # HTTP/HTTPS request given an address a method and a timeout # # HTTP/HTTPS request given an address a method and a timeout
@@ -1143,11 +977,6 @@
# # {'fake':'data'} # # {'fake':'data'}
# # ''' # # '''
# #
# ## Optional substring or regex match in body of the response
# ## response_string_match = "\"service_status\": \"up\""
# ## response_string_match = "ok"
# ## response_string_match = "\".*_status\".?:.?\"up\""
#
# ## Optional SSL Config # ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem" # # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem" # # ssl_cert = "/etc/telegraf/cert.pem"
@@ -1161,10 +990,7 @@
# ## NOTE This plugin only reads numerical measurements, strings and booleans # ## NOTE This plugin only reads numerical measurements, strings and booleans
# ## will be ignored. # ## will be ignored.
# #
# ## Name for the service being polled. Will be appended to the name of the # ## a name for the service being polled
# ## measurement e.g. httpjson_webserver_stats
# ##
# ## Deprecated (1.3.0): Use name_override, name_suffix, name_prefix instead.
# name = "webserver_stats" # name = "webserver_stats"
# #
# ## URL of each server in the service's cluster # ## URL of each server in the service's cluster
@@ -1184,14 +1010,12 @@
# # "my_tag_2" # # "my_tag_2"
# # ] # # ]
# #
# ## HTTP parameters (all values must be strings). For "GET" requests, data # ## HTTP parameters (all values must be strings)
# ## will be included in the query. For "POST" requests, data will be included # [inputs.httpjson.parameters]
# ## in the request body as "x-www-form-urlencoded". # event_type = "cpu_spike"
# # [inputs.httpjson.parameters] # threshold = "0.75"
# # event_type = "cpu_spike"
# # threshold = "0.75"
# #
# ## HTTP Headers (all values must be strings) # ## HTTP Header parameters (all values must be strings)
# # [inputs.httpjson.headers] # # [inputs.httpjson.headers]
# # X-Auth-Token = "my-xauth-token" # # X-Auth-Token = "my-xauth-token"
# # apiVersion = "v1" # # apiVersion = "v1"
@@ -1226,44 +1050,14 @@
# # collect_memstats = true # # collect_memstats = true
# # This plugin gathers interrupts data from /proc/interrupts and /proc/softirqs. # # Read metrics from one or many bare metal servers
# [[inputs.interrupts]]
# ## To filter which IRQs to collect, make use of tagpass / tagdrop, i.e.
# # [inputs.interrupts.tagdrop]
# # irq = [ "NET_RX", "TASKLET" ]
# # Read metrics from the bare metal servers via IPMI
# [[inputs.ipmi_sensor]] # [[inputs.ipmi_sensor]]
# ## optionally specify the path to the ipmitool executable # ## specify servers via a url matching:
# # path = "/usr/bin/ipmitool"
# #
# ## optionally specify one or more servers via a url matching
# ## [username[:password]@][protocol[(address)]] # ## [username[:password]@][protocol[(address)]]
# ## e.g. # ## e.g.
# ## root:passwd@lan(127.0.0.1) # ## root:passwd@lan(127.0.0.1)
# ## # ##
# ## if no servers are specified, local machine sensor stats will be queried # servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
# ##
# # servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
# # Gather packets and bytes throughput from iptables
# [[inputs.iptables]]
# ## iptables require root access on most systems.
# ## Setting 'use_sudo' to true will make use of sudo to run iptables.
# ## Users must configure sudo to allow telegraf user to run iptables with no password.
# ## iptables can be restricted to only list command "iptables -nvL".
# use_sudo = false
# ## Setting 'use_lock' to true runs iptables with the "-w" option.
# ## Adjust your sudo settings appropriately if using this option ("iptables -wnvl")
# use_lock = false
# ## defines the table to monitor:
# table = "filter"
# ## defines the chains to monitor.
# ## NOTE: iptables rules without a comment will not be monitored.
# ## Read the plugin documentation for more information.
# chains = [ "INPUT" ]
# # Read JMX metrics through Jolokia # # Read JMX metrics through Jolokia
@@ -1293,13 +1087,6 @@
# ## Includes connection time, any redirects, and reading the response body. # ## Includes connection time, any redirects, and reading the response body.
# # client_timeout = "4s" # # client_timeout = "4s"
# #
# ## Attribute delimiter
# ##
# ## When multiple attributes are returned for a single
# ## [inputs.jolokia.metrics], the field name is a concatenation of the metric
# ## name, and the attribute name, separated by the given delimiter.
# # delimiter = "_"
#
# ## List of servers exposing jolokia read service # ## List of servers exposing jolokia read service
# [[inputs.jolokia.servers]] # [[inputs.jolokia.servers]]
# name = "as-server-01" # name = "as-server-01"
@@ -1330,23 +1117,6 @@
# attribute = "LoadedClassCount,UnloadedClassCount,TotalLoadedClassCount" # attribute = "LoadedClassCount,UnloadedClassCount,TotalLoadedClassCount"
# # Read Kapacitor-formatted JSON metrics from one or more HTTP endpoints
# [[inputs.kapacitor]]
# ## Multiple URLs from which to read Kapacitor-formatted JSON
# ## Default is "http://localhost:9092/kapacitor/v1/debug/vars".
# urls = [
# "http://localhost:9092/kapacitor/v1/debug/vars"
# ]
#
# ## Time limit for http requests
# timeout = "5s"
# # Get kernel statistics from /proc/vmstat
# [[inputs.kernel_vmstat]]
# # no configuration
# # Read metrics from the kubernetes kubelet api # # Read metrics from the kubernetes kubelet api
# [[inputs.kubernetes]] # [[inputs.kubernetes]]
# ## URL for the kubelet # ## URL for the kubelet
@@ -1370,11 +1140,6 @@
# servers = ["127.0.0.1:4021"] # servers = ["127.0.0.1:4021"]
# # Provides Linux sysctl fs metrics
# [[inputs.linux_sysctl_fs]]
# # no configuration
# # Read metrics from local Lustre service on OST, MDS # # Read metrics from local Lustre service on OST, MDS
# [[inputs.lustre2]] # [[inputs.lustre2]]
# ## An array of /proc globs to search for Lustre stats # ## An array of /proc globs to search for Lustre stats
@@ -1451,13 +1216,6 @@
# ## 10.0.0.1:10000, etc. # ## 10.0.0.1:10000, etc.
# servers = ["127.0.0.1:27017"] # servers = ["127.0.0.1:27017"]
# gather_perdb_stats = false # gather_perdb_stats = false
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
# # Read metrics from one or many mysql servers # # Read metrics from one or many mysql servers
@@ -1485,15 +1243,9 @@
# ## gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST # ## gather thread state counts from INFORMATION_SCHEMA.PROCESSLIST
# gather_process_list = true # gather_process_list = true
# # # #
# ## gather thread state counts from INFORMATION_SCHEMA.USER_STATISTICS
# gather_user_statistics = true
# #
# ## gather auto_increment columns and max values from information schema # ## gather auto_increment columns and max values from information schema
# gather_info_schema_auto_inc = true # gather_info_schema_auto_inc = true
# # # #
# ## gather metrics from INFORMATION_SCHEMA.INNODB_METRICS
# gather_innodb_metrics = true
# #
# ## gather metrics from SHOW SLAVE STATUS command output # ## gather metrics from SHOW SLAVE STATUS command output
# gather_slave_status = true # gather_slave_status = true
# # # #
@@ -1631,7 +1383,7 @@
# ## NOTE: this plugin forks the ping command. You may need to set capabilities # ## NOTE: this plugin forks the ping command. You may need to set capabilities
# ## via setcap cap_net_raw+p /bin/ping # ## via setcap cap_net_raw+p /bin/ping
# # # #
# ## List of urls to ping # ## urls to ping
# urls = ["www.google.com"] # required # urls = ["www.google.com"] # required
# ## number of pings to send per collection (ping -c <COUNT>) # ## number of pings to send per collection (ping -c <COUNT>)
# # count = 1 # # count = 1
@@ -1665,7 +1417,7 @@
# # ignored_databases = ["postgres", "template0", "template1"] # # ignored_databases = ["postgres", "template0", "template1"]
# #
# ## A list of databases to pull metrics about. If not specified, metrics for all # ## A list of databases to pull metrics about. If not specified, metrics for all
# ## databases are gathered. Do NOT use with the 'ignored_databases' option. # ## databases are gathered. Do NOT use with the 'ignore_databases' option.
# # databases = ["app_production", "testing"] # # databases = ["app_production", "testing"]
@@ -1847,13 +1599,6 @@
# servers = ["http://localhost:8098"] # servers = ["http://localhost:8098"]
# # Monitor sensors, requires lm-sensors package
# [[inputs.sensors]]
# ## Remove numbers from field names.
# ## If true, a field name like 'temp1_input' will be changed to 'temp_input'.
# # remove_numbers = true
# # Retrieves SNMP values from remote agents # # Retrieves SNMP values from remote agents
# [[inputs.snmp]] # [[inputs.snmp]]
# agents = [ "127.0.0.1:161" ] # agents = [ "127.0.0.1:161" ]
@@ -2030,68 +1775,6 @@
# # ] # # ]
# # Sysstat metrics collector
# [[inputs.sysstat]]
# ## Path to the sadc command.
# #
# ## Common Defaults:
# ## Debian/Ubuntu: /usr/lib/sysstat/sadc
# ## Arch: /usr/lib/sa/sadc
# ## RHEL/CentOS: /usr/lib64/sa/sadc
# sadc_path = "/usr/lib/sa/sadc" # required
# #
# #
# ## Path to the sadf command, if it is not in PATH
# # sadf_path = "/usr/bin/sadf"
# #
# #
# ## Activities is a list of activities, that are passed as argument to the
# ## sadc collector utility (e.g. DISK, SNMP, etc.)
# ## The more activities that are added, the more data is collected.
# # activities = ["DISK"]
# #
# #
# ## Group metrics to measurements.
# ##
# ## If group is false each metric will be prefixed with a description
# ## and represents itself a measurement.
# ##
# ## If Group is true, corresponding metrics are grouped to a single measurement.
# # group = true
# #
# #
# ## Options for the sadf command. The values on the left represent the sadf
# ## options and the values on the right their description (which are used for
# ## grouping and prefixing metrics).
# ##
# ## Run 'sar -h' or 'man sar' to find out the supported options for your
# ## sysstat version.
# [inputs.sysstat.options]
# -C = "cpu"
# -B = "paging"
# -b = "io"
# -d = "disk" # requires DISK activity
# "-n ALL" = "network"
# "-P ALL" = "per_cpu"
# -q = "queue"
# -R = "mem"
# -r = "mem_util"
# -S = "swap_util"
# -u = "cpu_util"
# -v = "inode"
# -W = "swap"
# -w = "task"
# # -H = "hugepages" # only available for newer linux distributions
# # "-I ALL" = "interrupts" # requires INT activity
# #
# #
# ## Device tags can be used to add additional tags for devices.
# ## For example the configuration below adds a tag vg with value rootvg for
# ## all metrics with sda devices.
# # [[inputs.sysstat.device_tags.sda]]
# # vg = "rootvg"
# # Inserts sine and cosine waves for demonstration purposes # # Inserts sine and cosine waves for demonstration purposes
# [[inputs.trig]] # [[inputs.trig]]
# ## Set the amplitude # ## Set the amplitude
@@ -2147,39 +1830,6 @@
# SERVICE INPUT PLUGINS # # SERVICE INPUT PLUGINS #
############################################################################### ###############################################################################
# # AMQP consumer plugin
# [[inputs.amqp_consumer]]
# ## AMQP url
# url = "amqp://localhost:5672/influxdb"
# ## AMQP exchange
# exchange = "telegraf"
# ## AMQP queue name
# queue = "telegraf"
# ## Binding Key
# binding_key = "#"
#
# ## Maximum number of messages server should give to the worker.
# prefetch_count = 50
#
# ## Auth method. PLAIN and EXTERNAL are supported
# ## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
# ## described here: https://www.rabbitmq.com/plugins.html
# # auth_method = "PLAIN"
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Data format to output.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
# data_format = "influx"
# # Influx HTTP write listener # # Influx HTTP write listener
# [[inputs.http_listener]] # [[inputs.http_listener]]
# ## Address and port to host HTTP listener on # ## Address and port to host HTTP listener on
@@ -2213,14 +1863,10 @@
# offset = "oldest" # offset = "oldest"
# #
# ## Data format to consume. # ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read # ## Each data format has it's own unique set of configuration options, read
# ## more about them here: # ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md # ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx" # data_format = "influx"
#
# ## Maximum length of a message to consume, in bytes (default 0/unlimited);
# ## larger messages are dropped
# max_message_len = 65536
# # Stream and parse log file(s). # # Stream and parse log file(s).
@@ -2232,9 +1878,7 @@
# ## /var/log/*/*.log -> find all .log files with a parent dir in /var/log # ## /var/log/*/*.log -> find all .log files with a parent dir in /var/log
# ## /var/log/apache.log -> only tail the apache log file # ## /var/log/apache.log -> only tail the apache log file
# files = ["/var/log/apache/access.log"] # files = ["/var/log/apache/access.log"]
# ## Read files that currently exist from the beginning. Files that are created # ## Read file from beginning.
# ## while telegraf is running (and that match the "files" globs) will always
# ## be read from the beginning.
# from_beginning = false # from_beginning = false
# #
# ## Parse logstash-style "grok" patterns: # ## Parse logstash-style "grok" patterns:
@@ -2288,7 +1932,7 @@
# # insecure_skip_verify = false # # insecure_skip_verify = false
# #
# ## Data format to consume. # ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read # ## Each data format has it's own unique set of configuration options, read
# ## more about them here: # ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md # ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx" # data_format = "influx"
@@ -2311,7 +1955,7 @@
# # pending_bytes_limit = 67108864 # # pending_bytes_limit = 67108864
# #
# ## Data format to consume. # ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read # ## Each data format has it's own unique set of configuration options, read
# ## more about them here: # ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md # ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx" # data_format = "influx"
@@ -2326,50 +1970,12 @@
# max_in_flight = 100 # max_in_flight = 100
# #
# ## Data format to consume. # ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read # ## Each data format has it's own unique set of configuration options, read
# ## more about them here: # ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md # ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx" # data_format = "influx"
# # Generic socket listener capable of handling multiple socket types.
# [[inputs.socket_listener]]
# ## URL to listen on
# # service_address = "tcp://:8094"
# # service_address = "tcp://127.0.0.1:http"
# # service_address = "tcp4://:8094"
# # service_address = "tcp6://:8094"
# # service_address = "tcp6://[2001:db8::1]:8094"
# # service_address = "udp://:8094"
# # service_address = "udp4://:8094"
# # service_address = "udp6://:8094"
# # service_address = "unix:///tmp/telegraf.sock"
# # service_address = "unixgram:///tmp/telegraf.sock"
#
# ## Maximum number of concurrent connections.
# ## Only applies to stream sockets (e.g. TCP).
# ## 0 (default) is unlimited.
# # max_connections = 1024
#
# ## Maximum socket buffer size in bytes.
# ## For stream sockets, once the buffer fills up, the sender will start backing up.
# ## For datagram sockets, once the buffer fills up, metrics will start dropping.
# ## Defaults to the OS default.
# # read_buffer_size = 65535
#
# ## Period between keep alive probes.
# ## Only applies to TCP sockets.
# ## 0 disables keep alive probes.
# ## Defaults to the OS configuration.
# # keep_alive_period = "5m"
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# # data_format = "influx"
# # Statsd Server # # Statsd Server
# [[inputs.statsd]] # [[inputs.statsd]]
# ## Address and port to host UDP listener on # ## Address and port to host UDP listener on
@@ -2431,7 +2037,7 @@
# pipe = false # pipe = false
# #
# ## Data format to consume. # ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read # ## Each data format has it's own unique set of configuration options, read
# ## more about them here: # ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md # ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx" # data_format = "influx"
@@ -2439,16 +2045,41 @@
# # Generic TCP listener # # Generic TCP listener
# [[inputs.tcp_listener]] # [[inputs.tcp_listener]]
# # DEPRECATED: the TCP listener plugin has been deprecated in favor of the # ## Address and port to host TCP listener on
# # socket_listener plugin # # service_address = ":8094"
# # see https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener #
# ## Number of TCP messages allowed to queue up. Once filled, the
# ## TCP listener will start dropping packets.
# # allowed_pending_messages = 10000
#
# ## Maximum number of concurrent TCP connections to allow
# # max_tcp_connections = 250
#
# ## Data format to consume.
# ## Each data format has it's own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
# # Generic UDP listener # # Generic UDP listener
# [[inputs.udp_listener]] # [[inputs.udp_listener]]
# # DEPRECATED: the TCP listener plugin has been deprecated in favor of the # ## Address and port to host UDP listener on
# # socket_listener plugin # # service_address = ":8092"
# # see https://github.com/influxdata/telegraf/tree/master/plugins/inputs/socket_listener #
# ## Number of UDP messages allowed to queue up. Once filled, the
# ## UDP listener will start dropping packets.
# # allowed_pending_messages = 10000
#
# ## Set the buffer size of the UDP connection outside of OS default (in bytes)
# ## If set to 0, take OS default
# udp_buffer_size = 16777216
#
# ## Data format to consume.
# ## Each data format has it's own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
# data_format = "influx"
# # A Webhooks Event collector # # A Webhooks Event collector
@@ -2461,14 +2092,10 @@
# #
# [inputs.webhooks.github] # [inputs.webhooks.github]
# path = "/github" # path = "/github"
# # secret = ""
# #
# [inputs.webhooks.mandrill] # [inputs.webhooks.mandrill]
# path = "/mandrill" # path = "/mandrill"
# #
# [inputs.webhooks.rollbar] # [inputs.webhooks.rollbar]
# path = "/rollbar" # path = "/rollbar"
#
# [inputs.webhooks.papertrail]
# path = "/papertrail"


@@ -117,8 +117,7 @@
Instances = ["*"] Instances = ["*"]
Counters = [ Counters = [
"% Idle Time", "% Idle Time",
"% Disk Time", "% Disk Time","% Disk Read Time",
"% Disk Read Time",
"% Disk Write Time", "% Disk Write Time",
"Current Disk Queue Length", "Current Disk Queue Length",
"% Free Space", "% Free Space",


@@ -45,11 +45,9 @@ func (b *Buffer) Add(metrics ...telegraf.Metric) {
select { select {
case b.buf <- metrics[i]: case b.buf <- metrics[i]:
default: default:
b.mu.Lock()
MetricsDropped.Incr(1) MetricsDropped.Incr(1)
<-b.buf <-b.buf
b.buf <- metrics[i] b.buf <- metrics[i]
b.mu.Unlock()
} }
} }
} }
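The mutex in the hunk above serializes the drop-oldest path so that two writers hitting a full buffer cannot interleave their receive/send pairs. A stripped-down sketch of the pattern, using `int` in place of `telegraf.Metric`:

```go
package main

import (
	"fmt"
	"sync"
)

// Buffer is a minimal bounded buffer that drops the oldest element when full.
type Buffer struct {
	mu  sync.Mutex
	buf chan int
}

func NewBuffer(size int) *Buffer {
	return &Buffer{buf: make(chan int, size)}
}

func (b *Buffer) Add(items ...int) {
	for _, it := range items {
		select {
		case b.buf <- it:
		default:
			// Buffer full: lock so the receive+send pair is atomic with
			// respect to other writers, then drop the oldest item.
			b.mu.Lock()
			<-b.buf
			b.buf <- it
			b.mu.Unlock()
		}
	}
}

func main() {
	b := NewBuffer(2)
	b.Add(1, 2, 3) // 1 is dropped
	fmt.Println(<-b.buf, <-b.buf)
}
```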


@@ -6,7 +6,6 @@ import (
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"log" "log"
"math"
"os" "os"
"path/filepath" "path/filepath"
"regexp" "regexp"
@@ -19,13 +18,14 @@ import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/models" "github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/aggregators"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/outputs"
"github.com/influxdata/telegraf/plugins/parsers" "github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/plugins/processors"
"github.com/influxdata/telegraf/plugins/serializers" "github.com/influxdata/telegraf/plugins/serializers"
"github.com/influxdata/telegraf/registry/aggregators"
"github.com/influxdata/telegraf/registry/inputs"
"github.com/influxdata/telegraf/registry/outputs"
"github.com/influxdata/telegraf/registry/processors"
"github.com/influxdata/config"
"github.com/influxdata/toml" "github.com/influxdata/toml"
"github.com/influxdata/toml/ast" "github.com/influxdata/toml/ast"
) )
@@ -40,6 +40,14 @@ var (
// envVarRe is a regex to find environment variables in the config file // envVarRe is a regex to find environment variables in the config file
envVarRe = regexp.MustCompile(`\$\w+`) envVarRe = regexp.MustCompile(`\$\w+`)
// addQuoteRe is a regex for finding and adding quotes around / characters
// when they are used for distinguishing external plugins.
// ie, a ReplaceAll() with this pattern will be used to turn this:
// [[inputs.external/test/example]]
// to
// [[inputs."external/test/example"]]
addQuoteRe = regexp.MustCompile(`(\[?\[?inputs|outputs|processors|aggregators)\.(external\/[^.\]]+)`)
) )
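The `addQuoteRe` pattern above is applied with a ReplaceAll to wrap the external plugin path in quotes so the TOML parser treats it as a single key. A minimal sketch of that transformation (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"regexp"
)

// addQuoteRe finds unquoted external plugin table names such as
// [[inputs.external/test/example]] so they can be quoted for the TOML parser.
var addQuoteRe = regexp.MustCompile(`(\[?\[?inputs|outputs|processors|aggregators)\.(external\/[^.\]]+)`)

// quoteExternalPlugins rewrites [[inputs.external/test/example]]
// into [[inputs."external/test/example"]].
func quoteExternalPlugins(contents string) string {
	return addQuoteRe.ReplaceAllString(contents, `${1}."${2}"`)
}

func main() {
	fmt.Println(quoteExternalPlugins(`[[inputs.external/test/example]]`))
	// [[inputs."external/test/example"]]
}
```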
// Config specifies the URL/user/password for the database that telegraf // Config specifies the URL/user/password for the database that telegraf
@@ -85,8 +93,8 @@ type AgentConfig struct {
// ie, if Interval=10s then always collect on :00, :10, :20, etc. // ie, if Interval=10s then always collect on :00, :10, :20, etc.
RoundInterval bool RoundInterval bool
// By default or when set to "0s", precision will be set to the same // By default, precision will be set to the same timestamp order as the
// timestamp order as the collection interval, with the maximum being 1s. // collection interval, with the maximum being 1s.
// ie, when interval = "10s", precision will be "1s" // ie, when interval = "10s", precision will be "1s"
// when interval = "250ms", precision will be "1ms" // when interval = "250ms", precision will be "1ms"
// Precision will NOT be used for service inputs. It is up to each individual // Precision will NOT be used for service inputs. It is up to each individual
@@ -230,13 +238,10 @@ var header = `# Telegraf Configuration
## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
flush_jitter = "0s" flush_jitter = "0s"
## By default or when set to "0s", precision will be set to the same ## By default, precision will be set to the same timestamp order as the
## timestamp order as the collection interval, with the maximum being 1s. ## collection interval, with the maximum being 1s.
## ie, when interval = "10s", precision will be "1s" ## Precision will NOT be used for service inputs, such as logparser and statsd.
## when interval = "250ms", precision will be "1ms" ## Valid values are "ns", "us" (or "µs"), "ms", "s".
## Precision will NOT be used for service inputs. It is up to each individual
## service input to set the timestamp at the appropriate precision.
## Valid time units are "ns", "us" (or "µs"), "ms", "s".
precision = "" precision = ""
## Logging configuration: ## Logging configuration:
@@ -509,10 +514,6 @@ func PrintOutputConfig(name string) error {
func (c *Config) LoadDirectory(path string) error { func (c *Config) LoadDirectory(path string) error {
walkfn := func(thispath string, info os.FileInfo, _ error) error { walkfn := func(thispath string, info os.FileInfo, _ error) error {
if info == nil {
log.Printf("W! Telegraf is not permitted to read %s", thispath)
return nil
}
if info.IsDir() { if info.IsDir() {
return nil return nil
} }
@@ -573,7 +574,7 @@ func (c *Config) LoadConfig(path string) error {
if !ok { if !ok {
return fmt.Errorf("%s: invalid configuration", path) return fmt.Errorf("%s: invalid configuration", path)
} }
if err = toml.UnmarshalTable(subTable, c.Tags); err != nil { if err = config.UnmarshalTable(subTable, c.Tags); err != nil {
log.Printf("E! Could not parse [global_tags] config\n") log.Printf("E! Could not parse [global_tags] config\n")
return fmt.Errorf("Error parsing %s, %s", path, err) return fmt.Errorf("Error parsing %s, %s", path, err)
} }
@@ -586,7 +587,7 @@ func (c *Config) LoadConfig(path string) error {
if !ok { if !ok {
return fmt.Errorf("%s: invalid configuration", path) return fmt.Errorf("%s: invalid configuration", path)
} }
if err = toml.UnmarshalTable(subTable, c.Agent); err != nil { if err = config.UnmarshalTable(subTable, c.Agent); err != nil {
log.Printf("E! Could not parse [agent] config\n") log.Printf("E! Could not parse [agent] config\n")
return fmt.Errorf("Error parsing %s, %s", path, err) return fmt.Errorf("Error parsing %s, %s", path, err)
} }
@@ -708,6 +709,9 @@ func parseFile(fpath string) (*ast.Table, error) {
} }
} }
// add quotes around external plugin paths.
contents = addQuoteRe.ReplaceAll(contents, []byte(`$1."$2"`))
return toml.Parse(contents) return toml.Parse(contents)
} }
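As a standalone illustration of the quoting rewrite described in the comment above, this sketch applies the same `addQuoteRe` pattern and replacement outside of Telegraf (the snippet is self-contained; it only reproduces the regex, not the rest of `parseFile`):

```go
package main

import (
	"fmt"
	"regexp"
)

// Local copy of the addQuoteRe pattern from the config change above.
var addQuoteRe = regexp.MustCompile(`(\[?\[?inputs|outputs|processors|aggregators)\.(external\/[^.\]]+)`)

func main() {
	// An external-plugin table name in a TOML config file...
	in := []byte(`[[inputs.external/test/example]]`)
	// ...gets its path segment wrapped in quotes so the TOML parser
	// treats it as a single quoted key.
	out := addQuoteRe.ReplaceAll(in, []byte(`$1."$2"`))
	fmt.Println(string(out)) // [[inputs."external/test/example"]]
}
```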
@@ -723,7 +727,7 @@ func (c *Config) addAggregator(name string, table *ast.Table) error {
return err return err
} }
if err := toml.UnmarshalTable(table, aggregator); err != nil { if err := config.UnmarshalTable(table, aggregator); err != nil {
return err return err
} }
@@ -743,7 +747,7 @@ func (c *Config) addProcessor(name string, table *ast.Table) error {
return err return err
} }
if err := toml.UnmarshalTable(table, processor); err != nil { if err := config.UnmarshalTable(table, processor); err != nil {
return err return err
} }
@@ -783,7 +787,7 @@ func (c *Config) addOutput(name string, table *ast.Table) error {
return err return err
} }
if err := toml.UnmarshalTable(table, output); err != nil { if err := config.UnmarshalTable(table, output); err != nil {
return err return err
} }
@@ -824,7 +828,7 @@ func (c *Config) addInput(name string, table *ast.Table) error {
return err return err
} }
if err := toml.UnmarshalTable(table, input); err != nil { if err := config.UnmarshalTable(table, input); err != nil {
return err return err
} }
@@ -916,7 +920,7 @@ func buildAggregator(name string, tbl *ast.Table) (*models.AggregatorConfig, err
conf.Tags = make(map[string]string) conf.Tags = make(map[string]string)
if node, ok := tbl.Fields["tags"]; ok { if node, ok := tbl.Fields["tags"]; ok {
if subtbl, ok := node.(*ast.Table); ok { if subtbl, ok := node.(*ast.Table); ok {
if err := toml.UnmarshalTable(subtbl, conf.Tags); err != nil { if err := config.UnmarshalTable(subtbl, conf.Tags); err != nil {
log.Printf("Could not parse tags for input %s\n", name) log.Printf("Could not parse tags for input %s\n", name)
} }
} }
@@ -1153,7 +1157,7 @@ func buildInput(name string, tbl *ast.Table) (*models.InputConfig, error) {
cp.Tags = make(map[string]string) cp.Tags = make(map[string]string)
if node, ok := tbl.Fields["tags"]; ok { if node, ok := tbl.Fields["tags"]; ok {
if subtbl, ok := node.(*ast.Table); ok { if subtbl, ok := node.(*ast.Table); ok {
if err := toml.UnmarshalTable(subtbl, cp.Tags); err != nil { if err := config.UnmarshalTable(subtbl, cp.Tags); err != nil {
log.Printf("E! Could not parse tags for input %s\n", name) log.Printf("E! Could not parse tags for input %s\n", name)
} }
} }
@@ -1233,34 +1237,6 @@ func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
} }
} }
if node, ok := tbl.Fields["collectd_auth_file"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.CollectdAuthFile = str.Value
}
}
}
if node, ok := tbl.Fields["collectd_security_level"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
c.CollectdSecurityLevel = str.Value
}
}
}
if node, ok := tbl.Fields["collectd_typesdb"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if ary, ok := kv.Value.(*ast.Array); ok {
for _, elem := range ary.Value {
if str, ok := elem.(*ast.String); ok {
c.CollectdTypesDB = append(c.CollectdTypesDB, str.Value)
}
}
}
}
}
c.MetricName = name c.MetricName = name
delete(tbl.Fields, "data_format") delete(tbl.Fields, "data_format")
@@ -1268,9 +1244,6 @@ func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
delete(tbl.Fields, "templates") delete(tbl.Fields, "templates")
delete(tbl.Fields, "tag_keys") delete(tbl.Fields, "tag_keys")
delete(tbl.Fields, "data_type") delete(tbl.Fields, "data_type")
delete(tbl.Fields, "collectd_auth_file")
delete(tbl.Fields, "collectd_security_level")
delete(tbl.Fields, "collectd_typesdb")
return parsers.NewParser(c) return parsers.NewParser(c)
} }
@@ -1279,7 +1252,7 @@ func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
// a serializers.Serializer object, and creates it, which can then be added onto // a serializers.Serializer object, and creates it, which can then be added onto
// an Output object. // an Output object.
func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error) { func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error) {
c := &serializers.Config{TimestampUnits: time.Duration(1 * time.Second)} c := &serializers.Config{}
if node, ok := tbl.Fields["data_format"]; ok { if node, ok := tbl.Fields["data_format"]; ok {
if kv, ok := node.(*ast.KeyValue); ok { if kv, ok := node.(*ast.KeyValue); ok {
@@ -1309,26 +1282,9 @@ func buildSerializer(name string, tbl *ast.Table) (serializers.Serializer, error
} }
} }
if node, ok := tbl.Fields["json_timestamp_units"]; ok {
if kv, ok := node.(*ast.KeyValue); ok {
if str, ok := kv.Value.(*ast.String); ok {
timestampVal, err := time.ParseDuration(str.Value)
if err != nil {
return nil, fmt.Errorf("Unable to parse json_timestamp_units as a duration, %s", err)
}
// now that we have a duration, truncate it to the nearest
// power of ten (just in case)
nearest_exponent := int64(math.Log10(float64(timestampVal.Nanoseconds())))
new_nanoseconds := int64(math.Pow(10.0, float64(nearest_exponent)))
c.TimestampUnits = time.Duration(new_nanoseconds)
}
}
}
delete(tbl.Fields, "data_format") delete(tbl.Fields, "data_format")
delete(tbl.Fields, "prefix") delete(tbl.Fields, "prefix")
delete(tbl.Fields, "template") delete(tbl.Fields, "template")
delete(tbl.Fields, "json_timestamp_units")
return serializers.NewSerializer(c) return serializers.NewSerializer(c)
} }

View File

@@ -6,11 +6,11 @@ import (
"time" "time"
"github.com/influxdata/telegraf/internal/models" "github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/inputs/exec" "github.com/influxdata/telegraf/plugins/inputs/exec"
"github.com/influxdata/telegraf/plugins/inputs/memcached" "github.com/influxdata/telegraf/plugins/inputs/memcached"
"github.com/influxdata/telegraf/plugins/inputs/procstat" "github.com/influxdata/telegraf/plugins/inputs/procstat"
"github.com/influxdata/telegraf/plugins/parsers" "github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/registry/inputs"
"github.com/stretchr/testify/assert" "github.com/stretchr/testify/assert"
) )

View File

@@ -60,7 +60,7 @@
# Kafka topic for producer messages # Kafka topic for producer messages
topic = "telegraf" topic = "telegraf"
# Telegraf tag to use as a routing key # Telegraf tag to use as a routing key
# ie, if this tag exists, its value will be used as the routing key # ie, if this tag exists, it's value will be used as the routing key
routing_tag = "host" routing_tag = "host"

View File

@@ -0,0 +1,37 @@
package errchan
import (
"fmt"
"strings"
)
type ErrChan struct {
C chan error
}
// New returns an ErrChan whose channel C is buffered to hold up to 'n' errors.
// Errors sent to the ErrChan.C channel are collected and returned as a single
// error when ErrChan.Error() is called.
func New(n int) *ErrChan {
return &ErrChan{
C: make(chan error, n),
}
}
// Error closes the ErrChan.C channel and returns an error if there are any
// non-nil errors, otherwise returns nil.
func (e *ErrChan) Error() error {
close(e.C)
var out string
for err := range e.C {
if err != nil {
out += "[" + err.Error() + "], "
}
}
if out != "" {
return fmt.Errorf("Errors encountered: %s", strings.TrimRight(out, ", "))
}
return nil
}

View File

@@ -122,9 +122,9 @@ func (ro *RunningOutput) AddMetric(m telegraf.Metric) {
// Write writes all cached points to this output. // Write writes all cached points to this output.
func (ro *RunningOutput) Write() error { func (ro *RunningOutput) Write() error {
nFails, nMetrics := ro.failMetrics.Len(), ro.metrics.Len() nFails, nMetrics := ro.failMetrics.Len(), ro.metrics.Len()
ro.BufferSize.Set(int64(nFails + nMetrics))
log.Printf("D! Output [%s] buffer fullness: %d / %d metrics. ", log.Printf("D! Output [%s] buffer fullness: %d / %d metrics. ",
ro.Name, nFails+nMetrics, ro.MetricBufferLimit) ro.Name, nFails+nMetrics, ro.MetricBufferLimit)
ro.BufferSize.Incr(int64(nFails + nMetrics))
var err error var err error
if !ro.failMetrics.IsEmpty() { if !ro.failMetrics.IsEmpty() {
// how many batches of failed writes we need to write. // how many batches of failed writes we need to write.
@@ -176,6 +176,7 @@ func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
log.Printf("D! Output [%s] wrote batch of %d metrics in %s\n", log.Printf("D! Output [%s] wrote batch of %d metrics in %s\n",
ro.Name, nMetrics, elapsed) ro.Name, nMetrics, elapsed)
ro.MetricsWritten.Incr(int64(nMetrics)) ro.MetricsWritten.Incr(int64(nMetrics))
ro.BufferSize.Incr(-int64(nMetrics))
ro.WriteTime.Incr(elapsed.Nanoseconds()) ro.WriteTime.Incr(elapsed.Nanoseconds())
} }
return err return err

View File

@@ -4,14 +4,11 @@ import (
"io" "io"
"log" "log"
"os" "os"
"regexp"
"time" "time"
"github.com/influxdata/wlog" "github.com/influxdata/wlog"
) )
var prefixRegex = regexp.MustCompile("^[DIWE]!")
// newTelegrafWriter returns a logging-wrapped writer. // newTelegrafWriter returns a logging-wrapped writer.
func newTelegrafWriter(w io.Writer) io.Writer { func newTelegrafWriter(w io.Writer) io.Writer {
return &telegrafLog{ return &telegrafLog{
@@ -24,13 +21,7 @@ type telegrafLog struct {
} }
func (t *telegrafLog) Write(b []byte) (n int, err error) { func (t *telegrafLog) Write(b []byte) (n int, err error) {
var line []byte return t.writer.Write(append([]byte(time.Now().UTC().Format(time.RFC3339)+" "), b...))
if !prefixRegex.Match(b) {
line = append([]byte(time.Now().UTC().Format(time.RFC3339)+" I! "), b...)
} else {
line = append([]byte(time.Now().UTC().Format(time.RFC3339)+" "), b...)
}
return t.writer.Write(line)
} }
// SetupLogging configures the logging output. // SetupLogging configures the logging output.

View File

@@ -51,19 +51,6 @@ func TestErrorWriteLogToFile(t *testing.T) {
assert.Equal(t, f[19:], []byte("Z E! TEST\n")) assert.Equal(t, f[19:], []byte("Z E! TEST\n"))
} }
func TestAddDefaultLogLevel(t *testing.T) {
tmpfile, err := ioutil.TempFile("", "")
assert.NoError(t, err)
defer func() { os.Remove(tmpfile.Name()) }()
SetupLogging(true, false, tmpfile.Name())
log.Printf("TEST")
f, err := ioutil.ReadFile(tmpfile.Name())
assert.NoError(t, err)
assert.Equal(t, f[19:], []byte("Z I! TEST\n"))
}
func BenchmarkTelegrafLogWrite(b *testing.B) { func BenchmarkTelegrafLogWrite(b *testing.B) {
var msg = []byte("test") var msg = []byte("test")
var buf bytes.Buffer var buf bytes.Buffer

View File

@@ -44,18 +44,13 @@ func New(
// pre-allocate exact size of the tags slice // pre-allocate exact size of the tags slice
taglen := 0 taglen := 0
for k, v := range tags { for k, v := range tags {
if len(k) == 0 || len(v) == 0 { // TODO check that length of tag key & value are > 0
continue
}
taglen += 2 + len(escape(k, "tagkey")) + len(escape(v, "tagval")) taglen += 2 + len(escape(k, "tagkey")) + len(escape(v, "tagval"))
} }
m.tags = make([]byte, taglen) m.tags = make([]byte, taglen)
i := 0 i := 0
for k, v := range tags { for k, v := range tags {
if len(k) == 0 || len(v) == 0 {
continue
}
m.tags[i] = ',' m.tags[i] = ','
i++ i++
i += copy(m.tags[i:], escape(k, "tagkey")) i += copy(m.tags[i:], escape(k, "tagkey"))

View File

@@ -625,26 +625,3 @@ func TestNewMetricFailNaN(t *testing.T) {
_, err := New("cpu", tags, fields, now) _, err := New("cpu", tags, fields, now)
assert.NoError(t, err) assert.NoError(t, err)
} }
func TestEmptyTagValueOrKey(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
"emptytag": "",
"": "valuewithoutkey",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := New("cpu", tags, fields, now)
assert.True(t, m.HasTag("host"))
assert.False(t, m.HasTag("emptytag"))
assert.Equal(t,
fmt.Sprintf("cpu,host=localhost usage_idle=99 %d\n", now.UnixNano()),
m.String())
assert.NoError(t, err)
}

View File

@@ -4,7 +4,6 @@ import (
"bytes" "bytes"
"errors" "errors"
"fmt" "fmt"
"strconv"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
@@ -41,21 +40,10 @@ const (
) )
func Parse(buf []byte) ([]telegraf.Metric, error) { func Parse(buf []byte) ([]telegraf.Metric, error) {
return ParseWithDefaultTimePrecision(buf, time.Now(), "") return ParseWithDefaultTime(buf, time.Now())
} }
func ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) { func ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) {
return ParseWithDefaultTimePrecision(buf, t, "")
}
func ParseWithDefaultTimePrecision(
buf []byte,
t time.Time,
precision string,
) ([]telegraf.Metric, error) {
if len(buf) == 0 {
return []telegraf.Metric{}, nil
}
if len(buf) <= 6 { if len(buf) <= 6 {
return []telegraf.Metric{}, makeError("buffer too short", buf, 0) return []telegraf.Metric{}, makeError("buffer too short", buf, 0)
} }
@@ -72,7 +60,7 @@ func ParseWithDefaultTimePrecision(
continue continue
} }
m, err := parseMetric(buf[i:i+j], t, precision) m, err := parseMetric(buf[i:i+j], t)
if err != nil { if err != nil {
i += j + 1 // increment i past the previous newline i += j + 1 // increment i past the previous newline
errStr += " " + err.Error() errStr += " " + err.Error()
@@ -89,10 +77,7 @@ func ParseWithDefaultTimePrecision(
return metrics, nil return metrics, nil
} }
func parseMetric(buf []byte, func parseMetric(buf []byte, defaultTime time.Time) (telegraf.Metric, error) {
defaultTime time.Time,
precision string,
) (telegraf.Metric, error) {
var dTime string var dTime string
// scan the first block which is measurement[,tag1=value1,tag2=value=2...] // scan the first block which is measurement[,tag1=value1,tag2=value=2...]
pos, key, err := scanKey(buf, 0) pos, key, err := scanKey(buf, 0)
@@ -126,23 +111,9 @@ func parseMetric(buf []byte,
return nil, err return nil, err
} }
// apply precision multiplier
var nsec int64
multiplier := getPrecisionMultiplier(precision)
if multiplier > 1 {
tsint, err := parseIntBytes(ts, 10, 64)
if err != nil {
return nil, err
}
nsec := multiplier * tsint
ts = []byte(strconv.FormatInt(nsec, 10))
}
m := &metric{ m := &metric{
fields: fields, fields: fields,
t: ts, t: ts,
nsec: nsec,
} }
// parse out the measurement name // parse out the measurement name
@@ -654,21 +625,3 @@ func makeError(reason string, buf []byte, i int) error {
return fmt.Errorf("metric parsing error, reason: [%s], buffer: [%s], index: [%d]", return fmt.Errorf("metric parsing error, reason: [%s], buffer: [%s], index: [%d]",
reason, buf, i) reason, buf, i)
} }
// getPrecisionMultiplier will return a multiplier for the precision specified.
func getPrecisionMultiplier(precision string) int64 {
d := time.Nanosecond
switch precision {
case "u":
d = time.Microsecond
case "ms":
d = time.Millisecond
case "s":
d = time.Second
case "m":
d = time.Minute
case "h":
d = time.Hour
}
return int64(d)
}

View File

@@ -364,27 +364,6 @@ func TestParseNegativeTimestamps(t *testing.T) {
} }
} }
func TestParsePrecision(t *testing.T) {
for _, tt := range []struct {
line string
precision string
expected int64
}{
{"test v=42 1491847420", "s", 1491847420000000000},
{"test v=42 1491847420123", "ms", 1491847420123000000},
{"test v=42 1491847420123456", "u", 1491847420123456000},
{"test v=42 1491847420123456789", "ns", 1491847420123456789},
{"test v=42 1491847420123456789", "1s", 1491847420123456789},
{"test v=42 1491847420123456789", "asdf", 1491847420123456789},
} {
metrics, err := ParseWithDefaultTimePrecision(
[]byte(tt.line+"\n"), time.Now(), tt.precision)
assert.NoError(t, err, tt)
assert.Equal(t, tt.expected, metrics[0].UnixNano())
}
}
func TestParseMaxKeyLength(t *testing.T) { func TestParseMaxKeyLength(t *testing.T) {
key := "" key := ""
for { for {

View File

@@ -2,7 +2,7 @@ package minmax
import ( import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/aggregators" "github.com/influxdata/telegraf/registry/aggregators"
) )
type MinMax struct { type MinMax struct {

View File

@@ -1,11 +0,0 @@
package aggregators
import "github.com/influxdata/telegraf"
type Creator func() telegraf.Aggregator
var Aggregators = map[string]Creator{}
func Add(name string, creator Creator) {
Aggregators[name] = creator
}

View File

@@ -1,8 +1,6 @@
# Example Input Plugin # Example Input Plugin
The example plugin gathers metrics about example things. This description The example plugin gathers metrics about example things
explains at a high level what the plugin does and provides links to where
additional information can be found.
### Configuration: ### Configuration:
@@ -14,8 +12,7 @@ additional information can be found.
### Measurements & Fields: ### Measurements & Fields:
Here you should add an optional description and links to where the user can <optional description>
get more information about the measurements.
- measurement1 - measurement1
- field1 (type, unit) - field1 (type, unit)
@@ -33,11 +30,8 @@ get more information about the measurements.
### Sample Queries: ### Sample Queries:
This section should contain some useful InfluxDB queries that can be used to These are some useful queries (to generate dashboards or other) to run against data from this plugin:
get started with the plugin or to generate dashboards. For each query listed,
describe at a high level what data is returned.
Get the max, mean, and min for the measurement in the last hour:
``` ```
SELECT max(field1), mean(field1), min(field1) FROM measurement1 WHERE tag1=bar AND time > now() - 1h GROUP BY tag SELECT max(field1), mean(field1), min(field1) FROM measurement1 WHERE tag1=bar AND time > now() - 1h GROUP BY tag
``` ```
@@ -45,7 +39,7 @@ SELECT max(field1), mean(field1), min(field1) FROM measurement1 WHERE tag1=bar A
### Example Output: ### Example Output:
``` ```
$ telegraf -input-filter example -test $ ./telegraf -config telegraf.conf -input-filter example -test
measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455 measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455
measurement2,tag1=foo,tag2=bar,tag3=baz field3=1i 1453831884664956455 measurement2,tag1=foo,tag2=bar,tag3=baz field3=1i 1453831884664956455
``` ```

View File

@@ -10,7 +10,8 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/registry/inputs"
as "github.com/aerospike/aerospike-client-go" as "github.com/aerospike/aerospike-client-go"
) )
@@ -40,16 +41,17 @@ func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
} }
var wg sync.WaitGroup var wg sync.WaitGroup
errChan := errchan.New(len(a.Servers))
wg.Add(len(a.Servers)) wg.Add(len(a.Servers))
for _, server := range a.Servers { for _, server := range a.Servers {
go func(serv string) { go func(serv string) {
defer wg.Done() defer wg.Done()
acc.AddError(a.gatherServer(serv, acc)) errChan.C <- a.gatherServer(serv, acc)
}(server) }(server)
} }
wg.Wait() wg.Wait()
return nil return errChan.Error()
} }
func (a *Aerospike) gatherServer(hostport string, acc telegraf.Accumulator) error { func (a *Aerospike) gatherServer(hostport string, acc telegraf.Accumulator) error {

View File

@@ -19,7 +19,7 @@ func TestAerospikeStatistics(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(a.Gather) err := a.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
assert.True(t, acc.HasMeasurement("aerospike_node")) assert.True(t, acc.HasMeasurement("aerospike_node"))
@@ -41,7 +41,8 @@ func TestAerospikeStatisticsPartialErr(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
require.Error(t, acc.GatherError(a.Gather)) err := a.Gather(&acc)
require.Error(t, err)
assert.True(t, acc.HasMeasurement("aerospike_node")) assert.True(t, acc.HasMeasurement("aerospike_node"))
assert.True(t, acc.HasMeasurement("aerospike_namespace")) assert.True(t, acc.HasMeasurement("aerospike_namespace"))

View File

@@ -2,7 +2,6 @@ package all
import ( import (
_ "github.com/influxdata/telegraf/plugins/inputs/aerospike" _ "github.com/influxdata/telegraf/plugins/inputs/aerospike"
_ "github.com/influxdata/telegraf/plugins/inputs/amqp_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/apache" _ "github.com/influxdata/telegraf/plugins/inputs/apache"
_ "github.com/influxdata/telegraf/plugins/inputs/bcache" _ "github.com/influxdata/telegraf/plugins/inputs/bcache"
_ "github.com/influxdata/telegraf/plugins/inputs/cassandra" _ "github.com/influxdata/telegraf/plugins/inputs/cassandra"
@@ -15,7 +14,6 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/couchbase" _ "github.com/influxdata/telegraf/plugins/inputs/couchbase"
_ "github.com/influxdata/telegraf/plugins/inputs/couchdb" _ "github.com/influxdata/telegraf/plugins/inputs/couchdb"
_ "github.com/influxdata/telegraf/plugins/inputs/disque" _ "github.com/influxdata/telegraf/plugins/inputs/disque"
_ "github.com/influxdata/telegraf/plugins/inputs/dmcache"
_ "github.com/influxdata/telegraf/plugins/inputs/dns_query" _ "github.com/influxdata/telegraf/plugins/inputs/dns_query"
_ "github.com/influxdata/telegraf/plugins/inputs/docker" _ "github.com/influxdata/telegraf/plugins/inputs/docker"
_ "github.com/influxdata/telegraf/plugins/inputs/dovecot" _ "github.com/influxdata/telegraf/plugins/inputs/dovecot"
@@ -30,12 +28,10 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/httpjson" _ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
_ "github.com/influxdata/telegraf/plugins/inputs/influxdb" _ "github.com/influxdata/telegraf/plugins/inputs/influxdb"
_ "github.com/influxdata/telegraf/plugins/inputs/internal" _ "github.com/influxdata/telegraf/plugins/inputs/internal"
_ "github.com/influxdata/telegraf/plugins/inputs/interrupts"
_ "github.com/influxdata/telegraf/plugins/inputs/ipmi_sensor" _ "github.com/influxdata/telegraf/plugins/inputs/ipmi_sensor"
_ "github.com/influxdata/telegraf/plugins/inputs/iptables" _ "github.com/influxdata/telegraf/plugins/inputs/iptables"
_ "github.com/influxdata/telegraf/plugins/inputs/jolokia" _ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer" _ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
_ "github.com/influxdata/telegraf/plugins/inputs/kapacitor"
_ "github.com/influxdata/telegraf/plugins/inputs/kubernetes" _ "github.com/influxdata/telegraf/plugins/inputs/kubernetes"
_ "github.com/influxdata/telegraf/plugins/inputs/leofs" _ "github.com/influxdata/telegraf/plugins/inputs/leofs"
_ "github.com/influxdata/telegraf/plugins/inputs/logparser" _ "github.com/influxdata/telegraf/plugins/inputs/logparser"

View File

@@ -1,47 +0,0 @@
# AMQP Consumer Input Plugin
This plugin provides a consumer for use with AMQP 0-9-1, a prominent implementation of this protocol being [RabbitMQ](https://www.rabbitmq.com/).
Metrics are read from a topic exchange using the configured queue and binding_key.
Message payload should be formatted in one of the [Telegraf Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md).
For an introduction to AMQP see:
- https://www.rabbitmq.com/tutorials/amqp-concepts.html
- https://www.rabbitmq.com/getstarted.html
The following defaults are known to work with RabbitMQ:
```toml
# AMQP consumer plugin
[[inputs.amqp_consumer]]
## AMQP url
url = "amqp://localhost:5672/influxdb"
## AMQP exchange
exchange = "telegraf"
## AMQP queue name
queue = "telegraf"
## Binding Key
binding_key = "#"
## Controls how many messages the server will try to keep on the network
## for consumers before receiving delivery acks.
#prefetch_count = 50
## Auth method. PLAIN and EXTERNAL are supported.
## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
## described here: https://www.rabbitmq.com/plugins.html
# auth_method = "PLAIN"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
```

View File

@@ -1,280 +0,0 @@
package amqp_consumer
import (
"fmt"
"log"
"strings"
"sync"
"time"
"github.com/streadway/amqp"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
)
// AMQPConsumer is the top level struct for this plugin
type AMQPConsumer struct {
URL string
// AMQP exchange
Exchange string
// Queue Name
Queue string
// Binding Key
BindingKey string `toml:"binding_key"`
// Controls how many messages the server will try to keep on the network
// for consumers before receiving delivery acks.
PrefetchCount int
// AMQP Auth method
AuthMethod string
// Path to CA file
SSLCA string `toml:"ssl_ca"`
// Path to host cert file
SSLCert string `toml:"ssl_cert"`
// Path to cert key file
SSLKey string `toml:"ssl_key"`
// Use SSL but skip chain & host verification
InsecureSkipVerify bool
parser parsers.Parser
conn *amqp.Connection
wg *sync.WaitGroup
}
type externalAuth struct{}
func (a *externalAuth) Mechanism() string {
return "EXTERNAL"
}
func (a *externalAuth) Response() string {
return fmt.Sprintf("\000")
}
const (
DefaultAuthMethod = "PLAIN"
DefaultPrefetchCount = 50
)
func (a *AMQPConsumer) SampleConfig() string {
return `
## AMQP url
url = "amqp://localhost:5672/influxdb"
## AMQP exchange
exchange = "telegraf"
## AMQP queue name
queue = "telegraf"
## Binding Key
binding_key = "#"
## Maximum number of messages server should give to the worker.
prefetch_count = 50
## Auth method. PLAIN and EXTERNAL are supported
## Using EXTERNAL requires enabling the rabbitmq_auth_mechanism_ssl plugin as
## described here: https://www.rabbitmq.com/plugins.html
# auth_method = "PLAIN"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
## Data format to output.
## Each data format has its own unique set of configuration options, read
## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
data_format = "influx"
`
}
func (a *AMQPConsumer) Description() string {
return "AMQP consumer plugin"
}
func (a *AMQPConsumer) SetParser(parser parsers.Parser) {
a.parser = parser
}
// All gathering is done in the Start function
func (a *AMQPConsumer) Gather(_ telegraf.Accumulator) error {
return nil
}
func (a *AMQPConsumer) createConfig() (*amqp.Config, error) {
// make new tls config
tls, err := internal.GetTLSConfig(
a.SSLCert, a.SSLKey, a.SSLCA, a.InsecureSkipVerify)
if err != nil {
return nil, err
}
// parse auth method
var sasl []amqp.Authentication // nil by default
if strings.ToUpper(a.AuthMethod) == "EXTERNAL" {
sasl = []amqp.Authentication{&externalAuth{}}
}
config := amqp.Config{
TLSClientConfig: tls,
SASL: sasl, // if nil, it will be PLAIN
}
return &config, nil
}
// Start satisfies the telegraf.ServiceInput interface
func (a *AMQPConsumer) Start(acc telegraf.Accumulator) error {
amqpConf, err := a.createConfig()
if err != nil {
return err
}
msgs, err := a.connect(amqpConf)
if err != nil {
return err
}
a.wg = &sync.WaitGroup{}
a.wg.Add(1)
go a.process(msgs, acc)
go func() {
err := <-a.conn.NotifyClose(make(chan *amqp.Error))
if err == nil {
return
}
log.Printf("I! AMQP consumer connection closed: %s; trying to reconnect", err)
for {
msgs, err := a.connect(amqpConf)
if err != nil {
log.Printf("E! AMQP connection failed: %s", err)
time.Sleep(10 * time.Second)
continue
}
a.wg.Add(1)
go a.process(msgs, acc)
break
}
}()
return nil
}
func (a *AMQPConsumer) connect(amqpConf *amqp.Config) (<-chan amqp.Delivery, error) {
conn, err := amqp.DialConfig(a.URL, *amqpConf)
if err != nil {
return nil, err
}
a.conn = conn
ch, err := conn.Channel()
if err != nil {
return nil, fmt.Errorf("Failed to open a channel: %s", err)
}
err = ch.ExchangeDeclare(
a.Exchange, // name
"topic", // type
true, // durable
false, // auto-deleted
false, // internal
false, // no-wait
nil, // arguments
)
if err != nil {
return nil, fmt.Errorf("Failed to declare an exchange: %s", err)
}
q, err := ch.QueueDeclare(
a.Queue, // queue
true, // durable
false, // delete when unused
false, // exclusive
false, // no-wait
nil, // arguments
)
if err != nil {
return nil, fmt.Errorf("Failed to declare a queue: %s", err)
}
err = ch.QueueBind(
q.Name, // queue
a.BindingKey, // binding-key
a.Exchange, // exchange
false,
nil,
)
if err != nil {
return nil, fmt.Errorf("Failed to bind a queue: %s", err)
}
err = ch.Qos(
a.PrefetchCount,
0, // prefetch-size
false, // global
)
if err != nil {
return nil, fmt.Errorf("Failed to set QoS: %s", err)
}
msgs, err := ch.Consume(
q.Name, // queue
"", // consumer
false, // auto-ack
false, // exclusive
false, // no-local
false, // no-wait
nil, // arguments
)
if err != nil {
return nil, fmt.Errorf("Failed establishing connection to queue: %s", err)
}
log.Println("I! Started AMQP consumer")
return msgs, err
}
// Read messages from queue and add them to the Accumulator
func (a *AMQPConsumer) process(msgs <-chan amqp.Delivery, acc telegraf.Accumulator) {
defer a.wg.Done()
for d := range msgs {
metrics, err := a.parser.Parse(d.Body)
if err != nil {
log.Printf("E! %v: error parsing metric - %v", err, string(d.Body))
} else {
for _, m := range metrics {
acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())
}
}
d.Ack(false)
}
log.Printf("I! AMQP consumer queue closed")
}
func (a *AMQPConsumer) Stop() {
err := a.conn.Close()
if err != nil && err != amqp.ErrClosed {
log.Printf("E! Error closing AMQP connection: %s", err)
return
}
a.wg.Wait()
log.Println("I! Stopped AMQP service")
}
func init() {
inputs.Add("amqp_consumer", func() telegraf.Input {
return &AMQPConsumer{
AuthMethod: DefaultAuthMethod,
PrefetchCount: DefaultPrefetchCount,
}
})
}
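The `onClose` goroutine in `Start` above retries `connect` every 10 seconds until the broker comes back. That reconnect loop can be sketched on its own; `retryConnect` and the shortened interval here are illustrative stand-ins, not Telegraf APIs:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// retryConnect keeps calling connect until it succeeds, sleeping
// between attempts -- the same shape as the AMQP consumer's
// reconnect goroutine, with the interval shortened for demonstration.
func retryConnect(connect func() error, interval time.Duration) int {
	attempts := 0
	for {
		attempts++
		if err := connect(); err != nil {
			time.Sleep(interval)
			continue
		}
		break
	}
	return attempts
}

func main() {
	calls := 0
	// Fail twice, then succeed, mimicking a broker that comes back up.
	connect := func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused")
		}
		return nil
	}
	fmt.Println(retryConnect(connect, time.Millisecond)) // 3
}
```

Note that, as in the plugin, the loop never gives up; a bounded retry count or backoff would be a separate design choice.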

View File

@@ -4,7 +4,7 @@
- **urls** []string: List of apache-status URLs to collect from. Default is "http://localhost/server-status?auto". - **urls** []string: List of apache-status URLs to collect from. Default is "http://localhost/server-status?auto".
- **username** string: Username for HTTP basic authentication - **username** string: Username for HTTP basic authentication
- **password** string: Password for HTTP basic authentication - **password** string: Password for HTTP basic authentication
- **timeout** duration: time that the HTTP connection will remain waiting for response. Default 4 seconds ("4s") - **timeout** duration: time that the HTTP connection will remain waiting for response. Defalt 4 seconds ("4s")
##### Optional SSL Config ##### Optional SSL Config

View File

@@ -8,12 +8,11 @@ import (
"net/url" "net/url"
"strconv" "strconv"
"strings" "strings"
"sync"
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
type Apache struct { type Apache struct {
@@ -66,23 +65,28 @@ func (n *Apache) Gather(acc telegraf.Accumulator) error {
n.ResponseTimeout.Duration = time.Second * 5 n.ResponseTimeout.Duration = time.Second * 5
} }
var wg sync.WaitGroup var outerr error
wg.Add(len(n.Urls)) var errch = make(chan error)
for _, u := range n.Urls { for _, u := range n.Urls {
addr, err := url.Parse(u) addr, err := url.Parse(u)
if err != nil { if err != nil {
acc.AddError(fmt.Errorf("Unable to parse address '%s': %s", u, err)) return fmt.Errorf("Unable to parse address '%s': %s", u, err)
continue
} }
go func(addr *url.URL) { go func(addr *url.URL) {
defer wg.Done() errch <- n.gatherUrl(addr, acc)
acc.AddError(n.gatherUrl(addr, acc))
}(addr) }(addr)
} }
wg.Wait() // Drain channel, waiting for all requests to finish and save last error.
return nil for range n.Urls {
if err := <-errch; err != nil {
outerr = err
}
}
return outerr
} }
func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error { func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {

View File

@@ -41,7 +41,7 @@ func TestHTTPApache(t *testing.T) {
} }
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(a.Gather) err := a.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
fields := map[string]interface{}{ fields := map[string]interface{}{

View File

@@ -9,7 +9,7 @@ import (
"strings" "strings"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
type Bcache struct { type Bcache struct {

View File

@@ -5,8 +5,9 @@ import (
"errors" "errors"
"fmt" "fmt"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
"io/ioutil" "io/ioutil"
"log"
"net/http" "net/http"
"net/url" "net/url"
"strings" "strings"
@@ -122,8 +123,8 @@ func (j javaMetric) addTagsFields(out map[string]interface{}) {
} }
j.acc.AddFields(tokens["class"]+tokens["type"], fields, tags) j.acc.AddFields(tokens["class"]+tokens["type"], fields, tags)
} else { } else {
j.acc.AddError(fmt.Errorf("Missing key 'value' in '%s' output response\n%v\n", fmt.Printf("Missing key 'value' in '%s' output response\n%v\n",
j.metric, out)) j.metric, out)
} }
} }
@@ -154,8 +155,8 @@ func (c cassandraMetric) addTagsFields(out map[string]interface{}) {
addCassandraMetric(k, c, v.(map[string]interface{})) addCassandraMetric(k, c, v.(map[string]interface{}))
} }
} else { } else {
c.acc.AddError(fmt.Errorf("Missing key 'value' in '%s' output response\n%v\n", fmt.Printf("Missing key 'value' in '%s' output response\n%v\n",
c.metric, out)) c.metric, out)
return return
} }
} else { } else {
@@ -163,8 +164,8 @@ func (c cassandraMetric) addTagsFields(out map[string]interface{}) {
addCassandraMetric(r.(map[string]interface{})["mbean"].(string), addCassandraMetric(r.(map[string]interface{})["mbean"].(string),
c, values.(map[string]interface{})) c, values.(map[string]interface{}))
} else { } else {
c.acc.AddError(fmt.Errorf("Missing key 'value' in '%s' output response\n%v\n", fmt.Printf("Missing key 'value' in '%s' output response\n%v\n",
c.metric, out)) c.metric, out)
return return
} }
} }
@@ -273,8 +274,8 @@ func (c *Cassandra) Gather(acc telegraf.Accumulator) error {
m = newCassandraMetric(serverTokens["host"], metric, acc) m = newCassandraMetric(serverTokens["host"], metric, acc)
} else { } else {
// unsupported metric type // unsupported metric type
acc.AddError(fmt.Errorf("E! Unsupported Cassandra metric [%s], skipping", log.Printf("I! Unsupported Cassandra metric [%s], skipping",
metric)) metric)
continue continue
} }
@@ -282,8 +283,7 @@ func (c *Cassandra) Gather(acc telegraf.Accumulator) error {
requestUrl, err := url.Parse("http://" + serverTokens["host"] + ":" + requestUrl, err := url.Parse("http://" + serverTokens["host"] + ":" +
serverTokens["port"] + context + metric) serverTokens["port"] + context + metric)
if err != nil { if err != nil {
acc.AddError(err) return err
continue
} }
if serverTokens["user"] != "" && serverTokens["passwd"] != "" { if serverTokens["user"] != "" && serverTokens["passwd"] != "" {
requestUrl.User = url.UserPassword(serverTokens["user"], requestUrl.User = url.UserPassword(serverTokens["user"],
@@ -291,12 +291,8 @@ func (c *Cassandra) Gather(acc telegraf.Accumulator) error {
} }
out, err := c.getAttr(requestUrl) out, err := c.getAttr(requestUrl)
if err != nil {
acc.AddError(err)
continue
}
if out["status"] != 200.0 { if out["status"] != 200.0 {
acc.AddError(fmt.Errorf("URL returned with status %v\n", out["status"])) fmt.Printf("URL returned with status %v\n", out["status"])
continue continue
} }
m.addTagsFields(out) m.addTagsFields(out)

View File

@@ -151,7 +151,7 @@ func TestHttpJsonJavaMultiValue(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
acc.SetDebug(true) acc.SetDebug(true)
err := acc.GatherError(cassandra.Gather) err := cassandra.Gather(&acc)
assert.Nil(t, err) assert.Nil(t, err)
assert.Equal(t, 2, len(acc.Metrics)) assert.Equal(t, 2, len(acc.Metrics))
@@ -180,7 +180,7 @@ func TestHttpJsonJavaMultiType(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
acc.SetDebug(true) acc.SetDebug(true)
err := acc.GatherError(cassandra.Gather) err := cassandra.Gather(&acc)
assert.Nil(t, err) assert.Nil(t, err)
assert.Equal(t, 2, len(acc.Metrics)) assert.Equal(t, 2, len(acc.Metrics))
@@ -197,17 +197,16 @@ func TestHttpJsonJavaMultiType(t *testing.T) {
} }
// Test that the proper values are ignored or collected // Test that the proper values are ignored or collected
func TestHttp404(t *testing.T) { func TestHttpJsonOn404(t *testing.T) {
jolokia := genJolokiaClientStub(invalidJSON, 404, Servers, jolokia := genJolokiaClientStub(validJavaMultiValueJSON, 404, Servers,
[]string{HeapMetric}) []string{HeapMetric})
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(jolokia.Gather) err := jolokia.Gather(&acc)
assert.Error(t, err) assert.Nil(t, err)
assert.Equal(t, 0, len(acc.Metrics)) assert.Equal(t, 0, len(acc.Metrics))
assert.Contains(t, err.Error(), "has status code 404")
} }
// Test that the proper values are ignored or collected for class=Cassandra // Test that the proper values are ignored or collected for class=Cassandra
@@ -215,7 +214,7 @@ func TestHttpJsonCassandraMultiValue(t *testing.T) {
cassandra := genJolokiaClientStub(validCassandraMultiValueJSON, 200, Servers, []string{ReadLatencyMetric}) cassandra := genJolokiaClientStub(validCassandraMultiValueJSON, 200, Servers, []string{ReadLatencyMetric})
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(cassandra.Gather) err := cassandra.Gather(&acc)
assert.Nil(t, err) assert.Nil(t, err)
assert.Equal(t, 1, len(acc.Metrics)) assert.Equal(t, 1, len(acc.Metrics))
@@ -247,7 +246,7 @@ func TestHttpJsonCassandraNestedMultiValue(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
acc.SetDebug(true) acc.SetDebug(true)
err := acc.GatherError(cassandra.Gather) err := cassandra.Gather(&acc)
assert.Nil(t, err) assert.Nil(t, err)
assert.Equal(t, 2, len(acc.Metrics)) assert.Equal(t, 2, len(acc.Metrics))

View File

@@ -11,7 +11,7 @@ import (
"strings" "strings"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
const ( const (
@@ -101,12 +101,12 @@ func (c *Ceph) gatherAdminSocketStats(acc telegraf.Accumulator) error {
for _, s := range sockets { for _, s := range sockets {
dump, err := perfDump(c.CephBinary, s) dump, err := perfDump(c.CephBinary, s)
if err != nil { if err != nil {
acc.AddError(fmt.Errorf("E! error reading from socket '%s': %v", s.socket, err)) log.Printf("E! error reading from socket '%s': %v", s.socket, err)
continue continue
} }
data, err := parseDump(dump) data, err := parseDump(dump)
if err != nil { if err != nil {
acc.AddError(fmt.Errorf("E! error parsing dump from socket '%s': %v", s.socket, err)) log.Printf("E! error parsing dump from socket '%s': %v", s.socket, err)
continue continue
} }
for tag, metrics := range data { for tag, metrics := range data {

View File

@@ -2,7 +2,7 @@ package cgroup
import ( import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
type CGroup struct { type CGroup struct {

View File

@@ -22,11 +22,10 @@ func (g *CGroup) Gather(acc telegraf.Accumulator) error {
for dir := range list { for dir := range list {
if dir.err != nil { if dir.err != nil {
acc.AddError(dir.err) return dir.err
continue
} }
if err := g.gatherDir(dir.path, acc); err != nil { if err := g.gatherDir(dir.path, acc); err != nil {
acc.AddError(err) return err
} }
} }

View File

@@ -24,7 +24,7 @@ var cg1 = &CGroup{
func TestCgroupStatistics_1(t *testing.T) { func TestCgroupStatistics_1(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(cg1.Gather) err := cg1.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
tags := map[string]string{ tags := map[string]string{
@@ -56,7 +56,7 @@ var cg2 = &CGroup{
func TestCgroupStatistics_2(t *testing.T) { func TestCgroupStatistics_2(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(cg2.Gather) err := cg2.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
tags := map[string]string{ tags := map[string]string{
@@ -81,7 +81,7 @@ var cg3 = &CGroup{
func TestCgroupStatistics_3(t *testing.T) { func TestCgroupStatistics_3(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(cg3.Gather) err := cg3.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
tags := map[string]string{ tags := map[string]string{
@@ -108,7 +108,7 @@ var cg4 = &CGroup{
func TestCgroupStatistics_4(t *testing.T) { func TestCgroupStatistics_4(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(cg4.Gather) err := cg4.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
tags := map[string]string{ tags := map[string]string{
@@ -140,7 +140,7 @@ var cg5 = &CGroup{
func TestCgroupStatistics_5(t *testing.T) { func TestCgroupStatistics_5(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(cg5.Gather) err := cg5.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
tags := map[string]string{ tags := map[string]string{
@@ -167,7 +167,7 @@ var cg6 = &CGroup{
func TestCgroupStatistics_6(t *testing.T) { func TestCgroupStatistics_6(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(cg6.Gather) err := cg6.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
tags := map[string]string{ tags := map[string]string{

View File

@@ -12,7 +12,7 @@ import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
var ( var (

View File

@@ -42,10 +42,9 @@ API endpoint. In the following order the plugin will attempt to authenticate.
namespace = "AWS/ELB" namespace = "AWS/ELB"
## Maximum requests per second. Note that the global default AWS rate limit is ## Maximum requests per second. Note that the global default AWS rate limit is
## 400 reqs/sec, so if you define multiple namespaces, these should add up to a ## 10 reqs/sec, so if you define multiple namespaces, these should add up to a
## maximum of 400. Optional - default value is 200. ## maximum of 10. Optional - default value is 10.
## See http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html ratelimit = 10
ratelimit = 200
## Metrics to Pull (optional) ## Metrics to Pull (optional)
## Defaults to all Metrics in Namespace if nothing is provided ## Defaults to all Metrics in Namespace if nothing is provided

View File

@@ -13,8 +13,9 @@ import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
internalaws "github.com/influxdata/telegraf/internal/config/aws" internalaws "github.com/influxdata/telegraf/internal/config/aws"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/internal/limiter" "github.com/influxdata/telegraf/internal/limiter"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
type ( type (
@@ -104,10 +105,9 @@ func (c *CloudWatch) SampleConfig() string {
namespace = "AWS/ELB" namespace = "AWS/ELB"
## Maximum requests per second. Note that the global default AWS rate limit is ## Maximum requests per second. Note that the global default AWS rate limit is
## 400 reqs/sec, so if you define multiple namespaces, these should add up to a ## 10 reqs/sec, so if you define multiple namespaces, these should add up to a
## maximum of 400. Optional - default value is 200. ## maximum of 10. Optional - default value is 10.
## See http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/cloudwatch_limits.html ratelimit = 10
ratelimit = 200
## Metrics to Pull (optional) ## Metrics to Pull (optional)
## Defaults to all Metrics in Namespace if nothing is provided ## Defaults to all Metrics in Namespace if nothing is provided
@@ -185,6 +185,8 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
if err != nil { if err != nil {
return err return err
} }
metricCount := len(metrics)
errChan := errchan.New(metricCount)
now := time.Now() now := time.Now()
@@ -199,12 +201,12 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
<-lmtr.C <-lmtr.C
go func(inm *cloudwatch.Metric) { go func(inm *cloudwatch.Metric) {
defer wg.Done() defer wg.Done()
acc.AddError(c.gatherMetric(acc, inm, now)) c.gatherMetric(acc, inm, now, errChan.C)
}(m) }(m)
} }
wg.Wait() wg.Wait()
return nil return errChan.Error()
} }
func init() { func init() {
@@ -212,7 +214,7 @@ func init() {
ttl, _ := time.ParseDuration("1hr") ttl, _ := time.ParseDuration("1hr")
return &CloudWatch{ return &CloudWatch{
CacheTTL: internal.Duration{Duration: ttl}, CacheTTL: internal.Duration{Duration: ttl},
RateLimit: 200, RateLimit: 10,
} }
}) })
} }
@@ -282,11 +284,13 @@ func (c *CloudWatch) gatherMetric(
acc telegraf.Accumulator, acc telegraf.Accumulator,
metric *cloudwatch.Metric, metric *cloudwatch.Metric,
now time.Time, now time.Time,
) error { errChan chan error,
) {
params := c.getStatisticsInput(metric, now) params := c.getStatisticsInput(metric, now)
resp, err := c.client.GetMetricStatistics(params) resp, err := c.client.GetMetricStatistics(params)
if err != nil { if err != nil {
return err errChan <- err
return
} }
for _, point := range resp.Datapoints { for _, point := range resp.Datapoints {
@@ -321,7 +325,7 @@ func (c *CloudWatch) gatherMetric(
acc.AddFields(formatMeasurement(c.Namespace), fields, tags, *point.Timestamp) acc.AddFields(formatMeasurement(c.Namespace), fields, tags, *point.Timestamp)
} }
return nil errChan <- nil
} }
/* /*

View File

@@ -58,13 +58,13 @@ func TestGather(t *testing.T) {
Namespace: "AWS/ELB", Namespace: "AWS/ELB",
Delay: internalDuration, Delay: internalDuration,
Period: internalDuration, Period: internalDuration,
RateLimit: 200, RateLimit: 10,
} }
var acc testutil.Accumulator var acc testutil.Accumulator
c.client = &mockGatherCloudWatchClient{} c.client = &mockGatherCloudWatchClient{}
acc.GatherError(c.Gather) c.Gather(&acc)
fields := map[string]interface{}{} fields := map[string]interface{}{}
fields["latency_minimum"] = 0.1 fields["latency_minimum"] = 0.1
@@ -146,7 +146,7 @@ func TestSelectMetrics(t *testing.T) {
Namespace: "AWS/ELB", Namespace: "AWS/ELB",
Delay: internalDuration, Delay: internalDuration,
Period: internalDuration, Period: internalDuration,
RateLimit: 200, RateLimit: 10,
Metrics: []*Metric{ Metrics: []*Metric{
&Metric{ &Metric{
MetricNames: []string{"Latency", "RequestCount"}, MetricNames: []string{"Latency", "RequestCount"},
@@ -207,13 +207,14 @@ func TestGenerateStatisticsInputParams(t *testing.T) {
} }
func TestMetricsCacheTimeout(t *testing.T) { func TestMetricsCacheTimeout(t *testing.T) {
ttl, _ := time.ParseDuration("5ms")
cache := &MetricCache{ cache := &MetricCache{
Metrics: []*cloudwatch.Metric{}, Metrics: []*cloudwatch.Metric{},
Fetched: time.Now(), Fetched: time.Now(),
TTL: time.Minute, TTL: ttl,
} }
assert.True(t, cache.IsValid()) assert.True(t, cache.IsValid())
cache.Fetched = time.Now().Add(-time.Minute) time.Sleep(ttl)
assert.False(t, cache.IsValid()) assert.False(t, cache.IsValid())
} }

View File

@@ -10,7 +10,8 @@ import (
"strings" "strings"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
"log"
"path/filepath" "path/filepath"
) )
@@ -92,15 +93,15 @@ func (c *Conntrack) Gather(acc telegraf.Accumulator) error {
contents, err := ioutil.ReadFile(fName) contents, err := ioutil.ReadFile(fName)
if err != nil { if err != nil {
acc.AddError(fmt.Errorf("E! failed to read file '%s': %v", fName, err)) log.Printf("E! failed to read file '%s': %v", fName, err)
continue continue
} }
v := strings.TrimSpace(string(contents)) v := strings.TrimSpace(string(contents))
fields[metricKey], err = strconv.ParseFloat(v, 64) fields[metricKey], err = strconv.ParseFloat(v, 64)
if err != nil { if err != nil {
acc.AddError(fmt.Errorf("E! failed to parse metric, expected number but "+ log.Printf("E! failed to parse metric, expected number but "+
" found '%s': %v", v, err)) " found '%s': %v", v, err)
} }
} }
} }

View File

@@ -6,7 +6,7 @@ import (
"github.com/hashicorp/consul/api" "github.com/hashicorp/consul/api"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
type Consul struct { type Consul struct {

View File

@@ -3,7 +3,7 @@ package couchbase
import ( import (
couchbase "github.com/couchbase/go-couchbase" couchbase "github.com/couchbase/go-couchbase"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
"sync" "sync"
) )
@@ -42,17 +42,19 @@ func (r *Couchbase) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup var wg sync.WaitGroup
var outerr error
for _, serv := range r.Servers { for _, serv := range r.Servers {
wg.Add(1) wg.Add(1)
go func(serv string) { go func(serv string) {
defer wg.Done() defer wg.Done()
acc.AddError(r.gatherServer(serv, acc, nil)) outerr = r.gatherServer(serv, acc, nil)
}(serv) }(serv)
} }
wg.Wait() wg.Wait()
return nil return outerr
} }
func (r *Couchbase) gatherServer(addr string, acc telegraf.Accumulator, pool *couchbase.Pool) error { func (r *Couchbase) gatherServer(addr string, acc telegraf.Accumulator, pool *couchbase.Pool) error {

View File

@@ -2,11 +2,13 @@ package couchdb
import ( import (
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
"net/http" "net/http"
"reflect" "reflect"
"strings"
"sync" "sync"
"time" "time"
) )
@@ -81,22 +83,36 @@ func (*CouchDB) SampleConfig() string {
} }
func (c *CouchDB) Gather(accumulator telegraf.Accumulator) error { func (c *CouchDB) Gather(accumulator telegraf.Accumulator) error {
errorChannel := make(chan error, len(c.HOSTs))
var wg sync.WaitGroup var wg sync.WaitGroup
for _, u := range c.HOSTs { for _, u := range c.HOSTs {
wg.Add(1) wg.Add(1)
go func(host string) { go func(host string) {
defer wg.Done() defer wg.Done()
if err := c.fetchAndInsertData(accumulator, host); err != nil { if err := c.fetchAndInsertData(accumulator, host); err != nil {
accumulator.AddError(fmt.Errorf("[host=%s]: %s", host, err)) errorChannel <- fmt.Errorf("[host=%s]: %s", host, err)
} }
}(u) }(u)
} }
wg.Wait() wg.Wait()
close(errorChannel)
// If there weren't any errors, we can return nil now.
if len(errorChannel) == 0 {
return nil return nil
} }
// There were errors, so join them all together as one big error.
errorStrings := make([]string, 0, len(errorChannel))
for err := range errorChannel {
errorStrings = append(errorStrings, err.Error())
}
return errors.New(strings.Join(errorStrings, "\n"))
}
var tr = &http.Transport{ var tr = &http.Transport{
ResponseHeaderTimeout: time.Duration(3 * time.Second), ResponseHeaderTimeout: time.Duration(3 * time.Second),
} }
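The old-column CouchDB `Gather` above shows the error-fan-in pattern this changeset replaces with `acc.AddError`: buffer one channel slot per host, drain after `wg.Wait`, and join the messages into a single error. A self-contained sketch of that shape (host names here are placeholders):

```go
package main

import (
	"errors"
	"fmt"
	"sort"
	"strings"
	"sync"
)

// gatherAll fans work out to one goroutine per host, collects any
// errors on a buffered channel, and joins them into one error --
// the same shape as the pre-change CouchDB Gather implementation.
func gatherAll(hosts []string, fetch func(string) error) error {
	errCh := make(chan error, len(hosts))
	var wg sync.WaitGroup
	for _, h := range hosts {
		wg.Add(1)
		go func(host string) {
			defer wg.Done()
			if err := fetch(host); err != nil {
				errCh <- fmt.Errorf("[host=%s]: %s", host, err)
			}
		}(h)
	}
	wg.Wait()
	close(errCh)
	if len(errCh) == 0 {
		return nil
	}
	var msgs []string
	for err := range errCh {
		msgs = append(msgs, err.Error())
	}
	sort.Strings(msgs) // deterministic order for the joined message
	return errors.New(strings.Join(msgs, "\n"))
}

func main() {
	err := gatherAll([]string{"db1", "db2"}, func(h string) error {
		if h == "db2" {
			return errors.New("timeout")
		}
		return nil
	})
	fmt.Println(err) // [host=db2]: timeout
}
```

Buffering the channel to `len(hosts)` is what lets every goroutine send without blocking before the drain starts.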

View File

@@ -316,5 +316,5 @@ func TestBasic(t *testing.T) {
} }
var acc testutil.Accumulator var acc testutil.Accumulator
require.NoError(t, acc.GatherError(plugin.Gather)) require.NoError(t, plugin.Gather(&acc))
} }

View File

@@ -12,7 +12,7 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
type Disque struct { type Disque struct {
@@ -75,11 +75,12 @@ func (g *Disque) Gather(acc telegraf.Accumulator) error {
var wg sync.WaitGroup var wg sync.WaitGroup
var outerr error
for _, serv := range g.Servers { for _, serv := range g.Servers {
u, err := url.Parse(serv) u, err := url.Parse(serv)
if err != nil { if err != nil {
acc.AddError(fmt.Errorf("Unable to parse to address '%s': %s", serv, err)) return fmt.Errorf("Unable to parse to address '%s': %s", serv, err)
continue
} else if u.Scheme == "" { } else if u.Scheme == "" {
// fallback to simple string based address (i.e. "10.0.0.1:10000") // fallback to simple string based address (i.e. "10.0.0.1:10000")
u.Scheme = "tcp" u.Scheme = "tcp"
@@ -89,13 +90,13 @@ func (g *Disque) Gather(acc telegraf.Accumulator) error {
wg.Add(1) wg.Add(1)
go func(serv string) { go func(serv string) {
defer wg.Done() defer wg.Done()
acc.AddError(g.gatherServer(u, acc)) outerr = g.gatherServer(u, acc)
}(serv) }(serv)
} }
wg.Wait() wg.Wait()
return nil return outerr
} }
const defaultPort = "7711" const defaultPort = "7711"

View File

@@ -51,7 +51,7 @@ func TestDisqueGeneratesMetrics(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err = acc.GatherError(r.Gather) err = r.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
fields := map[string]interface{}{ fields := map[string]interface{}{
@@ -117,7 +117,7 @@ func TestDisqueCanPullStatsFromMultipleServers(t *testing.T) {
var acc testutil.Accumulator var acc testutil.Accumulator
err = acc.GatherError(r.Gather) err = r.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
fields := map[string]interface{}{ fields := map[string]interface{}{

View File

@@ -1,47 +0,0 @@
# DMCache Input Plugin
This plugin provides native collection of dmsetup-based statistics for dm-cache.
This plugin requires sudo, so make sure Telegraf is able to execute sudo without a password.
`sudo /sbin/dmsetup status --target cache` is the full command that Telegraf will run; running it by hand is useful for debugging.
### Configuration
```toml
[[inputs.dmcache]]
## Whether to report per-device stats or not
per_device = true
```
### Measurements & Fields:
- dmcache
- length
- target
- metadata_blocksize
- metadata_used
- metadata_total
- cache_blocksize
- cache_used
- cache_total
- read_hits
- read_misses
- write_hits
- write_misses
- demotions
- promotions
- dirty
### Tags:
- All measurements have the following tags:
- device
### Example Output:
```
$ ./telegraf --test --config /etc/telegraf/telegraf.conf --input-filter dmcache
* Plugin: inputs.dmcache, Collection 1
> dmcache,device=example cache_blocksize=0i,read_hits=995134034411520i,read_misses=916807089127424i,write_hits=195107267543040i,metadata_used=12861440i,write_misses=563725346013184i,promotions=3265223720960i,dirty=0i,metadata_blocksize=0i,cache_used=1099511627776i,cache_total=0i,length=0i,metadata_total=1073741824i,demotions=3265223720960i 1491482035000000000
```

View File

@@ -1,33 +0,0 @@
package dmcache
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
type DMCache struct {
PerDevice bool `toml:"per_device"`
getCurrentStatus func() ([]string, error)
}
var sampleConfig = `
## Whether to report per-device stats or not
per_device = true
`
func (c *DMCache) SampleConfig() string {
return sampleConfig
}
func (c *DMCache) Description() string {
return "Provide a native collection for dmsetup based statistics for dm-cache"
}
func init() {
inputs.Add("dmcache", func() telegraf.Input {
return &DMCache{
PerDevice: true,
getCurrentStatus: dmSetupStatus,
}
})
}

View File

@@ -1,190 +0,0 @@
// +build linux
package dmcache
import (
"os/exec"
"strconv"
"strings"
"errors"
"github.com/influxdata/telegraf"
)
const metricName = "dmcache"
type cacheStatus struct {
device string
length int
target string
metadataBlocksize int
metadataUsed int
metadataTotal int
cacheBlocksize int
cacheUsed int
cacheTotal int
readHits int
readMisses int
writeHits int
writeMisses int
demotions int
promotions int
dirty int
}
func (c *DMCache) Gather(acc telegraf.Accumulator) error {
outputLines, err := c.getCurrentStatus()
if err != nil {
return err
}
totalStatus := cacheStatus{}
for _, s := range outputLines {
status, err := parseDMSetupStatus(s)
if err != nil {
return err
}
if c.PerDevice {
tags := map[string]string{"device": status.device}
acc.AddFields(metricName, toFields(status), tags)
}
aggregateStats(&totalStatus, status)
}
acc.AddFields(metricName, toFields(totalStatus), map[string]string{"device": "all"})
return nil
}
func parseDMSetupStatus(line string) (cacheStatus, error) {
var err error
parseError := errors.New("Output from dmsetup could not be parsed")
status := cacheStatus{}
values := strings.Fields(line)
if len(values) < 15 {
return cacheStatus{}, parseError
}
status.device = strings.TrimRight(values[0], ":")
status.length, err = strconv.Atoi(values[2])
if err != nil {
return cacheStatus{}, err
}
status.target = values[3]
status.metadataBlocksize, err = strconv.Atoi(values[4])
if err != nil {
return cacheStatus{}, err
}
metadata := strings.Split(values[5], "/")
if len(metadata) != 2 {
return cacheStatus{}, parseError
}
status.metadataUsed, err = strconv.Atoi(metadata[0])
if err != nil {
return cacheStatus{}, err
}
status.metadataTotal, err = strconv.Atoi(metadata[1])
if err != nil {
return cacheStatus{}, err
}
status.cacheBlocksize, err = strconv.Atoi(values[6])
if err != nil {
return cacheStatus{}, err
}
cache := strings.Split(values[7], "/")
if len(cache) != 2 {
return cacheStatus{}, parseError
}
status.cacheUsed, err = strconv.Atoi(cache[0])
if err != nil {
return cacheStatus{}, err
}
status.cacheTotal, err = strconv.Atoi(cache[1])
if err != nil {
return cacheStatus{}, err
}
status.readHits, err = strconv.Atoi(values[8])
if err != nil {
return cacheStatus{}, err
}
status.readMisses, err = strconv.Atoi(values[9])
if err != nil {
return cacheStatus{}, err
}
status.writeHits, err = strconv.Atoi(values[10])
if err != nil {
return cacheStatus{}, err
}
status.writeMisses, err = strconv.Atoi(values[11])
if err != nil {
return cacheStatus{}, err
}
status.demotions, err = strconv.Atoi(values[12])
if err != nil {
return cacheStatus{}, err
}
status.promotions, err = strconv.Atoi(values[13])
if err != nil {
return cacheStatus{}, err
}
status.dirty, err = strconv.Atoi(values[14])
if err != nil {
return cacheStatus{}, err
}
return status, nil
}
func aggregateStats(totalStatus *cacheStatus, status cacheStatus) {
totalStatus.length += status.length
totalStatus.metadataBlocksize += status.metadataBlocksize
totalStatus.metadataUsed += status.metadataUsed
totalStatus.metadataTotal += status.metadataTotal
totalStatus.cacheBlocksize += status.cacheBlocksize
totalStatus.cacheUsed += status.cacheUsed
totalStatus.cacheTotal += status.cacheTotal
totalStatus.readHits += status.readHits
totalStatus.readMisses += status.readMisses
totalStatus.writeHits += status.writeHits
totalStatus.writeMisses += status.writeMisses
totalStatus.demotions += status.demotions
totalStatus.promotions += status.promotions
totalStatus.dirty += status.dirty
}
func toFields(status cacheStatus) map[string]interface{} {
fields := make(map[string]interface{})
fields["length"] = status.length
fields["metadata_blocksize"] = status.metadataBlocksize
fields["metadata_used"] = status.metadataUsed
fields["metadata_total"] = status.metadataTotal
fields["cache_blocksize"] = status.cacheBlocksize
fields["cache_used"] = status.cacheUsed
fields["cache_total"] = status.cacheTotal
fields["read_hits"] = status.readHits
fields["read_misses"] = status.readMisses
fields["write_hits"] = status.writeHits
fields["write_misses"] = status.writeMisses
fields["demotions"] = status.demotions
fields["promotions"] = status.promotions
fields["dirty"] = status.dirty
return fields
}
func dmSetupStatus() ([]string, error) {
out, err := exec.Command("/bin/sh", "-c", "sudo /sbin/dmsetup status --target cache").Output()
if err != nil {
return nil, err
}
if string(out) == "No devices found\n" {
return []string{}, nil
}
outString := strings.TrimRight(string(out), "\n")
status := strings.Split(outString, "\n")
return status, nil
}
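`parseDMSetupStatus` above consumes one whitespace-delimited line of `dmsetup status --target cache` output, including the `used/total` pairs for metadata and cache blocks. The pair handling can be isolated into a small sketch (`splitPair` is an illustrative helper, not part of the plugin):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// splitPair parses a "used/total" field as emitted by dmsetup status,
// e.g. the metadata and cache columns handled by parseDMSetupStatus.
func splitPair(s string) (used, total int, err error) {
	parts := strings.Split(s, "/")
	if len(parts) != 2 {
		return 0, 0, fmt.Errorf("expected used/total, got %q", s)
	}
	if used, err = strconv.Atoi(parts[0]); err != nil {
		return 0, 0, err
	}
	if total, err = strconv.Atoi(parts[1]); err != nil {
		return 0, 0, err
	}
	return used, total, nil
}

func main() {
	// The metadata column from the test fixture line for device cs-1.
	used, total, err := splitPair("1018/1501122")
	if err != nil {
		panic(err)
	}
	fmt.Println(used, total) // 1018 1501122
}
```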

View File

@@ -1,15 +0,0 @@
// +build !linux
package dmcache
import (
"github.com/influxdata/telegraf"
)
func (c *DMCache) Gather(acc telegraf.Accumulator) error {
return nil
}
func dmSetupStatus() ([]string, error) {
return []string{}, nil
}

View File

@@ -1,169 +0,0 @@
package dmcache
import (
"errors"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/require"
)
var (
measurement = "dmcache"
badFormatOutput = []string{"cs-1: 0 4883791872 cache 8 1018/1501122 512 7/464962 139 352643 "}
good2DevicesFormatOutput = []string{
"cs-1: 0 4883791872 cache 8 1018/1501122 512 7/464962 139 352643 15 46 0 7 0 1 writeback 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8",
"cs-2: 0 4294967296 cache 8 72352/1310720 128 26/24327168 2409 286 265 524682 0 0 0 1 writethrough 2 migration_threshold 2048 mq 10 random_threshold 4 sequential_threshold 512 discard_promote_adjustment 1 read_promote_adjustment 4 write_promote_adjustment 8",
}
)
func TestPerDeviceGoodOutput(t *testing.T) {
var acc testutil.Accumulator
var plugin = &DMCache{
PerDevice: true,
getCurrentStatus: func() ([]string, error) {
return good2DevicesFormatOutput, nil
},
}
err := plugin.Gather(&acc)
require.NoError(t, err)
tags1 := map[string]string{
"device": "cs-1",
}
fields1 := map[string]interface{}{
"length": 4883791872,
"metadata_blocksize": 8,
"metadata_used": 1018,
"metadata_total": 1501122,
"cache_blocksize": 512,
"cache_used": 7,
"cache_total": 464962,
"read_hits": 139,
"read_misses": 352643,
"write_hits": 15,
"write_misses": 46,
"demotions": 0,
"promotions": 7,
"dirty": 0,
}
acc.AssertContainsTaggedFields(t, measurement, fields1, tags1)
tags2 := map[string]string{
"device": "cs-2",
}
fields2 := map[string]interface{}{
"length": 4294967296,
"metadata_blocksize": 8,
"metadata_used": 72352,
"metadata_total": 1310720,
"cache_blocksize": 128,
"cache_used": 26,
"cache_total": 24327168,
"read_hits": 2409,
"read_misses": 286,
"write_hits": 265,
"write_misses": 524682,
"demotions": 0,
"promotions": 0,
"dirty": 0,
}
acc.AssertContainsTaggedFields(t, measurement, fields2, tags2)
tags3 := map[string]string{
"device": "all",
}
fields3 := map[string]interface{}{
"length": 9178759168,
"metadata_blocksize": 16,
"metadata_used": 73370,
"metadata_total": 2811842,
"cache_blocksize": 640,
"cache_used": 33,
"cache_total": 24792130,
"read_hits": 2548,
"read_misses": 352929,
"write_hits": 280,
"write_misses": 524728,
"demotions": 0,
"promotions": 7,
"dirty": 0,
}
acc.AssertContainsTaggedFields(t, measurement, fields3, tags3)
}
func TestNotPerDeviceGoodOutput(t *testing.T) {
var acc testutil.Accumulator
var plugin = &DMCache{
PerDevice: false,
getCurrentStatus: func() ([]string, error) {
return good2DevicesFormatOutput, nil
},
}
err := plugin.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"device": "all",
}
fields := map[string]interface{}{
"length": 9178759168,
"metadata_blocksize": 16,
"metadata_used": 73370,
"metadata_total": 2811842,
"cache_blocksize": 640,
"cache_used": 33,
"cache_total": 24792130,
"read_hits": 2548,
"read_misses": 352929,
"write_hits": 280,
"write_misses": 524728,
"demotions": 0,
"promotions": 7,
"dirty": 0,
}
acc.AssertContainsTaggedFields(t, measurement, fields, tags)
}
func TestNoDevicesOutput(t *testing.T) {
var acc testutil.Accumulator
var plugin = &DMCache{
PerDevice: true,
getCurrentStatus: func() ([]string, error) {
return []string{}, nil
},
}
err := plugin.Gather(&acc)
require.NoError(t, err)
}
func TestErrorDuringGettingStatus(t *testing.T) {
var acc testutil.Accumulator
var plugin = &DMCache{
PerDevice: true,
getCurrentStatus: func() ([]string, error) {
return nil, errors.New("dmsetup doesn't exist")
},
}
err := plugin.Gather(&acc)
require.Error(t, err)
}
func TestBadFormatOfStatus(t *testing.T) {
var acc testutil.Accumulator
var plugin = &DMCache{
PerDevice: true,
getCurrentStatus: func() ([]string, error) {
return badFormatOutput, nil
},
}
err := plugin.Gather(&acc)
require.Error(t, err)
}

View File

@@ -9,7 +9,8 @@ import (
 	"time"

 	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/internal/errchan"
+	"github.com/influxdata/telegraf/registry/inputs"
 )

 type DnsQuery struct {
@@ -57,10 +58,11 @@ func (d *DnsQuery) Description() string {
 func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
 	d.setDefaultValues()
+	errChan := errchan.New(len(d.Domains) * len(d.Servers))
 	for _, domain := range d.Domains {
 		for _, server := range d.Servers {
 			dnsQueryTime, err := d.getDnsQueryTime(domain, server)
-			acc.AddError(err)
+			errChan.C <- err
 			tags := map[string]string{
 				"server": server,
 				"domain": domain,
@@ -72,7 +74,7 @@ func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
 		}
 	}

-	return nil
+	return errChan.Error()
 }

 func (d *DnsQuery) setDefaultValues() {

View File

@@ -24,7 +24,7 @@ func TestGathering(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	err := acc.GatherError(dnsConfig.Gather)
+	err := dnsConfig.Gather(&acc)
 	assert.NoError(t, err)
 	metric, ok := acc.Get("dns_query")
 	require.True(t, ok)
@@ -44,7 +44,7 @@ func TestGatheringMxRecord(t *testing.T) {
 	var acc testutil.Accumulator
 	dnsConfig.RecordType = "MX"
-	err := acc.GatherError(dnsConfig.Gather)
+	err := dnsConfig.Gather(&acc)
 	assert.NoError(t, err)
 	metric, ok := acc.Get("dns_query")
 	require.True(t, ok)
@@ -70,7 +70,7 @@ func TestGatheringRootDomain(t *testing.T) {
 	}
 	fields := map[string]interface{}{}
-	err := acc.GatherError(dnsConfig.Gather)
+	err := dnsConfig.Gather(&acc)
 	assert.NoError(t, err)
 	metric, ok := acc.Get("dns_query")
 	require.True(t, ok)
@@ -96,7 +96,7 @@ func TestMetricContainsServerAndDomainAndRecordTypeTags(t *testing.T) {
 	}
 	fields := map[string]interface{}{}
-	err := acc.GatherError(dnsConfig.Gather)
+	err := dnsConfig.Gather(&acc)
 	assert.NoError(t, err)
 	metric, ok := acc.Get("dns_query")
 	require.True(t, ok)
@@ -121,7 +121,7 @@ func TestGatheringTimeout(t *testing.T) {
 	channel := make(chan error, 1)
 	go func() {
-		channel <- acc.GatherError(dnsConfig.Gather)
+		channel <- dnsConfig.Gather(&acc)
 	}()
 	select {
 	case res := <-channel:
View File

@@ -16,26 +16,12 @@ for the stat structure can be found
 ```
 # Read metrics about docker containers
 [[inputs.docker]]
-  ## Docker Endpoint
-  ##   To use TCP, set endpoint = "tcp://[ip]:[port]"
-  ##   To use environment variables (ie, docker-machine), set endpoint = "ENV"
+  # Docker Endpoint
+  #   To use TCP, set endpoint = "tcp://[ip]:[port]"
+  #   To use environment variables (ie, docker-machine), set endpoint = "ENV"
   endpoint = "unix:///var/run/docker.sock"
-  ## Only collect metrics for these containers, collect all if empty
+  # Only collect metrics for these containers, collect all if empty
   container_names = []
-  ## Timeout for docker list, info, and stats commands
-  timeout = "5s"
-
-  ## Whether to report for each container per-device blkio (8:0, 8:1...) and
-  ## network (eth0, eth1, ...) stats or not
-  perdevice = true
-  ## Whether to report for each container total blkio and network stats or not
-  total = false
-
-  ## docker labels to include and exclude as tags. Globs accepted.
-  ## Note that an empty array for both will include all labels as tags
-  docker_label_include = []
-  docker_label_exclude = []
 ```

 ### Measurements & Fields:
@@ -136,32 +122,30 @@ based on the availability of per-cpu stats on your system.

 ### Tags:

-#### Docker Engine tags
-
 - docker (memory_total)
   - unit=bytes
-  - engine_host
 - docker (pool_blocksize)
   - unit=bytes
-  - engine_host
 - docker_data
   - unit=bytes
-  - engine_host
 - docker_metadata
   - unit=bytes
-  - engine_host
-
-#### Docker Container tags
-
-- Tags on all containers:
-  - engine_host
+- docker_container_mem specific:
   - container_image
   - container_name
-  - container_version
-- docker_container_mem specific:
 - docker_container_cpu specific:
+  - container_image
+  - container_name
   - cpu
 - docker_container_net specific:
+  - container_image
+  - container_name
   - network
 - docker_container_blkio specific:
+  - container_image
+  - container_name
   - device

 ### Example Output:

View File

@@ -1,28 +1,24 @@
-package docker
+package system

 import (
-	"context"
 	"encoding/json"
 	"fmt"
 	"io"
+	"log"
 	"regexp"
 	"strconv"
 	"strings"
 	"sync"
 	"time"

-	"github.com/docker/docker/api/types"
-	"github.com/docker/docker/client"
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/filter"
-	"github.com/influxdata/telegraf/internal"
-	"github.com/influxdata/telegraf/plugins/inputs"
-)
-
-type DockerLabelFilter struct {
-	labelInclude filter.Filter
-	labelExclude filter.Filter
+	"golang.org/x/net/context"
+
+	"github.com/docker/engine-api/client"
+	"github.com/docker/engine-api/types"
+	"github.com/influxdata/telegraf"
+	"github.com/influxdata/telegraf/internal"
+	"github.com/influxdata/telegraf/registry/inputs"
 )

 // Docker object
 type Docker struct {
@@ -31,52 +27,16 @@ type Docker struct {
 	Timeout   internal.Duration
 	PerDevice bool `toml:"perdevice"`
 	Total     bool `toml:"total"`
-	LabelInclude []string `toml:"docker_label_include"`
-	LabelExclude []string `toml:"docker_label_exclude"`
-	LabelFilter  DockerLabelFilter

-	client      *client.Client
+	client      DockerClient
 	engine_host string
-
-	testing             bool
-	labelFiltersCreated bool
 }

-// infoWrapper wraps client.Client.List for testing.
-func infoWrapper(c *client.Client, ctx context.Context) (types.Info, error) {
-	if c != nil {
-		return c.Info(ctx)
-	}
-	fc := FakeDockerClient{}
-	return fc.Info(ctx)
-}
-
-// listWrapper wraps client.Client.ContainerList for testing.
-func listWrapper(
-	c *client.Client,
-	ctx context.Context,
-	options types.ContainerListOptions,
-) ([]types.Container, error) {
-	if c != nil {
-		return c.ContainerList(ctx, options)
-	}
-	fc := FakeDockerClient{}
-	return fc.ContainerList(ctx, options)
-}
-
-// statsWrapper wraps client.Client.ContainerStats for testing.
-func statsWrapper(
-	c *client.Client,
-	ctx context.Context,
-	containerID string,
-	stream bool,
-) (types.ContainerStats, error) {
-	if c != nil {
-		return c.ContainerStats(ctx, containerID, stream)
-	}
-	fc := FakeDockerClient{}
-	return fc.ContainerStats(ctx, containerID, stream)
+// DockerClient interface, useful for testing
+type DockerClient interface {
+	Info(ctx context.Context) (types.Info, error)
+	ContainerList(ctx context.Context, options types.ContainerListOptions) ([]types.Container, error)
+	ContainerStats(ctx context.Context, containerID string, stream bool) (io.ReadCloser, error)
 }

 // KB, MB, GB, TB, PB...human friendly
@@ -108,10 +68,6 @@ var sampleConfig = `
   ## Whether to report for each container total blkio and network stats or not
   total = false
-
-  ## docker labels to include and exclude as tags. Globs accepted.
-  ## Note that an empty array for both will include all labels as tags
-  docker_label_include = []
-  docker_label_exclude = []
 `

 // Description returns input description
@@ -124,7 +80,7 @@ func (d *Docker) SampleConfig() string { return sampleConfig }
 // Gather starts stats collection
 func (d *Docker) Gather(acc telegraf.Accumulator) error {
-	if d.client == nil && !d.testing {
+	if d.client == nil {
 		var c *client.Client
 		var err error
 		defaultHeaders := map[string]string{"User-Agent": "engine-api-cli-1.0"}
@@ -146,26 +102,18 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
 		}
 		d.client = c
 	}
-	// Create label filters if not already created
-	if !d.labelFiltersCreated {
-		err := d.createLabelFilters()
-		if err != nil {
-			return err
-		}
-		d.labelFiltersCreated = true
-	}

 	// Get daemon info
 	err := d.gatherInfo(acc)
 	if err != nil {
-		acc.AddError(err)
+		fmt.Println(err.Error())
 	}

 	// List containers
 	opts := types.ContainerListOptions{}
 	ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
 	defer cancel()
-	containers, err := listWrapper(d.client, ctx, opts)
+	containers, err := d.client.ContainerList(ctx, opts)
 	if err != nil {
 		return err
 	}
@@ -178,8 +126,8 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
 			defer wg.Done()
 			err := d.gatherContainer(c, acc)
 			if err != nil {
-				acc.AddError(fmt.Errorf("E! Error gathering container %s stats: %s\n",
-					c.Names, err.Error()))
+				log.Printf("E! Error gathering container %s stats: %s\n",
+					c.Names, err.Error())
 			}
 		}(container)
 	}
@@ -196,7 +144,7 @@ func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
 	// Get info from docker daemon
 	ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
 	defer cancel()
-	info, err := infoWrapper(d.client, ctx)
+	info, err := d.client.Info(ctx)
 	if err != nil {
 		return err
 	}
@@ -299,12 +247,12 @@ func (d *Docker) gatherContainer(
 	ctx, cancel := context.WithTimeout(context.Background(), d.Timeout.Duration)
 	defer cancel()
-	r, err := statsWrapper(d.client, ctx, container.ID, false)
+	r, err := d.client.ContainerStats(ctx, container.ID, false)
 	if err != nil {
 		return fmt.Errorf("Error getting docker stats: %s", err.Error())
 	}
-	defer r.Body.Close()
+	defer r.Close()
-	dec := json.NewDecoder(r.Body)
+	dec := json.NewDecoder(r)
 	if err = dec.Decode(&v); err != nil {
 		if err == io.EOF {
 			return nil
@@ -314,12 +262,8 @@ func (d *Docker) gatherContainer(
 	// Add labels to tags
 	for k, label := range container.Labels {
-		if len(d.LabelInclude) == 0 || d.LabelFilter.labelInclude.Match(k) {
-			if len(d.LabelExclude) == 0 || !d.LabelFilter.labelExclude.Match(k) {
-				tags[k] = label
-			}
-		}
+		tags[k] = label
 	}

 	gatherContainerStats(v, acc, tags, container.ID, d.PerDevice, d.Total)
@@ -624,32 +568,11 @@ func parseSize(sizeStr string) (int64, error) {
 	return int64(size), nil
 }

-func (d *Docker) createLabelFilters() error {
-	if len(d.LabelInclude) != 0 && d.LabelFilter.labelInclude == nil {
-		var err error
-		d.LabelFilter.labelInclude, err = filter.Compile(d.LabelInclude)
-		if err != nil {
-			return err
-		}
-	}
-	if len(d.LabelExclude) != 0 && d.LabelFilter.labelExclude == nil {
-		var err error
-		d.LabelFilter.labelExclude, err = filter.Compile(d.LabelExclude)
-		if err != nil {
-			return err
-		}
-	}
-	return nil
-}
-
 func init() {
 	inputs.Add("docker", func() telegraf.Input {
 		return &Docker{
-			PerDevice:           true,
-			Timeout:             internal.Duration{Duration: time.Second * 5},
-			labelFiltersCreated: false,
+			PerDevice: true,
+			Timeout:   internal.Duration{Duration: time.Second * 5},
 		}
 	})
 }
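The change above replaces concrete-client wrapper functions with a narrow `DockerClient` interface, so tests can substitute a fake without touching the gather logic. The pattern in miniature (all names here are illustrative, not telegraf's):

```go
package main

import "fmt"

// InfoClient is the narrowest interface the gatherer needs. A real
// network client would satisfy it, and so does a fake in tests.
type InfoClient interface {
	Info() (string, error)
}

// FakeClient stands in for the real client and returns canned data.
type FakeClient struct{}

func (FakeClient) Info() (string, error) { return "fake-engine", nil }

// gatherHost depends only on the interface, so it never needs a
// running daemon to be exercised.
func gatherHost(c InfoClient) (string, error) {
	name, err := c.Info()
	if err != nil {
		return "", err
	}
	return name, nil
}

func main() {
	host, err := gatherHost(FakeClient{})
	if err != nil {
		panic(err)
	}
	fmt.Println(host)
	// → fake-engine
}
```

Accepting the interface in the struct field (`client DockerClient` in the diff) rather than a `*client.Client` is what lets `docker_test.go` inject `FakeDockerClient{}` directly.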

View File

@@ -1,12 +1,18 @@
-package docker
+package system

 import (
+	"io"
+	"io/ioutil"
+	"strings"
 	"testing"
 	"time"

+	"golang.org/x/net/context"
+
+	"github.com/docker/engine-api/types"
+	"github.com/docker/engine-api/types/registry"
 	"github.com/influxdata/telegraf/testutil"
-
-	"github.com/docker/docker/api/types"
 	"github.com/stretchr/testify/require"
 )
@@ -244,65 +250,146 @@ func testStats() *types.StatsJSON {
 	return stats
 }

-var gatherLabelsTests = []struct {
-	include     []string
-	exclude     []string
-	expected    []string
-	notexpected []string
-}{
-	{[]string{}, []string{}, []string{"label1", "label2"}, []string{}},
-	{[]string{"*"}, []string{}, []string{"label1", "label2"}, []string{}},
-	{[]string{"lab*"}, []string{}, []string{"label1", "label2"}, []string{}},
-	{[]string{"label1"}, []string{}, []string{"label1"}, []string{"label2"}},
-	{[]string{"label1*"}, []string{}, []string{"label1"}, []string{"label2"}},
-	{[]string{}, []string{"*"}, []string{}, []string{"label1", "label2"}},
-	{[]string{}, []string{"lab*"}, []string{}, []string{"label1", "label2"}},
-	{[]string{}, []string{"label1"}, []string{"label2"}, []string{"label1"}},
-	{[]string{"*"}, []string{"*"}, []string{}, []string{"label1", "label2"}},
-}
-
-func TestDockerGatherLabels(t *testing.T) {
-	for _, tt := range gatherLabelsTests {
-		var acc testutil.Accumulator
-		d := Docker{
-			client:  nil,
-			testing: true,
-		}
-
-		for _, label := range tt.include {
-			d.LabelInclude = append(d.LabelInclude, label)
-		}
-		for _, label := range tt.exclude {
-			d.LabelExclude = append(d.LabelExclude, label)
-		}
-
-		err := d.Gather(&acc)
-		require.NoError(t, err)
-
-		for _, label := range tt.expected {
-			if !acc.HasTag("docker_container_cpu", label) {
-				t.Errorf("Didn't get expected label of %s. Test was: Include: %s Exclude %s",
-					label, tt.include, tt.exclude)
-			}
-		}
-		for _, label := range tt.notexpected {
-			if acc.HasTag("docker_container_cpu", label) {
-				t.Errorf("Got unexpected label of %s. Test was: Include: %s Exclude %s",
-					label, tt.include, tt.exclude)
-			}
-		}
-	}
-}
+type FakeDockerClient struct {
+}
+
+func (d FakeDockerClient) Info(ctx context.Context) (types.Info, error) {
+	env := types.Info{
+		Containers:         108,
+		ContainersRunning:  98,
+		ContainersStopped:  6,
+		ContainersPaused:   3,
+		OomKillDisable:     false,
+		SystemTime:         "2016-02-24T00:55:09.15073105-05:00",
+		NEventsListener:    0,
+		ID:                 "5WQQ:TFWR:FDNG:OKQ3:37Y4:FJWG:QIKK:623T:R3ME:QTKB:A7F7:OLHD",
+		Debug:              false,
+		LoggingDriver:      "json-file",
+		KernelVersion:      "4.3.0-1-amd64",
+		IndexServerAddress: "https://index.docker.io/v1/",
+		MemTotal:           3840757760,
+		Images:             199,
+		CPUCfsQuota:        true,
+		Name:               "absol",
+		SwapLimit:          false,
+		IPv4Forwarding:     true,
+		ExperimentalBuild:  false,
+		CPUCfsPeriod:       true,
+		RegistryConfig: &registry.ServiceConfig{
+			IndexConfigs: map[string]*registry.IndexInfo{
+				"docker.io": {
+					Name:     "docker.io",
+					Mirrors:  []string{},
+					Official: true,
+					Secure:   true,
+				},
+			}, InsecureRegistryCIDRs: []*registry.NetIPNet{{IP: []byte{127, 0, 0, 0}, Mask: []byte{255, 0, 0, 0}}}, Mirrors: []string{}},
+		OperatingSystem:  "Linux Mint LMDE (containerized)",
+		BridgeNfIptables: true,
+		HTTPSProxy:       "",
+		Labels:           []string{},
+		MemoryLimit:      false,
+		DriverStatus:     [][2]string{{"Pool Name", "docker-8:1-1182287-pool"}, {"Pool Blocksize", "65.54 kB"}, {"Backing Filesystem", "extfs"}, {"Data file", "/dev/loop0"}, {"Metadata file", "/dev/loop1"}, {"Data Space Used", "17.3 GB"}, {"Data Space Total", "107.4 GB"}, {"Data Space Available", "36.53 GB"}, {"Metadata Space Used", "20.97 MB"}, {"Metadata Space Total", "2.147 GB"}, {"Metadata Space Available", "2.127 GB"}, {"Udev Sync Supported", "true"}, {"Deferred Removal Enabled", "false"}, {"Data loop file", "/var/lib/docker/devicemapper/devicemapper/data"}, {"Metadata loop file", "/var/lib/docker/devicemapper/devicemapper/metadata"}, {"Library Version", "1.02.115 (2016-01-25)"}},
+		NFd:               19,
+		HTTPProxy:         "",
+		Driver:            "devicemapper",
+		NGoroutines:       39,
+		NCPU:              4,
+		DockerRootDir:     "/var/lib/docker",
+		NoProxy:           "",
+		BridgeNfIP6tables: true,
+	}
+	return env, nil
+}
+
+func (d FakeDockerClient) ContainerList(octx context.Context, options types.ContainerListOptions) ([]types.Container, error) {
+	container1 := types.Container{
+		ID:      "e2173b9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296b7dfb",
+		Names:   []string{"/etcd"},
+		Image:   "quay.io/coreos/etcd:v2.2.2",
+		Command: "/etcd -name etcd0 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
+		Created: 1455941930,
+		Status:  "Up 4 hours",
+		Ports: []types.Port{
+			types.Port{
+				PrivatePort: 7001,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 4001,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2380,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2379,
+				PublicPort:  2379,
+				Type:        "tcp",
+				IP:          "0.0.0.0",
+			},
+		},
+		SizeRw:     0,
+		SizeRootFs: 0,
+	}
+	container2 := types.Container{
+		ID:      "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
+		Names:   []string{"/etcd2"},
+		Image:   "quay.io:4443/coreos/etcd:v2.2.2",
+		Command: "/etcd -name etcd2 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
+		Created: 1455941933,
+		Status:  "Up 4 hours",
+		Ports: []types.Port{
+			types.Port{
+				PrivatePort: 7002,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 4002,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2381,
+				PublicPort:  0,
+				Type:        "tcp",
+			},
+			types.Port{
+				PrivatePort: 2382,
+				PublicPort:  2382,
+				Type:        "tcp",
+				IP:          "0.0.0.0",
+			},
+		},
+		SizeRw:     0,
+		SizeRootFs: 0,
+	}
+
+	containers := []types.Container{container1, container2}
+	return containers, nil
+	//#{e6a96c84ca91a5258b7cb752579fb68826b68b49ff957487695cd4d13c343b44 titilambert/snmpsim /bin/sh -c 'snmpsimd --agent-udpv4-endpoint=0.0.0.0:31161 --process-user=root --process-group=user' 1455724831 Up 4 hours [{31161 31161 udp 0.0.0.0}] 0 0 [/snmp] map[]}]2016/02/24 01:05:01 Gathered metrics, (3s interval), from 1 inputs in 1.233836656s
+}
+
+func (d FakeDockerClient) ContainerStats(ctx context.Context, containerID string, stream bool) (io.ReadCloser, error) {
+	var stat io.ReadCloser
+	jsonStat := `{"read":"2016-02-24T11:42:27.472459608-05:00","memory_stats":{"stats":{},"limit":18935443456},"blkio_stats":{"io_service_bytes_recursive":[{"major":252,"minor":1,"op":"Read","value":753664},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":753664},{"major":252,"minor":1,"op":"Total","value":753664}],"io_serviced_recursive":[{"major":252,"minor":1,"op":"Read","value":26},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":26},{"major":252,"minor":1,"op":"Total","value":26}]},"cpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052607520000000,"throttling_data":{}},"precpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052599550000000,"throttling_data":{}}}`
+	stat = ioutil.NopCloser(strings.NewReader(jsonStat))
+	return stat, nil
+}

 func TestDockerGatherInfo(t *testing.T) {
 	var acc testutil.Accumulator
-	d := Docker{
-		client:  nil,
-		testing: true,
-	}
-
-	err := acc.GatherError(d.Gather)
+	client := FakeDockerClient{}
+	d := Docker{client: client}
+
+	err := d.Gather(&acc)
 	require.NoError(t, err)

 	acc.AssertContainsTaggedFields(t,
@@ -345,8 +432,6 @@ func TestDockerGatherInfo(t *testing.T) {
 			"cpu":               "cpu3",
 			"container_version": "v2.2.2",
 			"engine_host":       "absol",
-			"label1":            "test_value_1",
-			"label2":            "test_value_2",
 		},
 	)
 	acc.AssertContainsTaggedFields(t,
@@ -393,8 +478,6 @@ func TestDockerGatherInfo(t *testing.T) {
 			"container_name":    "etcd2",
 			"container_image":   "quay.io:4443/coreos/etcd",
 			"container_version": "v2.2.2",
-			"label1":            "test_value_1",
-			"label2":            "test_value_2",
 		},
 	)

View File

@@ -1,151 +0,0 @@
package docker
import (
"context"
"io/ioutil"
"strings"
"github.com/docker/docker/api/types"
"github.com/docker/docker/api/types/registry"
)
type FakeDockerClient struct {
}
func (d FakeDockerClient) Info(ctx context.Context) (types.Info, error) {
env := types.Info{
Containers: 108,
ContainersRunning: 98,
ContainersStopped: 6,
ContainersPaused: 3,
OomKillDisable: false,
SystemTime: "2016-02-24T00:55:09.15073105-05:00",
NEventsListener: 0,
ID: "5WQQ:TFWR:FDNG:OKQ3:37Y4:FJWG:QIKK:623T:R3ME:QTKB:A7F7:OLHD",
Debug: false,
LoggingDriver: "json-file",
KernelVersion: "4.3.0-1-amd64",
IndexServerAddress: "https://index.docker.io/v1/",
MemTotal: 3840757760,
Images: 199,
CPUCfsQuota: true,
Name: "absol",
SwapLimit: false,
IPv4Forwarding: true,
ExperimentalBuild: false,
CPUCfsPeriod: true,
RegistryConfig: &registry.ServiceConfig{
IndexConfigs: map[string]*registry.IndexInfo{
"docker.io": {
Name: "docker.io",
Mirrors: []string{},
Official: true,
Secure: true,
},
}, InsecureRegistryCIDRs: []*registry.NetIPNet{{IP: []byte{127, 0, 0, 0}, Mask: []byte{255, 0, 0, 0}}}, Mirrors: []string{}},
OperatingSystem: "Linux Mint LMDE (containerized)",
BridgeNfIptables: true,
HTTPSProxy: "",
Labels: []string{},
MemoryLimit: false,
DriverStatus: [][2]string{{"Pool Name", "docker-8:1-1182287-pool"}, {"Pool Blocksize", "65.54 kB"}, {"Backing Filesystem", "extfs"}, {"Data file", "/dev/loop0"}, {"Metadata file", "/dev/loop1"}, {"Data Space Used", "17.3 GB"}, {"Data Space Total", "107.4 GB"}, {"Data Space Available", "36.53 GB"}, {"Metadata Space Used", "20.97 MB"}, {"Metadata Space Total", "2.147 GB"}, {"Metadata Space Available", "2.127 GB"}, {"Udev Sync Supported", "true"}, {"Deferred Removal Enabled", "false"}, {"Data loop file", "/var/lib/docker/devicemapper/devicemapper/data"}, {"Metadata loop file", "/var/lib/docker/devicemapper/devicemapper/metadata"}, {"Library Version", "1.02.115 (2016-01-25)"}},
NFd: 19,
HTTPProxy: "",
Driver: "devicemapper",
NGoroutines: 39,
NCPU: 4,
DockerRootDir: "/var/lib/docker",
NoProxy: "",
BridgeNfIP6tables: true,
}
return env, nil
}
func (d FakeDockerClient) ContainerList(octx context.Context, options types.ContainerListOptions) ([]types.Container, error) {
container1 := types.Container{
ID: "e2173b9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296b7dfb",
Names: []string{"/etcd"},
Image: "quay.io/coreos/etcd:v2.2.2",
Command: "/etcd -name etcd0 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
Created: 1455941930,
Status: "Up 4 hours",
Ports: []types.Port{
types.Port{
PrivatePort: 7001,
PublicPort: 0,
Type: "tcp",
},
types.Port{
PrivatePort: 4001,
PublicPort: 0,
Type: "tcp",
},
types.Port{
PrivatePort: 2380,
PublicPort: 0,
Type: "tcp",
},
types.Port{
PrivatePort: 2379,
PublicPort: 2379,
Type: "tcp",
IP: "0.0.0.0",
},
},
Labels: map[string]string{
"label1": "test_value_1",
"label2": "test_value_2",
},
SizeRw: 0,
SizeRootFs: 0,
}
container2 := types.Container{
ID: "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
Names: []string{"/etcd2"},
Image: "quay.io:4443/coreos/etcd:v2.2.2",
Command: "/etcd -name etcd2 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
Created: 1455941933,
Status: "Up 4 hours",
Ports: []types.Port{
types.Port{
PrivatePort: 7002,
PublicPort: 0,
Type: "tcp",
},
types.Port{
PrivatePort: 4002,
PublicPort: 0,
Type: "tcp",
},
types.Port{
PrivatePort: 2381,
PublicPort: 0,
Type: "tcp",
},
types.Port{
PrivatePort: 2382,
PublicPort: 2382,
Type: "tcp",
IP: "0.0.0.0",
},
},
Labels: map[string]string{
"label1": "test_value_1",
"label2": "test_value_2",
},
SizeRw: 0,
SizeRootFs: 0,
}
containers := []types.Container{container1, container2}
return containers, nil
//#{e6a96c84ca91a5258b7cb752579fb68826b68b49ff957487695cd4d13c343b44 titilambert/snmpsim /bin/sh -c 'snmpsimd --agent-udpv4-endpoint=0.0.0.0:31161 --process-user=root --process-group=user' 1455724831 Up 4 hours [{31161 31161 udp 0.0.0.0}] 0 0 [/snmp] map[]}]2016/02/24 01:05:01 Gathered metrics, (3s interval), from 1 inputs in 1.233836656s
}
func (d FakeDockerClient) ContainerStats(ctx context.Context, containerID string, stream bool) (types.ContainerStats, error) {
var stat types.ContainerStats
jsonStat := `{"read":"2016-02-24T11:42:27.472459608-05:00","memory_stats":{"stats":{},"limit":18935443456},"blkio_stats":{"io_service_bytes_recursive":[{"major":252,"minor":1,"op":"Read","value":753664},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":753664},{"major":252,"minor":1,"op":"Total","value":753664}],"io_serviced_recursive":[{"major":252,"minor":1,"op":"Read","value":26},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":26},{"major":252,"minor":1,"op":"Total","value":26}]},"cpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052607520000000,"throttling_data":{}},"precpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052599550000000,"throttling_data":{}}}`
stat.Body = ioutil.NopCloser(strings.NewReader(jsonStat))
return stat, nil
}

View File

@@ -12,7 +12,8 @@ import (
 	"time"

 	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/internal/errchan"
+	"github.com/influxdata/telegraf/registry/inputs"
 )

 type Dovecot struct {
@@ -65,18 +66,19 @@ func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
 	}

 	var wg sync.WaitGroup
+	errChan := errchan.New(len(d.Servers) * len(d.Filters))
 	for _, server := range d.Servers {
 		for _, filter := range d.Filters {
 			wg.Add(1)
 			go func(s string, f string) {
 				defer wg.Done()
-				acc.AddError(d.gatherServer(s, acc, d.Type, f))
+				errChan.C <- d.gatherServer(s, acc, d.Type, f)
 			}(server, filter)
 		}
 	}

 	wg.Wait()
-	return nil
+	return errChan.Error()
 }

 func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype string, filter string) error {

View File

@@ -10,8 +10,9 @@ import (

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
-	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/internal/errchan"
 	jsonparser "github.com/influxdata/telegraf/plugins/parsers/json"
+	"github.com/influxdata/telegraf/registry/inputs"
 	"io/ioutil"
 	"strings"
 )
@@ -152,6 +153,7 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
 		e.client = client
 	}

+	errChan := errchan.New(len(e.Servers) * 3)
 	var wg sync.WaitGroup
 	wg.Add(len(e.Servers))

@@ -174,21 +176,24 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
 		// Always gather node states
 		if err := e.gatherNodeStats(url, acc); err != nil {
-			acc.AddError(fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@")))
+			err = fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@"))
+			errChan.C <- err
 			return
 		}

 		if e.ClusterHealth {
 			url = s + "/_cluster/health?level=indices"
 			if err := e.gatherClusterHealth(url, acc); err != nil {
-				acc.AddError(fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@")))
+				err = fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@"))
+				errChan.C <- err
 				return
 			}
 		}

 		if e.ClusterStats && e.isMaster {
 			if err := e.gatherClusterStats(s+"/_cluster/stats", acc); err != nil {
-				acc.AddError(fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@")))
+				err = fmt.Errorf(mask.ReplaceAllString(err.Error(), "http(s)://XXX:XXX@"))
+				errChan.C <- err
 				return
 			}
 		}
@@ -196,7 +201,7 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
 	}

 	wg.Wait()
-	return nil
+	return errChan.Error()
 }

 func (e *Elasticsearch) createHttpClient() (*http.Client, error) {


@@ -71,7 +71,7 @@ func TestGather(t *testing.T) {
es.client.Transport = newTransportMock(http.StatusOK, nodeStatsResponse) es.client.Transport = newTransportMock(http.StatusOK, nodeStatsResponse)
var acc testutil.Accumulator var acc testutil.Accumulator
if err := acc.GatherError(es.Gather); err != nil { if err := es.Gather(&acc); err != nil {
t.Fatal(err) t.Fatal(err)
} }


@@ -15,9 +15,10 @@ import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/parsers" "github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/plugins/parsers/nagios" "github.com/influxdata/telegraf/plugins/parsers/nagios"
"github.com/influxdata/telegraf/registry/inputs"
) )
const sampleConfig = ` const sampleConfig = `
@@ -35,7 +36,7 @@ const sampleConfig = `
name_suffix = "_mycollector" name_suffix = "_mycollector"
## Data format to consume. ## Data format to consume.
## Each data format has its own unique set of configuration options, read ## Each data format has it's own unique set of configuration options, read
## more about them here: ## more about them here:
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
data_format = "influx" data_format = "influx"
@@ -49,6 +50,7 @@ type Exec struct {
parser parsers.Parser parser parsers.Parser
runner Runner runner Runner
errChan chan error
} }
func NewExec() *Exec { func NewExec() *Exec {
@@ -148,13 +150,13 @@ func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator, wg *sync
out, err := e.runner.Run(e, command, acc) out, err := e.runner.Run(e, command, acc)
if err != nil { if err != nil {
acc.AddError(err) e.errChan <- err
return return
} }
metrics, err := e.parser.Parse(out) metrics, err := e.parser.Parse(out)
if err != nil { if err != nil {
acc.AddError(err) e.errChan <- err
} else { } else {
for _, metric := range metrics { for _, metric := range metrics {
acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time()) acc.AddFields(metric.Name(), metric.Fields(), metric.Tags(), metric.Time())
@@ -191,8 +193,7 @@ func (e *Exec) Gather(acc telegraf.Accumulator) error {
matches, err := filepath.Glob(cmdAndArgs[0]) matches, err := filepath.Glob(cmdAndArgs[0])
if err != nil { if err != nil {
acc.AddError(err) return err
continue
} }
if len(matches) == 0 { if len(matches) == 0 {
@@ -213,12 +214,15 @@ func (e *Exec) Gather(acc telegraf.Accumulator) error {
} }
} }
errChan := errchan.New(len(commands))
e.errChan = errChan.C
wg.Add(len(commands)) wg.Add(len(commands))
for _, command := range commands { for _, command := range commands {
go e.ProcessCommand(command, acc, &wg) go e.ProcessCommand(command, acc, &wg)
} }
wg.Wait() wg.Wait()
return nil return errChan.Error()
} }
func init() { func init() {
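The `Gather` hunk above expands each configured command through `filepath.Glob` before running it, keeping the pattern verbatim when nothing matches. A small self-contained sketch of that expansion step; `expandCommand` is my own helper name, not the plugin's:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// expandCommand mirrors the plugin's glob step: a command pattern
// either expands to the concrete paths it matches, or is kept
// verbatim when nothing matches.
func expandCommand(pattern string) ([]string, error) {
	matches, err := filepath.Glob(pattern)
	if err != nil {
		return nil, err
	}
	if len(matches) == 0 {
		return []string{pattern}, nil
	}
	return matches, nil
}

func main() {
	dir, err := os.MkdirTemp("", "exec-glob")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	// Create two fake "collector" scripts so the glob has something to match.
	for _, name := range []string{"collect_a.sh", "collect_b.sh"} {
		if err := os.WriteFile(filepath.Join(dir, name), []byte("#!/bin/sh\n"), 0o755); err != nil {
			panic(err)
		}
	}

	commands, err := expandCommand(filepath.Join(dir, "collect_*.sh"))
	if err != nil {
		panic(err)
	}
	fmt.Println(len(commands)) // prints 2
}
```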


@@ -37,8 +37,6 @@ const malformedJson = `
` `
const lineProtocol = "cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1\n" const lineProtocol = "cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1\n"
const lineProtocolEmpty = ""
const lineProtocolShort = "ab"
const lineProtocolMulti = ` const lineProtocolMulti = `
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1 cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
@@ -101,7 +99,7 @@ func TestExec(t *testing.T) {
} }
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(e.Gather) err := e.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
assert.Equal(t, acc.NFields(), 8, "non-numeric measurements should be ignored") assert.Equal(t, acc.NFields(), 8, "non-numeric measurements should be ignored")
@@ -127,7 +125,8 @@ func TestExecMalformed(t *testing.T) {
} }
var acc testutil.Accumulator var acc testutil.Accumulator
require.Error(t, acc.GatherError(e.Gather)) err := e.Gather(&acc)
require.Error(t, err)
assert.Equal(t, acc.NFields(), 0, "No new points should have been added") assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
} }
@@ -140,7 +139,8 @@ func TestCommandError(t *testing.T) {
} }
var acc testutil.Accumulator var acc testutil.Accumulator
require.Error(t, acc.GatherError(e.Gather)) err := e.Gather(&acc)
require.Error(t, err)
assert.Equal(t, acc.NFields(), 0, "No new points should have been added") assert.Equal(t, acc.NFields(), 0, "No new points should have been added")
} }
@@ -153,7 +153,8 @@ func TestLineProtocolParse(t *testing.T) {
} }
var acc testutil.Accumulator var acc testutil.Accumulator
require.NoError(t, acc.GatherError(e.Gather)) err := e.Gather(&acc)
require.NoError(t, err)
fields := map[string]interface{}{ fields := map[string]interface{}{
"usage_idle": float64(99), "usage_idle": float64(99),
@@ -166,33 +167,6 @@ func TestLineProtocolParse(t *testing.T) {
acc.AssertContainsTaggedFields(t, "cpu", fields, tags) acc.AssertContainsTaggedFields(t, "cpu", fields, tags)
} }
func TestLineProtocolEmptyParse(t *testing.T) {
parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocolEmpty), nil),
Commands: []string{"line-protocol"},
parser: parser,
}
var acc testutil.Accumulator
err := e.Gather(&acc)
require.NoError(t, err)
}
func TestLineProtocolShortParse(t *testing.T) {
parser, _ := parsers.NewInfluxParser()
e := &Exec{
runner: newRunnerMock([]byte(lineProtocolShort), nil),
Commands: []string{"line-protocol"},
parser: parser,
}
var acc testutil.Accumulator
err := acc.GatherError(e.Gather)
require.Error(t, err)
assert.Contains(t, err.Error(), "buffer too short", "A buffer too short error was expected")
}
func TestLineProtocolParseMultiple(t *testing.T) { func TestLineProtocolParseMultiple(t *testing.T) {
parser, _ := parsers.NewInfluxParser() parser, _ := parsers.NewInfluxParser()
e := &Exec{ e := &Exec{
@@ -202,7 +176,7 @@ func TestLineProtocolParseMultiple(t *testing.T) {
} }
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(e.Gather) err := e.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
fields := map[string]interface{}{ fields := map[string]interface{}{
@@ -228,7 +202,7 @@ func TestExecCommandWithGlob(t *testing.T) {
e.SetParser(parser) e.SetParser(parser)
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(e.Gather) err := e.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
fields := map[string]interface{}{ fields := map[string]interface{}{
@@ -244,7 +218,7 @@ func TestExecCommandWithoutGlob(t *testing.T) {
e.SetParser(parser) e.SetParser(parser)
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(e.Gather) err := e.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
fields := map[string]interface{}{ fields := map[string]interface{}{
@@ -260,7 +234,7 @@ func TestExecCommandWithoutGlobAndPath(t *testing.T) {
e.SetParser(parser) e.SetParser(parser)
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(e.Gather) err := e.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
fields := map[string]interface{}{ fields := map[string]interface{}{


@@ -9,7 +9,7 @@ import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal/globpath" "github.com/influxdata/telegraf/internal/globpath"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
const sampleConfig = ` const sampleConfig = `
@@ -48,6 +48,7 @@ func (_ *FileStat) Description() string {
func (_ *FileStat) SampleConfig() string { return sampleConfig } func (_ *FileStat) SampleConfig() string { return sampleConfig }
func (f *FileStat) Gather(acc telegraf.Accumulator) error { func (f *FileStat) Gather(acc telegraf.Accumulator) error {
var errS string
var err error var err error
for _, filepath := range f.Files { for _, filepath := range f.Files {
@@ -55,7 +56,7 @@ func (f *FileStat) Gather(acc telegraf.Accumulator) error {
g, ok := f.globs[filepath] g, ok := f.globs[filepath]
if !ok { if !ok {
if g, err = globpath.Compile(filepath); err != nil { if g, err = globpath.Compile(filepath); err != nil {
acc.AddError(err) errS += err.Error() + " "
continue continue
} }
f.globs[filepath] = g f.globs[filepath] = g
@@ -91,7 +92,7 @@ func (f *FileStat) Gather(acc telegraf.Accumulator) error {
if f.Md5 { if f.Md5 {
md5, err := getMd5(fileName) md5, err := getMd5(fileName)
if err != nil { if err != nil {
acc.AddError(err) errS += err.Error() + " "
} else { } else {
fields["md5_sum"] = md5 fields["md5_sum"] = md5
} }
@@ -101,6 +102,9 @@ func (f *FileStat) Gather(acc telegraf.Accumulator) error {
} }
} }
if errS != "" {
return fmt.Errorf(errS)
}
return nil return nil
} }


@@ -19,7 +19,7 @@ func TestGatherNoMd5(t *testing.T) {
} }
acc := testutil.Accumulator{} acc := testutil.Accumulator{}
acc.GatherError(fs.Gather) fs.Gather(&acc)
tags1 := map[string]string{ tags1 := map[string]string{
"file": dir + "log1.log", "file": dir + "log1.log",
@@ -59,7 +59,7 @@ func TestGatherExplicitFiles(t *testing.T) {
} }
acc := testutil.Accumulator{} acc := testutil.Accumulator{}
acc.GatherError(fs.Gather) fs.Gather(&acc)
tags1 := map[string]string{ tags1 := map[string]string{
"file": dir + "log1.log", "file": dir + "log1.log",
@@ -99,7 +99,7 @@ func TestGatherGlob(t *testing.T) {
} }
acc := testutil.Accumulator{} acc := testutil.Accumulator{}
acc.GatherError(fs.Gather) fs.Gather(&acc)
tags1 := map[string]string{ tags1 := map[string]string{
"file": dir + "log1.log", "file": dir + "log1.log",
@@ -131,7 +131,7 @@ func TestGatherSuperAsterisk(t *testing.T) {
} }
acc := testutil.Accumulator{} acc := testutil.Accumulator{}
acc.GatherError(fs.Gather) fs.Gather(&acc)
tags1 := map[string]string{ tags1 := map[string]string{
"file": dir + "log1.log", "file": dir + "log1.log",


@@ -4,6 +4,7 @@ import (
"bytes" "bytes"
"encoding/base64" "encoding/base64"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"io/ioutil" "io/ioutil"
"net" "net"
@@ -15,7 +16,7 @@ import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
type ResponseMetrics struct { type ResponseMetrics struct {
@@ -148,18 +149,32 @@ func (h *GrayLog) Gather(acc telegraf.Accumulator) error {
h.client.SetHTTPClient(client) h.client.SetHTTPClient(client)
} }
errorChannel := make(chan error, len(h.Servers))
for _, server := range h.Servers { for _, server := range h.Servers {
wg.Add(1) wg.Add(1)
go func(server string) { go func(server string) {
defer wg.Done() defer wg.Done()
acc.AddError(h.gatherServer(acc, server)) if err := h.gatherServer(acc, server); err != nil {
errorChannel <- err
}
}(server) }(server)
} }
wg.Wait() wg.Wait()
close(errorChannel)
// Get all errors and return them as one giant error
errorStrings := []string{}
for err := range errorChannel {
errorStrings = append(errorStrings, err.Error())
}
if len(errorStrings) == 0 {
return nil return nil
} }
return errors.New(strings.Join(errorStrings, "\n"))
}
// Gathers data from a particular server // Gathers data from a particular server
// Parameters: // Parameters:


@@ -157,7 +157,7 @@ func TestNormalResponse(t *testing.T) {
for _, service := range graylog { for _, service := range graylog {
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(service.Gather) err := service.Gather(&acc)
require.NoError(t, err) require.NoError(t, err)
for k, v := range expectedFields { for k, v := range expectedFields {
acc.AssertContainsTaggedFields(t, k, v, validTags[k]) acc.AssertContainsTaggedFields(t, k, v, validTags[k])
@@ -170,9 +170,9 @@ func TestHttpJson500(t *testing.T) {
graylog := genMockGrayLog(validJSON, 500) graylog := genMockGrayLog(validJSON, 500)
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(graylog[0].Gather) err := graylog[0].Gather(&acc)
assert.Error(t, err) assert.NotNil(t, err)
assert.Equal(t, 0, acc.NFields()) assert.Equal(t, 0, acc.NFields())
} }
@@ -181,9 +181,9 @@ func TestHttpJsonBadJson(t *testing.T) {
graylog := genMockGrayLog(invalidJSON, 200) graylog := genMockGrayLog(invalidJSON, 200)
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(graylog[0].Gather) err := graylog[0].Gather(&acc)
assert.Error(t, err) assert.NotNil(t, err)
assert.Equal(t, 0, acc.NFields()) assert.Equal(t, 0, acc.NFields())
} }
@@ -192,8 +192,8 @@ func TestHttpJsonEmptyResponse(t *testing.T) {
graylog := genMockGrayLog(empty, 200) graylog := genMockGrayLog(empty, 200)
var acc testutil.Accumulator var acc testutil.Accumulator
err := acc.GatherError(graylog[0].Gather) err := graylog[0].Gather(&acc)
assert.Error(t, err) assert.NotNil(t, err)
assert.Equal(t, 0, acc.NFields()) assert.Equal(t, 0, acc.NFields())
} }


@@ -7,30 +7,7 @@
```toml ```toml
# SampleConfig # SampleConfig
[[inputs.haproxy]] [[inputs.haproxy]]
## An array of address to gather stats about. Specify an ip on hostname servers = ["http://1.2.3.4/haproxy?stats", "/var/run/haproxy*.sock"]
## with optional port. ie localhost, 10.10.3.33:1936, etc.
## Make sure you specify the complete path to the stats endpoint
## including the protocol, ie http://10.10.3.33:1936/haproxy?stats
## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
servers = ["http://myhaproxy.com:1936/haproxy?stats"]
## You can also use local socket with standard wildcard globbing.
## Server address not starting with 'http' will be treated as a possible
## socket, so both examples below are valid.
# servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"]
## By default, some of the fields are renamed from what haproxy calls them.
## Setting this option to true results in the plugin keeping the original
## field names.
# keep_field_names = true
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
``` ```
#### `servers` #### `servers`


@@ -14,8 +14,7 @@ import (
"time" "time"
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/registry/inputs"
"github.com/influxdata/telegraf/plugins/inputs"
) )
//CSV format: https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#9.1 //CSV format: https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#9.1
@@ -26,15 +25,6 @@ type haproxy struct {
client *http.Client client *http.Client
KeepFieldNames bool KeepFieldNames bool
// Path to CA file
SSLCA string `toml:"ssl_ca"`
// Path to host cert file
SSLCert string `toml:"ssl_cert"`
// Path to cert key file
SSLKey string `toml:"ssl_key"`
// Use SSL but skip chain & host verification
InsecureSkipVerify bool
} }
var sampleConfig = ` var sampleConfig = `
@@ -42,26 +32,19 @@ var sampleConfig = `
## with optional port. ie localhost, 10.10.3.33:1936, etc. ## with optional port. ie localhost, 10.10.3.33:1936, etc.
## Make sure you specify the complete path to the stats endpoint ## Make sure you specify the complete path to the stats endpoint
## including the protocol, ie http://10.10.3.33:1936/haproxy?stats ## including the protocol, ie http://10.10.3.33:1936/haproxy?stats
#
## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats ## If no servers are specified, then default to 127.0.0.1:1936/haproxy?stats
servers = ["http://myhaproxy.com:1936/haproxy?stats"] servers = ["http://myhaproxy.com:1936/haproxy?stats"]
##
## You can also use local socket with standard wildcard globbing. ## You can also use local socket with standard wildcard globbing.
## Server address not starting with 'http' will be treated as a possible ## Server address not starting with 'http' will be treated as a possible
## socket, so both examples below are valid. ## socket, so both examples below are valid.
# servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"] ## servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"]
#
## By default, some of the fields are renamed from what haproxy calls them. ## By default, some of the fields are renamed from what haproxy calls them.
## Setting this option to true results in the plugin keeping the original ## Setting this option to true results in the plugin keeping the original
## field names. ## field names.
# keep_field_names = true ## keep_field_names = true
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
` `
func (r *haproxy) SampleConfig() string { func (r *haproxy) SampleConfig() string {
@@ -144,15 +127,7 @@ func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
} }
if g.client == nil { if g.client == nil {
tlsCfg, err := internal.GetTLSConfig( tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
g.SSLCert, g.SSLKey, g.SSLCA, g.InsecureSkipVerify)
if err != nil {
return err
}
tr := &http.Transport{
ResponseHeaderTimeout: time.Duration(3 * time.Second),
TLSClientConfig: tlsCfg,
}
client := &http.Client{ client := &http.Client{
Transport: tr, Transport: tr,
Timeout: time.Duration(4 * time.Second), Timeout: time.Duration(4 * time.Second),


@@ -4,8 +4,8 @@ package hddtemp
import ( import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
gohddtemp "github.com/influxdata/telegraf/plugins/inputs/hddtemp/go-hddtemp" gohddtemp "github.com/influxdata/telegraf/plugins/inputs/hddtemp/go-hddtemp"
"github.com/influxdata/telegraf/registry/inputs"
) )
const defaultAddress = "127.0.0.1:7634" const defaultAddress = "127.0.0.1:7634"


@@ -2,18 +2,11 @@
The HTTP listener is a service input plugin that listens for messages sent via HTTP POST. The HTTP listener is a service input plugin that listens for messages sent via HTTP POST.
The plugin expects messages in the InfluxDB line-protocol ONLY, other Telegraf input data formats are not supported. The plugin expects messages in the InfluxDB line-protocol ONLY, other Telegraf input data formats are not supported.
The intent of the plugin is to allow Telegraf to serve as a proxy/router for the `/write` endpoint of the InfluxDB HTTP API. The intent of the plugin is to allow Telegraf to serve as a proxy/router for the /write endpoint of the InfluxDB HTTP API.
The `/write` endpoint supports the `precision` query parameter and can be set to one of `ns`, `u`, `ms`, `s`, `m`, `h`. All other parameters are ignored and defer to the output plugins configuration.
When chaining Telegraf instances using this plugin, CREATE DATABASE requests receive a 200 OK response with message body `{"results":[]}` but they are not relayed. The output configuration of the Telegraf instance which ultimately submits data to InfluxDB determines the destination database. When chaining Telegraf instances using this plugin, CREATE DATABASE requests receive a 200 OK response with message body `{"results":[]}` but they are not relayed. The output configuration of the Telegraf instance which ultimately submits data to InfluxDB determines the destination database.
See: [Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#influx). See: [Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md#influx).
Example: curl -i -XPOST 'http://localhost:8186/write' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
**Example:**
```
curl -i -XPOST 'http://localhost:8186/write' --data-binary 'cpu_load_short,host=server01,region=us-west value=0.64 1434055562000000000'
```
### Configuration: ### Configuration:


@@ -12,8 +12,8 @@ import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers/influx" "github.com/influxdata/telegraf/plugins/parsers/influx"
"github.com/influxdata/telegraf/registry/inputs"
"github.com/influxdata/telegraf/selfstat" "github.com/influxdata/telegraf/selfstat"
) )
@@ -35,7 +35,6 @@ type HTTPListener struct {
WriteTimeout internal.Duration WriteTimeout internal.Duration
MaxBodySize int64 MaxBodySize int64
MaxLineSize int MaxLineSize int
Port int
mu sync.Mutex mu sync.Mutex
wg sync.WaitGroup wg sync.WaitGroup
@@ -125,7 +124,6 @@ func (h *HTTPListener) Start(acc telegraf.Accumulator) error {
return err return err
} }
h.listener = listener h.listener = listener
h.Port = listener.Addr().(*net.TCPAddr).Port
h.wg.Add(1) h.wg.Add(1)
go func() { go func() {
@@ -207,12 +205,10 @@ func (h *HTTPListener) serveWrite(res http.ResponseWriter, req *http.Request) {
} }
now := time.Now() now := time.Now()
precision := req.URL.Query().Get("precision")
// Handle gzip request bodies // Handle gzip request bodies
body := req.Body body := req.Body
if req.Header.Get("Content-Encoding") == "gzip" {
var err error var err error
if req.Header.Get("Content-Encoding") == "gzip" {
body, err = gzip.NewReader(req.Body) body, err = gzip.NewReader(req.Body)
defer body.Close() defer body.Close()
if err != nil { if err != nil {
@@ -265,7 +261,7 @@ func (h *HTTPListener) serveWrite(res http.ResponseWriter, req *http.Request) {
if err == io.ErrUnexpectedEOF { if err == io.ErrUnexpectedEOF {
// finished reading the request body // finished reading the request body
if err := h.parse(buf[:n+bufStart], now, precision); err != nil { if err := h.parse(buf[:n+bufStart], now); err != nil {
log.Println("E! " + err.Error()) log.Println("E! " + err.Error())
return400 = true return400 = true
} }
@@ -290,7 +286,7 @@ func (h *HTTPListener) serveWrite(res http.ResponseWriter, req *http.Request) {
bufStart = 0 bufStart = 0
continue continue
} }
if err := h.parse(buf[:i+1], now, precision); err != nil { if err := h.parse(buf[:i+1], now); err != nil {
log.Println("E! " + err.Error()) log.Println("E! " + err.Error())
return400 = true return400 = true
} }
@@ -303,8 +299,8 @@ func (h *HTTPListener) serveWrite(res http.ResponseWriter, req *http.Request) {
} }
} }
func (h *HTTPListener) parse(b []byte, t time.Time, precision string) error { func (h *HTTPListener) parse(b []byte, t time.Time) error {
metrics, err := h.parser.ParseWithDefaultTimePrecision(b, t, precision) metrics, err := h.parser.ParseWithDefaultTime(b, t)
for _, m := range metrics { for _, m := range metrics {
h.acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time()) h.acc.AddFields(m.Name(), m.Fields(), m.Tags(), m.Time())

File diff suppressed because one or more lines are too long


@@ -13,7 +13,7 @@ import (
"github.com/influxdata/telegraf" "github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal" "github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs" "github.com/influxdata/telegraf/registry/inputs"
) )
// HTTPResponse struct // HTTPResponse struct


@@ -329,7 +329,7 @@ func TestTimeout(t *testing.T) {
Address: ts.URL + "/twosecondnap", Address: ts.URL + "/twosecondnap",
Body: "{ 'test': 'data'}", Body: "{ 'test': 'data'}",
Method: "GET", Method: "GET",
ResponseTimeout: internal.Duration{Duration: time.Millisecond}, ResponseTimeout: internal.Duration{Duration: time.Second * 1},
Headers: map[string]string{ Headers: map[string]string{
"Content-Type": "application/json", "Content-Type": "application/json",
}, },


@@ -1,79 +1,128 @@
# HTTP JSON Input Plugin # HTTP JSON Plugin
The httpjson plugin collects data from HTTP URLs which respond with JSON. It flattens the JSON and finds all numeric values, treating them as floats. The httpjson plugin can collect data from remote URLs which respond with JSON. Then it flattens JSON and finds all numeric values, treating them as floats.
### Configuration: For example, if you have a service called _mycollector_, which has HTTP endpoint for gathering stats at http://my.service.com/_stats, you would configure the HTTP JSON plugin like this:
```toml ```
[[inputs.httpjson]] [[inputs.httpjson]]
## NOTE This plugin only reads numerical measurements, strings and booleans name = "mycollector"
## will be ignored.
## Name for the service being polled. Will be appended to the name of the
## measurement e.g. "httpjson_webserver_stats".
##
## Deprecated (1.3.0): Use name_override, name_suffix, name_prefix instead.
name = "webserver_stats"
## URL of each server in the service's cluster
servers = [ servers = [
"http://localhost:9999/stats/", "http://my.service.com/_stats"
"http://localhost:9998/stats/",
] ]
## Set response_timeout (default 5 seconds)
response_timeout = "5s"
## HTTP method to use: GET or POST (case-sensitive) # HTTP method to use (case-sensitive)
method = "GET" method = "GET"
## Tags to extract from top-level of JSON server response. # Set response_timeout (default 5 seconds)
# tag_keys = [ response_timeout = "5s"
# "my_tag_1",
# "my_tag_2"
# ]
## HTTP Request Parameters (all values must be strings). For "GET" requests, data
## will be included in the query. For "POST" requests, data will be included
## in the request body as "x-www-form-urlencoded".
# [inputs.httpjson.parameters]
# event_type = "cpu_spike"
# threshold = "0.75"
## HTTP Request Headers (all values must be strings).
# [inputs.httpjson.headers]
# X-Auth-Token = "my-xauth-token"
# apiVersion = "v1"
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
# ssl_key = "/etc/telegraf/key.pem"
## Use SSL but skip chain & host verification
# insecure_skip_verify = false
``` ```
### Measurements & Fields: `name` is used as a prefix for the measurements.
- httpjson `method` specifies HTTP method to use for requests.
- response_time (float): Response time in seconds
Additional fields are dependant on the response of the remote service being polled. `response_timeout` specifies timeout to wait to get the response
### Tags: You can also specify which keys from server response should be considered tags:
- All measurements have the following tags: ```
- server: HTTP origin as defined in configuration as `servers`. [[inputs.httpjson]]
...
Any top level keys listed under `tag_keys` in the configuration are added as tags. Top level keys are defined as keys in the root level of the object in a single object response, or in the root level of each object within an array of objects. tag_keys = [
"role",
"version"
]
```
If the JSON response is an array of objects, then each object will be parsed with the same configuration.
### Examples Output: You can also specify additional request parameters for the service:
This plugin understands responses containing a single JSON object, or a JSON Array of Objects. ```
[[inputs.httpjson]]
...
**Object Output:** [inputs.httpjson.parameters]
event_type = "cpu_spike"
threshold = "0.75"
Given the following response body: ```
You can also specify additional request header parameters for the service:
```
[[inputs.httpjson]]
...
[inputs.httpjson.headers]
X-Auth-Token = "my-xauth-token"
apiVersion = "v1"
```
# Example:
Let's say that we have a service named "mycollector" configured like this:
```
[[inputs.httpjson]]
name = "mycollector"
servers = [
"http://my.service.com/_stats"
]
# HTTP method to use (case-sensitive)
method = "GET"
tag_keys = ["service"]
```
which responds with the following JSON:
```json
{
"service": "service01",
"a": 0.5,
"b": {
"c": "some text",
"d": 0.1,
"e": 5
}
}
```
The collected metrics will be:
```
httpjson_mycollector_a,service='service01',server='http://my.service.com/_stats' value=0.5
httpjson_mycollector_b_d,service='service01',server='http://my.service.com/_stats' value=0.1
httpjson_mycollector_b_e,service='service01',server='http://my.service.com/_stats' value=5
```
# Example 2, Multiple Services:
There is also the option to collect JSON from multiple services, here is an example doing that.
```
[[inputs.httpjson]]
name = "mycollector1"
servers = [
"http://my.service1.com/_stats"
]
# HTTP method to use (case-sensitive)
method = "GET"
[[inputs.httpjson]]
name = "mycollector2"
servers = [
"http://service.net/json/stats"
]
# HTTP method to use (case-sensitive)
method = "POST"
```
The services respond with the following JSON:
mycollector1:
```json ```json
{ {
"a": 0.5, "a": 0.5,
@@ -81,30 +130,45 @@ Given the following response body:
     "c": "some text",
     "d": 0.1,
     "e": 5
-  },
-  "service": "service01"
+  }
 }
 ```
-The following metric is produced:
-`httpjson,server=http://localhost:9999/stats/ b_d=0.1,a=0.5,b_e=5,response_time=0.001`
+mycollector2:
+```json
+{
+    "load": 100,
+    "users": 1335
+}
+```
-Note that only numerical values are extracted and the type is float.
-If `tag_keys` is included in the configuration:
-```toml
+The collected metrics will be:
+```
+httpjson_mycollector1_a,server='http://my.service.com/_stats' value=0.5
+httpjson_mycollector1_b_d,server='http://my.service.com/_stats' value=0.1
+httpjson_mycollector1_b_e,server='http://my.service.com/_stats' value=5
+httpjson_mycollector2_load,server='http://service.net/json/stats' value=100
+httpjson_mycollector2_users,server='http://service.net/json/stats' value=1335
+```
+# Example 3, Multiple Metrics in Response:
+The response JSON can be treated as an array of data points that are all parsed with the same configuration.
+```
 [[inputs.httpjson]]
+  name = "mycollector"
+  servers = [
+    "http://my.service.com/_stats"
+  ]
+  # HTTP method to use (case-sensitive)
+  method = "GET"
   tag_keys = ["service"]
 ```
-Then the `service` tag will also be added:
-`httpjson,server=http://localhost:9999/stats/,service=service01 b_d=0.1,a=0.5,b_e=5,response_time=0.001`
-**Array Output:**
-If the service returns an array of objects, one metric is be created for each object:
+which responds with the following JSON:
 ```json
 [
@@ -129,5 +193,12 @@ If the service returns an array of objects, one metric is be created for each ob
 ]
 ```
-`httpjson,server=http://localhost:9999/stats/,service=service01 a=0.5,b_d=0.1,b_e=5,response_time=0.003`
-`httpjson,server=http://localhost:9999/stats/,service=service02 a=0.6,b_d=0.2,b_e=6,response_time=0.003`
+The collected metrics will be:
+```
+httpjson_mycollector_a,service='service01',server='http://my.service.com/_stats' value=0.5
+httpjson_mycollector_b_d,service='service01',server='http://my.service.com/_stats' value=0.1
+httpjson_mycollector_b_e,service='service01',server='http://my.service.com/_stats' value=5
+httpjson_mycollector_a,service='service02',server='http://my.service.com/_stats' value=0.6
+httpjson_mycollector_b_d,service='service02',server='http://my.service.com/_stats' value=0.2
+httpjson_mycollector_b_e,service='service02',server='http://my.service.com/_stats' value=6
+```
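The underscore-joined field names above (`b_d`, `b_e`, `httpjson_mycollector_b_d`) come from flattening nested JSON objects, keeping only numeric values. A minimal, hypothetical sketch of that flattening (`flatten` is an illustrative name, not the plugin's actual parser):

```go
package main

import "fmt"

// flatten walks a decoded JSON object and joins nested keys with "_",
// keeping only numeric values (strings and booleans are ignored, as the
// plugin's docs describe).
func flatten(prefix string, in map[string]interface{}, out map[string]float64) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "_" + k
		}
		switch t := v.(type) {
		case float64: // encoding/json decodes all JSON numbers to float64
			out[key] = t
		case map[string]interface{}:
			flatten(key, t, out)
		}
	}
}

func main() {
	doc := map[string]interface{}{
		"a": 0.5,
		"b": map[string]interface{}{"d": 0.1, "e": 5.0},
		"c": "some text", // ignored: not numeric
	}
	fields := map[string]float64{}
	flatten("", doc, fields)
	fmt.Println(fields["a"], fields["b_d"], fields["b_e"]) // 0.5 0.1 5
}
```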
@@ -1,6 +1,7 @@
 package httpjson
 import (
+	"errors"
 	"fmt"
 	"io/ioutil"
 	"net/http"
@@ -11,8 +12,8 @@ import (
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
-	"github.com/influxdata/telegraf/plugins/inputs"
 	"github.com/influxdata/telegraf/plugins/parsers"
+	"github.com/influxdata/telegraf/registry/inputs"
 )
 // HttpJson struct
@@ -72,10 +73,7 @@ var sampleConfig = `
   ## NOTE This plugin only reads numerical measurements, strings and booleans
   ## will be ignored.
-  ## Name for the service being polled. Will be appended to the name of the
-  ## measurement e.g. httpjson_webserver_stats
-  ##
-  ## Deprecated (1.3.0): Use name_override, name_suffix, name_prefix instead.
+  ## a name for the service being polled
   name = "webserver_stats"
   ## URL of each server in the service's cluster
@@ -95,14 +93,12 @@ var sampleConfig = `
   #   "my_tag_2"
   # ]
-  ## HTTP parameters (all values must be strings). For "GET" requests, data
-  ## will be included in the query. For "POST" requests, data will be included
-  ## in the request body as "x-www-form-urlencoded".
-  # [inputs.httpjson.parameters]
-  #   event_type = "cpu_spike"
-  #   threshold = "0.75"
+  ## HTTP parameters (all values must be strings)
+  [inputs.httpjson.parameters]
+    event_type = "cpu_spike"
+    threshold = "0.75"
-  ## HTTP Headers (all values must be strings)
+  ## HTTP Header parameters (all values must be strings)
   # [inputs.httpjson.headers]
   #   X-Auth-Token = "my-xauth-token"
   #   apiVersion = "v1"
@@ -144,18 +140,32 @@ func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
 		h.client.SetHTTPClient(client)
 	}
+	errorChannel := make(chan error, len(h.Servers))
 	for _, server := range h.Servers {
 		wg.Add(1)
 		go func(server string) {
 			defer wg.Done()
-			acc.AddError(h.gatherServer(acc, server))
+			if err := h.gatherServer(acc, server); err != nil {
+				errorChannel <- err
+			}
 		}(server)
 	}
 	wg.Wait()
+	close(errorChannel)
+	// Get all errors and return them as one giant error
+	errorStrings := []string{}
+	for err := range errorChannel {
+		errorStrings = append(errorStrings, err.Error())
+	}
+	if len(errorStrings) == 0 {
 		return nil
 	}
+	return errors.New(strings.Join(errorStrings, "\n"))
+}
 // Gathers data from a particular server
 // Parameters:
@@ -210,7 +210,7 @@ func TestHttpJson200(t *testing.T) {
 	for _, service := range httpjson {
 		var acc testutil.Accumulator
-		err := acc.GatherError(service.Gather)
+		err := service.Gather(&acc)
 		require.NoError(t, err)
 		assert.Equal(t, 12, acc.NFields())
 		// Set responsetime
@@ -245,7 +245,7 @@ func TestHttpJsonGET_URL(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	err := acc.GatherError(a.Gather)
+	err := a.Gather(&acc)
 	require.NoError(t, err)
 	// remove response_time from gathered fields because it's non-deterministic
@@ -318,7 +318,7 @@ func TestHttpJsonGET(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	err := acc.GatherError(a.Gather)
+	err := a.Gather(&acc)
 	require.NoError(t, err)
 	// remove response_time from gathered fields because it's non-deterministic
@@ -392,7 +392,7 @@ func TestHttpJsonPOST(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	err := acc.GatherError(a.Gather)
+	err := a.Gather(&acc)
 	require.NoError(t, err)
 	// remove response_time from gathered fields because it's non-deterministic
@@ -448,9 +448,9 @@ func TestHttpJson500(t *testing.T) {
 	httpjson := genMockHttpJson(validJSON, 500)
 	var acc testutil.Accumulator
-	err := acc.GatherError(httpjson[0].Gather)
+	err := httpjson[0].Gather(&acc)
-	assert.Error(t, err)
+	assert.NotNil(t, err)
 	assert.Equal(t, 0, acc.NFields())
 }
@@ -460,9 +460,9 @@ func TestHttpJsonBadMethod(t *testing.T) {
 	httpjson[0].Method = "NOT_A_REAL_METHOD"
 	var acc testutil.Accumulator
-	err := acc.GatherError(httpjson[0].Gather)
+	err := httpjson[0].Gather(&acc)
-	assert.Error(t, err)
+	assert.NotNil(t, err)
 	assert.Equal(t, 0, acc.NFields())
 }
@@ -471,9 +471,9 @@ func TestHttpJsonBadJson(t *testing.T) {
 	httpjson := genMockHttpJson(invalidJSON, 200)
 	var acc testutil.Accumulator
-	err := acc.GatherError(httpjson[0].Gather)
+	err := httpjson[0].Gather(&acc)
-	assert.Error(t, err)
+	assert.NotNil(t, err)
 	assert.Equal(t, 0, acc.NFields())
 }
@@ -482,9 +482,9 @@ func TestHttpJsonEmptyResponse(t *testing.T) {
 	httpjson := genMockHttpJson(empty, 200)
 	var acc testutil.Accumulator
-	err := acc.GatherError(httpjson[0].Gather)
+	err := httpjson[0].Gather(&acc)
-	assert.Error(t, err)
+	assert.NotNil(t, err)
 	assert.Equal(t, 0, acc.NFields())
 }
@@ -495,7 +495,7 @@ func TestHttpJson200Tags(t *testing.T) {
 	for _, service := range httpjson {
 		if service.Name == "other_webapp" {
 			var acc testutil.Accumulator
-			err := acc.GatherError(service.Gather)
+			err := service.Gather(&acc)
 			// Set responsetime
 			for _, p := range acc.Metrics {
 				p.Fields["response_time"] = 1.0
@@ -533,7 +533,7 @@ func TestHttpJsonArray200Tags(t *testing.T) {
 	for _, service := range httpjson {
 		if service.Name == "other_webapp" {
 			var acc testutil.Accumulator
-			err := acc.GatherError(service.Gather)
+			err := service.Gather(&acc)
 			// Set responsetime
 			for _, p := range acc.Metrics {
 				p.Fields["response_time"] = 1.0
@@ -5,12 +5,13 @@ import (
 	"errors"
 	"fmt"
 	"net/http"
+	"strings"
 	"sync"
 	"time"
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
-	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/registry/inputs"
 )
 type InfluxDB struct {
@@ -56,22 +57,36 @@ func (i *InfluxDB) Gather(acc telegraf.Accumulator) error {
 		}
 	}
+	errorChannel := make(chan error, len(i.URLs))
 	var wg sync.WaitGroup
 	for _, u := range i.URLs {
 		wg.Add(1)
 		go func(url string) {
 			defer wg.Done()
 			if err := i.gatherURL(acc, url); err != nil {
-				acc.AddError(fmt.Errorf("[url=%s]: %s", url, err))
+				errorChannel <- fmt.Errorf("[url=%s]: %s", url, err)
 			}
 		}(u)
 	}
 	wg.Wait()
+	close(errorChannel)
+	// If there weren't any errors, we can return nil now.
+	if len(errorChannel) == 0 {
 		return nil
 	}
+	// There were errors, so join them all together as one big error.
+	errorStrings := make([]string, 0, len(errorChannel))
+	for err := range errorChannel {
+		errorStrings = append(errorStrings, err.Error())
+	}
+	return errors.New(strings.Join(errorStrings, "\n"))
+}
 type point struct {
 	Name string            `json:"name"`
 	Tags map[string]string `json:"tags"`
@@ -25,7 +25,7 @@ func TestBasic(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	require.NoError(t, acc.GatherError(plugin.Gather))
+	require.NoError(t, plugin.Gather(&acc))
 	require.Len(t, acc.Metrics, 3)
 	fields := map[string]interface{}{
@@ -72,7 +72,7 @@ func TestInfluxDB(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	require.NoError(t, acc.GatherError(plugin.Gather))
+	require.NoError(t, plugin.Gather(&acc))
 	require.Len(t, acc.Metrics, 34)
@@ -132,7 +132,7 @@ func TestInfluxDB2(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	require.NoError(t, acc.GatherError(plugin.Gather))
+	require.NoError(t, plugin.Gather(&acc))
 	require.Len(t, acc.Metrics, 34)
@@ -157,7 +157,7 @@ func TestErrorHandling(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	require.Error(t, acc.GatherError(plugin.Gather))
+	require.Error(t, plugin.Gather(&acc))
 }
 func TestErrorHandling404(t *testing.T) {
@@ -175,7 +175,7 @@ func TestErrorHandling404(t *testing.T) {
 	}
 	var acc testutil.Accumulator
-	require.Error(t, acc.GatherError(plugin.Gather))
+	require.Error(t, plugin.Gather(&acc))
 }
 const basicJSON = `
@@ -4,7 +4,7 @@ import (
 	"runtime"
 	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/registry/inputs"
 	"github.com/influxdata/telegraf/selfstat"
 )
@@ -48,7 +48,7 @@ func (s *Self) Gather(acc telegraf.Accumulator) error {
 		"heap_idle_bytes":     m.HeapIdle,     // bytes in idle spans
 		"heap_in_use_bytes":   m.HeapInuse,    // bytes in non-idle span
 		"heap_released_bytes": m.HeapReleased, // bytes released to the OS
-		"heap_objects":        m.HeapObjects,  // total number of allocated objects
+		"heap_objects_bytes":  m.HeapObjects,  // total number of allocated objects
 		"num_gc":              m.NumGC,
 	}
 	acc.AddFields("internal_memstats", fields, map[string]string{})
@@ -1,35 +0,0 @@
# Interrupts Input Plugin
The interrupts plugin gathers metrics about IRQs from `/proc/interrupts` and `/proc/softirqs`.
### Configuration
```
[[inputs.interrupts]]
## To filter which IRQs to collect, make use of tagpass / tagdrop, i.e.
# [inputs.interrupts.tagdrop]
# irq = [ "NET_RX", "TASKLET" ]
```
### Measurements
There are two measurements reported by this plugin.
- `interrupts` gathers metrics from the `/proc/interrupts` file
- `soft_interrupts` gathers metrics from the `/proc/softirqs` file
### Fields
- CPUx: the amount of interrupts for the IRQ handled by that CPU
- total: total amount of interrupts for all CPUs
### Tags
- irq: the IRQ
- type: the type of interrupt
- device: the name of the device that is located at that IRQ
### Example Output
```
./telegraf -config ~/interrupts_config.conf -test
* Plugin: inputs.interrupts, Collection 1
> interrupts,irq=0,type=IO-APIC,device=2-edge\ timer,host=hostname CPU0=23i,total=23i 1489346531000000000
> interrupts,irq=1,host=hostname,type=IO-APIC,device=1-edge\ i8042 CPU0=9i,total=9i 1489346531000000000
> interrupts,irq=30,type=PCI-MSI,device=65537-edge\ virtio1-input.0,host=hostname CPU0=1i,total=1i 1489346531000000000
> soft_interrupts,irq=NET_RX,host=hostname CPU0=280879i,total=280879i 1489346531000000000
```
@@ -1,123 +0,0 @@
package interrupts
import (
"bufio"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
"io"
"os"
"strconv"
"strings"
)
type Interrupts struct{}
type IRQ struct {
ID string
Type string
Device string
Total int64
Cpus []int64
}
func NewIRQ(id string) *IRQ {
return &IRQ{ID: id, Cpus: []int64{}}
}
const sampleConfig = `
## To filter which IRQs to collect, make use of tagpass / tagdrop, i.e.
# [inputs.interrupts.tagdrop]
# irq = [ "NET_RX", "TASKLET" ]
`
func (s *Interrupts) Description() string {
return "This plugin gathers interrupts data from /proc/interrupts and /proc/softirqs."
}
func (s *Interrupts) SampleConfig() string {
return sampleConfig
}
func parseInterrupts(r io.Reader) ([]IRQ, error) {
var irqs []IRQ
var cpucount int
scanner := bufio.NewScanner(r)
if scanner.Scan() {
cpus := strings.Fields(scanner.Text())
if cpus[0] != "CPU0" {
return nil, fmt.Errorf("Expected first line to start with CPU0, but was %s", scanner.Text())
}
cpucount = len(cpus)
}
for scanner.Scan() {
fields := strings.Fields(scanner.Text())
if !strings.HasSuffix(fields[0], ":") {
continue
}
irqid := strings.TrimRight(fields[0], ":")
irq := NewIRQ(irqid)
irqvals := fields[1:len(fields)]
for i := 0; i < cpucount; i++ {
if i < len(irqvals) {
irqval, err := strconv.ParseInt(irqvals[i], 10, 64)
if err != nil {
return irqs, fmt.Errorf("Unable to parse %q from %q: %s", irqvals[i], scanner.Text(), err)
}
irq.Cpus = append(irq.Cpus, irqval)
}
}
for _, irqval := range irq.Cpus {
irq.Total += irqval
}
_, err := strconv.ParseInt(irqid, 10, 64)
if err == nil && len(fields) >= cpucount+2 {
irq.Type = fields[cpucount+1]
irq.Device = strings.Join(fields[cpucount+2:], " ")
} else if len(fields) > cpucount {
irq.Type = strings.Join(fields[cpucount+1:], " ")
}
irqs = append(irqs, *irq)
}
if scanner.Err() != nil {
return nil, fmt.Errorf("Error scanning file: %s", scanner.Err())
}
return irqs, nil
}
func gatherTagsFields(irq IRQ) (map[string]string, map[string]interface{}) {
tags := map[string]string{"irq": irq.ID, "type": irq.Type, "device": irq.Device}
fields := map[string]interface{}{"total": irq.Total}
for i := 0; i < len(irq.Cpus); i++ {
cpu := fmt.Sprintf("CPU%d", i)
fields[cpu] = irq.Cpus[i]
}
return tags, fields
}
func (s *Interrupts) Gather(acc telegraf.Accumulator) error {
for measurement, file := range map[string]string{"interrupts": "/proc/interrupts", "soft_interrupts": "/proc/softirqs"} {
f, err := os.Open(file)
if err != nil {
acc.AddError(fmt.Errorf("Could not open file: %s", file))
continue
}
defer f.Close()
irqs, err := parseInterrupts(f)
if err != nil {
acc.AddError(fmt.Errorf("Parsing %s: %s", file, err))
continue
}
for _, irq := range irqs {
tags, fields := gatherTagsFields(irq)
acc.AddFields(measurement, fields, tags)
}
}
return nil
}
func init() {
inputs.Add("interrupts", func() telegraf.Input {
return &Interrupts{}
})
}
@@ -1,60 +0,0 @@
package interrupts
import (
"bytes"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"testing"
)
func TestParseInterrupts(t *testing.T) {
interruptStr := ` CPU0 CPU1
0: 134 0 IO-APIC-edge timer
1: 7 3 IO-APIC-edge i8042
NMI: 0 0 Non-maskable interrupts
LOC: 2338608687 2334309625 Local timer interrupts
MIS: 0
NET_RX: 867028 225
TASKLET: 205 0`
f := bytes.NewBufferString(interruptStr)
parsed := []IRQ{
IRQ{
ID: "0", Type: "IO-APIC-edge", Device: "timer",
Cpus: []int64{int64(134), int64(0)}, Total: int64(134),
},
IRQ{
ID: "1", Type: "IO-APIC-edge", Device: "i8042",
Cpus: []int64{int64(7), int64(3)}, Total: int64(10),
},
IRQ{
ID: "NMI", Type: "Non-maskable interrupts",
Cpus: []int64{int64(0), int64(0)}, Total: int64(0),
},
IRQ{
ID: "LOC", Type: "Local timer interrupts",
Cpus: []int64{int64(2338608687), int64(2334309625)},
Total: int64(4672918312),
},
IRQ{
ID: "MIS", Cpus: []int64{int64(0)}, Total: int64(0),
},
IRQ{
ID: "NET_RX", Cpus: []int64{int64(867028), int64(225)},
Total: int64(867253),
},
IRQ{
ID: "TASKLET", Cpus: []int64{int64(205), int64(0)},
Total: int64(205),
},
}
got, err := parseInterrupts(f)
require.Equal(t, nil, err)
require.NotEqual(t, 0, len(got))
require.Equal(t, len(got), len(parsed))
for i := 0; i < len(parsed); i++ {
assert.Equal(t, parsed[i], got[i])
for k := 0; k < len(parsed[i].Cpus); k++ {
assert.Equal(t, parsed[i].Cpus[k], got[i].Cpus[k])
}
}
}
@@ -18,7 +18,7 @@ type Connection struct {
 func NewConnection(server string) *Connection {
 	conn := &Connection{}
-	inx1 := strings.LastIndex(server, "@")
+	inx1 := strings.Index(server, "@")
 	inx2 := strings.Index(server, "(")
 	inx3 := strings.Index(server, ")")
@@ -1,42 +0,0 @@
package ipmi_sensor
import (
"testing"
"github.com/stretchr/testify/assert"
)
type conTest struct {
Got string
Want *Connection
}
func TestNewConnection(t *testing.T) {
testData := []struct {
addr string
con *Connection
}{
{
"USERID:PASSW0RD@lan(192.168.1.1)",
&Connection{
Hostname: "192.168.1.1",
Username: "USERID",
Password: "PASSW0RD",
Interface: "lan",
},
},
{
"USERID:PASS:!@#$%^&*(234)_+W0RD@lan(192.168.1.1)",
&Connection{
Hostname: "192.168.1.1",
Username: "USERID",
Password: "PASS:!@#$%^&*(234)_+W0RD",
Interface: "lan",
},
},
}
for _, v := range testData {
assert.Equal(t, v.con, NewConnection(v.addr))
}
}
@@ -9,7 +9,7 @@ import (
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/internal"
-	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/registry/inputs"
 )
 var (
@@ -17,7 +17,7 @@ var (
 )
 type Ipmi struct {
-	Path    string
+	path    string
 	Servers []string
 }
@@ -44,7 +44,7 @@ func (m *Ipmi) Description() string {
 }
 func (m *Ipmi) Gather(acc telegraf.Accumulator) error {
-	if len(m.Path) == 0 {
+	if len(m.path) == 0 {
 		return fmt.Errorf("ipmitool not found: verify that ipmitool is installed and that ipmitool is in your PATH")
 	}
@@ -52,8 +52,7 @@ func (m *Ipmi) Gather(acc telegraf.Accumulator) error {
 		for _, server := range m.Servers {
 			err := m.parse(acc, server)
 			if err != nil {
-				acc.AddError(err)
-				continue
+				return err
 			}
 		}
 	} else {
@@ -77,7 +76,7 @@ func (m *Ipmi) parse(acc telegraf.Accumulator, server string) error {
 	}
 	opts = append(opts, "sdr")
-	cmd := execCommand(m.Path, opts...)
+	cmd := execCommand(m.path, opts...)
 	out, err := internal.CombinedOutputTimeout(cmd, time.Second*5)
 	if err != nil {
 		return fmt.Errorf("failed to run command %s: %s - %s", strings.Join(cmd.Args, " "), err, string(out))
@@ -150,10 +149,9 @@ func init() {
 	m := Ipmi{}
 	path, _ := exec.LookPath("ipmitool")
 	if len(path) > 0 {
-		m.Path = path
+		m.path = path
 	}
 	inputs.Add("ipmi_sensor", func() telegraf.Input {
-		m := m
 		return &m
 	})
 }
@@ -14,13 +14,13 @@ import (
 func TestGather(t *testing.T) {
 	i := &Ipmi{
 		Servers: []string{"USERID:PASSW0RD@lan(192.168.1.1)"},
-		Path:    "ipmitool",
+		path:    "ipmitool",
 	}
 	// overwriting exec commands with mock commands
 	execCommand = fakeExecCommand
 	var acc testutil.Accumulator
-	err := acc.GatherError(i.Gather)
+	err := i.Gather(&acc)
 	require.NoError(t, err)
@@ -118,10 +118,10 @@ func TestGather(t *testing.T) {
 	}
 	i = &Ipmi{
-		Path: "ipmitool",
+		path: "ipmitool",
 	}
-	err = acc.GatherError(i.Gather)
+	err = i.Gather(&acc)
 	var testsWithoutServer = []struct {
 		fields map[string]interface{}
@@ -2,11 +2,7 @@
 The iptables plugin gathers packets and bytes counters for rules within a set of table and chain from the Linux's iptables firewall.
-Rules are identified through associated comment. **Rules without comment are ignored**.
-Indeed we need a unique ID for the rule and the rule number is not a constant: it may vary when rules are inserted/deleted at start-up or by automatic tools (interactive firewalls, fail2ban, ...).
-Also when the rule set is becoming big (hundreds of lines) most people are interested in monitoring only a small part of the rule set.
-Before using this plugin **you must ensure that the rules you want to monitor are named with a unique comment**. Comments are added using the `-m comment --comment "my comment"` iptables options.
+Rules are identified through associated comment. Rules without comment are ignored.
 The iptables command requires CAP_NET_ADMIN and CAP_NET_RAW capabilities. You have several options to grant telegraf to run iptables:
@@ -10,7 +10,7 @@ import (
 	"strings"
 	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
+	"github.com/influxdata/telegraf/registry/inputs"
 )
 // Iptables is a telegraf plugin to gather packets and bytes throughput from Linux's iptables packet filter.
@@ -33,16 +33,14 @@ func (ipt *Iptables) SampleConfig() string {
   ## iptables require root access on most systems.
   ## Setting 'use_sudo' to true will make use of sudo to run iptables.
   ## Users must configure sudo to allow telegraf user to run iptables with no password.
-  ## iptables can be restricted to only list command "iptables -nvL".
+  ## iptables can be restricted to only list command "iptables -nvL"
   use_sudo = false
   ## Setting 'use_lock' to true runs iptables with the "-w" option.
   ## Adjust your sudo settings appropriately if using this option ("iptables -wnvl")
   use_lock = false
   ## defines the table to monitor:
   table = "filter"
-  ## defines the chains to monitor.
-  ## NOTE: iptables rules without a comment will not be monitored.
-  ## Read the plugin documentation for more information.
+  ## defines the chains to monitor:
   chains = [ "INPUT" ]
 `
 }
@@ -54,19 +52,20 @@ func (ipt *Iptables) Gather(acc telegraf.Accumulator) error {
 	}
 	// best effort : we continue through the chains even if an error is encountered,
 	// but we keep track of the last error.
+	var err error
 	for _, chain := range ipt.Chains {
 		data, e := ipt.lister(ipt.Table, chain)
 		if e != nil {
-			acc.AddError(e)
+			err = e
 			continue
 		}
 		e = ipt.parseAndGather(data, acc)
 		if e != nil {
-			acc.AddError(e)
+			err = e
 			continue
 		}
 	}
-	return nil
+	return err
 }
 func (ipt *Iptables) chainList(table, chain string) (string, error) {