Compare commits


5 Commits

Author         SHA1         Message                                            Date
David Norton   81caa56859   move plugin interfaces into separate package       2016-12-23 10:18:27 -05:00
David Norton   3e6c4a53a4   fix plugin namespacing                             2016-12-22 12:06:04 -05:00
David Norton   ba8f91a038   update Circle build to use go1.8beta1              2016-12-22 12:06:04 -05:00
David Norton   2b77751df8   make plugin dir configurable and fix namespacing   2016-12-22 12:06:04 -05:00
David Norton   70d678c442   load external plugins                              2016-12-22 12:06:04 -05:00
247 changed files with 2063 additions and 6521 deletions

View File

@@ -1,71 +1,4 @@
## v1.3 [unreleased]
### Release Notes
- Ceph: the `ceph_pgmap_state` metric content has been modified to use a unique field `count`, with each state expressed as a `state` tag.
Telegraf < 1.3:
```
# field_name value
active+clean 123
active+clean+scrubbing 3
```
Telegraf >= 1.3:
```
# field_name value tag
count 123 state=active+clean
count 3 state=active+clean+scrubbing
```
- The [Riemann output plugin](./plugins/outputs/riemann) has been rewritten
and the previous riemann plugin is _incompatible_ with the new one. The reasons
for this are outlined in issue [#1878](https://github.com/influxdata/telegraf/issues/1878).
The previous riemann output will still be available using
`outputs.riemann_legacy` if needed, but that will eventually be deprecated.
It is highly recommended that all users migrate to the new riemann output plugin.
### Features
- [#2204](https://github.com/influxdata/telegraf/pull/2204): Extend http_response to support searching for a substring in response. Return 1 if found, else 0.
- [#2137](https://github.com/influxdata/telegraf/pull/2137): Added userstats to mysql input plugin.
- [#2179](https://github.com/influxdata/telegraf/pull/2179): Added more InnoDB metrics to the MySQL plugin.
- [#2229](https://github.com/influxdata/telegraf/pull/2229): `ceph_pgmap_state` metric now uses a single field `count`, with PG state published as `state` tag.
- [#2251](https://github.com/influxdata/telegraf/pull/2251): InfluxDB output: use its own client for improved throughput and fewer allocations.
- [#2330](https://github.com/influxdata/telegraf/pull/2330): Keep -config-directory when running as Windows service.
- [#1900](https://github.com/influxdata/telegraf/pull/1900): Riemann plugin rewrite.
- [#1453](https://github.com/influxdata/telegraf/pull/1453): diskio: add support for name templates and udev tags.
- [#2277](https://github.com/influxdata/telegraf/pull/2277): add integer metrics for Consul check health state.
- [#2201](https://github.com/influxdata/telegraf/pull/2201): Add lock option to the IPtables input plugin.
- [#2244](https://github.com/influxdata/telegraf/pull/2244): Support ipmi_sensor plugin querying local ipmi sensors.
- [#2339](https://github.com/influxdata/telegraf/pull/2339): Increment gather_errors for all errors emitted by inputs.
### Bugfixes
- [#2077](https://github.com/influxdata/telegraf/issues/2077): SQL Server Input - Arithmetic overflow error converting numeric to data type int.
- [#2262](https://github.com/influxdata/telegraf/issues/2262): Flush jitter can inhibit metric collection.
- [#1636](https://github.com/influxdata/telegraf/issues/1636): procstat - stop caching PIDs.
- [#2318](https://github.com/influxdata/telegraf/issues/2318): haproxy input - Add missing fields.
- [#2287](https://github.com/influxdata/telegraf/issues/2287): Kubernetes input: Handle null startTime for stopped pods.
- [#2356](https://github.com/influxdata/telegraf/issues/2356): cpu input panic when /proc/stat is empty.
- [#2341](https://github.com/influxdata/telegraf/issues/2341): telegraf swallowing panics in --test mode.
- [#2358](https://github.com/influxdata/telegraf/pull/2358): Create pidfile with 644 permissions & defer file deletion.
## v1.2.1 [2017-02-01]
### Bugfixes
- [#2317](https://github.com/influxdata/telegraf/issues/2317): Fix segfault on nil metrics with influxdb output.
- [#2324](https://github.com/influxdata/telegraf/issues/2324): Fix negative number handling.
### Features
- [#2348](https://github.com/influxdata/telegraf/pull/2348): Go version 1.7.4 -> 1.7.5
## v1.2 [2017-01-00]
## v1.2 [unreleased]
### Release Notes
@@ -111,8 +44,6 @@ plugins, not just statsd.
- [#1942](https://github.com/influxdata/telegraf/pull/1942): Change Amazon Kinesis output plugin to use the built-in serializer plugins.
- [#1980](https://github.com/influxdata/telegraf/issues/1980): Hide username/password from elasticsearch error log messages.
- [#2097](https://github.com/influxdata/telegraf/issues/2097): Configurable HTTP timeouts in Jolokia plugin
- [#2255](https://github.com/influxdata/telegraf/pull/2255): Allow changing jolokia attribute delimiter
- [#2094](https://github.com/influxdata/telegraf/pull/2094): Add generic socket listener & writer.
### Bugfixes
@@ -122,7 +53,7 @@ plugins, not just statsd.
- [#1775](https://github.com/influxdata/telegraf/issues/1775): Cache & expire metrics for delivery to prometheus.
- [#2146](https://github.com/influxdata/telegraf/issues/2146): Fix potential panic in aggregator plugin metric maker.
- [#1843](https://github.com/influxdata/telegraf/pull/1843) & [#1668](https://github.com/influxdata/telegraf/issues/1668): Add optional ability to define PID as a tag.
- [#1730](https://github.com/influxdata/telegraf/issues/1730) & [#2261](https://github.com/influxdata/telegraf/pull/2261): Fix win_perf_counters not gathering non-English counters.
- [#1730](https://github.com/influxdata/telegraf/issues/1730): Fix win_perf_counters not gathering non-English counters.
- [#2061](https://github.com/influxdata/telegraf/issues/2061): Fix panic when file stat info cannot be collected due to permissions or other issue(s).
- [#2045](https://github.com/influxdata/telegraf/issues/2045): Graylog output should set short_message field.
- [#1904](https://github.com/influxdata/telegraf/issues/1904): Hddtemp always put the value in the field temperature.
@@ -135,10 +66,6 @@ plugins, not just statsd.
- [#1973](https://github.com/influxdata/telegraf/issues/1973): Partial fix: logparser CLF pattern with IPv6 addresses.
- [#1975](https://github.com/influxdata/telegraf/issues/1975) & [#2102](https://github.com/influxdata/telegraf/issues/2102): Fix thread-safety when using multiple instances of the statsd input plugin.
- [#2027](https://github.com/influxdata/telegraf/issues/2027): docker input: interface conversion panic fix.
- [#1814](https://github.com/influxdata/telegraf/issues/1814): snmp: ensure proper context is present on error messages.
- [#2299](https://github.com/influxdata/telegraf/issues/2299): opentsdb: add tcp:// prefix if no scheme provided.
- [#2297](https://github.com/influxdata/telegraf/issues/2297): influx parser: parse line-protocol without newlines.
- [#2245](https://github.com/influxdata/telegraf/issues/2245): influxdb output: fix field type conflict blocking output buffer.
## v1.1.2 [2016-12-12]
@@ -289,11 +216,8 @@ which can be installed via
evaluated at every flush interval, rather than once at startup. This makes it
consistent with the behavior of `collection_jitter`.
- postgresql plugins now handle oid and name typed columns seamlessly, previously they were ignored/skipped.
### Features
- [#1617](https://github.com/influxdata/telegraf/pull/1617): postgresql_extensible now handles name and oid types correctly.
- [#1413](https://github.com/influxdata/telegraf/issues/1413): Separate container_version from container_image tag.
- [#1525](https://github.com/influxdata/telegraf/pull/1525): Support setting per-device and total metrics for Docker network and blockio.
- [#1466](https://github.com/influxdata/telegraf/pull/1466): MongoDB input plugin: adding per DB stats from db.stats()

113
Godeps
View File

@@ -1,66 +1,65 @@
github.com/Shopify/sarama 574d3147eee384229bf96a5d12c207fe7b5234f3
github.com/Sirupsen/logrus 61e43dc76f7ee59a82bdf3d71033dc12bea4c77d
github.com/aerospike/aerospike-client-go 95e1ad7791bdbca44707fedbb29be42024900d9c
github.com/amir/raidman c74861fe6a7bb8ede0a010ce4485bdbb4fc4c985
github.com/aws/aws-sdk-go 7524cb911daddd6e5c9195def8e59ae892bef8d9
github.com/beorn7/perks 4c0e84591b9aa9e6dcfdf3e020114cd81f89d5f9
github.com/cenkalti/backoff b02f2bbce11d7ea6b97f282ef1771b0fe2f65ef3
github.com/couchbase/go-couchbase bfe555a140d53dc1adf390f1a1d4b0fd4ceadb28
github.com/couchbase/gomemcached 4a25d2f4e1dea9ea7dd76dfd943407abf9b07d29
github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
github.com/aerospike/aerospike-client-go 7f3a312c3b2a60ac083ec6da296091c52c795c63
github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4
github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
github.com/couchbase/go-couchbase cb664315a324d87d19c879d9cc67fda6be8c2ac1
github.com/couchbase/gomemcached a5ea6356f648fec6ab89add00edd09151455b4b2
github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
github.com/davecgh/go-spew 346938d642f2ec3594ed81d874461961cd0faa76
github.com/docker/distribution fb0bebc4b64e3881cc52a2478d749845ed76d2a8
github.com/docker/engine-api 4290f40c056686fcaa5c9caf02eac1dde9315adf
github.com/docker/go-connections 9670439d95da2651d9dfc7acc5d2ed92d3f25ee6
github.com/docker/go-units 0dadbb0345b35ec7ef35e228dabb8de89a65bf52
github.com/dancannon/gorethink e7cac92ea2bc52638791a021f212145acfedb1fc
github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
github.com/docker/engine-api 8924d6900370b4c7e7984be5adc61f50a80d7537
github.com/docker/go-connections f549a9393d05688dff0992ef3efd8bbe6c628aeb
github.com/docker/go-units 5d2041e26a699eaca682e2ea41c8f891e1060444
github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
github.com/eapache/go-xerial-snappy bb955e01b9346ac19dc29eb16586c90ded99a98c
github.com/eapache/queue 44cc805cf13205b55f69e14bcb69867d1ae92f98
github.com/eclipse/paho.mqtt.golang d4f545eb108a2d19f9b1a735689dbfb719bc21fb
github.com/go-sql-driver/mysql 2e00b5cd70399450106cec6431c2e2ce3cae5034
github.com/gobwas/glob bea32b9cd2d6f55753d94a28e959b13f0244797a
github.com/golang/protobuf 8ee79997227bf9b34611aee7946ae64735e6fd93
github.com/golang/snappy 7db9049039a047d955fe8c19b83c8ff5abd765c7
github.com/gorilla/mux 392c28fe23e1c45ddba891b0320b3b5df220beea
github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
github.com/eclipse/paho.mqtt.golang 0f7a459f04f13a41b7ed752d47944528d4bf9a86
github.com/go-sql-driver/mysql 1fca743146605a172a266e1654e01e5cd5669bee
github.com/gobwas/glob 49571a1557cd20e6a2410adc6421f85b66c730b5
github.com/golang/protobuf 552c7b9542c194800fd493123b3798ef0a832032
github.com/golang/snappy d9eb7a3d35ec988b8585d4a0068e462c27d28380
github.com/gorilla/context 1ea25387ff6f684839d82767c1733ff4d4d15d0a
github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
github.com/hashicorp/consul 63d2fc68239b996096a1c55a0d4b400ea4c2583f
github.com/hpcloud/tail 915e5feba042395f5fda4dbe9c0e99aeab3088b3
github.com/influxdata/config 8ec4638a81500c20be24855812bc8498ebe2dc92
github.com/influxdata/toml ad49a5c2936f96b8f5943c3fdba47630ccf45a0d
github.com/hashicorp/consul 5aa90455ce78d4d41578bafc86305e6e6b28d7d2
github.com/hpcloud/tail b2940955ab8b26e19d43a43c4da0475dd81bdb56
github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
github.com/influxdata/influxdb fc57c0f7c635df3873f3d64f0ed2100ddc94d5ae
github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
github.com/influxdata/wlog 7c63b0a71ef8300adc255344d275e10e5c3a71ec
github.com/jackc/pgx c8080fc4a1bfa44bf90383ad0fdce2f68b7d313c
github.com/kardianos/osext c2c54e542fb797ad986b31721e1baedf214ca413
github.com/kardianos/service 6d3a0ee7d3425d9d835debc51a0ca1ffa28f4893
github.com/kardianos/osext 29ae4ffbc9a6fe9fb2bc5029050ce6996ea1d3bc
github.com/kardianos/service 5e335590050d6d00f3aa270217d288dda1c94d0a
github.com/kballard/go-shellquote d8ec1a69a250a17bb0e419c386eac1f3711dc142
github.com/klauspost/crc32 cb6bfca970f6908083f26f39a79009d608efd5cd
github.com/matttproud/golang_protobuf_extensions c12348ce28de40eed0136aa2b644d0ee0650e56c
github.com/miekg/dns 99f84ae56e75126dd77e5de4fae2ea034a468ca1
github.com/klauspost/crc32 19b0b332c9e4516a6370a0456e6182c3b5036720
github.com/lib/pq e182dc4027e2ded4b19396d638610f2653295f36
github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
github.com/miekg/dns cce6c130cdb92c752850880fd285bea1d64439dd
github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
github.com/nats-io/go-nats ea9585611a4ab58a205b9b125ebd74c389a6b898
github.com/nats-io/nats ea9585611a4ab58a205b9b125ebd74c389a6b898
github.com/nats-io/nuid 289cccf02c178dc782430d534e3c1f5b72af807f
github.com/nsqio/go-nsq a53d495e81424aaf7a7665a9d32a97715c40e953
github.com/pierrec/lz4 5c9560bfa9ace2bf86080bf40d46b34ae44604df
github.com/pierrec/xxHash 5a004441f897722c627870a981d02b29924215fa
github.com/prometheus/client_golang c317fb74746eac4fc65fe3909195f4cf67c5562a
github.com/nats-io/nats ea8b4fd12ebb823073c0004b9f09ac8748f4f165
github.com/nats-io/nuid a5152d67cf63cbfb5d992a395458722a45194715
github.com/nsqio/go-nsq 0b80d6f05e15ca1930e0c5e1d540ed627e299980
github.com/opencontainers/runc 89ab7f2ccc1e45ddf6485eaa802c35dcf321dfc8
github.com/prometheus/client_golang 18acf9993a863f4c4b40612e19cdd243e7c86831
github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
github.com/prometheus/common dd2f054febf4a6c00f2343686efb775948a8bff4
github.com/prometheus/procfs 1878d9fbb537119d24b21ca07effd591627cd160
github.com/rcrowley/go-metrics 1f30fe9094a513ce4c700b9a54458bbb0c96996c
github.com/samuel/go-zookeeper 1d7be4effb13d2d908342d349d71a284a7542693
github.com/shirou/gopsutil 77b5d0080adb6f028e457906f1944d9fcca34442
github.com/soniah/gosnmp 5ad50dc75ab389f8a1c9f8a67d3a1cd85f67ed15
github.com/streadway/amqp 63795daa9a446c920826655f26ba31c81c860fd6
github.com/stretchr/testify 4d4bfba8f1d1027c4fdbe371823030df51419987
github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
github.com/shirou/gopsutil 1516eb9ddc5e61ba58874047a98f8b44b5e585e8
github.com/soniah/gosnmp 3fe3beb30fa9700988893c56a63b1df8e1b68c26
github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
github.com/vjeantet/grok 83bfdfdfd1a8146795b28e547a8e3c8b28a466c2
github.com/wvanbergen/kafka bc265fedb9ff5b5c5d3c0fdcef4a819b3523d3ee
github.com/wvanbergen/kazoo-go 968957352185472eacb69215fa3dbfcfdbac1096
github.com/yuin/gopher-lua 66c871e454fcf10251c61bf8eff02d0978cae75a
github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
github.com/yuin/gopher-lua bf3808abd44b1e55143a2d7f08571aaa80db1808
github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
golang.org/x/crypto dc137beb6cce2043eb6b5f223ab8bf51c32459f4
golang.org/x/net f2499483f923065a842d38eb4c7f1927e6fc6e6d
golang.org/x/text 506f9d5c962f284575e88337e7d9296d27e729d3
gopkg.in/dancannon/gorethink.v1 edc7a6a68e2d8015f5ffe1b2560eed989f8a45be
gopkg.in/fatih/pool.v2 6e328e67893eb46323ad06f0e92cb9536babbabc
gopkg.in/mgo.v2 3f83fa5005286a7fe593b055f0d7771a7dce4655
gopkg.in/yaml.v2 4c78c975fe7c825c6d1466c42be594d1d6f3aba6
golang.org/x/crypto c197bcf24cde29d3f73c7b4ac6fd41f4384e8af6
golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
golang.org/x/text a71fd10341b064c10f4a81ceac72bcf70f26ea34
gopkg.in/dancannon/gorethink.v1 7d1af5be49cb5ecc7b177bf387d232050299d6ef
gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
gopkg.in/mgo.v2 d90005c5262a3463800497ea5a89aed5fe22c886
gopkg.in/yaml.v2 a83829b6f1293c91addabc89d0571c246397bbf4

View File

@@ -58,7 +58,7 @@ docker-run:
docker run --name redis -p "6379:6379" -d redis
docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
docker run --name riemann -p "5555:5555" -d stealthly/docker-riemann
docker run --name riemann -p "5555:5555" -d blalor/riemann
docker run --name nats -p "4222:4222" -d nats
# Run docker containers necessary for CircleCI unit tests
@@ -71,7 +71,7 @@ docker-run-circle:
-d spotify/kafka
docker run --name nsq -p "4150:4150" -d nsqio/nsq /nsqd
docker run --name mqtt -p "1883:1883" -d ncarlier/mqtt
docker run --name riemann -p "5555:5555" -d stealthly/docker-riemann
docker run --name riemann -p "5555:5555" -d blalor/riemann
docker run --name nats -p "4222:4222" -d nats
# Kill all docker containers, ignore errors

View File

@@ -25,20 +25,60 @@ new plugins.
## Installation:
You can download the binaries directly from the
[downloads](https://www.influxdata.com/downloads) page.
### Linux deb and rpm Packages:
A few alternate installs are available here as well:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf_1.1.1_amd64.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-1.1.1.x86_64.rpm
Latest (arm):
* https://dl.influxdata.com/telegraf/releases/telegraf_1.1.1_armhf.deb
* https://dl.influxdata.com/telegraf/releases/telegraf-1.1.1.armhf.rpm
##### Package Instructions:
* Telegraf binary is installed in `/usr/bin/telegraf`
* Telegraf daemon configuration file is in `/etc/telegraf/telegraf.conf`
* On sysv systems, the telegraf daemon can be controlled via
`service telegraf [action]`
* On systemd systems (such as Ubuntu 15+), the telegraf daemon can be
controlled via `systemctl [action] telegraf`
### yum/apt Repositories:
There is a yum/apt repo available for the whole InfluxData stack, see
[here](https://docs.influxdata.com/influxdb/latest/introduction/installation/#installation)
for instructions on setting up the repo. Once it is configured, you will be able
to use this repo to install & update telegraf.
### Linux tarballs:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-1.1.1_linux_amd64.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.1.1_linux_i386.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.1.1_linux_armhf.tar.gz
### FreeBSD tarball:
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-VERSION_freebsd_amd64.tar.gz
* https://dl.influxdata.com/telegraf/releases/telegraf-1.1.1_freebsd_amd64.tar.gz
### Ansible Role:
Ansible role: https://github.com/rossmcdonald/telegraf
### OSX via Homebrew:
```
brew update
brew install telegraf
```
### Windows Binaries (EXPERIMENTAL)
Latest:
* https://dl.influxdata.com/telegraf/releases/telegraf-1.1.1_windows_amd64.zip
### From Source:
Telegraf manages dependencies via [gdm](https://github.com/sparrc/gdm),
@@ -59,31 +99,31 @@ See usage with:
telegraf --help
```
#### Generate a telegraf config file:
### Generate a telegraf config file:
```
telegraf config > telegraf.conf
```
#### Generate config with only cpu input & influxdb output plugins defined
### Generate config with only cpu input & influxdb output plugins defined
```
telegraf --input-filter cpu --output-filter influxdb config
```
#### Run a single telegraf collection, outputting metrics to stdout
### Run a single telegraf collection, outputting metrics to stdout
```
telegraf --config telegraf.conf -test
```
#### Run telegraf with all plugins defined in config file
### Run telegraf with all plugins defined in config file
```
telegraf --config telegraf.conf
```
#### Run telegraf, enabling the cpu & memory input, and influxdb output plugins
### Run telegraf, enabling the cpu & memory input, and influxdb output plugins
```
telegraf --config telegraf.conf -input-filter cpu:mem -output-filter influxdb
@@ -124,7 +164,6 @@ configuration options.
* [ipmi_sensor](./plugins/inputs/ipmi_sensor)
* [iptables](./plugins/inputs/iptables)
* [jolokia](./plugins/inputs/jolokia)
* [kubernetes](./plugins/inputs/kubernetes)
* [leofs](./plugins/inputs/leofs)
* [lustre2](./plugins/inputs/lustre2)
* [mailchimp](./plugins/inputs/mailchimp)
@@ -182,7 +221,6 @@ Telegraf can also collect metrics via the following service plugins:
* [nsq_consumer](./plugins/inputs/nsq_consumer)
* [logparser](./plugins/inputs/logparser)
* [statsd](./plugins/inputs/statsd)
* [socket_listener](./plugins/inputs/socket_listener)
* [tail](./plugins/inputs/tail)
* [tcp_listener](./plugins/inputs/tcp_listener)
* [udp_listener](./plugins/inputs/udp_listener)
@@ -204,7 +242,7 @@ Telegraf can also collect metrics via the following service plugins:
* [influxdb](./plugins/outputs/influxdb)
* [amon](./plugins/outputs/amon)
* [amqp](./plugins/outputs/amqp) (rabbitmq)
* [amqp](./plugins/outputs/amqp)
* [aws kinesis](./plugins/outputs/kinesis)
* [aws cloudwatch](./plugins/outputs/cloudwatch)
* [datadog](./plugins/outputs/datadog)
@@ -220,9 +258,7 @@ Telegraf can also collect metrics via the following service plugins:
* [nsq](./plugins/outputs/nsq)
* [opentsdb](./plugins/outputs/opentsdb)
* [prometheus](./plugins/outputs/prometheus_client)
* [socket_writer](./plugins/outputs/socket_writer)
* [riemann](./plugins/outputs/riemann)
* [riemann_legacy](./plugins/outputs/riemann_legacy)
## Contributing

View File

@@ -4,7 +4,7 @@ import (
"log"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/selfstat"
)
@@ -18,14 +18,14 @@ type MetricMaker interface {
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
mType plugins.ValueType,
t time.Time,
) telegraf.Metric
) plugins.Metric
}
func NewAccumulator(
maker MetricMaker,
metrics chan telegraf.Metric,
metrics chan plugins.Metric,
) *accumulator {
acc := accumulator{
maker: maker,
@@ -36,7 +36,7 @@ func NewAccumulator(
}
type accumulator struct {
metrics chan telegraf.Metric
metrics chan plugins.Metric
maker MetricMaker
@@ -49,7 +49,7 @@ func (ac *accumulator) AddFields(
tags map[string]string,
t ...time.Time,
) {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Untyped, ac.getTime(t)); m != nil {
if m := ac.maker.MakeMetric(measurement, fields, tags, plugins.Untyped, ac.getTime(t)); m != nil {
ac.metrics <- m
}
}
@@ -60,7 +60,7 @@ func (ac *accumulator) AddGauge(
tags map[string]string,
t ...time.Time,
) {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Gauge, ac.getTime(t)); m != nil {
if m := ac.maker.MakeMetric(measurement, fields, tags, plugins.Gauge, ac.getTime(t)); m != nil {
ac.metrics <- m
}
}
@@ -71,7 +71,7 @@ func (ac *accumulator) AddCounter(
tags map[string]string,
t ...time.Time,
) {
if m := ac.maker.MakeMetric(measurement, fields, tags, telegraf.Counter, ac.getTime(t)); m != nil {
if m := ac.maker.MakeMetric(measurement, fields, tags, plugins.Counter, ac.getTime(t)); m != nil {
ac.metrics <- m
}
}

View File

@@ -8,7 +8,7 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/metric"
"github.com/stretchr/testify/assert"
@@ -17,7 +17,7 @@ import (
func TestAdd(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -48,7 +48,7 @@ func TestAdd(t *testing.T) {
func TestAddFields(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -79,7 +79,7 @@ func TestAccAddError(t *testing.T) {
log.SetOutput(errBuf)
defer log.SetOutput(os.Stderr)
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -100,7 +100,7 @@ func TestAccAddError(t *testing.T) {
func TestAddNoIntervalWithPrecision(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
a.SetPrecision(0, time.Second)
@@ -132,7 +132,7 @@ func TestAddNoIntervalWithPrecision(t *testing.T) {
func TestAddDisablePrecision(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -164,7 +164,7 @@ func TestAddDisablePrecision(t *testing.T) {
func TestAddNoPrecisionWithInterval(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -196,7 +196,7 @@ func TestAddNoPrecisionWithInterval(t *testing.T) {
func TestDifferentPrecisions(t *testing.T) {
now := time.Date(2006, time.February, 10, 12, 0, 0, 82912748, time.UTC)
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -243,7 +243,7 @@ func TestDifferentPrecisions(t *testing.T) {
func TestAddGauge(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -260,24 +260,24 @@ func TestAddGauge(t *testing.T) {
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge)
assert.Equal(t, testm.Type(), plugins.Gauge)
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Gauge)
assert.Equal(t, testm.Type(), plugins.Gauge)
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()),
actual)
assert.Equal(t, testm.Type(), telegraf.Gauge)
assert.Equal(t, testm.Type(), plugins.Gauge)
}
func TestAddCounter(t *testing.T) {
now := time.Now()
metrics := make(chan telegraf.Metric, 10)
metrics := make(chan plugins.Metric, 10)
defer close(metrics)
a := NewAccumulator(&TestMetricMaker{}, metrics)
@@ -294,19 +294,19 @@ func TestAddCounter(t *testing.T) {
testm := <-metrics
actual := testm.String()
assert.Contains(t, actual, "acctest value=101")
assert.Equal(t, testm.Type(), telegraf.Counter)
assert.Equal(t, testm.Type(), plugins.Counter)
testm = <-metrics
actual = testm.String()
assert.Contains(t, actual, "acctest,acc=test value=101")
assert.Equal(t, testm.Type(), telegraf.Counter)
assert.Equal(t, testm.Type(), plugins.Counter)
testm = <-metrics
actual = testm.String()
assert.Equal(t,
fmt.Sprintf("acctest,acc=test value=101 %d\n", now.UnixNano()),
actual)
assert.Equal(t, testm.Type(), telegraf.Counter)
assert.Equal(t, testm.Type(), plugins.Counter)
}
type TestMetricMaker struct {
@@ -319,20 +319,20 @@ func (tm *TestMetricMaker) MakeMetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
mType plugins.ValueType,
t time.Time,
) telegraf.Metric {
) plugins.Metric {
switch mType {
case telegraf.Untyped:
case plugins.Untyped:
if m, err := metric.New(measurement, tags, fields, t); err == nil {
return m
}
case telegraf.Counter:
if m, err := metric.New(measurement, tags, fields, t, telegraf.Counter); err == nil {
case plugins.Counter:
if m, err := metric.New(measurement, tags, fields, t, plugins.Counter); err == nil {
return m
}
case telegraf.Gauge:
if m, err := metric.New(measurement, tags, fields, t, telegraf.Gauge); err == nil {
case plugins.Gauge:
if m, err := metric.New(measurement, tags, fields, t, plugins.Gauge); err == nil {
return m
}
}

View File

@@ -8,10 +8,10 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/config"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/selfstat"
)
@@ -46,7 +46,7 @@ func NewAgent(config *config.Config) (*Agent, error) {
func (a *Agent) Connect() error {
for _, o := range a.Config.Outputs {
switch ot := o.Output.(type) {
case telegraf.ServiceOutput:
case plugins.ServiceOutput:
if err := ot.Start(); err != nil {
log.Printf("E! Service for output %s failed to start, exiting\n%s\n",
o.Name, err.Error())
@@ -76,7 +76,7 @@ func (a *Agent) Close() error {
for _, o := range a.Config.Outputs {
err = o.Output.Close()
switch ot := o.Output.(type) {
case telegraf.ServiceOutput:
case plugins.ServiceOutput:
ot.Stop()
}
}
@@ -101,7 +101,7 @@ func (a *Agent) gatherer(
shutdown chan struct{},
input *models.RunningInput,
interval time.Duration,
metricC chan telegraf.Metric,
metricC chan plugins.Metric,
) {
defer panicRecover(input)
@@ -157,13 +157,13 @@ func gatherWithTimeout(
select {
case err := <-done:
if err != nil {
acc.AddError(err)
log.Printf("E! ERROR in input [%s]: %s", input.Name(), err)
}
return
case <-ticker.C:
err := fmt.Errorf("took longer to collect than collection interval (%s)",
timeout)
acc.AddError(err)
log.Printf("E! ERROR: input [%s] took longer to collect than "+
"collection interval (%s)",
input.Name(), timeout)
continue
case <-shutdown:
return
@@ -176,7 +176,7 @@ func gatherWithTimeout(
func (a *Agent) Test() error {
shutdown := make(chan struct{})
defer close(shutdown)
metricC := make(chan telegraf.Metric)
metricC := make(chan plugins.Metric)
// dummy receiver for the point channel
go func() {
@@ -241,14 +241,14 @@ func (a *Agent) flush() {
}
// flusher monitors the metrics input channel and flushes on the minimum interval
func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) error {
func (a *Agent) flusher(shutdown chan struct{}, metricC chan plugins.Metric) error {
// Inelegant, but this sleep is to allow the Gather threads to run, so that
// the flusher will flush after metrics are collected.
time.Sleep(time.Millisecond * 300)
// create an output metric channel and a goroutine that continuously passes
// each metric onto the output plugins & aggregators.
outMetricC := make(chan telegraf.Metric, 100)
outMetricC := make(chan plugins.Metric, 100)
var wg sync.WaitGroup
wg.Add(1)
go func() {
@@ -286,7 +286,6 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) er
}()
ticker := time.NewTicker(a.Config.Agent.FlushInterval.Duration)
semaphore := make(chan struct{}, 1)
for {
select {
case <-shutdown:
@@ -296,22 +295,12 @@ func (a *Agent) flusher(shutdown chan struct{}, metricC chan telegraf.Metric) er
a.flush()
return nil
case <-ticker.C:
go func() {
select {
case semaphore <- struct{}{}:
internal.RandomSleep(a.Config.Agent.FlushJitter.Duration, shutdown)
a.flush()
<-semaphore
default:
// skipping this flush because one is already happening
log.Println("W! Skipping a scheduled flush because there is" +
" already a flush ongoing.")
}
}()
internal.RandomSleep(a.Config.Agent.FlushJitter.Duration, shutdown)
a.flush()
case metric := <-metricC:
// NOTE potential bottleneck here as we put each metric through the
// processors serially.
mS := []telegraf.Metric{metric}
mS := []plugins.Metric{metric}
for _, processor := range a.Config.Processors {
mS = processor.Apply(mS...)
}
@@ -332,13 +321,13 @@ func (a *Agent) Run(shutdown chan struct{}) error {
a.Config.Agent.Hostname, a.Config.Agent.FlushInterval.Duration)
// channel shared between all input threads for accumulating metrics
metricC := make(chan telegraf.Metric, 100)
metricC := make(chan plugins.Metric, 100)
// Start all ServicePlugins
for _, input := range a.Config.Inputs {
input.SetDefaultTags(a.Config.Tags)
switch p := input.Input.(type) {
case telegraf.ServiceInput:
case plugins.ServiceInput:
acc := NewAccumulator(input, metricC)
// Service input plugins should set their own precision of their
// metrics.

View File

@@ -4,9 +4,9 @@ machine:
post:
- sudo service zookeeper stop
- go version
- sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.8rc3.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.8rc3.linux-amd64.tar.gz
- go version | grep 1.7.4 || sudo rm -rf /usr/local/go
- wget https://storage.googleapis.com/golang/go1.8beta1.linux-amd64.tar.gz
- sudo tar -C /usr/local -xzf go1.8beta1.linux-amd64.tar.gz
- go version
dependencies:

View File

@@ -16,11 +16,14 @@ import (
"github.com/influxdata/telegraf/agent"
"github.com/influxdata/telegraf/internal/config"
"github.com/influxdata/telegraf/logger"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/aggregators"
_ "github.com/influxdata/telegraf/plugins/aggregators/all"
"github.com/influxdata/telegraf/plugins/inputs"
_ "github.com/influxdata/telegraf/plugins/inputs/all"
"github.com/influxdata/telegraf/plugins/outputs"
_ "github.com/influxdata/telegraf/plugins/outputs/all"
"github.com/influxdata/telegraf/plugins/processors"
_ "github.com/influxdata/telegraf/plugins/processors/all"
"github.com/kardianos/service"
)
@@ -114,17 +117,94 @@ Examples:
var stop chan struct{}
func reloadLoop(
stop chan struct{},
inputFilters []string,
outputFilters []string,
aggregatorFilters []string,
processorFilters []string,
) {
var srvc service.Service
type program struct{}
func reloadLoop(stop chan struct{}, s service.Service) {
defer func() {
if service.Interactive() {
os.Exit(0)
}
return
}()
reload := make(chan bool, 1)
reload <- true
for <-reload {
reload <- false
flag.Parse()
args := flag.Args()
var inputFilters []string
if *fInputFilters != "" {
inputFilter := strings.TrimSpace(*fInputFilters)
inputFilters = strings.Split(":"+inputFilter+":", ":")
}
var outputFilters []string
if *fOutputFilters != "" {
outputFilter := strings.TrimSpace(*fOutputFilters)
outputFilters = strings.Split(":"+outputFilter+":", ":")
}
var aggregatorFilters []string
if *fAggregatorFilters != "" {
aggregatorFilter := strings.TrimSpace(*fAggregatorFilters)
aggregatorFilters = strings.Split(":"+aggregatorFilter+":", ":")
}
var processorFilters []string
if *fProcessorFilters != "" {
processorFilter := strings.TrimSpace(*fProcessorFilters)
processorFilters = strings.Split(":"+processorFilter+":", ":")
}
if len(args) > 0 {
switch args[0] {
case "version":
fmt.Printf("Telegraf v%s (git: %s %s)\n", version, branch, commit)
return
case "config":
config.PrintSampleConfig(
inputFilters,
outputFilters,
aggregatorFilters,
processorFilters,
)
return
}
}
// switch for flags which just do something and exit immediately
switch {
case *fOutputList:
fmt.Println("Available Output Plugins:")
for k, _ := range outputs.Outputs {
fmt.Printf(" %s\n", k)
}
return
case *fInputList:
fmt.Println("Available Input Plugins:")
for k, _ := range inputs.Inputs {
fmt.Printf(" %s\n", k)
}
return
case *fVersion:
fmt.Printf("Telegraf v%s (git: %s %s)\n", version, branch, commit)
return
case *fSampleConfig:
config.PrintSampleConfig(
inputFilters,
outputFilters,
aggregatorFilters,
processorFilters,
)
return
case *fUsage != "":
if err := config.PrintInputConfig(*fUsage); err != nil {
if err2 := config.PrintOutputConfig(*fUsage); err2 != nil {
log.Fatalf("E! %s and %s", err, err2)
}
}
return
}
// If no other options are specified, load the config file and run.
c := config.NewConfig()
@@ -165,7 +245,7 @@ func reloadLoop(
if err != nil {
log.Fatal("E! " + err.Error())
}
os.Exit(0)
return
}
err = ag.Connect()
@@ -199,21 +279,14 @@ func reloadLoop(
log.Printf("I! Tags enabled: %s", c.ListTags())
if *fPidfile != "" {
f, err := os.OpenFile(*fPidfile, os.O_CREATE|os.O_WRONLY, 0644)
f, err := os.Create(*fPidfile)
if err != nil {
log.Printf("E! Unable to create pidfile: %s", err)
} else {
fmt.Fprintf(f, "%d\n", os.Getpid())
f.Close()
defer func() {
err := os.Remove(*fPidfile)
if err != nil {
log.Printf("E! Unable to remove pidfile: %s", err)
}
}()
log.Fatalf("E! Unable to create pidfile: %s", err)
}
fmt.Fprintf(f, "%d\n", os.Getpid())
f.Close()
}
ag.Run(shutdown)
@@ -225,26 +298,14 @@ func usageExit(rc int) {
os.Exit(rc)
}
type program struct {
inputFilters []string
outputFilters []string
aggregatorFilters []string
processorFilters []string
}
func (p *program) Start(s service.Service) error {
srvc = s
go p.run()
return nil
}
func (p *program) run() {
stop = make(chan struct{})
reloadLoop(
stop,
p.inputFilters,
p.outputFilters,
p.aggregatorFilters,
p.processorFilters,
)
reloadLoop(stop, srvc)
}
func (p *program) Stop(s service.Service) error {
close(stop)
@@ -272,19 +333,59 @@ func loadExternalPlugins(dir string) error {
}
// Load plugin.
_, err = plugin.Open(pth)
p, err := plugin.Open(pth)
if err != nil {
return fmt.Errorf("error opening [%s]: %s", pth, err)
return err
}
// Register plugin.
if err := registerPlugin(dir, pth, p); err != nil {
return err
}
return nil
})
}
// registerPlugin registers an external plugin with telegraf.
func registerPlugin(pluginsDir, filePath string, p *plugin.Plugin) error {
// Clean the file path and make sure it's relative to the root plugins directory.
// This is done because plugin names are namespaced using the directory
// structure. E.g., if the root plugin directory, passed in the pluginsDir
// argument, is '/home/jdoe/bin/telegraf/plugins' and we're registering plugin
// '/home/jdoe/bin/telegraf/plugins/input/mysql.so', the plugin is registered as 'plugins.input.mysql'.
pluginsDir = filepath.Clean(pluginsDir)
parentDir, _ := filepath.Split(pluginsDir)
var err error
if filePath, err = filepath.Rel(parentDir, filePath); err != nil {
return err
}
// Strip the file extension and save it.
ext := path.Ext(filePath)
filePath = strings.TrimSuffix(filePath, ext)
// Convert path separators to "." to generate a plugin name namespaced by directory names.
name := strings.Replace(filePath, string(os.PathSeparator), ".", -1)
if create, err := p.Lookup("NewInput"); err == nil {
inputs.Add(name, inputs.Creator(create.(func() plugins.Input)))
} else if create, err := p.Lookup("NewOutput"); err == nil {
outputs.Add(name, outputs.Creator(create.(func() plugins.Output)))
} else if create, err := p.Lookup("NewProcessor"); err == nil {
processors.Add(name, processors.Creator(create.(func() plugins.Processor)))
} else if create, err := p.Lookup("NewAggregator"); err == nil {
aggregators.Add(name, aggregators.Creator(create.(func() plugins.Aggregator)))
} else {
return fmt.Errorf("not a telegraf plugin: %s%s", filePath, ext)
}
log.Printf("I! Registered: %s (from %s%s)\n", name, filePath, ext)
return nil
}
func main() {
flag.Usage = func() { usageExit(0) }
flag.Parse()
args := flag.Args()
// Load external plugins, if requested.
if *fPlugins != "" {
@@ -298,72 +399,6 @@ func main() {
}
}
inputFilters, outputFilters := []string{}, []string{}
if *fInputFilters != "" {
inputFilters = strings.Split(":"+strings.TrimSpace(*fInputFilters)+":", ":")
}
if *fOutputFilters != "" {
outputFilters = strings.Split(":"+strings.TrimSpace(*fOutputFilters)+":", ":")
}
aggregatorFilters, processorFilters := []string{}, []string{}
if *fAggregatorFilters != "" {
aggregatorFilters = strings.Split(":"+strings.TrimSpace(*fAggregatorFilters)+":", ":")
}
if *fProcessorFilters != "" {
processorFilters = strings.Split(":"+strings.TrimSpace(*fProcessorFilters)+":", ":")
}
if len(args) > 0 {
switch args[0] {
case "version":
fmt.Printf("Telegraf v%s (git: %s %s)\n", version, branch, commit)
return
case "config":
config.PrintSampleConfig(
inputFilters,
outputFilters,
aggregatorFilters,
processorFilters,
)
return
}
}
// switch for flags which just do something and exit immediately
switch {
case *fOutputList:
fmt.Println("Available Output Plugins:")
for k, _ := range outputs.Outputs {
fmt.Printf(" %s\n", k)
}
return
case *fInputList:
fmt.Println("Available Input Plugins:")
for k, _ := range inputs.Inputs {
fmt.Printf(" %s\n", k)
}
return
case *fVersion:
fmt.Printf("Telegraf v%s (git: %s %s)\n", version, branch, commit)
return
case *fSampleConfig:
config.PrintSampleConfig(
inputFilters,
outputFilters,
aggregatorFilters,
processorFilters,
)
return
case *fUsage != "":
err := config.PrintInputConfig(*fUsage)
err2 := config.PrintOutputConfig(*fUsage)
if err != nil && err2 != nil {
log.Fatalf("E! %s and %s", err, err2)
}
return
}
if runtime.GOOS == "windows" {
svcConfig := &service.Config{
Name: "telegraf",
@@ -373,12 +408,7 @@ func main() {
Arguments: []string{"-config", "C:\\Program Files\\Telegraf\\telegraf.conf"},
}
prg := &program{
inputFilters: inputFilters,
outputFilters: outputFilters,
aggregatorFilters: aggregatorFilters,
processorFilters: processorFilters,
}
prg := &program{}
s, err := service.New(prg, svcConfig)
if err != nil {
log.Fatal("E! " + err.Error())
@@ -389,14 +419,10 @@ func main() {
if *fConfig != "" {
(*svcConfig).Arguments = []string{"-config", *fConfig}
}
if *fConfigDirectory != "" {
(*svcConfig).Arguments = append((*svcConfig).Arguments, "-config-directory", *fConfigDirectory)
}
err := service.Control(s, *fService)
if err != nil {
log.Fatal("E! " + err.Error())
}
os.Exit(0)
} else {
err = s.Run()
if err != nil {
@@ -405,12 +431,6 @@ func main() {
}
} else {
stop = make(chan struct{})
reloadLoop(
stop,
inputFilters,
outputFilters,
aggregatorFilters,
processorFilters,
)
reloadLoop(stop, nil)
}
}
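
The `registerPlugin` helper above looks up an exported constructor symbol (`NewInput`, `NewOutput`, `NewProcessor`, or `NewAggregator`) in each shared object and registers it under a name derived from the file's path inside the plugin directory. As a rough illustration of what such a shared object's source might look like, here is a minimal sketch of a hypothetical external input plugin; the `Example` type, the measurement name, and the build command are illustrative only, and it assumes the `plugins.Input` interface keeps the usual `SampleConfig`/`Description`/`Gather` methods.

```
// example.go - a hypothetical external input plugin, not part of this change.
// With the Go 1.8 toolchain this branch moves CI to, it could be built with:
//   go build -buildmode=plugin -o input/example.so example.go
package main

import "github.com/influxdata/telegraf/plugins"

type Example struct{}

func (e *Example) SampleConfig() string { return "" }
func (e *Example) Description() string  { return "hypothetical external input plugin" }

// Gather emits one constant field per collection interval.
func (e *Example) Gather(acc plugins.Accumulator) error {
	acc.AddFields("example", map[string]interface{}{"value": 42}, nil)
	return nil
}

// NewInput is the symbol registerPlugin looks up and asserts to func() plugins.Input.
func NewInput() plugins.Input { return &Example{} }
```

Built to, say, `<plugins-dir>/input/example.so`, the name derivation in `registerPlugin` would register it as `plugins.input.example` when the root plugin directory is named `plugins`.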

View File

@@ -140,6 +140,8 @@
# # retention_policy = "default"
# ## InfluxDB database
# # database = "telegraf"
# ## InfluxDB precision
# # precision = "s"
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
@@ -188,11 +190,6 @@
# # timeout = "5s"
# # Send metrics to nowhere at all
# [[outputs.discard]]
# # no configuration
# # Send telegraf metrics to file(s)
# [[outputs.file]]
# ## Files to write to, "stdout" is a specially handled file.
@@ -222,7 +219,7 @@
# # Send telegraf metrics to graylog(s)
# [[outputs.graylog]]
# ## UDP endpoint for your graylog instance.
# ## Udp endpoint for your graylog instance.
# servers = ["127.0.0.1:12201", "192.168.1.1:12201"]
@@ -315,13 +312,9 @@
# streamname = "StreamName"
# ## PartitionKey as used for sharding data.
# partitionkey = "PartitionKey"
#
# ## Data format to output.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
# ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
# data_format = "influx"
#
# ## format of the Data payload in the kinesis PutRecord, supported
# ## String and Custom.
# format = "string"
# ## debug will show upstream aws messages.
# debug = false
@@ -358,9 +351,6 @@
# # username = "telegraf"
# # password = "metricsmetricsmetricsmetrics"
#
# ## client ID, if not set a random ID is generated
# # client_id = ""
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
@@ -438,44 +428,10 @@
# [[outputs.prometheus_client]]
# ## Address to listen on
# # listen = ":9126"
#
# ## Interval to expire metrics and not deliver to prometheus, 0 == no expiration
# # expiration_interval = "60s"
# # Configuration for Riemann server to send metrics to
# # Configuration for the Riemann server to send metrics to
# [[outputs.riemann]]
# ## The full TCP or UDP URL of the Riemann server
# url = "tcp://localhost:5555"
#
# ## Riemann event TTL, floating-point time in seconds.
# ## Defines how long an event is considered valid in Riemann
# # ttl = 30.0
#
# ## Separator to use between measurement and field name in Riemann service name
# ## This does not have any effect if 'measurement_as_attribute' is set to 'true'
# separator = "/"
#
# ## Set measurement name as Riemann attribute 'measurement', instead of prepending it to the Riemann service name
# # measurement_as_attribute = false
#
# ## Send string metrics as Riemann event states.
# ## Unless enabled all string metrics will be ignored
# # string_as_state = false
#
# ## A list of tag keys whose values get sent as Riemann tags.
# ## If empty, all Telegraf tag values will be sent as tags
# # tag_keys = ["telegraf","custom_tag"]
#
# ## Additional Riemann tags to send.
# # tags = ["telegraf-output"]
#
# ## Description for Riemann event
# # description_text = "metrics collected from telegraf"
# # Configuration for the legacy Riemann plugin
# [[outputs.riemann_legacy]]
# ## URL of server
# url = "localhost:5555"
# ## transport protocol to use either tcp or udp
@@ -582,19 +538,6 @@
# ## An array of Apache status URI to gather stats.
# ## Default is "http://localhost/server-status?auto".
# urls = ["http://localhost/server-status?auto"]
# ## user credentials for basic HTTP authentication
# username = "myuser"
# password = "mypassword"
#
# ## Timeout for the complete connection and response time in seconds
# response_timeout = "25s" ## default to 5 seconds
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
# # ssl_key = "/etc/telegraf/key.pem"
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
# # Read metrics of bcache from stats_total and dirty_data
@@ -697,13 +640,6 @@
# #profile = ""
# #shared_credential_file = ""
#
# # The minimum period for Cloudwatch metrics is 1 minute (60s). However not all
# # metrics are made available to the 1 minute period. Some are collected at
# # 3 minute and 5 minutes intervals. See https://aws.amazon.com/cloudwatch/faqs/#monitoring.
# # Note that if a period is configured that is smaller than the minimum for a
# # particular metric, that metric will not be returned by the Cloudwatch API
# # and will not be collected by Telegraf.
# #
# ## Requested CloudWatch aggregation Period (required - must be a multiple of 60s)
# period = "5m"
#
@@ -853,13 +789,13 @@
# ## of the cluster.
# local = true
#
# ## Set cluster_health to true when you want to also obtain cluster health stats
# ## set cluster_health to true when you want to also obtain cluster health stats
# cluster_health = false
#
# ## Set cluster_stats to true when you want to also obtain cluster stats from the
# ## Set cluster_stats to true when you want to obtain cluster stats from the
# ## Master node.
# cluster_stats = false
#
# ## Optional SSL Config
# # ssl_ca = "/etc/telegraf/ca.pem"
# # ssl_cert = "/etc/telegraf/cert.pem"
@@ -1044,12 +980,6 @@
# timeout = "5s"
# # Collect statistics about itself
# [[inputs.internal]]
# ## If true, collect telegraf memory stats.
# # collect_memstats = true
# # Read metrics from one or many bare metal servers
# [[inputs.ipmi_sensor]]
# ## specify servers via a url matching:
@@ -1063,9 +993,8 @@
# # Read JMX metrics through Jolokia
# [[inputs.jolokia]]
# ## This is the context root used to compose the jolokia url
# ## NOTE that Jolokia requires a trailing slash at the end of the context root
# ## NOTE that your jolokia security policy must allow for POST requests.
# context = "/jolokia/"
# context = "/jolokia"
#
# ## This specifies the mode used
# # mode = "proxy"
@@ -1077,15 +1006,6 @@
# # host = "127.0.0.1"
# # port = "8080"
#
# ## Optional http timeouts
# ##
# ## response_header_timeout, if non-zero, specifies the amount of time to wait
# ## for a server's response headers after fully writing the request.
# # response_header_timeout = "3s"
# ##
# ## client_timeout specifies a time limit for requests made by this client.
# ## Includes connection time, any redirects, and reading the response body.
# # client_timeout = "4s"
#
# ## List of servers exposing jolokia read service
# [[inputs.jolokia.servers]]
@@ -1224,8 +1144,8 @@
# ## [username[:password]@][protocol[(address)]]/[?tls=[true|false|skip-verify]]
# ## see https://github.com/go-sql-driver/mysql#dsn-data-source-name
# ## e.g.
# ## servers = ["user:passwd@tcp(127.0.0.1:3306)/?tls=false"]
# ## servers = ["user@tcp(127.0.0.1:3306)/?tls=false"]
# ## db_user:passwd@tcp(127.0.0.1:3306)/?tls=false
# ## db_user@tcp(127.0.0.1:3306)/?tls=false
# #
# ## If no servers are specified, then localhost is used as the host.
# servers = ["tcp(127.0.0.1:3306)/"]
@@ -1286,24 +1206,18 @@
# # TCP or UDP 'ping' given url and collect response time in seconds
# [[inputs.net_response]]
# ## Protocol, must be "tcp" or "udp"
# ## NOTE: because the "udp" protocol does not respond to requests, it requires
# ## a send/expect string pair (see below).
# protocol = "tcp"
# ## Server address (default localhost)
# address = "localhost:80"
# address = "github.com:80"
# ## Set timeout
# timeout = "1s"
#
# ## Optional string sent to the server
# # send = "ssh"
# ## Optional expected string in answer
# # expect = "ssh"
# ## Set read timeout (only used if expecting a response)
# read_timeout = "1s"
#
# ## The following options are required for UDP checks. For TCP, they are
# ## optional. The plugin will send the given string to the server and then
# ## expect to receive the given 'expect' string back.
# ## string sent to the server
# # send = "ssh"
# ## expected string in answer
# # expect = "ssh"
# # Read TCP metrics such as established, time wait and sockets counts.
@@ -1505,8 +1419,6 @@
# prefix = ""
# ## comment this out if you want raw cpu_time stats
# fielddrop = ["cpu_time_*"]
# ## This is optional; moves pid into a tag instead of a field
# pid_tag = false
# # Read metrics from one or many prometheus clients
@@ -1517,9 +1429,6 @@
# ## Use bearer token for authorization
# # bearer_token = /path/to/bearer/token
#
# ## Specify timeout duration for slower prometheus clients (default is 3s)
# # response_timeout = "3s"
#
# ## Optional SSL Config
# # ssl_ca = /path/to/cafile
# # ssl_cert = /path/to/certfile
@@ -1548,16 +1457,6 @@
# ## Use SSL but skip chain & host verification
# # insecure_skip_verify = false
#
# ## Optional request timeouts
# ##
# ## ResponseHeaderTimeout, if non-zero, specifies the amount of time to wait
# ## for a server's response headers after fully writing the request.
# # header_timeout = "3s"
# ##
# ## client_timeout specifies a time limit for requests made by this client.
# ## Includes connection time, any redirects, and reading the response body.
# # client_timeout = "4s"
#
# ## A list of nodes to pull metrics about. If not specified, metrics for
# ## all nodes are gathered.
# # nodes = ["rabbit@node1", "rabbit@node2"]
@@ -1980,19 +1879,14 @@
# [[inputs.statsd]]
# ## Address and port to host UDP listener on
# service_address = ":8125"
#
# ## The following configuration options control when telegraf clears its cache
# ## of previous values. If set to false, then telegraf will only clear its
# ## cache when the daemon is restarted.
# ## Reset gauges every interval (default=true)
# delete_gauges = true
# ## Reset counters every interval (default=true)
# delete_counters = true
# ## Reset sets every interval (default=true)
# delete_sets = true
# ## Reset timings & histograms every interval (default=true)
# ## Delete gauges every interval (default=false)
# delete_gauges = false
# ## Delete counters every interval (default=false)
# delete_counters = false
# ## Delete sets every interval (default=false)
# delete_sets = false
# ## Delete timings & histograms every interval (default=true)
# delete_timings = true
#
# ## Percentiles to calculate for timing & histogram stats
# percentiles = [90]
#
@@ -2033,8 +1927,6 @@
# files = ["/var/mymetrics.out"]
# ## Read file from beginning.
# from_beginning = false
# ## Whether file is a named pipe
# pipe = false
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
@@ -2071,10 +1963,6 @@
# ## UDP listener will start dropping packets.
# # allowed_pending_messages = 10000
#
# ## Set the buffer size of the UDP connection outside of OS default (in bytes)
# ## If set to 0, take OS default
# udp_buffer_size = 16777216
#
# ## Data format to consume.
# ## Each data format has its own unique set of configuration options, read
# ## more about them here:
@@ -2098,4 +1986,3 @@
#
# [inputs.webhooks.rollbar]
# path = "/rollbar"

View File

@@ -105,11 +105,10 @@
"% Privileged Time",
"% User Time",
"% Processor Time",
"% DPC Time",
]
Measurement = "win_cpu"
# Set to true to include _Total instance when querying for all (*).
IncludeTotal=true
#IncludeTotal=false
[[inputs.win_perf_counters.object]]
# Disk times and queues
@@ -119,51 +118,19 @@
"% Idle Time",
"% Disk Time","% Disk Read Time",
"% Disk Write Time",
"% User Time",
"Current Disk Queue Length",
"% Free Space",
"Free Megabytes",
]
Measurement = "win_disk"
# Set to true to include _Total instance when querying for all (*).
#IncludeTotal=false
[[inputs.win_perf_counters.object]]
ObjectName = "PhysicalDisk"
Instances = ["*"]
Counters = [
"Disk Read Bytes/sec",
"Disk Write Bytes/sec",
"Current Disk Queue Length",
"Disk Reads/sec",
"Disk Writes/sec",
"% Disk Time",
"% Disk Read Time",
"% Disk Write Time",
]
Measurement = "win_diskio"
[[inputs.win_perf_counters.object]]
ObjectName = "Network Interface"
Instances = ["*"]
Counters = [
"Bytes Received/sec",
"Bytes Sent/sec",
"Packets Received/sec",
"Packets Sent/sec",
"Packets Received Discarded",
"Packets Outbound Discarded",
"Packets Received Errors",
"Packets Outbound Errors",
]
Measurement = "win_net"
[[inputs.win_perf_counters.object]]
ObjectName = "System"
Counters = [
"Context Switches/sec",
"System Calls/sec",
"Processor Queue Length",
"System Up Time",
]
Instances = ["------"]
Measurement = "win_system"
@@ -183,10 +150,6 @@
"Transition Faults/sec",
"Pool Nonpaged Bytes",
"Pool Paged Bytes",
"Standby Cache Reserve Bytes",
"Standby Cache Normal Priority Bytes",
"Standby Cache Core Bytes",
]
# Use 6 x - to remove the Instance bit from the query.
Instances = ["------"]
@@ -194,31 +157,6 @@
# Set to true to include _Total instance when querying for all (*).
#IncludeTotal=false
[[inputs.win_perf_counters.object]]
# Example query where the Instance portion must be removed to get data back,
# such as from the Paging File object.
ObjectName = "Paging File"
Counters = [
"% Usage",
]
Instances = ["_Total"]
Measurement = "win_swap"
[[inputs.win_perf_counters.object]]
ObjectName = "Network Interface"
Instances = ["*"]
Counters = [
"Bytes Sent/sec",
"Bytes Received/sec",
"Packets Sent/sec",
"Packets Received/sec",
"Packets Received Discarded",
"Packets Received Errors",
"Packets Outbound Discarded",
"Packets Outbound Errors",
]
# Windows system plugins using WMI (disabled by default, using
# win_perf_counters over WMI is recommended)

View File

@@ -3,7 +3,7 @@ package buffer
import (
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/selfstat"
)
@@ -14,7 +14,7 @@ var (
// Buffer is an object for storing metrics in a circular buffer.
type Buffer struct {
buf chan telegraf.Metric
buf chan plugins.Metric
mu sync.Mutex
}
@@ -24,7 +24,7 @@ type Buffer struct {
// called when the buffer is full, then the oldest metric(s) will be dropped.
func NewBuffer(size int) *Buffer {
return &Buffer{
buf: make(chan telegraf.Metric, size),
buf: make(chan plugins.Metric, size),
}
}
@@ -39,7 +39,7 @@ func (b *Buffer) Len() int {
}
// Add adds metrics to the buffer.
func (b *Buffer) Add(metrics ...telegraf.Metric) {
func (b *Buffer) Add(metrics ...plugins.Metric) {
for i, _ := range metrics {
MetricsWritten.Incr(1)
select {
@@ -55,10 +55,10 @@ func (b *Buffer) Add(metrics ...telegraf.Metric) {
// Batch returns a batch of metrics of size batchSize.
// the batch will be of maximum length batchSize. It can be less than batchSize,
// if the length of Buffer is less than batchSize.
func (b *Buffer) Batch(batchSize int) []telegraf.Metric {
func (b *Buffer) Batch(batchSize int) []plugins.Metric {
b.mu.Lock()
n := min(len(b.buf), batchSize)
out := make([]telegraf.Metric, n)
out := make([]plugins.Metric, n)
for i := 0; i < n; i++ {
out[i] = <-b.buf
}
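
For context, the buffer shown above is a bounded channel used as a circular store: `Add` drops the oldest metric(s) once the channel is full, and `Batch` drains up to `batchSize` metrics under the mutex, oldest first. A minimal usage sketch follows; the capacity, metric names, and the `internal/buffer` import path are assumptions, and it reuses the `testutil.TestMetric` helper seen in the buffer tests below.

```
package buffer_test

import (
	"fmt"
	"testing"

	"github.com/influxdata/telegraf/internal/buffer" // assumed package location
	"github.com/influxdata/telegraf/testutil"
)

// TestBufferSketch is a hypothetical usage example, not part of this change.
func TestBufferSketch(t *testing.T) {
	b := buffer.NewBuffer(3) // room for three metrics
	b.Add(
		testutil.TestMetric(1, "m1"),
		testutil.TestMetric(2, "m2"),
		testutil.TestMetric(3, "m3"),
		testutil.TestMetric(4, "m4"), // buffer is full here, so the oldest metric is dropped
	)
	batch := b.Batch(2)              // drain up to two metrics, oldest first
	fmt.Println(len(batch), b.Len()) // prints "2 1": two in the batch, one still buffered
}
```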

View File

@@ -3,13 +3,13 @@ package buffer
import (
"testing"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
)
var metricList = []telegraf.Metric{
var metricList = []plugins.Metric{
testutil.TestMetric(2, "mymetric1"),
testutil.TestMetric(1, "mymetric2"),
testutil.TestMetric(11, "mymetric3"),

View File

@@ -15,9 +15,9 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/models"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/aggregators"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/outputs"
@@ -399,7 +399,7 @@ func printFilteredInputs(inputFilters []string, commented bool) {
sort.Strings(pnames)
// cache service inputs to print them at the end
servInputs := make(map[string]telegraf.ServiceInput)
servInputs := make(map[string]plugins.ServiceInput)
// for alphabetical looping:
servInputNames := []string{}
@@ -409,7 +409,7 @@ func printFilteredInputs(inputFilters []string, commented bool) {
input := creator()
switch p := input.(type) {
case telegraf.ServiceInput:
case plugins.ServiceInput:
servInputs[pname] = p
servInputNames = append(servInputNames, pname)
continue

View File

@@ -28,7 +28,7 @@ func TestCompileAndMatch(t *testing.T) {
require.NoError(t, err)
matches := g1.Match()
assert.Len(t, matches, 6)
assert.Len(t, matches, 3)
matches = g2.Match()
assert.Len(t, matches, 2)
matches = g3.Match()
@@ -56,16 +56,6 @@ func TestFindRootDir(t *testing.T) {
}
}
func TestFindNestedTextFile(t *testing.T) {
dir := getTestdataDir()
// test super asterisk
g1, err := Compile(dir + "/**.txt")
require.NoError(t, err)
matches := g1.Match()
assert.Len(t, matches, 1)
}
func getTestdataDir() string {
_, filename, _, _ := runtime.Caller(1)
return strings.Replace(filename, "globpath_test.go", "testdata", 1)

View File

@@ -5,8 +5,8 @@ import (
"math"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/metric"
"github.com/influxdata/telegraf/plugins"
)
// makemetric is used by both RunningAggregator & RunningInput
@@ -32,9 +32,9 @@ func makemetric(
daemonTags map[string]string,
filter Filter,
applyFilter bool,
mType telegraf.ValueType,
mType plugins.ValueType,
t time.Time,
) telegraf.Metric {
) plugins.Metric {
if len(fields) == 0 || len(measurement) == 0 {
return nil
}

View File

@@ -3,28 +3,28 @@ package models
import (
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/metric"
"github.com/influxdata/telegraf/plugins"
)
type RunningAggregator struct {
a telegraf.Aggregator
a plugins.Aggregator
Config *AggregatorConfig
metrics chan telegraf.Metric
metrics chan plugins.Metric
periodStart time.Time
periodEnd time.Time
}
func NewRunningAggregator(
a telegraf.Aggregator,
a plugins.Aggregator,
conf *AggregatorConfig,
) *RunningAggregator {
return &RunningAggregator{
a: a,
Config: conf,
metrics: make(chan telegraf.Metric, 100),
metrics: make(chan plugins.Metric, 100),
}
}
@@ -52,9 +52,9 @@ func (r *RunningAggregator) MakeMetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
mType plugins.ValueType,
t time.Time,
) telegraf.Metric {
) plugins.Metric {
m := makemetric(
measurement,
fields,
@@ -80,7 +80,7 @@ func (r *RunningAggregator) MakeMetric(
// Add applies the given metric to the aggregator.
// Before applying to the plugin, it will run any defined filters on the metric.
// Add returns true if the original metric should be dropped.
func (r *RunningAggregator) Add(in telegraf.Metric) bool {
func (r *RunningAggregator) Add(in plugins.Metric) bool {
if r.Config.Filter.IsActive() {
// check if the aggregator should apply this metric
name := in.Name()
@@ -98,11 +98,11 @@ func (r *RunningAggregator) Add(in telegraf.Metric) bool {
r.metrics <- in
return r.Config.DropOriginal
}
func (r *RunningAggregator) add(in telegraf.Metric) {
func (r *RunningAggregator) add(in plugins.Metric) {
r.a.Add(in)
}
func (r *RunningAggregator) push(acc telegraf.Accumulator) {
func (r *RunningAggregator) push(acc plugins.Accumulator) {
r.a.Push(acc)
}
@@ -113,7 +113,7 @@ func (r *RunningAggregator) reset() {
// Run runs the running aggregator, listens for incoming metrics, and waits
// for period ticks to tell it when to push and reset the aggregator.
func (r *RunningAggregator) Run(
acc telegraf.Accumulator,
acc plugins.Accumulator,
shutdown chan struct{},
) {
// The start of the period is truncated to the nearest second.

View File

@@ -7,7 +7,7 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
@@ -30,7 +30,7 @@ func TestAdd(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
time.Now().Add(time.Millisecond*150),
)
assert.False(t, ra.Add(m))
@@ -62,7 +62,7 @@ func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
time.Now().Add(-time.Hour),
)
assert.False(t, ra.Add(m))
@@ -72,7 +72,7 @@ func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
time.Now().Add(time.Hour),
)
assert.False(t, ra.Add(m))
@@ -82,7 +82,7 @@ func TestAddMetricsOutsideCurrentPeriod(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
time.Now().Add(time.Millisecond*50),
)
assert.False(t, ra.Add(m))
@@ -120,7 +120,7 @@ func TestAddAndPushOnePeriod(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
time.Now().Add(time.Millisecond*100),
)
assert.False(t, ra.Add(m))
@@ -151,7 +151,7 @@ func TestAddDropOriginal(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
time.Now(),
)
assert.True(t, ra.Add(m))
@@ -161,7 +161,7 @@ func TestAddDropOriginal(t *testing.T) {
"foobar",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
time.Now(),
)
assert.False(t, ra.Add(m2))
@@ -179,7 +179,7 @@ func TestMakeMetricA(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -190,14 +190,14 @@ func TestMakeMetricA(t *testing.T) {
assert.Equal(
t,
m.Type(),
telegraf.Untyped,
plugins.Untyped,
)
m = ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Counter,
plugins.Counter,
now,
)
assert.Equal(
@@ -208,14 +208,14 @@ func TestMakeMetricA(t *testing.T) {
assert.Equal(
t,
m.Type(),
telegraf.Counter,
plugins.Counter,
)
m = ra.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Gauge,
plugins.Gauge,
now,
)
assert.Equal(
@@ -226,7 +226,7 @@ func TestMakeMetricA(t *testing.T) {
assert.Equal(
t,
m.Type(),
telegraf.Gauge,
plugins.Gauge,
)
}
@@ -240,14 +240,14 @@ func (t *TestAggregator) Reset() {
atomic.StoreInt64(&t.sum, 0)
}
func (t *TestAggregator) Push(acc telegraf.Accumulator) {
func (t *TestAggregator) Push(acc plugins.Accumulator) {
acc.AddFields("TestMetric",
map[string]interface{}{"sum": t.sum},
map[string]string{},
)
}
func (t *TestAggregator) Add(in telegraf.Metric) {
func (t *TestAggregator) Add(in plugins.Metric) {
for _, v := range in.Fields() {
if vi, ok := v.(int64); ok {
atomic.AddInt64(&t.sum, vi)

View File

@@ -4,14 +4,14 @@ import (
"fmt"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/selfstat"
)
var GlobalMetricsGathered = selfstat.Register("agent", "metrics_gathered", map[string]string{})
type RunningInput struct {
Input telegraf.Input
Input plugins.Input
Config *InputConfig
trace bool
@@ -21,7 +21,7 @@ type RunningInput struct {
}
func NewRunningInput(
input telegraf.Input,
input plugins.Input,
config *InputConfig,
) *RunningInput {
return &RunningInput{
@@ -56,9 +56,9 @@ func (r *RunningInput) MakeMetric(
measurement string,
fields map[string]interface{},
tags map[string]string,
mType telegraf.ValueType,
mType plugins.ValueType,
t time.Time,
) telegraf.Metric {
) plugins.Metric {
m := makemetric(
measurement,
fields,
@@ -75,7 +75,7 @@ func (r *RunningInput) MakeMetric(
)
if r.trace && m != nil {
fmt.Print("> " + m.String())
fmt.Println("> " + m.String())
}
r.MetricsGathered.Incr(1)

View File

@@ -6,7 +6,7 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/stretchr/testify/assert"
)
@@ -21,7 +21,7 @@ func TestMakeMetricNoFields(t *testing.T) {
"RITest",
map[string]interface{}{},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Nil(t, m)
@@ -41,7 +41,7 @@ func TestMakeMetricNilFields(t *testing.T) {
"nil": nil,
},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -66,7 +66,7 @@ func TestMakeMetric(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -77,14 +77,14 @@ func TestMakeMetric(t *testing.T) {
assert.Equal(
t,
m.Type(),
telegraf.Untyped,
plugins.Untyped,
)
m = ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Counter,
plugins.Counter,
now,
)
assert.Equal(
@@ -95,14 +95,14 @@ func TestMakeMetric(t *testing.T) {
assert.Equal(
t,
m.Type(),
telegraf.Counter,
plugins.Counter,
)
m = ri.MakeMetric(
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Gauge,
plugins.Gauge,
now,
)
assert.Equal(
@@ -113,7 +113,7 @@ func TestMakeMetric(t *testing.T) {
assert.Equal(
t,
m.Type(),
telegraf.Gauge,
plugins.Gauge,
)
}
@@ -133,7 +133,7 @@ func TestMakeMetricWithPluginTags(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
nil,
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -161,7 +161,7 @@ func TestMakeMetricFilteredOut(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
nil,
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Nil(t, m)
@@ -183,7 +183,7 @@ func TestMakeMetricWithDaemonTags(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -213,7 +213,7 @@ func TestMakeMetricInfFields(t *testing.T) {
"ninf": ninf,
},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -250,7 +250,7 @@ func TestMakeMetricAllFieldTypes(t *testing.T) {
"m": true,
},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Contains(t, m.String(), "a=10i")
@@ -280,7 +280,7 @@ func TestMakeMetricNameOverride(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -301,7 +301,7 @@ func TestMakeMetricNamePrefix(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -322,7 +322,7 @@ func TestMakeMetricNameSuffix(t *testing.T) {
"RITest",
map[string]interface{}{"value": int(101)},
map[string]string{},
telegraf.Untyped,
plugins.Untyped,
now,
)
assert.Equal(
@@ -336,4 +336,4 @@ type testInput struct{}
func (t *testInput) Description() string { return "" }
func (t *testInput) SampleConfig() string { return "" }
func (t *testInput) Gather(acc telegraf.Accumulator) error { return nil }
func (t *testInput) Gather(acc plugins.Accumulator) error { return nil }

View File

@@ -4,7 +4,7 @@ import (
"log"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/buffer"
"github.com/influxdata/telegraf/metric"
"github.com/influxdata/telegraf/selfstat"
@@ -21,7 +21,7 @@ const (
// RunningOutput contains the output configuration
type RunningOutput struct {
Name string
Output telegraf.Output
Output plugins.Output
Config *OutputConfig
MetricBufferLimit int
MetricBatchSize int
@@ -38,7 +38,7 @@ type RunningOutput struct {
func NewRunningOutput(
name string,
output telegraf.Output,
output plugins.Output,
conf *OutputConfig,
batchSize int,
bufferLimit int,
@@ -89,10 +89,7 @@ func NewRunningOutput(
// AddMetric adds a metric to the output. This function can also write cached
// points if FlushBufferWhenFull is true.
func (ro *RunningOutput) AddMetric(m telegraf.Metric) {
if m == nil {
return
}
func (ro *RunningOutput) AddMetric(m plugins.Metric) {
// Filter any tagexclude/taginclude parameters before adding metric
if ro.Config.Filter.IsActive() {
// In order to filter out tags, we need to create a new metric, since
@@ -164,7 +161,7 @@ func (ro *RunningOutput) Write() error {
return nil
}
func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
func (ro *RunningOutput) write(metrics []plugins.Metric) error {
nMetrics := len(metrics)
if nMetrics == 0 {
return nil
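
A hypothetical sketch of the flush-when-full caching that the AddMetric comment above describes; the type, field, and method names are assumptions for illustration, not Telegraf's actual API:

```go
package main

import "fmt"

// exampleOutput stands in for a running output that caches metrics and
// writes the cached batch as soon as it reaches batchSize.
type exampleOutput struct {
	batchSize     int
	flushWhenFull bool
	buf           []string
}

func (o *exampleOutput) writeBatch(batch []string) {
	fmt.Printf("writing %d cached metrics\n", len(batch))
}

func (o *exampleOutput) addMetric(m string) {
	o.buf = append(o.buf, m)
	if o.flushWhenFull && len(o.buf) >= o.batchSize {
		o.writeBatch(o.buf) // flush immediately instead of waiting for the interval
		o.buf = o.buf[:0]
	}
}

func main() {
	o := &exampleOutput{batchSize: 2, flushWhenFull: true}
	o.addMetric("m1")
	o.addMetric("m2") // triggers the flush
}
```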

View File

@@ -5,14 +5,14 @@ import (
"sync"
"testing"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var first5 = []telegraf.Metric{
var first5 = []plugins.Metric{
testutil.TestMetric(101, "metric1"),
testutil.TestMetric(101, "metric2"),
testutil.TestMetric(101, "metric3"),
@@ -20,7 +20,7 @@ var first5 = []telegraf.Metric{
testutil.TestMetric(101, "metric5"),
}
var next5 = []telegraf.Metric{
var next5 = []plugins.Metric{
testutil.TestMetric(101, "metric6"),
testutil.TestMetric(101, "metric7"),
testutil.TestMetric(101, "metric8"),
@@ -75,23 +75,6 @@ func BenchmarkRunningOutputAddFailWrites(b *testing.B) {
}
}
func TestAddingNilMetric(t *testing.T) {
conf := &OutputConfig{
Filter: Filter{},
}
m := &mockOutput{}
ro := NewRunningOutput("test", m, conf, 1000, 10000)
ro.AddMetric(nil)
ro.AddMetric(nil)
ro.AddMetric(nil)
err := ro.Write()
assert.NoError(t, err)
assert.Len(t, m.Metrics(), 0)
}
// Test that NameDrop filters get properly applied.
func TestRunningOutput_DropFilter(t *testing.T) {
conf := &OutputConfig{
@@ -482,7 +465,7 @@ func TestRunningOutputWriteFailOrder3(t *testing.T) {
type mockOutput struct {
sync.Mutex
metrics []telegraf.Metric
metrics []plugins.Metric
// if true, mock a write failure
failWrite bool
@@ -504,7 +487,7 @@ func (m *mockOutput) SampleConfig() string {
return ""
}
func (m *mockOutput) Write(metrics []telegraf.Metric) error {
func (m *mockOutput) Write(metrics []plugins.Metric) error {
m.Lock()
defer m.Unlock()
if m.failWrite {
@@ -512,7 +495,7 @@ func (m *mockOutput) Write(metrics []telegraf.Metric) error {
}
if m.metrics == nil {
m.metrics = []telegraf.Metric{}
m.metrics = []plugins.Metric{}
}
for _, metric := range metrics {
@@ -521,7 +504,7 @@ func (m *mockOutput) Write(metrics []telegraf.Metric) error {
return nil
}
func (m *mockOutput) Metrics() []telegraf.Metric {
func (m *mockOutput) Metrics() []plugins.Metric {
m.Lock()
defer m.Unlock()
return m.metrics
@@ -548,7 +531,7 @@ func (m *perfOutput) SampleConfig() string {
return ""
}
func (m *perfOutput) Write(metrics []telegraf.Metric) error {
func (m *perfOutput) Write(metrics []plugins.Metric) error {
if m.failWrite {
return fmt.Errorf("Failed Write!")
}

View File

@@ -1,12 +1,12 @@
package models
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
)
type RunningProcessor struct {
Name string
Processor telegraf.Processor
Processor plugins.Processor
Config *ProcessorConfig
}
@@ -23,8 +23,8 @@ type ProcessorConfig struct {
Filter Filter
}
func (rp *RunningProcessor) Apply(in ...telegraf.Metric) []telegraf.Metric {
ret := []telegraf.Metric{}
func (rp *RunningProcessor) Apply(in ...plugins.Metric) []plugins.Metric {
ret := []plugins.Metric{}
for _, metric := range in {
if rp.Config.Filter.IsActive() {

View File

@@ -3,7 +3,7 @@ package models
import (
"testing"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
@@ -19,8 +19,8 @@ func (f *TestProcessor) Description() string { return "" }
// "foo" to "fuz"
// "bar" to "baz"
// And it also drops measurements named "dropme"
func (f *TestProcessor) Apply(in ...telegraf.Metric) []telegraf.Metric {
out := make([]telegraf.Metric, 0)
func (f *TestProcessor) Apply(in ...plugins.Metric) []plugins.Metric {
out := make([]plugins.Metric, 0)
for _, m := range in {
switch m.Name() {
case "foo":
@@ -46,7 +46,7 @@ func NewTestRunningProcessor() *RunningProcessor {
}
func TestRunningProcessor(t *testing.T) {
inmetrics := []telegraf.Metric{
inmetrics := []plugins.Metric{
testutil.TestMetric(1, "foo"),
testutil.TestMetric(1, "bar"),
testutil.TestMetric(1, "baz"),
@@ -69,7 +69,7 @@ func TestRunningProcessor(t *testing.T) {
}
func TestRunningProcessor_WithNameDrop(t *testing.T) {
inmetrics := []telegraf.Metric{
inmetrics := []plugins.Metric{
testutil.TestMetric(1, "foo"),
testutil.TestMetric(1, "bar"),
testutil.TestMetric(1, "baz"),
@@ -96,7 +96,7 @@ func TestRunningProcessor_WithNameDrop(t *testing.T) {
}
func TestRunningProcessor_DroppedMetric(t *testing.T) {
inmetrics := []telegraf.Metric{
inmetrics := []plugins.Metric{
testutil.TestMetric(1, "dropme"),
testutil.TestMetric(1, "foo"),
testutil.TestMetric(1, "bar"),

View File

@@ -8,7 +8,10 @@ import (
"strconv"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
// TODO remove
"github.com/influxdata/influxdb/client/v2"
)
const MaxInt = int(^uint(0) >> 1)
@@ -18,8 +21,8 @@ func New(
tags map[string]string,
fields map[string]interface{},
t time.Time,
mType ...telegraf.ValueType,
) (telegraf.Metric, error) {
mType ...plugins.ValueType,
) (plugins.Metric, error) {
if len(fields) == 0 {
return nil, fmt.Errorf("Metric cannot be made without any fields")
}
@@ -27,11 +30,11 @@ func New(
return nil, fmt.Errorf("Metric cannot be made with an empty name")
}
var thisType telegraf.ValueType
var thisType plugins.ValueType
if len(mType) > 0 {
thisType = mType[0]
} else {
thisType = telegraf.Untyped
thisType = plugins.Untyped
}
m := &metric{
@@ -126,7 +129,7 @@ type metric struct {
fields []byte
t []byte
mType telegraf.ValueType
mType plugins.ValueType
aggregate bool
// cached values for reuse in "get" functions
@@ -134,6 +137,11 @@ type metric struct {
nsec int64
}
func (m *metric) Point() *client.Point {
c, _ := client.NewPoint(m.Name(), m.Tags(), m.Fields(), m.Time())
return c
}
func (m *metric) String() string {
return string(m.name) + string(m.tags) + " " + string(m.fields) + " " + string(m.t) + "\n"
}
@@ -146,7 +154,7 @@ func (m *metric) IsAggregate() bool {
return m.aggregate
}
func (m *metric) Type() telegraf.ValueType {
func (m *metric) Type() plugins.ValueType {
return m.mType
}
@@ -170,53 +178,11 @@ func (m *metric) Serialize() []byte {
return tmp
}
func (m *metric) SerializeTo(dst []byte) int {
i := 0
if i >= len(dst) {
return i
}
i += copy(dst[i:], m.name)
if i >= len(dst) {
return i
}
i += copy(dst[i:], m.tags)
if i >= len(dst) {
return i
}
dst[i] = ' '
i++
if i >= len(dst) {
return i
}
i += copy(dst[i:], m.fields)
if i >= len(dst) {
return i
}
dst[i] = ' '
i++
if i >= len(dst) {
return i
}
i += copy(dst[i:], m.t)
if i >= len(dst) {
return i
}
dst[i] = '\n'
return i + 1
}
func (m *metric) Split(maxSize int) []telegraf.Metric {
func (m *metric) Split(maxSize int) []plugins.Metric {
if m.Len() < maxSize {
return []telegraf.Metric{m}
return []plugins.Metric{m}
}
var out []telegraf.Metric
var out []plugins.Metric
// constant number of bytes for each metric (in addition to field bytes)
constant := len(m.name) + len(m.tags) + len(m.t) + 3
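For example, with the metric `foo value=1i 1481032190000000000\n` used in the reader tests removed further below, this constant comes to 3 (name) + 0 (tags) + 19 (timestamp) + 3 = 25 bytes; the single `value=1i` field adds 8 more, which matches the 33-byte lines those tests expect.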
@@ -297,7 +263,7 @@ func (m *metric) Fields() map[string]interface{} {
case '"':
// string field
fieldMap[unescape(string(m.fields[i:][0:i1]), "fieldkey")] = unescape(string(m.fields[i:][i2+1:i3-1]), "fieldval")
case '-', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
// number field
switch m.fields[i:][i3-1] {
case 'i':
@@ -464,11 +430,11 @@ func (m *metric) RemoveField(key string) error {
return nil
}
func (m *metric) Copy() telegraf.Metric {
func (m *metric) Copy() plugins.Metric {
return copyWith(m.name, m.tags, m.fields, m.t)
}
func copyWith(name, tags, fields, t []byte) telegraf.Metric {
func copyWith(name, tags, fields, t []byte) plugins.Metric {
out := metric{
name: make([]byte, len(name)),
tags: make([]byte, len(tags)),

View File

@@ -5,7 +5,7 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
)
// vars for making sure that the compiler doesn't optimize out the benchmarks:
@@ -17,7 +17,7 @@ var (
)
func BenchmarkNewMetric(b *testing.B) {
var mt telegraf.Metric
var mt plugins.Metric
for n := 0; n < b.N; n++ {
mt, _ = New("test_metric",
map[string]string{
@@ -37,7 +37,7 @@ func BenchmarkNewMetric(b *testing.B) {
}
func BenchmarkAddTag(b *testing.B) {
var mt telegraf.Metric
var mt plugins.Metric
mt = &metric{
name: []byte("cpu"),
tags: []byte(",host=localhost"),
@@ -51,14 +51,14 @@ func BenchmarkAddTag(b *testing.B) {
}
func BenchmarkSplit(b *testing.B) {
var mt telegraf.Metric
var mt plugins.Metric
mt = &metric{
name: []byte("cpu"),
tags: []byte(",host=localhost"),
fields: []byte("a=101,b=10i,c=10101,d=101010,e=42"),
t: []byte("1480614053000000000"),
}
var metrics []telegraf.Metric
var metrics []plugins.Metric
for n := 0; n < b.N; n++ {
metrics = mt.Split(60)
}

View File

@@ -7,7 +7,7 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/stretchr/testify/assert"
)
@@ -26,7 +26,7 @@ func TestNewMetric(t *testing.T) {
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
assert.Equal(t, telegraf.Untyped, m.Type())
assert.Equal(t, plugins.Untyped, m.Type())
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
@@ -402,10 +402,10 @@ func TestNewGaugeMetric(t *testing.T) {
"usage_idle": float64(99),
"usage_busy": float64(1),
}
m, err := New("cpu", tags, fields, now, telegraf.Gauge)
m, err := New("cpu", tags, fields, now, plugins.Gauge)
assert.NoError(t, err)
assert.Equal(t, telegraf.Gauge, m.Type())
assert.Equal(t, plugins.Gauge, m.Type())
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
@@ -424,10 +424,10 @@ func TestNewCounterMetric(t *testing.T) {
"usage_idle": float64(99),
"usage_busy": float64(1),
}
m, err := New("cpu", tags, fields, now, telegraf.Counter)
m, err := New("cpu", tags, fields, now, plugins.Counter)
assert.NoError(t, err)
assert.Equal(t, telegraf.Counter, m.Type())
assert.Equal(t, plugins.Counter, m.Type())
assert.Equal(t, tags, m.Tags())
assert.Equal(t, fields, m.Fields())
assert.Equal(t, "cpu", m.Name())
@@ -595,6 +595,25 @@ func TestNewMetricAggregate(t *testing.T) {
assert.True(t, m.IsAggregate())
}
func TestNewMetricPoint(t *testing.T) {
now := time.Now()
tags := map[string]string{
"host": "localhost",
}
fields := map[string]interface{}{
"usage_idle": float64(99),
}
m, err := New("cpu", tags, fields, now)
assert.NoError(t, err)
p := m.Point()
assert.Equal(t, fields, m.Fields())
assert.Equal(t, fields, p.Fields())
assert.Equal(t, "cpu", p.Name())
}
func TestNewMetricString(t *testing.T) {
now := time.Now()

View File

@@ -6,7 +6,7 @@ import (
"fmt"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
)
var (
@@ -39,15 +39,15 @@ const (
fieldsState
)
func Parse(buf []byte) ([]telegraf.Metric, error) {
func Parse(buf []byte) ([]plugins.Metric, error) {
return ParseWithDefaultTime(buf, time.Now())
}
func ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) {
func ParseWithDefaultTime(buf []byte, t time.Time) ([]plugins.Metric, error) {
if len(buf) <= 6 {
return []telegraf.Metric{}, makeError("buffer too short", buf, 0)
return []plugins.Metric{}, makeError("buffer too short", buf, 0)
}
metrics := make([]telegraf.Metric, 0, bytes.Count(buf, []byte("\n"))+1)
metrics := make([]plugins.Metric, 0, bytes.Count(buf, []byte("\n"))+1)
var errStr string
i := 0
for {
@@ -77,7 +77,7 @@ func ParseWithDefaultTime(buf []byte, t time.Time) ([]telegraf.Metric, error) {
return metrics, nil
}
func parseMetric(buf []byte, defaultTime time.Time) (telegraf.Metric, error) {
func parseMetric(buf []byte, defaultTime time.Time) (plugins.Metric, error) {
var dTime string
// scan the first block which is measurement[,tag1=value1,tag2=value=2...]
pos, key, err := scanKey(buf, 0)

View File

@@ -44,9 +44,6 @@ cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
cpu,host=foo,datacenter=us-east idle=99,busy=1i,b=true,s="string"
`
const negMetrics = `weather,host=local temp=-99i,temp_float=-99.4 1465839830100400200
`
// some metrics are invalid
const someInvalid = `cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
@@ -88,26 +85,6 @@ func TestParse(t *testing.T) {
}
}
func TestParseNegNumbers(t *testing.T) {
metrics, err := Parse([]byte(negMetrics))
assert.NoError(t, err)
assert.Len(t, metrics, 1)
assert.Equal(t,
map[string]interface{}{
"temp": int64(-99),
"temp_float": float64(-99.4),
},
metrics[0].Fields(),
)
assert.Equal(t,
map[string]string{
"host": "local",
},
metrics[0].Tags(),
)
}
func TestParseErrors(t *testing.T) {
start := time.Now()
metrics, err := Parse([]byte(someInvalid))

View File

@@ -1,155 +0,0 @@
package metric
import (
"io"
"github.com/influxdata/telegraf"
)
type state int
const (
_ state = iota
// normal state copies whole metrics into the given buffer until we can't
// fit the next metric.
normal
// split state means that we have a metric that we were able to split, so
// that we can fit it into multiple metrics (and calls to Read)
split
// overflow state means that we have a metric that didn't fit into a single
// buffer, and needs to be split across multiple calls to Read.
overflow
// splitOverflow state means that a split metric didn't fit into a single
// buffer, and needs to be split across multiple calls to Read.
splitOverflow
// done means we're done reading metrics, and now always return (0, io.EOF)
done
)
type reader struct {
metrics []telegraf.Metric
splitMetrics []telegraf.Metric
buf []byte
state state
// metric index
iM int
// split metric index
iSM int
// buffer index
iB int
}
func NewReader(metrics []telegraf.Metric) io.Reader {
return &reader{
metrics: metrics,
state: normal,
}
}
func (r *reader) Read(p []byte) (n int, err error) {
var i int
switch r.state {
case done:
return 0, io.EOF
case normal:
for {
// this for-loop is the sunny-day scenario, where we are given a
// buffer that is large enough to hold at least a single metric.
// all of the cases below it are edge-cases.
if r.metrics[r.iM].Len() < len(p[i:]) {
i += r.metrics[r.iM].SerializeTo(p[i:])
} else {
break
}
r.iM++
if r.iM == len(r.metrics) {
r.state = done
return i, io.EOF
}
}
// if we haven't written any bytes, check if we can split the current
// metric into multiple full metrics at a smaller size.
if i == 0 {
tmp := r.metrics[r.iM].Split(len(p))
if len(tmp) > 1 {
r.splitMetrics = tmp
r.state = split
if r.splitMetrics[0].Len() < len(p) {
i += r.splitMetrics[0].SerializeTo(p)
r.iSM = 1
} else {
// splitting didn't quite work, so we'll drop down and
// overflow the metric.
r.state = normal
r.iSM = 0
}
}
}
// if we haven't written any bytes and we're not at the end of the metrics
// slice, then it means we have a single metric that is larger than the
// provided buffer.
if i == 0 {
r.buf = r.metrics[r.iM].Serialize()
i += copy(p, r.buf[r.iB:])
r.iB += i
r.state = overflow
}
case split:
if r.splitMetrics[r.iSM].Len() < len(p) {
// write the current split metric
i += r.splitMetrics[r.iSM].SerializeTo(p)
r.iSM++
if r.iSM >= len(r.splitMetrics) {
// done writing the current split metrics
r.iSM = 0
r.iM++
if r.iM == len(r.metrics) {
r.state = done
return i, io.EOF
}
r.state = normal
}
} else {
// This would only happen if we split the metric, and then a
// subsequent buffer was smaller than the initial one given,
// so that our split metric no longer fits.
r.buf = r.splitMetrics[r.iSM].Serialize()
i += copy(p, r.buf[r.iB:])
r.iB += i
r.state = splitOverflow
}
case splitOverflow:
i = copy(p, r.buf[r.iB:])
r.iB += i
if r.iB >= len(r.buf) {
r.iB = 0
r.iSM++
if r.iSM == len(r.splitMetrics) {
r.iM++
r.state = normal
} else {
r.state = split
}
}
case overflow:
i = copy(p, r.buf[r.iB:])
r.iB += i
if r.iB >= len(r.buf) {
r.iB = 0
r.iM++
if r.iM == len(r.metrics) {
r.state = done
return i, io.EOF
}
r.state = normal
}
}
return i, nil
}

View File

@@ -1,487 +0,0 @@
package metric
import (
"io"
"io/ioutil"
"regexp"
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/stretchr/testify/assert"
)
func BenchmarkMetricReader(b *testing.B) {
metrics := make([]telegraf.Metric, 10)
for i := 0; i < 10; i++ {
metrics[i], _ = New("foo", map[string]string{},
map[string]interface{}{"value": int64(1)}, time.Now())
}
for n := 0; n < b.N; n++ {
r := NewReader(metrics)
io.Copy(ioutil.Discard, r)
}
}
func TestMetricReader(t *testing.T) {
ts := time.Unix(1481032190, 0)
metrics := make([]telegraf.Metric, 10)
for i := 0; i < 10; i++ {
metrics[i], _ = New("foo", map[string]string{},
map[string]interface{}{"value": int64(1)}, ts)
}
r := NewReader(metrics)
buf := make([]byte, 35)
for i := 0; i < 10; i++ {
n, err := r.Read(buf)
if err != nil {
assert.True(t, err == io.EOF, err.Error())
}
assert.Equal(t, 33, n)
assert.Equal(t, "foo value=1i 1481032190000000000\n", string(buf[0:n]))
}
// reader should now be done, and always return 0, io.EOF
for i := 0; i < 10; i++ {
n, err := r.Read(buf)
assert.True(t, err == io.EOF, err.Error())
assert.Equal(t, 0, n)
}
}
func TestMetricReader_OverflowMetric(t *testing.T) {
ts := time.Unix(1481032190, 0)
m, _ := New("foo", map[string]string{},
map[string]interface{}{"value": int64(10)}, ts)
metrics := []telegraf.Metric{m}
r := NewReader(metrics)
buf := make([]byte, 5)
tests := []struct {
exp string
err error
n int
}{
{
"foo v",
nil,
5,
},
{
"alue=",
nil,
5,
},
{
"10i 1",
nil,
5,
},
{
"48103",
nil,
5,
},
{
"21900",
nil,
5,
},
{
"00000",
nil,
5,
},
{
"000\n",
io.EOF,
4,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
assert.Equal(t, test.exp, string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
func TestMetricReader_OverflowMultipleMetrics(t *testing.T) {
ts := time.Unix(1481032190, 0)
m, _ := New("foo", map[string]string{},
map[string]interface{}{"value": int64(10)}, ts)
metrics := []telegraf.Metric{m, m.Copy()}
r := NewReader(metrics)
buf := make([]byte, 10)
tests := []struct {
exp string
err error
n int
}{
{
"foo value=",
nil,
10,
},
{
"10i 148103",
nil,
10,
},
{
"2190000000",
nil,
10,
},
{
"000\n",
nil,
4,
},
{
"foo value=",
nil,
10,
},
{
"10i 148103",
nil,
10,
},
{
"2190000000",
nil,
10,
},
{
"000\n",
io.EOF,
4,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
assert.Equal(t, test.exp, string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// test splitting a metric
func TestMetricReader_SplitMetric(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
"value3": int64(10),
"value4": int64(10),
"value5": int64(10),
"value6": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1}
r := NewReader(metrics)
buf := make([]byte, 60)
tests := []struct {
expRegex string
err error
n int
}{
{
`foo value\d=10i,value\d=10i,value\d=10i 1481032190000000000\n`,
nil,
57,
},
{
`foo value\d=10i,value\d=10i,value\d=10i 1481032190000000000\n`,
io.EOF,
57,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(buf[0:n])), string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// test an array with one split metric and one unsplit
func TestMetricReader_SplitMetric2(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
"value3": int64(10),
"value4": int64(10),
"value5": int64(10),
"value6": int64(10),
},
ts,
)
m2, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1, m2}
r := NewReader(metrics)
buf := make([]byte, 60)
tests := []struct {
expRegex string
err error
n int
}{
{
`foo value\d=10i,value\d=10i,value\d=10i 1481032190000000000\n`,
nil,
57,
},
{
`foo value\d=10i,value\d=10i,value\d=10i 1481032190000000000\n`,
nil,
57,
},
{
`foo value1=10i 1481032190000000000\n`,
io.EOF,
35,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(buf[0:n])), string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// test split that results in metrics that are still too long, which results in
// the reader falling back to regular overflow.
func TestMetricReader_SplitMetricTooLong(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1}
r := NewReader(metrics)
buf := make([]byte, 30)
tests := []struct {
expRegex string
err error
n int
}{
{
`foo value\d=10i,value\d=10i 1481`,
nil,
30,
},
{
`032190000000000\n`,
io.EOF,
16,
},
{
"",
io.EOF,
0,
},
}
for _, test := range tests {
n, err := r.Read(buf)
assert.Equal(t, test.n, n)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(buf[0:n])), string(buf[0:n]))
assert.Equal(t, test.err, err)
}
}
// test split with a changing buffer size in the middle of subsequent calls
// to Read
func TestMetricReader_SplitMetricChangingBuffer(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
"value3": int64(10),
},
ts,
)
m2, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1, m2}
r := NewReader(metrics)
tests := []struct {
expRegex string
err error
n int
buf []byte
}{
{
`foo value\d=10i 1481032190000000000\n`,
nil,
35,
make([]byte, 36),
},
{
`foo value\d=10i 148103219000000`,
nil,
30,
make([]byte, 30),
},
{
`0000\n`,
nil,
5,
make([]byte, 30),
},
{
`foo value\d=10i 1481032190000000000\n`,
nil,
35,
make([]byte, 36),
},
{
`foo value1=10i 1481032190000000000\n`,
io.EOF,
35,
make([]byte, 36),
},
{
"",
io.EOF,
0,
make([]byte, 36),
},
}
for _, test := range tests {
n, err := r.Read(test.buf)
assert.Equal(t, test.n, n, test.expRegex)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(test.buf[0:n])), string(test.buf[0:n]))
assert.Equal(t, test.err, err, test.expRegex)
}
}
// test split with a changing buffer size in the middle of subsequent calls
// to Read
func TestMetricReader_SplitMetricChangingBuffer2(t *testing.T) {
ts := time.Unix(1481032190, 0)
m1, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
"value2": int64(10),
},
ts,
)
m2, _ := New("foo", map[string]string{},
map[string]interface{}{
"value1": int64(10),
},
ts,
)
metrics := []telegraf.Metric{m1, m2}
r := NewReader(metrics)
tests := []struct {
expRegex string
err error
n int
buf []byte
}{
{
`foo value\d=10i 1481032190000000000\n`,
nil,
35,
make([]byte, 36),
},
{
`foo value\d=10i 148103219000000`,
nil,
30,
make([]byte, 30),
},
{
`0000\n`,
nil,
5,
make([]byte, 30),
},
{
`foo value1=10i 1481032190000000000\n`,
io.EOF,
35,
make([]byte, 36),
},
{
"",
io.EOF,
0,
make([]byte, 36),
},
}
for _, test := range tests {
n, err := r.Read(test.buf)
assert.Equal(t, test.n, n, test.expRegex)
re := regexp.MustCompile(test.expRegex)
assert.True(t, re.MatchString(string(test.buf[0:n])), string(test.buf[0:n]))
assert.Equal(t, test.err, err, test.expRegex)
}
}

View File

@@ -1,4 +1,4 @@
package telegraf
package plugins
import "time"

View File

@@ -1,4 +1,4 @@
package telegraf
package plugins
// Aggregator is an interface for implementing an Aggregator plugin.
// the RunningAggregator wraps this interface and guarantees that

View File

@@ -1,7 +1,7 @@
package minmax
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/aggregators"
)
@@ -9,7 +9,7 @@ type MinMax struct {
cache map[uint64]aggregate
}
func NewMinMax() telegraf.Aggregator {
func NewMinMax() plugins.Aggregator {
mm := &MinMax{}
mm.Reset()
return mm
@@ -43,7 +43,7 @@ func (m *MinMax) Description() string {
return "Keep the aggregate min/max of each metric passing through."
}
func (m *MinMax) Add(in telegraf.Metric) {
func (m *MinMax) Add(in plugins.Metric) {
id := in.HashID()
if _, ok := m.cache[id]; !ok {
// hit an uncached metric, create caches for first time:
@@ -86,7 +86,7 @@ func (m *MinMax) Add(in telegraf.Metric) {
}
}
func (m *MinMax) Push(acc telegraf.Accumulator) {
func (m *MinMax) Push(acc plugins.Accumulator) {
for _, aggregate := range m.cache {
fields := map[string]interface{}{}
for k, v := range aggregate.fields {
@@ -113,7 +113,7 @@ func convert(in interface{}) (float64, bool) {
}
func init() {
aggregators.Add("minmax", func() telegraf.Aggregator {
aggregators.Add("minmax", func() plugins.Aggregator {
return NewMinMax()
})
}

View File

@@ -1,8 +1,8 @@
package aggregators
import "github.com/influxdata/telegraf"
import "github.com/influxdata/telegraf/plugins"
type Creator func() telegraf.Aggregator
type Creator func() plugins.Aggregator
var Aggregators = map[string]Creator{}

View File

@@ -1,4 +1,4 @@
package telegraf
package plugins
type Input interface {
// SampleConfig returns the default configuration of the Input

View File

@@ -9,7 +9,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
@@ -35,7 +35,7 @@ func (a *Aerospike) Description() string {
return "Read stats from aerospike server(s)"
}
func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
func (a *Aerospike) Gather(acc plugins.Accumulator) error {
if len(a.Servers) == 0 {
return a.gatherServer("127.0.0.1:3000", acc)
}
@@ -54,7 +54,7 @@ func (a *Aerospike) Gather(acc telegraf.Accumulator) error {
return errChan.Error()
}
func (a *Aerospike) gatherServer(hostport string, acc telegraf.Accumulator) error {
func (a *Aerospike) gatherServer(hostport string, acc plugins.Accumulator) error {
host, port, err := net.SplitHostPort(hostport)
if err != nil {
return err
@@ -152,7 +152,7 @@ func copyTags(m map[string]string) map[string]string {
}
func init() {
inputs.Add("aerospike", func() telegraf.Input {
inputs.Add("aerospike", func() plugins.Input {
return &Aerospike{}
})
}

View File

@@ -66,7 +66,6 @@ import (
_ "github.com/influxdata/telegraf/plugins/inputs/sensors"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp"
_ "github.com/influxdata/telegraf/plugins/inputs/snmp_legacy"
_ "github.com/influxdata/telegraf/plugins/inputs/socket_listener"
_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
_ "github.com/influxdata/telegraf/plugins/inputs/sysstat"

View File

@@ -10,7 +10,7 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -57,7 +57,7 @@ func (n *Apache) Description() string {
return "Read Apache status information (mod_status)"
}
func (n *Apache) Gather(acc telegraf.Accumulator) error {
func (n *Apache) Gather(acc plugins.Accumulator) error {
if len(n.Urls) == 0 {
n.Urls = []string{"http://localhost/server-status?auto"}
}
@@ -89,7 +89,7 @@ func (n *Apache) Gather(acc telegraf.Accumulator) error {
return outerr
}
func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
func (n *Apache) gatherUrl(addr *url.URL, acc plugins.Accumulator) error {
var tr *http.Transport
@@ -228,7 +228,7 @@ func getTags(addr *url.URL) map[string]string {
}
func init() {
inputs.Add("apache", func() telegraf.Input {
inputs.Add("apache", func() plugins.Input {
return &Apache{}
})
}

View File

@@ -8,7 +8,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -70,7 +70,7 @@ func prettyToBytes(v string) uint64 {
return uint64(result)
}
func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error {
func (b *Bcache) gatherBcache(bdev string, acc plugins.Accumulator) error {
tags := getTags(bdev)
metrics, err := filepath.Glob(bdev + "/stats_total/*")
if len(metrics) == 0 {
@@ -105,7 +105,7 @@ func (b *Bcache) gatherBcache(bdev string, acc telegraf.Accumulator) error {
return nil
}
func (b *Bcache) Gather(acc telegraf.Accumulator) error {
func (b *Bcache) Gather(acc plugins.Accumulator) error {
bcacheDevsChecked := make(map[string]bool)
var restrictDevs bool
if len(b.BcacheDevs) != 0 {
@@ -136,7 +136,7 @@ func (b *Bcache) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("bcache", func() telegraf.Input {
inputs.Add("bcache", func() plugins.Input {
return &Bcache{}
})
}

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
"io/ioutil"
"log"
@@ -35,13 +35,13 @@ type Cassandra struct {
type javaMetric struct {
host string
metric string
acc telegraf.Accumulator
acc plugins.Accumulator
}
type cassandraMetric struct {
host string
metric string
acc telegraf.Accumulator
acc plugins.Accumulator
}
type jmxMetric interface {
@@ -49,12 +49,12 @@ type jmxMetric interface {
}
func newJavaMetric(host string, metric string,
acc telegraf.Accumulator) *javaMetric {
acc plugins.Accumulator) *javaMetric {
return &javaMetric{host: host, metric: metric, acc: acc}
}
func newCassandraMetric(host string, metric string,
acc telegraf.Accumulator) *cassandraMetric {
acc plugins.Accumulator) *cassandraMetric {
return &cassandraMetric{host: host, metric: metric, acc: acc}
}
@@ -257,7 +257,7 @@ func parseServerTokens(server string) map[string]string {
return serverTokens
}
func (c *Cassandra) Gather(acc telegraf.Accumulator) error {
func (c *Cassandra) Gather(acc plugins.Accumulator) error {
context := c.Context
servers := c.Servers
metrics := c.Metrics
@@ -302,7 +302,7 @@ func (c *Cassandra) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("cassandra", func() telegraf.Input {
inputs.Add("cassandra", func() plugins.Input {
return &Cassandra{jClient: &JolokiaClientImpl{client: &http.Client{}}}
})
}

View File

@@ -82,7 +82,7 @@ the cluster. The currently supported commands are:
## Whether to gather statistics via ceph commands, requires ceph_user and ceph_config
## to be specified
gather_cluster_stats = false
gather_cluster_stats = true
```
### Measurements & Fields:
@@ -117,7 +117,7 @@ All fields are collected under the **ceph** measurement and stored as float64s.
* recovering\_objects\_per\_sec (float)
* ceph\_pgmap\_state
* count (float)
* state name e.g. active+clean (float)
* ceph\_usage
* bytes\_used (float)
@@ -186,7 +186,7 @@ All measurements will have the following tags:
*Cluster Stats*
* ceph\_pgmap\_state has the following tags:
* ceph\_pg\_state has the following tags:
* state (state for which the value applies e.g. active+clean, active+remapped+backfill)
* ceph\_pool\_usage has the following tags:
* id
@@ -213,8 +213,7 @@ telegraf -test -config /etc/telegraf/telegraf.conf -config-directory /etc/telegr
<pre>
> ceph_osdmap,host=ceph-mon-0 epoch=170772,full=false,nearfull=false,num_in_osds=340,num_osds=340,num_remapped_pgs=0,num_up_osds=340 1468841037000000000
> ceph_pgmap,host=ceph-mon-0 bytes_avail=634895531270144,bytes_total=812117151809536,bytes_used=177221620539392,data_bytes=56979991615058,num_pgs=22952,op_per_sec=15869,read_bytes_sec=43956026,version=39387592,write_bytes_sec=165344818 1468841037000000000
> ceph_pgmap_state,host=ceph-mon-0,state=active+clean count=22952 1468928660000000000
> ceph_pgmap_state,host=ceph-mon-0,state=active+degraded count=16 1468928660000000000
> ceph_pgmap_state,host=ceph-mon-0 active+clean=22952 1468928660000000000
> ceph_usage,host=ceph-mon-0 total_avail_bytes=634895514791936,total_bytes=812117151809536,total_used_bytes=177221637017600 1468841037000000000
> ceph_pool_usage,host=ceph-mon-0,id=150,name=cinder.volumes bytes_used=12648553794802,kb_used=12352103316,max_avail=154342562489244,objects=3026295 1468841037000000000
> ceph_pool_usage,host=ceph-mon-0,id=182,name=cinder.volumes.flash bytes_used=8541308223964,kb_used=8341121313,max_avail=39388593563936,objects=2075066 1468841037000000000

View File

@@ -4,14 +4,13 @@ import (
"bytes"
"encoding/json"
"fmt"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
"io/ioutil"
"log"
"os/exec"
"path/filepath"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins/inputs"
)
const (
@@ -69,14 +68,14 @@ var sampleConfig = `
gather_admin_socket_stats = true
## Whether to gather statistics via ceph commands
gather_cluster_stats = false
gather_cluster_stats = true
`
func (c *Ceph) SampleConfig() string {
return sampleConfig
}
func (c *Ceph) Gather(acc telegraf.Accumulator) error {
func (c *Ceph) Gather(acc plugins.Accumulator) error {
if c.GatherAdminSocketStats {
if err := c.gatherAdminSocketStats(acc); err != nil {
return err
@@ -92,7 +91,7 @@ func (c *Ceph) Gather(acc telegraf.Accumulator) error {
return nil
}
func (c *Ceph) gatherAdminSocketStats(acc telegraf.Accumulator) error {
func (c *Ceph) gatherAdminSocketStats(acc plugins.Accumulator) error {
sockets, err := findSockets(c)
if err != nil {
return fmt.Errorf("failed to find sockets at path '%s': %v", c.SocketDir, err)
@@ -109,7 +108,7 @@ func (c *Ceph) gatherAdminSocketStats(acc telegraf.Accumulator) error {
log.Printf("E! error parsing dump from socket '%s': %v", s.socket, err)
continue
}
for tag, metrics := range data {
for tag, metrics := range *data {
acc.AddFields(measurement,
map[string]interface{}(metrics),
map[string]string{"type": s.sockType, "id": s.sockId, "collection": tag})
@@ -118,10 +117,10 @@ func (c *Ceph) gatherAdminSocketStats(acc telegraf.Accumulator) error {
return nil
}
func (c *Ceph) gatherClusterStats(acc telegraf.Accumulator) error {
func (c *Ceph) gatherClusterStats(acc plugins.Accumulator) error {
jobs := []struct {
command string
parser func(telegraf.Accumulator, string) error
parser func(plugins.Accumulator, string) error
}{
{"status", decodeStatus},
{"df", decodeDf},
@@ -156,7 +155,7 @@ func init() {
GatherClusterStats: false,
}
inputs.Add(measurement, func() telegraf.Input { return &c })
inputs.Add(measurement, func() plugins.Input { return &c })
}
@@ -245,19 +244,25 @@ type taggedMetricMap map[string]metricMap
// Parses a raw JSON string into a taggedMetricMap
// Delegates the actual parsing to newTaggedMetricMap(..)
func parseDump(dump string) (taggedMetricMap, error) {
func parseDump(dump string) (*taggedMetricMap, error) {
data := make(map[string]interface{})
err := json.Unmarshal([]byte(dump), &data)
if err != nil {
return nil, fmt.Errorf("failed to parse json: '%s': %v", dump, err)
}
return newTaggedMetricMap(data), nil
tmm := newTaggedMetricMap(data)
if err != nil {
return nil, fmt.Errorf("failed to tag dataset: '%v': %v", tmm, err)
}
return tmm, nil
}
// Builds a TaggedMetricMap out of a generic string map.
// The top-level key is used as a tag and all sub-keys are flattened into metrics
func newTaggedMetricMap(data map[string]interface{}) taggedMetricMap {
func newTaggedMetricMap(data map[string]interface{}) *taggedMetricMap {
tmm := make(taggedMetricMap)
for tag, datapoints := range data {
mm := make(metricMap)
@@ -266,7 +271,7 @@ func newTaggedMetricMap(data map[string]interface{}) taggedMetricMap {
}
tmm[tag] = mm
}
return tmm
return &tmm
}
// Recursively flattens any k-v hierarchy present in data.
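
An illustrative, standalone sketch of that flattening (helper names are assumptions, not the plugin's own), producing dotted keys like the `store_state_latency.sum` value asserted in the tests further below:

```go
package main

import "fmt"

// flatten collapses a nested JSON-style map into dotted field names.
func flatten(prefix string, in map[string]interface{}, out map[string]interface{}) {
	for k, v := range in {
		key := k
		if prefix != "" {
			key = prefix + "." + k
		}
		if nested, ok := v.(map[string]interface{}); ok {
			flatten(key, nested, out)
			continue
		}
		out[key] = v
	}
}

func main() {
	in := map[string]interface{}{
		"store_state_latency": map[string]interface{}{"avgcount": 22.0, "sum": 6866.540527},
	}
	out := map[string]interface{}{}
	flatten("", in, out)
	fmt.Println(out) // map[store_state_latency.avgcount:22 store_state_latency.sum:6866.540527]
}
```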
@@ -317,7 +322,7 @@ func (c *Ceph) exec(command string) (string, error) {
return output, nil
}
func decodeStatus(acc telegraf.Accumulator, input string) error {
func decodeStatus(acc plugins.Accumulator, input string) error {
data := make(map[string]interface{})
err := json.Unmarshal([]byte(input), &data)
if err != nil {
@@ -342,7 +347,7 @@ func decodeStatus(acc telegraf.Accumulator, input string) error {
return nil
}
func decodeStatusOsdmap(acc telegraf.Accumulator, data map[string]interface{}) error {
func decodeStatusOsdmap(acc plugins.Accumulator, data map[string]interface{}) error {
osdmap, ok := data["osdmap"].(map[string]interface{})
if !ok {
return fmt.Errorf("WARNING %s - unable to decode osdmap", measurement)
@@ -355,7 +360,7 @@ func decodeStatusOsdmap(acc telegraf.Accumulator, data map[string]interface{}) e
return nil
}
func decodeStatusPgmap(acc telegraf.Accumulator, data map[string]interface{}) error {
func decodeStatusPgmap(acc plugins.Accumulator, data map[string]interface{}) error {
pgmap, ok := data["pgmap"].(map[string]interface{})
if !ok {
return fmt.Errorf("WARNING %s - unable to decode pgmap", measurement)
@@ -371,57 +376,40 @@ func decodeStatusPgmap(acc telegraf.Accumulator, data map[string]interface{}) er
return nil
}
func extractPgmapStates(data map[string]interface{}) ([]interface{}, error) {
const key = "pgs_by_state"
func decodeStatusPgmapState(acc plugins.Accumulator, data map[string]interface{}) error {
pgmap, ok := data["pgmap"].(map[string]interface{})
if !ok {
return nil, fmt.Errorf("WARNING %s - unable to decode pgmap", measurement)
return fmt.Errorf("WARNING %s - unable to decode pgmap", measurement)
}
s, ok := pgmap[key]
if !ok {
return nil, fmt.Errorf("WARNING %s - pgmap is missing the %s field", measurement, key)
}
states, ok := s.([]interface{})
if !ok {
return nil, fmt.Errorf("WARNING %s - pgmap[%s] is not a list", measurement, key)
}
return states, nil
}
func decodeStatusPgmapState(acc telegraf.Accumulator, data map[string]interface{}) error {
states, err := extractPgmapStates(data)
if err != nil {
return err
}
for _, state := range states {
stateMap, ok := state.(map[string]interface{})
if !ok {
return fmt.Errorf("WARNING %s - unable to decode pg state", measurement)
}
stateName, ok := stateMap["state_name"].(string)
if !ok {
return fmt.Errorf("WARNING %s - unable to decode pg state name", measurement)
}
stateCount, ok := stateMap["count"].(float64)
if !ok {
return fmt.Errorf("WARNING %s - unable to decode pg state count", measurement)
}
tags := map[string]string{
"state": stateName,
}
fields := map[string]interface{}{
"count": stateCount,
}
acc.AddFields("ceph_pgmap_state", fields, tags)
fields := make(map[string]interface{})
for key, value := range pgmap {
switch value.(type) {
case []interface{}:
if key != "pgs_by_state" {
continue
}
for _, state := range value.([]interface{}) {
state_map, ok := state.(map[string]interface{})
if !ok {
return fmt.Errorf("WARNING %s - unable to decode pg state", measurement)
}
state_name, ok := state_map["state_name"].(string)
if !ok {
return fmt.Errorf("WARNING %s - unable to decode pg state name", measurement)
}
state_count, ok := state_map["count"].(float64)
if !ok {
return fmt.Errorf("WARNING %s - unable to decode pg state count", measurement)
}
fields[state_name] = state_count
}
}
}
acc.AddFields("ceph_pgmap_state", fields, map[string]string{})
return nil
}
func decodeDf(acc telegraf.Accumulator, input string) error {
func decodeDf(acc plugins.Accumulator, input string) error {
data := make(map[string]interface{})
err := json.Unmarshal([]byte(input), &data)
if err != nil {
@@ -463,7 +451,7 @@ func decodeDf(acc telegraf.Accumulator, input string) error {
return nil
}
func decodeOsdPoolStats(acc telegraf.Accumulator, input string) error {
func decodeOsdPoolStats(acc plugins.Accumulator, input string) error {
data := make([]map[string]interface{}, 0)
err := json.Unmarshal([]byte(input), &data)
if err != nil {

View File

@@ -1,17 +1,15 @@
package ceph
import (
"encoding/json"
"fmt"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
"io/ioutil"
"os"
"path"
"strconv"
"strings"
"testing"
"github.com/influxdata/telegraf/testutil"
"github.com/stretchr/testify/assert"
)
const (
@@ -26,38 +24,15 @@ func TestParseSockId(t *testing.T) {
func TestParseMonDump(t *testing.T) {
dump, err := parseDump(monPerfDump)
assert.NoError(t, err)
assert.InEpsilon(t, 5678670180, dump["cluster"]["osd_kb_used"], epsilon)
assert.InEpsilon(t, 6866.540527000, dump["paxos"]["store_state_latency.sum"], epsilon)
assert.InEpsilon(t, 5678670180, (*dump)["cluster"]["osd_kb_used"], epsilon)
assert.InEpsilon(t, 6866.540527000, (*dump)["paxos"]["store_state_latency.sum"], epsilon)
}
func TestParseOsdDump(t *testing.T) {
dump, err := parseDump(osdPerfDump)
assert.NoError(t, err)
assert.InEpsilon(t, 552132.109360000, dump["filestore"]["commitcycle_interval.sum"], epsilon)
assert.Equal(t, float64(0), dump["mutex-FileJournal::finisher_lock"]["wait.avgcount"])
}
func TestDecodeStatusPgmapState(t *testing.T) {
data := make(map[string]interface{})
err := json.Unmarshal([]byte(clusterStatusDump), &data)
assert.NoError(t, err)
acc := &testutil.Accumulator{}
err = decodeStatusPgmapState(acc, data)
assert.NoError(t, err)
var results = []struct {
fields map[string]interface{}
tags map[string]string
}{
{map[string]interface{}{"count": float64(2560)}, map[string]string{"state": "active+clean"}},
{map[string]interface{}{"count": float64(10)}, map[string]string{"state": "active+scrubbing"}},
{map[string]interface{}{"count": float64(5)}, map[string]string{"state": "active+backfilling"}},
}
for _, r := range results {
acc.AssertContainsTaggedFields(t, "ceph_pgmap_state", r.fields, r.tags)
}
assert.InEpsilon(t, 552132.109360000, (*dump)["filestore"]["commitcycle_interval.sum"], epsilon)
assert.Equal(t, float64(0), (*dump)["mutex-FileJournal::finisher_lock"]["wait.avgcount"])
}
func TestGather(t *testing.T) {
@@ -710,127 +685,3 @@ var osdPerfDump = `
"wait": { "avgcount": 0,
"sum": 0.000000000}}}
`
var clusterStatusDump = `
{
"health": {
"health": {
"health_services": [
{
"mons": [
{
"name": "a",
"kb_total": 114289256,
"kb_used": 26995516,
"kb_avail": 81465132,
"avail_percent": 71,
"last_updated": "2017-01-03 17:20:57.595004",
"store_stats": {
"bytes_total": 942117141,
"bytes_sst": 0,
"bytes_log": 4345406,
"bytes_misc": 937771735,
"last_updated": "0.000000"
},
"health": "HEALTH_OK"
},
{
"name": "b",
"kb_total": 114289256,
"kb_used": 27871624,
"kb_avail": 80589024,
"avail_percent": 70,
"last_updated": "2017-01-03 17:20:47.784331",
"store_stats": {
"bytes_total": 454853104,
"bytes_sst": 0,
"bytes_log": 5788320,
"bytes_misc": 449064784,
"last_updated": "0.000000"
},
"health": "HEALTH_OK"
},
{
"name": "c",
"kb_total": 130258508,
"kb_used": 38076996,
"kb_avail": 85541692,
"avail_percent": 65,
"last_updated": "2017-01-03 17:21:03.311123",
"store_stats": {
"bytes_total": 455555199,
"bytes_sst": 0,
"bytes_log": 6950876,
"bytes_misc": 448604323,
"last_updated": "0.000000"
},
"health": "HEALTH_OK"
}
]
}
]
},
"timechecks": {
"epoch": 504,
"round": 34642,
"round_status": "finished",
"mons": [
{ "name": "a", "skew": 0, "latency": 0, "health": "HEALTH_OK" },
{ "name": "b", "skew": -0, "latency": 0.000951, "health": "HEALTH_OK" },
{ "name": "c", "skew": -0, "latency": 0.000946, "health": "HEALTH_OK" }
]
},
"summary": [],
"overall_status": "HEALTH_OK",
"detail": []
},
"fsid": "01234567-abcd-9876-0123-ffeeddccbbaa",
"election_epoch": 504,
"quorum": [ 0, 1, 2 ],
"quorum_names": [ "a", "b", "c" ],
"monmap": {
"epoch": 17,
"fsid": "01234567-abcd-9876-0123-ffeeddccbbaa",
"modified": "2016-04-11 14:01:52.600198",
"created": "0.000000",
"mons": [
{ "rank": 0, "name": "a", "addr": "192.168.0.1:6789/0" },
{ "rank": 1, "name": "b", "addr": "192.168.0.2:6789/0" },
{ "rank": 2, "name": "c", "addr": "192.168.0.3:6789/0" }
]
},
"osdmap": {
"osdmap": {
"epoch": 21734,
"num_osds": 24,
"num_up_osds": 24,
"num_in_osds": 24,
"full": false,
"nearfull": false,
"num_remapped_pgs": 0
}
},
"pgmap": {
"pgs_by_state": [
{ "state_name": "active+clean", "count": 2560 },
{ "state_name": "active+scrubbing", "count": 10 },
{ "state_name": "active+backfilling", "count": 5 }
],
"version": 52314277,
"num_pgs": 2560,
"data_bytes": 2700031960713,
"bytes_used": 7478347665408,
"bytes_avail": 9857462382592,
"bytes_total": 17335810048000,
"read_bytes_sec": 0,
"write_bytes_sec": 367217,
"op_per_sec": 98
},
"mdsmap": {
"epoch": 1,
"up": 0,
"in": 0,
"max": 0,
"by_rank": []
}
}
`

View File

@@ -1,7 +1,7 @@
package cgroup
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -34,5 +34,5 @@ func (g *CGroup) Description() string {
}
func init() {
inputs.Add("cgroup", func() telegraf.Input { return &CGroup{} })
inputs.Add("cgroup", func() plugins.Input { return &CGroup{} })
}

View File

@@ -11,12 +11,12 @@ import (
"regexp"
"strconv"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
)
const metricName = "cgroup"
func (g *CGroup) Gather(acc telegraf.Accumulator) error {
func (g *CGroup) Gather(acc plugins.Accumulator) error {
list := make(chan pathInfo)
go g.generateDirs(list)
@@ -32,7 +32,7 @@ func (g *CGroup) Gather(acc telegraf.Accumulator) error {
return nil
}
func (g *CGroup) gatherDir(dir string, acc telegraf.Accumulator) error {
func (g *CGroup) gatherDir(dir string, acc plugins.Accumulator) error {
fields := make(map[string]interface{})
list := make(chan pathInfo)

View File

@@ -3,9 +3,9 @@
package cgroup
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
)
func (g *CGroup) Gather(acc telegraf.Accumulator) error {
func (g *CGroup) Gather(acc plugins.Accumulator) error {
return nil
}

View File

@@ -10,7 +10,7 @@ import (
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -35,7 +35,7 @@ func (*Chrony) SampleConfig() string {
`
}
func (c *Chrony) Gather(acc telegraf.Accumulator) error {
func (c *Chrony) Gather(acc plugins.Accumulator) error {
if len(c.path) == 0 {
return errors.New("chronyc not found: verify that chrony is installed and that chronyc is in your PATH")
}
@@ -127,7 +127,7 @@ func init() {
if len(path) > 0 {
c.path = path
}
inputs.Add("chrony", func() telegraf.Input {
inputs.Add("chrony", func() plugins.Input {
return &c
})
}

View File

@@ -10,7 +10,7 @@ import (
"github.com/aws/aws-sdk-go/service/cloudwatch"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
internalaws "github.com/influxdata/telegraf/internal/config/aws"
"github.com/influxdata/telegraf/internal/errchan"
@@ -176,7 +176,7 @@ func SelectMetrics(c *CloudWatch) ([]*cloudwatch.Metric, error) {
return metrics, nil
}
func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
func (c *CloudWatch) Gather(acc plugins.Accumulator) error {
if c.client == nil {
c.initializeCloudWatch()
}
@@ -210,7 +210,7 @@ func (c *CloudWatch) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("cloudwatch", func() telegraf.Input {
inputs.Add("cloudwatch", func() plugins.Input {
ttl, _ := time.ParseDuration("1hr")
return &CloudWatch{
CacheTTL: internal.Duration{Duration: ttl},
@@ -281,7 +281,7 @@ func (c *CloudWatch) fetchNamespaceMetrics() ([]*cloudwatch.Metric, error) {
* Gather given Metric and emit any error
*/
func (c *CloudWatch) gatherMetric(
acc telegraf.Accumulator,
acc plugins.Accumulator,
metric *cloudwatch.Metric,
now time.Time,
errChan chan error,

View File

@@ -9,7 +9,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
"log"
"path/filepath"
@@ -70,7 +70,7 @@ func (c *Conntrack) SampleConfig() string {
return sampleConfig
}
func (c *Conntrack) Gather(acc telegraf.Accumulator) error {
func (c *Conntrack) Gather(acc plugins.Accumulator) error {
c.setDefaults()
var metricKey string
@@ -116,5 +116,5 @@ func (c *Conntrack) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add(inputName, func() telegraf.Input { return &Conntrack{} })
inputs.Add(inputName, func() plugins.Input { return &Conntrack{} })
}

View File

@@ -35,19 +35,12 @@ Fields:
- check_name
- service_id
- status
- passing
- critical
- warning
`passing`, `critical`, and `warning` are integer representations of the health
check state. A value of `1` means the health check was in that state at the
time of this sample.
## Example output
```
$ telegraf --config ./telegraf.conf -input-filter consul -test
* Plugin: consul, Collection 1
> consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing",passing=1i,critical=0i,warning=0i 1464698464486439902
> consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com,check_id="service:www-example-com.test01" check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical",passing=0i,critical=1i,warning=0i 1464698464486519036
> consul_health_checks,host=wolfpit,node=consul-server-node,check_id="serfHealth" check_name="Serf Health Status",service_id="",status="passing" 1464698464486439902
> consul_health_checks,host=wolfpit,node=consul-server-node,service_name=www.example.com,check_id="service:www-example-com.test01" check_name="Service 'www.example.com' check",service_id="www-example-com.test01",status="critical" 1464698464486519036
```
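
For reference, a minimal input configuration that could produce output like the above might look as follows. This is only a sketch: the `address` and `scheme` option names are assumptions and may differ between Telegraf versions, so check the plugin's sample config.

```
[[inputs.consul]]
  ## Consul agent to query (assumed option names)
  address = "localhost:8500"
  scheme = "http"
```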

View File

@@ -4,7 +4,7 @@ import (
"net/http"
"github.com/hashicorp/consul/api"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -90,19 +90,14 @@ func (c *Consul) createAPIClient() (*api.Client, error) {
return api.NewClient(config)
}
func (c *Consul) GatherHealthCheck(acc telegraf.Accumulator, checks []*api.HealthCheck) {
func (c *Consul) GatherHealthCheck(acc plugins.Accumulator, checks []*api.HealthCheck) {
for _, check := range checks {
record := make(map[string]interface{})
tags := make(map[string]string)
record["check_name"] = check.Name
record["service_id"] = check.ServiceID
record["status"] = check.Status
record["passing"] = 0
record["critical"] = 0
record["warning"] = 0
record[check.Status] = 1
tags["node"] = check.Node
tags["service_name"] = check.ServiceName
@@ -112,7 +107,7 @@ func (c *Consul) GatherHealthCheck(acc telegraf.Accumulator, checks []*api.Healt
}
}
func (c *Consul) Gather(acc telegraf.Accumulator) error {
func (c *Consul) Gather(acc plugins.Accumulator) error {
if c.client == nil {
newClient, err := c.createAPIClient()
@@ -135,7 +130,7 @@ func (c *Consul) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("consul", func() telegraf.Input {
inputs.Add("consul", func() plugins.Input {
return &Consul{}
})
}

View File

@@ -24,9 +24,6 @@ func TestGatherHealtCheck(t *testing.T) {
expectedFields := map[string]interface{}{
"check_name": "foo.health",
"status": "passing",
"passing": 1,
"critical": 0,
"warning": 0,
"service_id": "foo.123",
}

View File

@@ -2,7 +2,7 @@ package couchbase
import (
couchbase "github.com/couchbase/go-couchbase"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
"sync"
)
@@ -34,7 +34,7 @@ func (r *Couchbase) Description() string {
// Reads stats from all configured clusters. Accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (r *Couchbase) Gather(acc telegraf.Accumulator) error {
func (r *Couchbase) Gather(acc plugins.Accumulator) error {
if len(r.Servers) == 0 {
r.gatherServer("http://localhost:8091/", acc, nil)
return nil
@@ -57,7 +57,7 @@ func (r *Couchbase) Gather(acc telegraf.Accumulator) error {
return outerr
}
func (r *Couchbase) gatherServer(addr string, acc telegraf.Accumulator, pool *couchbase.Pool) error {
func (r *Couchbase) gatherServer(addr string, acc plugins.Accumulator, pool *couchbase.Pool) error {
if pool == nil {
client, err := couchbase.Connect(addr)
if err != nil {
@@ -98,7 +98,7 @@ func (r *Couchbase) gatherServer(addr string, acc telegraf.Accumulator, pool *co
}
func init() {
inputs.Add("couchbase", func() telegraf.Input {
inputs.Add("couchbase", func() plugins.Input {
return &Couchbase{}
})
}

View File

@@ -4,7 +4,7 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
"net/http"
"reflect"
@@ -82,7 +82,7 @@ func (*CouchDB) SampleConfig() string {
`
}
func (c *CouchDB) Gather(accumulator telegraf.Accumulator) error {
func (c *CouchDB) Gather(accumulator plugins.Accumulator) error {
errorChannel := make(chan error, len(c.HOSTs))
var wg sync.WaitGroup
for _, u := range c.HOSTs {
@@ -122,7 +122,7 @@ var client = &http.Client{
Timeout: time.Duration(4 * time.Second),
}
func (c *CouchDB) fetchAndInsertData(accumulator telegraf.Accumulator, host string) error {
func (c *CouchDB) fetchAndInsertData(accumulator plugins.Accumulator, host string) error {
response, error := client.Get(host)
if error != nil {
@@ -209,7 +209,7 @@ func (c *CouchDB) generateFields(prefix string, obj metaData) map[string]interfa
}
func init() {
inputs.Add("couchdb", func() telegraf.Input {
inputs.Add("couchdb", func() plugins.Input {
return &CouchDB{}
})
}

View File

@@ -11,7 +11,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -64,7 +64,7 @@ var ErrProtocolError = errors.New("disque protocol error")
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (g *Disque) Gather(acc telegraf.Accumulator) error {
func (g *Disque) Gather(acc plugins.Accumulator) error {
if len(g.Servers) == 0 {
url := &url.URL{
Host: ":7711",
@@ -101,7 +101,7 @@ func (g *Disque) Gather(acc telegraf.Accumulator) error {
const defaultPort = "7711"
func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
func (g *Disque) gatherServer(addr *url.URL, acc plugins.Accumulator) error {
if g.c == nil {
_, _, err := net.SplitHostPort(addr.Host)
@@ -204,7 +204,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("disque", func() telegraf.Input {
inputs.Add("disque", func() plugins.Input {
return &Disque{}
})
}

View File

@@ -8,7 +8,7 @@ import (
"strconv"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -55,7 +55,7 @@ func (d *DnsQuery) SampleConfig() string {
func (d *DnsQuery) Description() string {
return "Query given DNS server and gives statistics"
}
func (d *DnsQuery) Gather(acc telegraf.Accumulator) error {
func (d *DnsQuery) Gather(acc plugins.Accumulator) error {
d.setDefaultValues()
errChan := errchan.New(len(d.Domains) * len(d.Servers))
@@ -156,7 +156,7 @@ func (d *DnsQuery) parseRecordType() (uint16, error) {
}
func init() {
inputs.Add("dns_query", func() telegraf.Input {
inputs.Add("dns_query", func() plugins.Input {
return &DnsQuery{}
})
}

View File

@@ -15,7 +15,7 @@ import (
"github.com/docker/engine-api/client"
"github.com/docker/engine-api/types"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -79,7 +79,7 @@ func (d *Docker) Description() string {
func (d *Docker) SampleConfig() string { return sampleConfig }
// Gather starts stats collection
func (d *Docker) Gather(acc telegraf.Accumulator) error {
func (d *Docker) Gather(acc plugins.Accumulator) error {
if d.client == nil {
var c *client.Client
var err error
@@ -136,7 +136,7 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
return nil
}
func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
func (d *Docker) gatherInfo(acc plugins.Accumulator) error {
// Init vars
dataFields := make(map[string]interface{})
metadataFields := make(map[string]interface{})
@@ -211,7 +211,7 @@ func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
func (d *Docker) gatherContainer(
container types.Container,
acc telegraf.Accumulator,
acc plugins.Accumulator,
) error {
var v *types.StatsJSON
// Parse container name
@@ -272,7 +272,7 @@ func (d *Docker) gatherContainer(
func gatherContainerStats(
stat *types.StatsJSON,
acc telegraf.Accumulator,
acc plugins.Accumulator,
tags map[string]string,
id string,
perDevice bool,
@@ -422,7 +422,7 @@ func calculateCPUPercent(stat *types.StatsJSON) float64 {
func gatherBlockIOMetrics(
stat *types.StatsJSON,
acc telegraf.Accumulator,
acc plugins.Accumulator,
tags map[string]string,
now time.Time,
id string,
@@ -569,7 +569,7 @@ func parseSize(sizeStr string) (int64, error) {
}
func init() {
inputs.Add("docker", func() telegraf.Input {
inputs.Add("docker", func() plugins.Input {
return &Docker{
PerDevice: true,
Timeout: internal.Duration{Duration: time.Second * 5},

View File

@@ -273,6 +273,7 @@ func (d FakeDockerClient) Info(ctx context.Context) (types.Info, error) {
Name: "absol",
SwapLimit: false,
IPv4Forwarding: true,
ExecutionDriver: "native-0.2",
ExperimentalBuild: false,
CPUCfsPeriod: true,
RegistryConfig: &registry.ServiceConfig{

View File

@@ -11,7 +11,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -51,7 +51,7 @@ func (d *Dovecot) SampleConfig() string { return sampleConfig }
const defaultPort = "24242"
// Reads stats from all configured servers.
func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
func (d *Dovecot) Gather(acc plugins.Accumulator) error {
if !validQuery[d.Type] {
return fmt.Errorf("Error: %s is not a valid query type\n",
d.Type)
@@ -81,7 +81,7 @@ func (d *Dovecot) Gather(acc telegraf.Accumulator) error {
return errChan.Error()
}
func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype string, filter string) error {
func (d *Dovecot) gatherServer(addr string, acc plugins.Accumulator, qtype string, filter string) error {
_, _, err := net.SplitHostPort(addr)
if err != nil {
return fmt.Errorf("Error: %s on url %s\n", err, addr)
@@ -111,7 +111,7 @@ func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, qtype stri
return gatherStats(&buf, acc, host, qtype)
}
func gatherStats(buf *bytes.Buffer, acc telegraf.Accumulator, host string, qtype string) error {
func gatherStats(buf *bytes.Buffer, acc plugins.Accumulator, host string, qtype string) error {
lines := strings.Split(buf.String(), "\n")
head := strings.Split(lines[0], "\t")
@@ -183,7 +183,7 @@ func secParser(tm string) float64 {
}
func init() {
inputs.Add("dovecot", func() telegraf.Input {
inputs.Add("dovecot", func() plugins.Input {
return &Dovecot{}
})
}

View File

@@ -8,7 +8,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
@@ -143,7 +143,7 @@ func (e *Elasticsearch) Description() string {
// Gather reads the stats from Elasticsearch and writes it to the
// Accumulator.
func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
func (e *Elasticsearch) Gather(acc plugins.Accumulator) error {
if e.client == nil {
client, err := e.createHttpClient()
@@ -158,7 +158,7 @@ func (e *Elasticsearch) Gather(acc telegraf.Accumulator) error {
wg.Add(len(e.Servers))
for _, serv := range e.Servers {
go func(s string, acc telegraf.Accumulator) {
go func(s string, acc plugins.Accumulator) {
defer wg.Done()
var url string
if e.Local {
@@ -221,7 +221,7 @@ func (e *Elasticsearch) createHttpClient() (*http.Client, error) {
return client, nil
}
func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) error {
func (e *Elasticsearch) gatherNodeStats(url string, acc plugins.Accumulator) error {
nodeStats := &struct {
ClusterName string `json:"cluster_name"`
Nodes map[string]*nodeStat `json:"nodes"`
@@ -273,7 +273,7 @@ func (e *Elasticsearch) gatherNodeStats(url string, acc telegraf.Accumulator) er
return nil
}
func (e *Elasticsearch) gatherClusterHealth(url string, acc telegraf.Accumulator) error {
func (e *Elasticsearch) gatherClusterHealth(url string, acc plugins.Accumulator) error {
healthStats := &clusterHealth{}
if err := e.gatherJsonData(url, healthStats); err != nil {
return err
@@ -318,7 +318,7 @@ func (e *Elasticsearch) gatherClusterHealth(url string, acc telegraf.Accumulator
return nil
}
func (e *Elasticsearch) gatherClusterStats(url string, acc telegraf.Accumulator) error {
func (e *Elasticsearch) gatherClusterStats(url string, acc plugins.Accumulator) error {
clusterStats := &clusterStats{}
if err := e.gatherJsonData(url, clusterStats); err != nil {
return err
@@ -393,7 +393,7 @@ func (e *Elasticsearch) gatherJsonData(url string, v interface{}) error {
}
func init() {
inputs.Add("elasticsearch", func() telegraf.Input {
inputs.Add("elasticsearch", func() plugins.Input {
return NewElasticsearch()
})
}

View File

@@ -13,7 +13,7 @@ import (
"github.com/kballard/go-shellquote"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
@@ -61,12 +61,12 @@ func NewExec() *Exec {
}
type Runner interface {
Run(*Exec, string, telegraf.Accumulator) ([]byte, error)
Run(*Exec, string, plugins.Accumulator) ([]byte, error)
}
type CommandRunner struct{}
func AddNagiosState(exitCode error, acc telegraf.Accumulator) error {
func AddNagiosState(exitCode error, acc plugins.Accumulator) error {
nagiosState := 0
if exitCode != nil {
exiterr, ok := exitCode.(*exec.ExitError)
@@ -89,7 +89,7 @@ func AddNagiosState(exitCode error, acc telegraf.Accumulator) error {
func (c CommandRunner) Run(
e *Exec,
command string,
acc telegraf.Accumulator,
acc plugins.Accumulator,
) ([]byte, error) {
split_cmd, err := shellquote.Split(command)
if err != nil || len(split_cmd) == 0 {
@@ -145,7 +145,7 @@ func removeCarriageReturns(b bytes.Buffer) bytes.Buffer {
}
func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator, wg *sync.WaitGroup) {
func (e *Exec) ProcessCommand(command string, acc plugins.Accumulator, wg *sync.WaitGroup) {
defer wg.Done()
out, err := e.runner.Run(e, command, acc)
@@ -176,7 +176,7 @@ func (e *Exec) SetParser(parser parsers.Parser) {
e.parser = parser
}
func (e *Exec) Gather(acc telegraf.Accumulator) error {
func (e *Exec) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
// Legacy single command support
if e.Command != "" {
@@ -226,7 +226,7 @@ func (e *Exec) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("exec", func() telegraf.Input {
inputs.Add("exec", func() plugins.Input {
return NewExec()
})
}

View File

@@ -6,7 +6,7 @@ import (
"runtime"
"testing"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/parsers"
"github.com/influxdata/telegraf/testutil"
@@ -83,7 +83,7 @@ func newRunnerMock(out []byte, err error) Runner {
}
}
func (r runnerMock) Run(e *Exec, command string, acc telegraf.Accumulator) ([]byte, error) {
func (r runnerMock) Run(e *Exec, command string, acc plugins.Accumulator) ([]byte, error) {
if r.err != nil {
return nil, r.err
}

View File

@@ -7,7 +7,7 @@ import (
"log"
"os"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/globpath"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -47,7 +47,7 @@ func (_ *FileStat) Description() string {
func (_ *FileStat) SampleConfig() string { return sampleConfig }
func (f *FileStat) Gather(acc telegraf.Accumulator) error {
func (f *FileStat) Gather(acc plugins.Accumulator) error {
var errS string
var err error
@@ -126,7 +126,7 @@ func getMd5(file string) (string, error) {
}
func init() {
inputs.Add("filestat", func() telegraf.Input {
inputs.Add("filestat", func() plugins.Input {
return NewFileStat()
})
}

View File

@@ -14,7 +14,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -129,7 +129,7 @@ func (h *GrayLog) Description() string {
}
// Gathers data for all servers.
func (h *GrayLog) Gather(acc telegraf.Accumulator) error {
func (h *GrayLog) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
if h.client.HTTPClient() == nil {
@@ -178,14 +178,14 @@ func (h *GrayLog) Gather(acc telegraf.Accumulator) error {
// Gathers data from a particular server
// Parameters:
// acc : The telegraf Accumulator to use
// acc : The plugins.Accumulator to use
// serverURL: endpoint to send request to
// service : the service being queried
//
// Returns:
// error: Any error that may have occurred
func (h *GrayLog) gatherServer(
acc telegraf.Accumulator,
acc plugins.Accumulator,
serverURL string,
) error {
resp, _, err := h.sendRequest(serverURL)
@@ -304,7 +304,7 @@ func (h *GrayLog) sendRequest(serverURL string) (string, float64, error) {
}
func init() {
inputs.Add("graylog", func() telegraf.Input {
inputs.Add("graylog", func() plugins.Input {
return &GrayLog{
client: &RealHTTPClient{},
}

View File

@@ -10,11 +10,8 @@
servers = ["http://1.2.3.4/haproxy?stats", "/var/run/haproxy*.sock"]
```
#### `servers`
Server addresses must explicitly start with 'http' if you wish to use the HAproxy status page. Otherwise, the address is assumed to be a UNIX socket and any protocol prefix (if present) is discarded.
For basic authentication, add the username and password to the URL: `http://user:password@1.2.3.4/haproxy?stats`.
The following examples all resolve to the same socket:
```
socket:/var/run/haproxy.sock
@@ -27,12 +24,9 @@ When using socket names, wildcard expansion is supported so plugin can gather st
If no servers are specified, then the default address of `http://127.0.0.1:1936/haproxy?stats` will be used.
#### `keep_field_names`
By default, some of the fields are renamed from what haproxy calls them. Setting the `keep_field_names` parameter to `true` will result in the plugin keeping the original field names.
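
As a rough sketch combining the options described above (not itself part of this change set), a configuration mixing an authenticated HTTP stats endpoint with a socket wildcard while keeping the original field names could look like:

```
[[inputs.haproxy]]
  ## HTTP stats URLs (with basic auth) and UNIX sockets can be mixed;
  ## wildcards in socket paths are expanded.
  servers = ["http://user:password@1.2.3.4/haproxy?stats", "/var/run/haproxy*.sock"]
  ## Keep haproxy's original CSV column names instead of the renamed fields.
  keep_field_names = true
```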
### Measurements & Fields:
Plugin will gather measurements outlined in [HAproxy CSV format documentation](https://cbonte.github.io/haproxy-dconv/1.7/management.html#9.1).
Plugin will gather measurements outlined in [HAproxy CSV format documentation](https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#9.1).
### Tags:

View File

@@ -13,18 +13,81 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
)
//CSV format: https://cbonte.github.io/haproxy-dconv/1.5/configuration.html#9.1
const (
HF_PXNAME = 0 // 0. pxname [LFBS]: proxy name
HF_SVNAME = 1 // 1. svname [LFBS]: service name (FRONTEND for frontend, BACKEND for backend, any name for server/listener)
HF_QCUR = 2 //2. qcur [..BS]: current queued requests. For the backend this reports the number queued without a server assigned.
HF_QMAX = 3 //3. qmax [..BS]: max value of qcur
HF_SCUR = 4 // 4. scur [LFBS]: current sessions
HF_SMAX = 5 //5. smax [LFBS]: max sessions
HF_SLIM = 6 //6. slim [LFBS]: configured session limit
HF_STOT = 7 //7. stot [LFBS]: cumulative number of connections
HF_BIN = 8 //8. bin [LFBS]: bytes in
HF_BOUT = 9 //9. bout [LFBS]: bytes out
HF_DREQ = 10 //10. dreq [LFB.]: requests denied because of security concerns.
HF_DRESP = 11 //11. dresp [LFBS]: responses denied because of security concerns.
HF_EREQ = 12 //12. ereq [LF..]: request errors. Some of the possible causes are:
HF_ECON = 13 //13. econ [..BS]: number of requests that encountered an error trying to
HF_ERESP = 14 //14. eresp [..BS]: response errors. srv_abrt will be counted here also. Some other errors are: - write error on the client socket (won't be counted for the server stat) - failure applying filters to the response.
HF_WRETR = 15 //15. wretr [..BS]: number of times a connection to a server was retried.
HF_WREDIS = 16 //16. wredis [..BS]: number of times a request was redispatched to another server. The server value counts the number of times that server was switched away from.
HF_STATUS = 17 //17. status [LFBS]: status (UP/DOWN/NOLB/MAINT/MAINT(via)...)
HF_WEIGHT = 18 //18. weight [..BS]: total weight (backend), server weight (server)
HF_ACT = 19 //19. act [..BS]: number of active servers (backend), server is active (server)
HF_BCK = 20 //20. bck [..BS]: number of backup servers (backend), server is backup (server)
HF_CHKFAIL = 21 //21. chkfail [...S]: number of failed checks. (Only counts checks failed when the server is up.)
HF_CHKDOWN = 22 //22. chkdown [..BS]: number of UP->DOWN transitions. The backend counter counts transitions to the whole backend being down, rather than the sum of the counters for each server.
HF_LASTCHG = 23 //23. lastchg [..BS]: number of seconds since the last UP<->DOWN transition
HF_DOWNTIME = 24 //24. downtime [..BS]: total downtime (in seconds). The value for the backend is the downtime for the whole backend, not the sum of the server downtime.
HF_QLIMIT = 25 //25. qlimit [...S]: configured maxqueue for the server, or nothing in the value is 0 (default, meaning no limit)
HF_PID = 26 //26. pid [LFBS]: process id (0 for first instance, 1 for second, ...)
HF_IID = 27 //27. iid [LFBS]: unique proxy id
HF_SID = 28 //28. sid [L..S]: server id (unique inside a proxy)
HF_THROTTLE = 29 //29. throttle [...S]: current throttle percentage for the server, when slowstart is active, or no value if not in slowstart.
HF_LBTOT = 30 //30. lbtot [..BS]: total number of times a server was selected, either for new sessions, or when re-dispatching. The server counter is the number of times that server was selected.
HF_TRACKED = 31 //31. tracked [...S]: id of proxy/server if tracking is enabled.
HF_TYPE = 32 //32. type [LFBS]: (0 = frontend, 1 = backend, 2 = server, 3 = socket/listener)
HF_RATE = 33 //33. rate [.FBS]: number of sessions per second over last elapsed second
HF_RATE_LIM = 34 //34. rate_lim [.F..]: configured limit on new sessions per second
HF_RATE_MAX = 35 //35. rate_max [.FBS]: max number of new sessions per second
HF_CHECK_STATUS = 36 //36. check_status [...S]: status of last health check, one of:
HF_CHECK_CODE = 37 //37. check_code [...S]: layer5-7 code, if available
HF_CHECK_DURATION = 38 //38. check_duration [...S]: time in ms took to finish last health check
HF_HRSP_1xx = 39 //39. hrsp_1xx [.FBS]: http responses with 1xx code
HF_HRSP_2xx = 40 //40. hrsp_2xx [.FBS]: http responses with 2xx code
HF_HRSP_3xx = 41 //41. hrsp_3xx [.FBS]: http responses with 3xx code
HF_HRSP_4xx = 42 //42. hrsp_4xx [.FBS]: http responses with 4xx code
HF_HRSP_5xx = 43 //43. hrsp_5xx [.FBS]: http responses with 5xx code
HF_HRSP_OTHER = 44 //44. hrsp_other [.FBS]: http responses with other codes (protocol error)
HF_HANAFAIL = 45 //45. hanafail [...S]: failed health checks details
HF_REQ_RATE = 46 //46. req_rate [.F..]: HTTP requests per second over last elapsed second
HF_REQ_RATE_MAX = 47 //47. req_rate_max [.F..]: max number of HTTP requests per second observed
HF_REQ_TOT = 48 //48. req_tot [.F..]: total number of HTTP requests received
HF_CLI_ABRT = 49 //49. cli_abrt [..BS]: number of data transfers aborted by the client
HF_SRV_ABRT = 50 //50. srv_abrt [..BS]: number of data transfers aborted by the server (inc. in eresp)
HF_COMP_IN = 51 //51. comp_in [.FB.]: number of HTTP response bytes fed to the compressor
HF_COMP_OUT = 52 //52. comp_out [.FB.]: number of HTTP response bytes emitted by the compressor
HF_COMP_BYP = 53 //53. comp_byp [.FB.]: number of bytes that bypassed the HTTP compressor (CPU/BW limit)
HF_COMP_RSP = 54 //54. comp_rsp [.FB.]: number of HTTP responses that were compressed
HF_LASTSESS = 55 //55. lastsess [..BS]: number of seconds since last session assigned to server/backend
HF_LAST_CHK = 56 //56. last_chk [...S]: last health check contents or textual error
HF_LAST_AGT = 57 //57. last_agt [...S]: last agent check contents or textual error
HF_QTIME = 58 //58. qtime [..BS]:
HF_CTIME = 59 //59. ctime [..BS]:
HF_RTIME = 60 //60. rtime [..BS]: (0 for TCP)
HF_TTIME = 61 //61. ttime [..BS]: the average total session time in ms over the 1024 last requests
)
type haproxy struct {
Servers []string
client *http.Client
KeepFieldNames bool
}
var sampleConfig = `
@@ -40,11 +103,6 @@ var sampleConfig = `
## Server address not starting with 'http' will be treated as a possible
## socket, so both examples below are valid.
## servers = ["socket:/run/haproxy/admin.sock", "/run/haproxy/*.sock"]
#
## By default, some of the fields are renamed from what haproxy calls them.
## Setting this option to true results in the plugin keeping the original
## field names.
## keep_field_names = true
`
func (r *haproxy) SampleConfig() string {
@@ -57,7 +115,7 @@ func (r *haproxy) Description() string {
// Reads stats from all configured servers and accumulates stats.
// Returns one of the errors encountered while gathering stats (if any).
func (g *haproxy) Gather(acc telegraf.Accumulator) error {
func (g *haproxy) Gather(acc plugins.Accumulator) error {
if len(g.Servers) == 0 {
return g.gatherServer("http://127.0.0.1:1936/haproxy?stats", acc)
}
@@ -89,21 +147,20 @@ func (g *haproxy) Gather(acc telegraf.Accumulator) error {
}
var wg sync.WaitGroup
errChan := errchan.New(len(endpoints))
wg.Add(len(endpoints))
for _, server := range endpoints {
go func(serv string) {
defer wg.Done()
if err := g.gatherServer(serv, acc); err != nil {
acc.AddError(err)
}
errChan.C <- g.gatherServer(serv, acc)
}(server)
}
wg.Wait()
return nil
return errChan.Error()
}
func (g *haproxy) gatherServerSocket(addr string, acc telegraf.Accumulator) error {
func (g *haproxy) gatherServerSocket(addr string, acc plugins.Accumulator) error {
socketPath := getSocketAddr(addr)
c, err := net.Dial("unix", socketPath)
@@ -118,10 +175,10 @@ func (g *haproxy) gatherServerSocket(addr string, acc telegraf.Accumulator) erro
return fmt.Errorf("Could not write to socket '%s': %s", addr, errw)
}
return g.importCsvResult(c, acc, socketPath)
return importCsvResult(c, acc, socketPath)
}
func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
func (g *haproxy) gatherServer(addr string, acc plugins.Accumulator) error {
if !strings.HasPrefix(addr, "http") {
return g.gatherServerSocket(addr, acc)
}
@@ -159,11 +216,7 @@ func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
return fmt.Errorf("Unable to get valid stat result from '%s', http response code : %d", addr, res.StatusCode)
}
if err := g.importCsvResult(res.Body, acc, u.Host); err != nil {
return fmt.Errorf("Unable to parse stat result from '%s': %s", addr, err)
}
return nil
return importCsvResult(res.Body, acc, u.Host)
}
func getSocketAddr(sock string) string {
@@ -176,96 +229,205 @@ func getSocketAddr(sock string) string {
}
}
var typeNames = []string{"frontend", "backend", "server", "listener"}
var fieldRenames = map[string]string{
"pxname": "proxy",
"svname": "sv",
"act": "active_servers",
"bck": "backup_servers",
"cli_abrt": "cli_abort",
"srv_abrt": "srv_abort",
"hrsp_1xx": "http_response.1xx",
"hrsp_2xx": "http_response.2xx",
"hrsp_3xx": "http_response.3xx",
"hrsp_4xx": "http_response.4xx",
"hrsp_5xx": "http_response.5xx",
"hrsp_other": "http_response.other",
}
func (g *haproxy) importCsvResult(r io.Reader, acc telegraf.Accumulator, host string) error {
csvr := csv.NewReader(r)
func importCsvResult(r io.Reader, acc plugins.Accumulator, host string) error {
csv := csv.NewReader(r)
result, err := csv.ReadAll()
now := time.Now()
headers, err := csvr.Read()
if err != nil {
return err
}
if len(headers[0]) <= 2 || headers[0][:2] != "# " {
return fmt.Errorf("did not receive standard haproxy headers")
}
headers[0] = headers[0][2:]
for {
row, err := csvr.Read()
if err == io.EOF {
break
}
if err != nil {
return err
}
for _, row := range result {
fields := make(map[string]interface{})
tags := map[string]string{
"server": host,
"proxy": row[HF_PXNAME],
"sv": row[HF_SVNAME],
}
if len(row) != len(headers) {
return fmt.Errorf("number of columns does not match number of headers. headers=%d columns=%d", len(headers), len(row))
}
for i, v := range row {
if v == "" {
continue
}
colName := headers[i]
fieldName := colName
if !g.KeepFieldNames {
if fieldRename, ok := fieldRenames[colName]; ok {
fieldName = fieldRename
for field, v := range row {
switch field {
case HF_QCUR:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["qcur"] = ival
}
}
switch colName {
case "pxname", "svname":
tags[fieldName] = v
case "type":
vi, err := strconv.ParseInt(v, 10, 64)
if err != nil {
return fmt.Errorf("unable to parse type value '%s'", v)
case HF_QMAX:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["qmax"] = ival
}
if int(vi) >= len(typeNames) {
return fmt.Errorf("received unknown type value: %d", vi)
case HF_SCUR:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["scur"] = ival
}
tags[fieldName] = typeNames[vi]
case "check_desc", "agent_desc":
// do nothing. These fields are just a more verbose description of the check_status & agent_status fields
case "status", "check_status", "last_chk", "mode", "tracked", "agent_status", "last_agt", "addr", "cookie":
// these are string fields
fields[fieldName] = v
case "lastsess":
vi, err := strconv.ParseInt(v, 10, 64)
if err != nil {
//TODO log the error. And just once (per column) so we don't spam the log
continue
case HF_SMAX:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["smax"] = ival
}
fields[fieldName] = vi
default:
vi, err := strconv.ParseUint(v, 10, 64)
if err != nil {
//TODO log the error. And just once (per column) so we don't spam the log
continue
case HF_SLIM:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["slim"] = ival
}
case HF_STOT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["stot"] = ival
}
case HF_BIN:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["bin"] = ival
}
case HF_BOUT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["bout"] = ival
}
case HF_DREQ:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["dreq"] = ival
}
case HF_DRESP:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["dresp"] = ival
}
case HF_EREQ:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["ereq"] = ival
}
case HF_ECON:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["econ"] = ival
}
case HF_ERESP:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["eresp"] = ival
}
case HF_WRETR:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["wretr"] = ival
}
case HF_WREDIS:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["wredis"] = ival
}
case HF_ACT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["active_servers"] = ival
}
case HF_BCK:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["backup_servers"] = ival
}
case HF_DOWNTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["downtime"] = ival
}
case HF_THROTTLE:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["throttle"] = ival
}
case HF_LBTOT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["lbtot"] = ival
}
case HF_RATE:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["rate"] = ival
}
case HF_RATE_MAX:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["rate_max"] = ival
}
case HF_CHECK_DURATION:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["check_duration"] = ival
}
case HF_HRSP_1xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["http_response.1xx"] = ival
}
case HF_HRSP_2xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["http_response.2xx"] = ival
}
case HF_HRSP_3xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["http_response.3xx"] = ival
}
case HF_HRSP_4xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["http_response.4xx"] = ival
}
case HF_HRSP_5xx:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["http_response.5xx"] = ival
}
case HF_REQ_RATE:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["req_rate"] = ival
}
case HF_REQ_RATE_MAX:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["req_rate_max"] = ival
}
case HF_REQ_TOT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["req_tot"] = ival
}
case HF_CLI_ABRT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["cli_abort"] = ival
}
case HF_SRV_ABRT:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["srv_abort"] = ival
}
case HF_QTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["qtime"] = ival
}
case HF_CTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["ctime"] = ival
}
case HF_RTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["rtime"] = ival
}
case HF_TTIME:
ival, err := strconv.ParseUint(v, 10, 64)
if err == nil {
fields["ttime"] = ival
}
fields[fieldName] = vi
}
}
acc.AddFields("haproxy", fields, tags, now)
@@ -274,7 +436,7 @@ func (g *haproxy) importCsvResult(r io.Reader, acc telegraf.Accumulator, host st
}
func init() {
inputs.Add("haproxy", func() telegraf.Input {
inputs.Add("haproxy", func() plugins.Input {
return &haproxy{}
})
}

View File

@@ -68,9 +68,8 @@ func TestHaproxyGeneratesMetricsWithAuthentication(t *testing.T) {
tags := map[string]string{
"server": ts.Listener.Addr().String(),
"proxy": "git",
"sv": "www",
"type": "server",
"proxy": "be_app",
"sv": "host0",
}
fields := HaproxyGetFieldValues()
@@ -81,8 +80,8 @@ func TestHaproxyGeneratesMetricsWithAuthentication(t *testing.T) {
Servers: []string{ts.URL},
}
r.Gather(&acc)
require.NotEmpty(t, acc.Errors)
err = r.Gather(&acc)
require.Error(t, err)
}
func TestHaproxyGeneratesMetricsWithoutAuthentication(t *testing.T) {
@@ -101,10 +100,9 @@ func TestHaproxyGeneratesMetricsWithoutAuthentication(t *testing.T) {
require.NoError(t, err)
tags := map[string]string{
"proxy": "be_app",
"server": ts.Listener.Addr().String(),
"proxy": "git",
"sv": "www",
"type": "server",
"sv": "host0",
}
fields := HaproxyGetFieldValues()
@@ -146,10 +144,9 @@ func TestHaproxyGeneratesMetricsUsingSocket(t *testing.T) {
for _, sock := range sockets {
tags := map[string]string{
"proxy": "be_app",
"server": sock.Addr().String(),
"proxy": "git",
"sv": "www",
"type": "server",
"sv": "host0",
}
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
@@ -158,8 +155,8 @@ func TestHaproxyGeneratesMetricsUsingSocket(t *testing.T) {
// This mask should not match any socket
r.Servers = []string{_badmask}
r.Gather(&acc)
require.NotEmpty(t, acc.Errors)
err = r.Gather(&acc)
require.Error(t, err)
}
//When not passing server config, we default to localhost
@@ -174,122 +171,59 @@ func TestHaproxyDefaultGetFromLocalhost(t *testing.T) {
assert.Contains(t, err.Error(), "127.0.0.1:1936/haproxy?stats/;csv")
}
func TestHaproxyKeepFieldNames(t *testing.T) {
ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
fmt.Fprint(w, csvOutputSample)
}))
defer ts.Close()
r := &haproxy{
Servers: []string{ts.URL},
KeepFieldNames: true,
}
var acc testutil.Accumulator
err := r.Gather(&acc)
require.NoError(t, err)
tags := map[string]string{
"server": ts.Listener.Addr().String(),
"pxname": "git",
"svname": "www",
"type": "server",
}
fields := HaproxyGetFieldValues()
fields["act"] = fields["active_servers"]
delete(fields, "active_servers")
fields["bck"] = fields["backup_servers"]
delete(fields, "backup_servers")
fields["cli_abrt"] = fields["cli_abort"]
delete(fields, "cli_abort")
fields["srv_abrt"] = fields["srv_abort"]
delete(fields, "srv_abort")
fields["hrsp_1xx"] = fields["http_response.1xx"]
delete(fields, "http_response.1xx")
fields["hrsp_2xx"] = fields["http_response.2xx"]
delete(fields, "http_response.2xx")
fields["hrsp_3xx"] = fields["http_response.3xx"]
delete(fields, "http_response.3xx")
fields["hrsp_4xx"] = fields["http_response.4xx"]
delete(fields, "http_response.4xx")
fields["hrsp_5xx"] = fields["http_response.5xx"]
delete(fields, "http_response.5xx")
fields["hrsp_other"] = fields["http_response.other"]
delete(fields, "http_response.other")
acc.AssertContainsTaggedFields(t, "haproxy", fields, tags)
}
func HaproxyGetFieldValues() map[string]interface{} {
fields := map[string]interface{}{
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(5228218),
"bout": uint64(303747244),
"check_code": uint64(200),
"check_duration": uint64(3),
"check_fall": uint64(3),
"check_health": uint64(4),
"check_rise": uint64(2),
"check_status": "L7OK",
"chkdown": uint64(84),
"chkfail": uint64(559),
"cli_abort": uint64(690),
"ctime": uint64(1),
"downtime": uint64(3352),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(21),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(5668),
"http_response.3xx": uint64(8710),
"http_response.4xx": uint64(140),
"http_response.5xx": uint64(0),
"http_response.other": uint64(0),
"iid": uint64(4),
"last_chk": "OK",
"lastchg": uint64(1036557),
"lastsess": int64(1342),
"lbtot": uint64(9481),
"mode": "http",
"pid": uint64(1),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(1268),
"rate": uint64(0),
"rate_max": uint64(2),
"rtime": uint64(2908),
"sid": uint64(1),
"scur": uint64(0),
"slim": uint64(2),
"smax": uint64(2),
"srv_abort": uint64(0),
"status": "UP",
"stot": uint64(14539),
"ttime": uint64(4500),
"weight": uint64(1),
"wredis": uint64(0),
"wretr": uint64(0),
"active_servers": uint64(1),
"backup_servers": uint64(0),
"bin": uint64(510913516),
"bout": uint64(2193856571),
"check_duration": uint64(10),
"cli_abort": uint64(73),
"ctime": uint64(2),
"downtime": uint64(0),
"dresp": uint64(0),
"econ": uint64(0),
"eresp": uint64(1),
"http_response.1xx": uint64(0),
"http_response.2xx": uint64(119534),
"http_response.3xx": uint64(48051),
"http_response.4xx": uint64(2345),
"http_response.5xx": uint64(1056),
"lbtot": uint64(171013),
"qcur": uint64(0),
"qmax": uint64(0),
"qtime": uint64(0),
"rate": uint64(3),
"rate_max": uint64(12),
"rtime": uint64(312),
"scur": uint64(1),
"smax": uint64(32),
"slim": uint64(32),
"srv_abort": uint64(1),
"stot": uint64(171014),
"ttime": uint64(2341),
"wredis": uint64(0),
"wretr": uint64(1),
}
return fields
}
// Can obtain from official haproxy demo: 'http://demo.haproxy.org/;csv'
const csvOutputSample = `
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,agent_status,agent_code,agent_duration,check_desc,agent_desc,check_rise,check_fall,check_health,agent_rise,agent_fall,agent_health,addr,cookie,mode,algo,conn_rate,conn_rate_max,conn_tot,intercepted,dcon,dses,
http-in,FRONTEND,,,3,100,100,2639994,813557487,65937668635,505252,0,47567,,,,,OPEN,,,,,,,,,1,2,0,,,,0,1,0,157,,,,0,1514640,606647,136264,496535,14948,,1,155,2754255,,,36370569635,17435137766,0,642264,,,,,,,,,,,,,,,,,,,,,http,,1,157,2649922,339471,0,0,
http-in,IPv4-direct,,,3,41,100,349801,57445827,1503928881,269899,0,287,,,,,OPEN,,,,,,,,,1,2,1,,,,3,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,http,,,,,,0,0,
http-in,IPv4-cached,,,0,33,100,1786155,644395819,57905460294,60511,0,1,,,,,OPEN,,,,,,,,,1,2,2,,,,3,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,http,,,,,,0,0,
http-in,IPv6-direct,,,0,100,100,325619,92414745,6205208728,3399,0,47279,,,,,OPEN,,,,,,,,,1,2,3,,,,3,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,http,,,,,,0,0,
http-in,local,,,0,0,100,0,0,0,0,0,0,,,,,OPEN,,,,,,,,,1,2,4,,,,3,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,http,,,,,,0,0,
http-in,local-https,,,0,5,100,188347,19301096,323070732,171443,0,0,,,,,OPEN,,,,,,,,,1,2,5,,,,3,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,http,,,,,,0,0,
www,www,0,0,0,20,20,1719698,672044109,64806076656,,0,,0,5285,22,0,UP,1,1,0,561,84,1036557,3356,,1,3,1,,1715117,,2,0,,45,L7OK,200,5,671,1144889,481714,87038,4,0,,,,,105016,167,,,,,5,OK,,0,5,16,1167,,,,Layer7 check passed,,2,3,4,,,,,,http,,,,,,,,
www,bck,0,0,0,10,10,1483,537137,7544118,,0,,0,0,0,0,UP,1,0,1,4,0,5218087,0,,1,3,2,,1371,,2,0,,17,L7OK,200,2,0,629,99,755,0,0,,,,,16,0,,,,,1036557,OK,,756,1,13,1184,,,,Layer7 check passed,,2,5,6,,,,,,http,,,,,,,,
www,BACKEND,0,25,0,46,100,1721835,674684790,64813732170,314,0,,130,5285,22,0,UP,1,1,1,,0,5218087,0,,1,3,0,,1716488,,1,0,,45,,,,0,1145518,481813,88664,5719,121,,,,1721835,105172,167,35669268059,17250148556,0,556042,5,,,0,5,16,1167,,,,,,,,,,,,,,http,,,,,,,,
git,www,0,0,0,2,2,14539,5228218,303747244,,0,,0,21,0,0,UP,1,1,0,559,84,1036557,3352,,1,4,1,,9481,,2,0,,2,L7OK,200,3,0,5668,8710,140,0,0,,,,,690,0,,,,,1342,OK,,1268,1,2908,4500,,,,Layer7 check passed,,2,3,4,,,,,,http,,,,,,,,
git,bck,0,0,0,0,2,0,0,0,,0,,0,0,0,0,UP,1,0,1,2,0,5218087,0,,1,4,2,,0,,2,0,,0,L7OK,200,2,0,0,0,0,0,0,,,,,0,0,,,,,-1,OK,,0,0,0,0,,,,Layer7 check passed,,2,3,4,,,,,,http,,,,,,,,
git,BACKEND,0,6,0,8,2,14541,8082393,303747668,0,0,,2,21,0,0,UP,1,1,1,,0,5218087,0,,1,4,0,,9481,,1,0,,7,,,,0,5668,8710,140,23,0,,,,14541,690,0,133458298,38104818,0,4379,1342,,,1268,1,2908,4500,,,,,,,,,,,,,,http,,,,,,,,
demo,BACKEND,0,0,1,5,20,24063,7876647,659864417,48,0,,1,0,0,0,UP,0,0,0,,0,5218087,,,1,17,0,,0,,1,1,,26,,,,0,23983,21,0,1,57,,,,24062,111,0,567843278,146884392,0,1083,0,,,2706,0,0,887,,,,,,,,,,,,,,http,,,,,,,,
# pxname,svname,qcur,qmax,scur,smax,slim,stot,bin,bout,dreq,dresp,ereq,econ,eresp,wretr,wredis,status,weight,act,bck,chkfail,chkdown,lastchg,downtime,qlimit,pid,iid,sid,throttle,lbtot,tracked,type,rate,rate_lim,rate_max,check_status,check_code,check_duration,hrsp_1xx,hrsp_2xx,hrsp_3xx,hrsp_4xx,hrsp_5xx,hrsp_other,hanafail,req_rate,req_rate_max,req_tot,cli_abrt,srv_abrt,comp_in,comp_out,comp_byp,comp_rsp,lastsess,last_chk,last_agt,qtime,ctime,rtime,ttime,
fe_app,FRONTEND,,81,288,713,2000,1094063,5557055817,24096715169,1102,80,95740,,,17,19,OPEN,,,,,,,,,2,16,113,13,114,,0,18,0,102,,,,0,1314093,537036,123452,11966,1360,,35,140,1987928,,,0,0,0,0,,,,,,,,
be_static,host0,0,0,0,3,,3209,1141294,17389596,,0,,0,0,0,0,no check,1,1,0,,,,,,2,17,1,,3209,,2,0,,7,,,,0,218,1497,1494,0,0,0,,,,0,0,,,,,2,,,0,2,23,545,
be_static,BACKEND,0,0,0,3,200,3209,1141294,17389596,0,0,,0,0,0,0,UP,1,1,0,,0,70698,0,,2,17,0,,3209,,1,0,,7,,,,0,218,1497,1494,0,0,,,,,0,0,0,0,0,0,2,,,0,2,23,545,
be_static,host0,0,0,0,1,,28,17313,466003,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,1,,28,,2,0,,1,L4OK,,1,0,17,6,5,0,0,0,,,,0,0,,,,,2103,,,0,1,1,36,
be_static,host4,0,0,0,1,,28,15358,1281073,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,2,,28,,2,0,,1,L4OK,,1,0,20,5,3,0,0,0,,,,0,0,,,,,2076,,,0,1,1,54,
be_static,host5,0,0,0,1,,28,17547,1970404,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,3,,28,,2,0,,1,L4OK,,0,0,20,5,3,0,0,0,,,,0,0,,,,,1495,,,0,1,1,53,
be_static,host6,0,0,0,1,,28,14105,1328679,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,4,,28,,2,0,,1,L4OK,,0,0,18,8,2,0,0,0,,,,0,0,,,,,1418,,,0,0,1,49,
be_static,host7,0,0,0,1,,28,15258,1965185,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,5,,28,,2,0,,1,L4OK,,0,0,17,8,3,0,0,0,,,,0,0,,,,,935,,,0,0,1,28,
be_static,host8,0,0,0,1,,28,12934,1034779,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,6,,28,,2,0,,1,L4OK,,0,0,17,9,2,0,0,0,,,,0,0,,,,,582,,,0,1,1,66,
be_static,host9,0,0,0,1,,28,13434,134063,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,7,,28,,2,0,,1,L4OK,,0,0,17,8,3,0,0,0,,,,0,0,,,,,539,,,0,0,1,80,
be_static,host1,0,0,0,1,,28,7873,1209688,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,8,,28,,2,0,,1,L4OK,,0,0,22,6,0,0,0,0,,,,0,0,,,,,487,,,0,0,1,36,
be_static,host2,0,0,0,1,,28,13830,1085929,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,9,,28,,2,0,,1,L4OK,,0,0,19,6,3,0,0,0,,,,0,0,,,,,338,,,0,1,1,38,
be_static,host3,0,0,0,1,,28,17959,1259760,,0,,0,0,0,0,UP,1,1,0,0,0,70698,0,,2,18,10,,28,,2,0,,1,L4OK,,1,0,20,6,2,0,0,0,,,,0,0,,,,,92,,,0,1,1,17,
be_static,BACKEND,0,0,0,2,200,307,160276,13322728,0,0,,0,0,0,0,UP,11,11,0,,0,70698,0,,2,18,0,,307,,1,0,,4,,,,0,205,73,29,0,0,,,,,0,0,0,0,0,0,92,,,0,1,3,381,
be_app,host0,0,0,1,32,32,171014,510913516,2193856571,,0,,0,1,1,0,UP,100,1,0,1,0,70698,0,,2,19,1,,171013,,2,3,,12,L7OK,301,10,0,119534,48051,2345,1056,0,0,,,,73,1,,,,,0,Moved Permanently,,0,2,312,2341,
be_app,host4,0,0,2,29,32,171013,499318742,2195595896,12,34,,0,2,0,0,UP,100,1,0,2,0,70698,0,,2,19,2,,171013,,2,3,,12,L7OK,301,12,0,119572,47882,2441,1088,0,0,,,,84,2,,,,,0,Moved Permanently,,0,2,316,2355,
`

View File

@@ -3,7 +3,7 @@
package hddtemp
import (
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
gohddtemp "github.com/influxdata/telegraf/plugins/inputs/hddtemp/go-hddtemp"
)
@@ -40,7 +40,7 @@ func (_ *HDDTemp) SampleConfig() string {
return hddtempSampleConfig
}
func (h *HDDTemp) Gather(acc telegraf.Accumulator) error {
func (h *HDDTemp) Gather(acc plugins.Accumulator) error {
if h.fetcher == nil {
h.fetcher = gohddtemp.New()
}
@@ -73,7 +73,7 @@ func (h *HDDTemp) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("hddtemp", func() telegraf.Input {
inputs.Add("hddtemp", func() plugins.Input {
return &HDDTemp{
Address: defaultAddress,
Devices: []string{"*"},

View File

@@ -10,7 +10,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers/influx"
@@ -42,7 +42,7 @@ type HTTPListener struct {
listener net.Listener
parser influx.InfluxParser
acc telegraf.Accumulator
acc plugins.Accumulator
pool *pool
BytesRecv selfstat.Stat
@@ -84,13 +84,13 @@ func (h *HTTPListener) Description() string {
return "Influx HTTP write listener"
}
func (h *HTTPListener) Gather(_ telegraf.Accumulator) error {
func (h *HTTPListener) Gather(_ plugins.Accumulator) error {
h.BuffersCreated.Set(h.pool.ncreated())
return nil
}
// Start starts the http listener service.
func (h *HTTPListener) Start(acc telegraf.Accumulator) error {
func (h *HTTPListener) Start(acc plugins.Accumulator) error {
h.mu.Lock()
defer h.mu.Unlock()
@@ -324,7 +324,7 @@ func badRequest(res http.ResponseWriter) {
}
func init() {
inputs.Add("http_listener", func() telegraf.Input {
inputs.Add("http_listener", func() plugins.Input {
return &HTTPListener{
ServiceAddress: ":8186",
}

View File

@@ -16,8 +16,6 @@ import (
const (
testMsg = "cpu_load_short,host=server01 value=12.0 1422568543702900257\n"
testMsgNoNewline = "cpu_load_short,host=server01 value=12.0 1422568543702900257"
testMsgs = `cpu_load_short,host=server02 value=12.0 1422568543702900257
cpu_load_short,host=server03 value=12.0 1422568543702900257
cpu_load_short,host=server04 value=12.0 1422568543702900257
@@ -83,28 +81,6 @@ func TestWriteHTTP(t *testing.T) {
)
}
// http listener should add a newline at the end of the buffer if it's not there
func TestWriteHTTPNoNewline(t *testing.T) {
listener := newTestHTTPListener()
acc := &testutil.Accumulator{}
require.NoError(t, listener.Start(acc))
defer listener.Stop()
time.Sleep(time.Millisecond * 25)
// post single message to listener
resp, err := http.Post("http://localhost:8186/write?db=mydb", "", bytes.NewBuffer([]byte(testMsgNoNewline)))
require.NoError(t, err)
require.EqualValues(t, 204, resp.StatusCode)
time.Sleep(time.Millisecond * 15)
acc.AssertContainsTaggedFields(t, "cpu_load_short",
map[string]interface{}{"value": float64(12)},
map[string]string{"host": "server01"},
)
}
func TestWriteHTTPMaxLineSizeIncrease(t *testing.T) {
listener := &HTTPListener{
ServiceAddress: ":8296",

View File

@@ -23,11 +23,6 @@ This input plugin will test HTTP/HTTPS connections.
# {'fake':'data'}
# '''
## Optional substring or regex match in body of the response
## response_string_match = "\"service_status\": \"up\""
## response_string_match = "ok"
## response_string_match = "\".*_status\".?:.?\"up\""
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"

View File

@@ -3,29 +3,24 @@ package http_response
import (
"errors"
"io"
"io/ioutil"
"log"
"net/http"
"net/url"
"regexp"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
// HTTPResponse struct
type HTTPResponse struct {
Address string
Body string
Method string
ResponseTimeout internal.Duration
Headers map[string]string
FollowRedirects bool
ResponseStringMatch string
compiledStringMatch *regexp.Regexp
Address string
Body string
Method string
ResponseTimeout internal.Duration
Headers map[string]string
FollowRedirects bool
// Path to CA file
SSLCA string `toml:"ssl_ca"`
@@ -59,11 +54,6 @@ var sampleConfig = `
# {'fake':'data'}
# '''
## Optional substring or regex match in body of the response
## response_string_match = "\"service_status\": \"up\""
## response_string_match = "ok"
## response_string_match = "\".*_status\".?:.?\"up\""
## Optional SSL Config
# ssl_ca = "/etc/telegraf/ca.pem"
# ssl_cert = "/etc/telegraf/cert.pem"
@@ -147,40 +137,11 @@ func (h *HTTPResponse) HTTPGather() (map[string]interface{}, error) {
}
fields["response_time"] = time.Since(start).Seconds()
fields["http_response_code"] = resp.StatusCode
// Check the response for a regex match.
if h.ResponseStringMatch != "" {
// Compile once and reuse
if h.compiledStringMatch == nil {
h.compiledStringMatch = regexp.MustCompile(h.ResponseStringMatch)
if err != nil {
log.Printf("E! Failed to compile regular expression %s : %s", h.ResponseStringMatch, err)
fields["response_string_match"] = 0
return fields, nil
}
}
bodyBytes, err := ioutil.ReadAll(resp.Body)
if err != nil {
log.Printf("E! Failed to read body of HTTP Response : %s", err)
fields["response_string_match"] = 0
return fields, nil
}
if h.compiledStringMatch.Match(bodyBytes) {
fields["response_string_match"] = 1
} else {
fields["response_string_match"] = 0
}
}
return fields, nil
}
// Gather gets all metric fields and tags and returns any errors it encounters
func (h *HTTPResponse) Gather(acc telegraf.Accumulator) error {
func (h *HTTPResponse) Gather(acc plugins.Accumulator) error {
// Set default values
if h.ResponseTimeout.Duration < time.Second {
h.ResponseTimeout.Duration = time.Second * 5
@@ -213,7 +174,7 @@ func (h *HTTPResponse) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("http_response", func() telegraf.Input {
inputs.Add("http_response", func() plugins.Input {
return &HTTPResponse{}
})
}

View File

@@ -22,9 +22,6 @@ func setUpTestMux() http.Handler {
mux.HandleFunc("/good", func(w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "hit the good page!")
})
mux.HandleFunc("/jsonresponse", func(w http.ResponseWriter, req *http.Request) {
fmt.Fprintf(w, "\"service_status\": \"up\", \"healthy\" : \"true\"")
})
mux.HandleFunc("/badredirect", func(w http.ResponseWriter, req *http.Request) {
http.Redirect(w, req, "/badredirect", http.StatusMovedPermanently)
})
@@ -239,87 +236,6 @@ func TestBody(t *testing.T) {
}
}
func TestStringMatch(t *testing.T) {
mux := setUpTestMux()
ts := httptest.NewServer(mux)
defer ts.Close()
h := &HTTPResponse{
Address: ts.URL + "/good",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseStringMatch: "hit the good page",
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
FollowRedirects: true,
}
fields, err := h.HTTPGather()
require.NoError(t, err)
assert.NotEmpty(t, fields)
if assert.NotNil(t, fields["http_response_code"]) {
assert.Equal(t, http.StatusOK, fields["http_response_code"])
}
assert.Equal(t, 1, fields["response_string_match"])
assert.NotNil(t, fields["response_time"])
}
func TestStringMatchJson(t *testing.T) {
mux := setUpTestMux()
ts := httptest.NewServer(mux)
defer ts.Close()
h := &HTTPResponse{
Address: ts.URL + "/jsonresponse",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseStringMatch: "\"service_status\": \"up\"",
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
FollowRedirects: true,
}
fields, err := h.HTTPGather()
require.NoError(t, err)
assert.NotEmpty(t, fields)
if assert.NotNil(t, fields["http_response_code"]) {
assert.Equal(t, http.StatusOK, fields["http_response_code"])
}
assert.Equal(t, 1, fields["response_string_match"])
assert.NotNil(t, fields["response_time"])
}
func TestStringMatchFail(t *testing.T) {
mux := setUpTestMux()
ts := httptest.NewServer(mux)
defer ts.Close()
h := &HTTPResponse{
Address: ts.URL + "/good",
Body: "{ 'test': 'data'}",
Method: "GET",
ResponseStringMatch: "hit the bad page",
ResponseTimeout: internal.Duration{Duration: time.Second * 20},
Headers: map[string]string{
"Content-Type": "application/json",
},
FollowRedirects: true,
}
fields, err := h.HTTPGather()
require.NoError(t, err)
assert.NotEmpty(t, fields)
if assert.NotNil(t, fields["http_response_code"]) {
assert.Equal(t, http.StatusOK, fields["http_response_code"])
}
assert.Equal(t, 0, fields["response_string_match"])
assert.NotNil(t, fields["response_time"])
}
func TestTimeout(t *testing.T) {
mux := setUpTestMux()
ts := httptest.NewServer(mux)

View File

@@ -10,7 +10,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
@@ -120,7 +120,7 @@ func (h *HttpJson) Description() string {
}
// Gathers data for all servers.
func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
func (h *HttpJson) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
if h.client.HTTPClient() == nil {
@@ -169,14 +169,14 @@ func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
// Gathers data from a particular server
// Parameters:
// acc : The telegraf Accumulator to use
// acc : The plugins.Accumulator to use
// serverURL: endpoint to send request to
// service : the service being queried
//
// Returns:
// error: Any error that may have occurred
func (h *HttpJson) gatherServer(
acc telegraf.Accumulator,
acc plugins.Accumulator,
serverURL string,
) error {
resp, responseTime, err := h.sendRequest(serverURL)
@@ -292,7 +292,7 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
}
func init() {
inputs.Add("httpjson", func() telegraf.Input {
inputs.Add("httpjson", func() plugins.Input {
return &HttpJson{
client: &RealHTTPClient{},
ResponseTimeout: internal.Duration{

View File

@@ -9,7 +9,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -43,7 +43,7 @@ func (*InfluxDB) SampleConfig() string {
`
}
func (i *InfluxDB) Gather(acc telegraf.Accumulator) error {
func (i *InfluxDB) Gather(acc plugins.Accumulator) error {
if len(i.URLs) == 0 {
i.URLs = []string{"http://localhost:8086/debug/vars"}
}
@@ -125,13 +125,13 @@ type memstats struct {
// Gathers data from a particular URL
// Parameters:
// acc : The telegraf Accumulator to use
// acc : The plugins.Accumulator to use
// url : endpoint to send request to
//
// Returns:
// error: Any error that may have occurred
func (i *InfluxDB) gatherURL(
acc telegraf.Accumulator,
acc plugins.Accumulator,
url string,
) error {
shardCounter := 0
@@ -258,7 +258,7 @@ func (i *InfluxDB) gatherURL(
}
func init() {
inputs.Add("influxdb", func() telegraf.Input {
inputs.Add("influxdb", func() plugins.Input {
return &InfluxDB{
Timeout: internal.Duration{Duration: time.Second * 5},
}

View File

@@ -3,7 +3,7 @@ package internal
import (
"runtime"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/selfstat"
)
@@ -12,7 +12,7 @@ type Self struct {
CollectMemstats bool
}
func NewSelf() telegraf.Input {
func NewSelf() plugins.Input {
return &Self{
CollectMemstats: true,
}
@@ -31,7 +31,7 @@ func (s *Self) SampleConfig() string {
return sampleConfig
}
func (s *Self) Gather(acc telegraf.Accumulator) error {
func (s *Self) Gather(acc plugins.Accumulator) error {
if s.CollectMemstats {
m := &runtime.MemStats{}
runtime.ReadMemStats(m)

View File

@@ -4,50 +4,33 @@ Get bare metal metrics using the command line utility `ipmitool`
see ipmitool(https://sourceforge.net/projects/ipmitool/files/ipmitool/)
If no servers are specified, the plugin will query the local machine sensor stats via the following command:
The plugin will use the following command to collect remote host sensor stats:
```
ipmitool sdr
```
When one or more servers are specified, the plugin will use the following command to collect remote host sensor stats:
```
ipmitool -I lan -H SERVER -U USERID -P PASSW0RD sdr
```
ipmitool -I lan -H 192.168.1.1 -U USERID -P PASSW0RD sdr
## Measurements
- ipmi_sensor:
* Tags: `name`, `unit`
* Tags: `name`, `server`, `unit`
* Fields:
- status
- value
The `server` tag will be made available when retrieving stats from remote server(s).
## Configuration
```toml
# Read metrics from the bare metal servers via IPMI
[[inputs.ipmi_sensor]]
## optionally specify the path to the ipmitool executable
# path = "/usr/bin/ipmitool"
#
## optionally specify one or more servers via a url matching
## specify servers via a url matching:
## [username[:password]@][protocol[(address)]]
## e.g.
## root:passwd@lan(127.0.0.1)
##
## if no servers are specified, local machine sensor stats will be queried
##
# servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
servers = ["USERID:PASSW0RD@lan(10.20.2.203)"]
```
## Output
When retrieving stats from a remote server:
```
> ipmi_sensor,server=10.20.2.203,unit=degrees_c,name=ambient_temp status=1i,value=20 1458488465012559455
> ipmi_sensor,server=10.20.2.203,unit=feet,name=altitude status=1i,value=80 1458488465012688613
@@ -57,14 +40,3 @@ When retrieving stats from a remote server:
> ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1a_tach status=1i,value=2610 1458488465013137932
> ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1b_tach status=1i,value=1775 1458488465013279896
```
When retrieving stats from the local machine (no server specified):
```
> ipmi_sensor,unit=degrees_c,name=ambient_temp status=1i,value=20 1458488465012559455
> ipmi_sensor,unit=feet,name=altitude status=1i,value=80 1458488465012688613
> ipmi_sensor,unit=watts,name=avg_power status=1i,value=220 1458488465012776511
> ipmi_sensor,unit=volts,name=planar_3.3v status=1i,value=3.28 1458488465012861875
> ipmi_sensor,unit=volts,name=planar_vbat status=1i,value=3.04 1458488465013072508
> ipmi_sensor,unit=rpm,name=fan_1a_tach status=1i,value=2610 1458488465013137932
> ipmi_sensor,unit=rpm,name=fan_1b_tach status=1i,value=1775 1458488465013279896
```

View File

@@ -0,0 +1,35 @@
package ipmi_sensor
import (
"fmt"
"os/exec"
"strings"
"time"
"github.com/influxdata/telegraf/internal"
)
type CommandRunner struct{}
func (t CommandRunner) cmd(conn *Connection, args ...string) *exec.Cmd {
path := conn.Path
opts := append(conn.options(), args...)
if path == "" {
path = "ipmitool"
}
return exec.Command(path, opts...)
}
func (t CommandRunner) Run(conn *Connection, args ...string) (string, error) {
cmd := t.cmd(conn, args...)
output, err := internal.CombinedOutputTimeout(cmd, time.Second*5)
if err != nil {
return "", fmt.Errorf("run %s %s: %s (%s)",
cmd.Path, strings.Join(cmd.Args, " "), string(output), err)
}
return string(output), err
}
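`internal.CombinedOutputTimeout` is Telegraf's own helper for bounding a command's runtime. As a rough, self-contained approximation of what `CommandRunner.Run` does — `runWithTimeout` is an illustrative name, not the plugin's code, and the real helper handles process termination differently:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// runWithTimeout is an illustrative stand-in for CommandRunner.Run: it executes
// the given binary, bounds its runtime, and returns the combined output.
func runWithTimeout(path string, timeout time.Duration, args ...string) (string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	out, err := exec.CommandContext(ctx, path, args...).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("run %s %v: %s (%s)", path, args, string(out), err)
	}
	return string(out), nil
}

func main() {
	out, err := runWithTimeout("ipmitool", 5*time.Second, "sdr")
	fmt.Println(out, err)
}
```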

View File

@@ -12,6 +12,7 @@ type Connection struct {
Hostname string
Username string
Password string
Path string
Port int
Interface string
}
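`NewConnection` fills this struct from a server string in the `[username[:password]@][protocol[(address)]]` form described in the README. A self-contained illustration of how such a string breaks down into its parts — `parseServer` is only a sketch of the format, not the plugin's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseServer splits "[username[:password]@][protocol[(address)]]" into parts.
func parseServer(s string) (user, pass, iface, host string) {
	if at := strings.LastIndex(s, "@"); at != -1 {
		cred := s[:at]
		s = s[at+1:]
		if c := strings.Index(cred, ":"); c != -1 {
			user, pass = cred[:c], cred[c+1:]
		} else {
			user = cred
		}
	}
	if open := strings.Index(s, "("); open != -1 && strings.HasSuffix(s, ")") {
		iface = s[:open]
		host = s[open+1 : len(s)-1]
	} else {
		iface = s
	}
	return user, pass, iface, host
}

func main() {
	u, p, i, h := parseServer("USERID:PASSW0RD@lan(192.168.1.1)")
	fmt.Println(u, p, i, h) // USERID PASSW0RD lan 192.168.1.1
}
```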

View File

@@ -1,62 +1,48 @@
package ipmi_sensor
import (
"fmt"
"os/exec"
"strconv"
"strings"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
)
var (
execCommand = exec.Command // execCommand is used to mock commands in tests.
)
type Ipmi struct {
path string
Servers []string
runner Runner
}
var sampleConfig = `
## optionally specify the path to the ipmitool executable
# path = "/usr/bin/ipmitool"
#
## optionally specify one or more servers via a url matching
## specify servers via a url matching:
## [username[:password]@][protocol[(address)]]
## e.g.
## root:passwd@lan(127.0.0.1)
##
## if no servers are specified, local machine sensor stats will be queried
##
# servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
`
func NewIpmi() *Ipmi {
return &Ipmi{
runner: CommandRunner{},
}
}
func (m *Ipmi) SampleConfig() string {
return sampleConfig
}
func (m *Ipmi) Description() string {
return "Read metrics from the bare metal servers via IPMI"
return "Read metrics from one or many bare metal servers"
}
func (m *Ipmi) Gather(acc telegraf.Accumulator) error {
if len(m.path) == 0 {
return fmt.Errorf("ipmitool not found: verify that ipmitool is installed and that ipmitool is in your PATH")
func (m *Ipmi) Gather(acc plugins.Accumulator) error {
if m.runner == nil {
m.runner = CommandRunner{}
}
if len(m.Servers) > 0 {
for _, server := range m.Servers {
err := m.parse(acc, server)
if err != nil {
return err
}
}
} else {
err := m.parse(acc, "")
for _, serv := range m.Servers {
err := m.gatherServer(serv, acc)
if err != nil {
return err
}
@@ -65,26 +51,17 @@ func (m *Ipmi) Gather(acc telegraf.Accumulator) error {
return nil
}
func (m *Ipmi) parse(acc telegraf.Accumulator, server string) error {
opts := make([]string, 0)
hostname := ""
func (m *Ipmi) gatherServer(serv string, acc plugins.Accumulator) error {
conn := NewConnection(serv)
if server != "" {
conn := NewConnection(server)
hostname = conn.Hostname
opts = conn.options()
}
opts = append(opts, "sdr")
cmd := execCommand(m.path, opts...)
out, err := internal.CombinedOutputTimeout(cmd, time.Second*5)
res, err := m.runner.Run(conn, "sdr")
if err != nil {
return fmt.Errorf("failed to run command %s: %s - %s", strings.Join(cmd.Args, " "), err, string(out))
return err
}
// each line will look something like
// Planar VBAT | 3.05 Volts | ok
lines := strings.Split(string(out), "\n")
lines := strings.Split(res, "\n")
for i := 0; i < len(lines); i++ {
vals := strings.Split(lines[i], "|")
if len(vals) != 3 {
@@ -92,12 +69,8 @@ func (m *Ipmi) parse(acc telegraf.Accumulator, server string) error {
}
tags := map[string]string{
"name": transform(vals[0]),
}
// tag the server if we have one
if hostname != "" {
tags["server"] = hostname
"server": conn.Hostname,
"name": transform(vals[0]),
}
fields := make(map[string]interface{})
@@ -126,6 +99,10 @@ func (m *Ipmi) parse(acc telegraf.Accumulator, server string) error {
return nil
}
type Runner interface {
Run(conn *Connection, args ...string) (string, error)
}
func Atofloat(val string) float64 {
f, err := strconv.ParseFloat(val, 64)
if err != nil {
@@ -146,12 +123,7 @@ func transform(s string) string {
}
func init() {
m := Ipmi{}
path, _ := exec.LookPath("ipmitool")
if len(path) > 0 {
m.path = path
}
inputs.Add("ipmi_sensor", func() telegraf.Input {
return &m
inputs.Add("ipmi_sensor", func() plugins.Input {
return &Ipmi{}
})
}

View File

@@ -1,9 +1,6 @@
package ipmi_sensor
import (
"fmt"
"os"
"os/exec"
"testing"
"github.com/influxdata/telegraf/testutil"
@@ -11,219 +8,10 @@ import (
"github.com/stretchr/testify/require"
)
func TestGather(t *testing.T) {
i := &Ipmi{
Servers: []string{"USERID:PASSW0RD@lan(192.168.1.1)"},
path: "ipmitool",
}
// overwriting exec commands with mock commands
execCommand = fakeExecCommand
var acc testutil.Accumulator
const serv = "USERID:PASSW0RD@lan(192.168.1.1)"
err := i.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, acc.NFields(), 266, "non-numeric measurements should be ignored")
conn := NewConnection(i.Servers[0])
assert.Equal(t, "USERID", conn.Username)
assert.Equal(t, "lan", conn.Interface)
var testsWithServer = []struct {
fields map[string]interface{}
tags map[string]string
}{
{
map[string]interface{}{
"value": float64(20),
"status": int(1),
},
map[string]string{
"name": "ambient_temp",
"server": "192.168.1.1",
"unit": "degrees_c",
},
},
{
map[string]interface{}{
"value": float64(80),
"status": int(1),
},
map[string]string{
"name": "altitude",
"server": "192.168.1.1",
"unit": "feet",
},
},
{
map[string]interface{}{
"value": float64(210),
"status": int(1),
},
map[string]string{
"name": "avg_power",
"server": "192.168.1.1",
"unit": "watts",
},
},
{
map[string]interface{}{
"value": float64(4.9),
"status": int(1),
},
map[string]string{
"name": "planar_5v",
"server": "192.168.1.1",
"unit": "volts",
},
},
{
map[string]interface{}{
"value": float64(3.05),
"status": int(1),
},
map[string]string{
"name": "planar_vbat",
"server": "192.168.1.1",
"unit": "volts",
},
},
{
map[string]interface{}{
"value": float64(2610),
"status": int(1),
},
map[string]string{
"name": "fan_1a_tach",
"server": "192.168.1.1",
"unit": "rpm",
},
},
{
map[string]interface{}{
"value": float64(1775),
"status": int(1),
},
map[string]string{
"name": "fan_1b_tach",
"server": "192.168.1.1",
"unit": "rpm",
},
},
}
for _, test := range testsWithServer {
acc.AssertContainsTaggedFields(t, "ipmi_sensor", test.fields, test.tags)
}
i = &Ipmi{
path: "ipmitool",
}
err = i.Gather(&acc)
var testsWithoutServer = []struct {
fields map[string]interface{}
tags map[string]string
}{
{
map[string]interface{}{
"value": float64(20),
"status": int(1),
},
map[string]string{
"name": "ambient_temp",
"unit": "degrees_c",
},
},
{
map[string]interface{}{
"value": float64(80),
"status": int(1),
},
map[string]string{
"name": "altitude",
"unit": "feet",
},
},
{
map[string]interface{}{
"value": float64(210),
"status": int(1),
},
map[string]string{
"name": "avg_power",
"unit": "watts",
},
},
{
map[string]interface{}{
"value": float64(4.9),
"status": int(1),
},
map[string]string{
"name": "planar_5v",
"unit": "volts",
},
},
{
map[string]interface{}{
"value": float64(3.05),
"status": int(1),
},
map[string]string{
"name": "planar_vbat",
"unit": "volts",
},
},
{
map[string]interface{}{
"value": float64(2610),
"status": int(1),
},
map[string]string{
"name": "fan_1a_tach",
"unit": "rpm",
},
},
{
map[string]interface{}{
"value": float64(1775),
"status": int(1),
},
map[string]string{
"name": "fan_1b_tach",
"unit": "rpm",
},
},
}
for _, test := range testsWithoutServer {
acc.AssertContainsTaggedFields(t, "ipmi_sensor", test.fields, test.tags)
}
}
// fakeExecCommand is a helper function that mocks
// the exec.Command call (and calls the test binary)
func fakeExecCommand(command string, args ...string) *exec.Cmd {
cs := []string{"-test.run=TestHelperProcess", "--", command}
cs = append(cs, args...)
cmd := exec.Command(os.Args[0], cs...)
cmd.Env = []string{"GO_WANT_HELPER_PROCESS=1"}
return cmd
}
// TestHelperProcess isn't a real test. It's used to mock exec.Command
// For example, if you run:
// GO_WANT_HELPER_PROCESS=1 go test -test.run=TestHelperProcess -- chrony tracking
// it returns the mockData below.
func TestHelperProcess(t *testing.T) {
if os.Getenv("GO_WANT_HELPER_PROCESS") != "1" {
return
}
mockData := `Ambient Temp | 20 degrees C | ok
const cmdReturn = `
Ambient Temp | 20 degrees C | ok
Altitude | 80 feet | ok
Avg Power | 210 Watts | ok
Planar 3.3V | 3.29 Volts | ok
@@ -358,18 +146,130 @@ PCI 5 | 0x00 | ok
OS RealTime Mod | 0x00 | ok
`
args := os.Args
// Previous arguments are test harness arguments that look like:
// /tmp/go-build970079519/…/_test/integration.test -test.run=TestHelperProcess --
cmd, args := args[3], args[4:]
if cmd == "ipmitool" {
fmt.Fprint(os.Stdout, mockData)
} else {
fmt.Fprint(os.Stdout, "command not found")
os.Exit(1)
}
os.Exit(0)
type runnerMock struct {
out string
err error
}
func newRunnerMock(out string, err error) Runner {
return &runnerMock{
out: out,
err: err,
}
}
func (r runnerMock) Run(conn *Connection, args ...string) (out string, err error) {
if r.err != nil {
return out, r.err
}
return r.out, nil
}
func TestIpmi(t *testing.T) {
i := &Ipmi{
Servers: []string{"USERID:PASSW0RD@lan(192.168.1.1)"},
runner: newRunnerMock(cmdReturn, nil),
}
var acc testutil.Accumulator
err := i.Gather(&acc)
require.NoError(t, err)
assert.Equal(t, acc.NFields(), 266, "non-numeric measurements should be ignored")
var tests = []struct {
fields map[string]interface{}
tags map[string]string
}{
{
map[string]interface{}{
"value": float64(20),
"status": int(1),
},
map[string]string{
"name": "ambient_temp",
"server": "192.168.1.1",
"unit": "degrees_c",
},
},
{
map[string]interface{}{
"value": float64(80),
"status": int(1),
},
map[string]string{
"name": "altitude",
"server": "192.168.1.1",
"unit": "feet",
},
},
{
map[string]interface{}{
"value": float64(210),
"status": int(1),
},
map[string]string{
"name": "avg_power",
"server": "192.168.1.1",
"unit": "watts",
},
},
{
map[string]interface{}{
"value": float64(4.9),
"status": int(1),
},
map[string]string{
"name": "planar_5v",
"server": "192.168.1.1",
"unit": "volts",
},
},
{
map[string]interface{}{
"value": float64(3.05),
"status": int(1),
},
map[string]string{
"name": "planar_vbat",
"server": "192.168.1.1",
"unit": "volts",
},
},
{
map[string]interface{}{
"value": float64(2610),
"status": int(1),
},
map[string]string{
"name": "fan_1a_tach",
"server": "192.168.1.1",
"unit": "rpm",
},
},
{
map[string]interface{}{
"value": float64(1775),
"status": int(1),
},
map[string]string{
"name": "fan_1b_tach",
"server": "192.168.1.1",
"unit": "rpm",
},
},
}
for _, test := range tests {
acc.AssertContainsTaggedFields(t, "ipmi_sensor", test.fields, test.tags)
}
}
func TestIpmiConnection(t *testing.T) {
conn := NewConnection(serv)
assert.Equal(t, "USERID", conn.Username)
assert.Equal(t, "lan", conn.Interface)
}

View File

@@ -30,17 +30,11 @@ You may edit your sudo configuration with the following:
telegraf ALL=(root) NOPASSWD: /usr/bin/iptables -nvL *
```
### Using IPtables lock feature
Defining multiple instances of this plugin in telegraf.conf can lead to concurrent iptables access, resulting in "ERROR in input [inputs.iptables]: exit status 4" messages in telegraf.log and missing metrics. Setting 'use_lock = true' in the plugin configuration runs iptables with the '-w' switch, which makes it wait for the xtables lock instead of failing and so prevents this error.
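If `use_sudo` and `use_lock` are enabled together, the sudoers entry shown earlier has to permit the `-w` variant of the list command as well, for example (the iptables path may differ per distribution):

```
telegraf ALL=(root) NOPASSWD: /usr/bin/iptables -wnvL *
```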
### Configuration:
```toml
# use sudo to run iptables
use_sudo = false
# run iptables with the lock option
use_lock = false
# defines the table to monitor:
table = "filter"
# defines the chains to monitor:

View File

@@ -9,14 +9,13 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
)
// Iptables is a telegraf plugin to gather packets and bytes throughput from Linux's iptables packet filter.
type Iptables struct {
UseSudo bool
UseLock bool
Table string
Chains []string
lister chainLister
@@ -33,11 +32,8 @@ func (ipt *Iptables) SampleConfig() string {
## iptables require root access on most systems.
## Setting 'use_sudo' to true will make use of sudo to run iptables.
## Users must configure sudo to allow telegraf user to run iptables with no password.
## iptables can be restricted to only list command "iptables -nvL"
## iptables can be restricted to only list command "iptables -nvL"
use_sudo = false
## Setting 'use_lock' to true runs iptables with the "-w" option.
## Adjust your sudo settings appropriately if using this option ("iptables -wnvl")
use_lock = false
## defines the table to monitor:
table = "filter"
## defines the chains to monitor:
@@ -46,7 +42,7 @@ func (ipt *Iptables) SampleConfig() string {
}
// Gather gathers iptables packets and bytes throughput from the configured tables and chains.
func (ipt *Iptables) Gather(acc telegraf.Accumulator) error {
func (ipt *Iptables) Gather(acc plugins.Accumulator) error {
if ipt.Table == "" || len(ipt.Chains) == 0 {
return nil
}
@@ -79,11 +75,7 @@ func (ipt *Iptables) chainList(table, chain string) (string, error) {
name = "sudo"
args = append(args, iptablePath)
}
iptablesBaseArgs := "-nvL"
if ipt.UseLock {
iptablesBaseArgs = "-wnvL"
}
args = append(args, iptablesBaseArgs, chain, "-t", table, "-x")
args = append(args, "-nvL", chain, "-t", table, "-x")
c := exec.Command(name, args...)
out, err := c.Output()
return string(out), err
@@ -96,7 +88,7 @@ var chainNameRe = regexp.MustCompile(`^Chain\s+(\S+)`)
var fieldsHeaderRe = regexp.MustCompile(`^\s*pkts\s+bytes\s+`)
var valuesRe = regexp.MustCompile(`^\s*([0-9]+)\s+([0-9]+)\s+.*?(/\*\s(.*)\s\*/)?$`)
func (ipt *Iptables) parseAndGather(data string, acc telegraf.Accumulator) error {
func (ipt *Iptables) parseAndGather(data string, acc plugins.Accumulator) error {
lines := strings.Split(data, "\n")
if len(lines) < 3 {
return nil
@@ -128,7 +120,7 @@ func (ipt *Iptables) parseAndGather(data string, acc telegraf.Accumulator) error
type chainLister func(table, chain string) (string, error)
func init() {
inputs.Add("iptables", func() telegraf.Input {
inputs.Add("iptables", func() plugins.Input {
ipt := new(Iptables)
ipt.lister = ipt.chainList
return ipt

View File

@@ -10,7 +10,7 @@ import (
"net/url"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -47,13 +47,12 @@ func (c JolokiaClientImpl) MakeRequest(req *http.Request) (*http.Response, error
}
type Jolokia struct {
jClient JolokiaClient
Context string
Mode string
Servers []Server
Metrics []Metric
Proxy Server
Delimiter string
jClient JolokiaClient
Context string
Mode string
Servers []Server
Metrics []Metric
Proxy Server
ResponseHeaderTimeout internal.Duration `toml:"response_header_timeout"`
ClientTimeout internal.Duration `toml:"client_timeout"`
@@ -85,13 +84,6 @@ const sampleConfig = `
## Includes connection time, any redirects, and reading the response body.
# client_timeout = "4s"
## Attribute delimiter
##
## When multiple attributes are returned for a single
## [inputs.jolokia.metrics], the field name is a concatenation of the metric
## name, and the attribute name, separated by the given delimiter.
# delimiter = "_"
## List of servers exposing jolokia read service
[[inputs.jolokia.servers]]
name = "as-server-01"
@@ -246,17 +238,17 @@ func (j *Jolokia) prepareRequest(server Server, metric Metric) (*http.Request, e
return req, nil
}
func (j *Jolokia) extractValues(measurement string, value interface{}, fields map[string]interface{}) {
func extractValues(measurement string, value interface{}, fields map[string]interface{}) {
if mapValues, ok := value.(map[string]interface{}); ok {
for k2, v2 := range mapValues {
j.extractValues(measurement+j.Delimiter+k2, v2, fields)
extractValues(measurement+"_"+k2, v2, fields)
}
} else {
fields[measurement] = value
}
}
func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
func (j *Jolokia) Gather(acc plugins.Accumulator) error {
if j.jClient == nil {
tr := &http.Transport{ResponseHeaderTimeout: j.ResponseHeaderTimeout.Duration}
@@ -290,7 +282,7 @@ func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
fmt.Printf("Error handling response: %s\n", err)
} else {
if values, ok := out["value"]; ok {
j.extractValues(measurement, values, fields)
extractValues(measurement, values, fields)
} else {
fmt.Printf("Missing key 'value' in output response\n")
}
@@ -305,11 +297,10 @@ func (j *Jolokia) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("jolokia", func() telegraf.Input {
inputs.Add("jolokia", func() plugins.Input {
return &Jolokia{
ResponseHeaderTimeout: DefaultResponseHeaderTimeout,
ClientTimeout: DefaultClientTimeout,
Delimiter: "_",
}
})
}
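The removed `delimiter` option controlled how nested attribute maps in a Jolokia read response collapse into flat field names; with this change the separator is fixed to `_`. A standalone sketch of that flattening behaviour — `flatten` is illustrative only, not the plugin's exact `extractValues`:

```go
package main

import "fmt"

// flatten collapses nested response maps into flat field names joined by delim.
func flatten(name string, value interface{}, delim string, fields map[string]interface{}) {
	if m, ok := value.(map[string]interface{}); ok {
		for k, v := range m {
			flatten(name+delim+k, v, delim, fields)
		}
		return
	}
	fields[name] = value
}

func main() {
	resp := map[string]interface{}{
		"used": 203288528,
		"max":  518979584,
	}
	fields := map[string]interface{}{}
	flatten("heap_memory_usage", resp, "_", fields)
	fmt.Println(fields)
	// map[heap_memory_usage_max:518979584 heap_memory_usage_used:203288528]
}
```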

View File

@@ -104,10 +104,9 @@ func (c jolokiaClientStub) MakeRequest(req *http.Request) (*http.Response, error
// *HttpJson: Pointer to an HttpJson object that uses the generated mock HTTP client
func genJolokiaClientStub(response string, statusCode int, servers []Server, metrics []Metric) *Jolokia {
return &Jolokia{
jClient: jolokiaClientStub{responseBody: response, statusCode: statusCode},
Servers: servers,
Metrics: metrics,
Delimiter: "_",
jClient: jolokiaClientStub{responseBody: response, statusCode: statusCode},
Servers: servers,
Metrics: metrics,
}
}

View File

@@ -5,7 +5,7 @@ import (
"strings"
"sync"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
"github.com/influxdata/telegraf/plugins/parsers"
@@ -33,11 +33,11 @@ type Kafka struct {
// channel for all incoming kafka messages
in <-chan *sarama.ConsumerMessage
// channel for all kafka consumer errors
errs <-chan error
errs <-chan *sarama.ConsumerError
done chan struct{}
// keep the accumulator internally:
acc telegraf.Accumulator
acc plugins.Accumulator
// doNotCommitMsgs tells the parser not to call CommitUpTo on the consumer
// this is mostly for test purposes, but there may be a use-case for it later.
@@ -75,7 +75,7 @@ func (k *Kafka) SetParser(parser parsers.Parser) {
k.parser = parser
}
func (k *Kafka) Start(acc telegraf.Accumulator) error {
func (k *Kafka) Start(acc plugins.Accumulator) error {
k.Lock()
defer k.Unlock()
var consumerErr error
@@ -162,12 +162,12 @@ func (k *Kafka) Stop() {
}
}
func (k *Kafka) Gather(acc telegraf.Accumulator) error {
func (k *Kafka) Gather(acc plugins.Accumulator) error {
return nil
}
func init() {
inputs.Add("kafka_consumer", func() telegraf.Input {
inputs.Add("kafka_consumer", func() plugins.Input {
return &Kafka{}
})
}

View File

@@ -27,7 +27,7 @@ func newTestKafka() (*Kafka, chan *sarama.ConsumerMessage) {
Offset: "oldest",
in: in,
doNotCommitMsgs: true,
errs: make(chan error, 1000),
errs: make(chan *sarama.ConsumerError, 1000),
done: make(chan struct{}),
}
return &k, in

View File

@@ -9,7 +9,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
@@ -54,7 +54,7 @@ const (
)
func init() {
inputs.Add("kubernetes", func() telegraf.Input {
inputs.Add("kubernetes", func() plugins.Input {
return &Kubernetes{}
})
}
@@ -70,7 +70,7 @@ func (k *Kubernetes) Description() string {
}
//Gather collects kubernetes metrics from a given URL
func (k *Kubernetes) Gather(acc telegraf.Accumulator) error {
func (k *Kubernetes) Gather(acc plugins.Accumulator) error {
var wg sync.WaitGroup
errChan := errchan.New(1)
wg.Add(1)
@@ -91,7 +91,7 @@ func buildURL(endpoint string, base string) (*url.URL, error) {
return addr, nil
}
func (k *Kubernetes) gatherSummary(baseURL string, acc telegraf.Accumulator) error {
func (k *Kubernetes) gatherSummary(baseURL string, acc plugins.Accumulator) error {
url := fmt.Sprintf("%s/stats/summary", baseURL)
var req, err = http.NewRequest("GET", url, nil)
var token []byte
@@ -139,7 +139,7 @@ func (k *Kubernetes) gatherSummary(baseURL string, acc telegraf.Accumulator) err
return nil
}
func buildSystemContainerMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) {
func buildSystemContainerMetrics(summaryMetrics *SummaryMetrics, acc plugins.Accumulator) {
for _, container := range summaryMetrics.Node.SystemContainers {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
@@ -161,7 +161,7 @@ func buildSystemContainerMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Ac
}
}
func buildNodeMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) {
func buildNodeMetrics(summaryMetrics *SummaryMetrics, acc plugins.Accumulator) {
tags := map[string]string{
"node_name": summaryMetrics.Node.NodeName,
}
@@ -187,7 +187,7 @@ func buildNodeMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator)
acc.AddFields("kubernetes_node", fields, tags)
}
func buildPodMetrics(summaryMetrics *SummaryMetrics, acc telegraf.Accumulator) {
func buildPodMetrics(summaryMetrics *SummaryMetrics, acc plugins.Accumulator) {
for _, pod := range summaryMetrics.Pods {
for _, container := range pod.Containers {
tags := map[string]string{

View File

@@ -45,7 +45,7 @@ type CPUMetrics struct {
// PodMetrics contains metric data on a given pod
type PodMetrics struct {
PodRef PodReference `json:"podRef"`
StartTime *time.Time `json:"startTime"`
StartTime time.Time `json:"startTime"`
Containers []ContainerMetrics `json:"containers"`
Network NetworkMetrics `json:"network"`
Volumes []VolumeMetrics `json:"volume"`
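The pointer and value forms of `startTime` behave differently when the kubelet reports `"startTime": null` for a stopped pod, as in the test payload removed below. A self-contained sketch of the difference — `withPointer` and `withValue` are hypothetical types for illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// withPointer leaves StartTime nil when the JSON value is null.
type withPointer struct {
	StartTime *time.Time `json:"startTime"`
}

// withValue decodes null to the zero time.Time, because time.Time's own
// unmarshaller ignores JSON null.
type withValue struct {
	StartTime time.Time `json:"startTime"`
}

func main() {
	payload := []byte(`{"startTime": null}`)

	var p withPointer
	fmt.Println(json.Unmarshal(payload, &p), p.StartTime) // <nil> <nil>

	var v withValue
	fmt.Println(json.Unmarshal(payload, &v), v.StartTime.IsZero()) // <nil> true
}
```

With the pointer form a stopped pod's missing start time stays `nil` and can be checked explicitly; with the value form it silently becomes the zero timestamp.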

View File

@@ -92,29 +92,6 @@ func TestKubernetesStats(t *testing.T) {
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_container", fields, tags)
fields = map[string]interface{}{
"cpu_usage_nanocores": int64(846503),
"cpu_usage_core_nanoseconds": int64(56507553554),
"memory_usage_bytes": int64(0),
"memory_working_set_bytes": int64(0),
"memory_rss_bytes": int64(0),
"memory_page_faults": int64(0),
"memory_major_page_faults": int64(0),
"rootfs_available_bytes": int64(0),
"rootfs_capacity_bytes": int64(0),
"rootfs_used_bytes": int64(0),
"logsfs_avaialble_bytes": int64(0),
"logsfs_capacity_bytes": int64(0),
"logsfs_used_bytes": int64(0),
}
tags = map[string]string{
"node_name": "node1",
"container_name": "stopped-container",
"namespace": "foons",
"pod_name": "stopped-pod",
}
acc.AssertContainsTaggedFields(t, "kubernetes_pod_container", fields, tags)
fields = map[string]interface{}{
"available_bytes": int64(7903948800),
"capacity_bytes": int64(7903961088),
@@ -307,25 +284,6 @@ var response = `
"name": "volume4"
}
]
},
{
"podRef": {
"name": "stopped-pod",
"namespace": "foons",
"uid": "da7c1865-d67d-4688-b679-c485ed44b2aa"
},
"startTime": null,
"containers": [
{
"name": "stopped-container",
"startTime": "2016-09-26T18:46:43Z",
"cpu": {
"time": "2016-09-27T16:57:32Z",
"usageNanoCores": 846503,
"usageCoreNanoSeconds": 56507553554
}
}
]
}
]
}`

View File

@@ -10,7 +10,7 @@ import (
"sync"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -148,7 +148,7 @@ func (l *LeoFS) Description() string {
return "Read metrics from a LeoFS Server via SNMP"
}
func (l *LeoFS) Gather(acc telegraf.Accumulator) error {
func (l *LeoFS) Gather(acc plugins.Accumulator) error {
if len(l.Servers) == 0 {
l.gatherServer(defaultEndpoint, ServerTypeManagerMaster, acc)
return nil
@@ -181,7 +181,7 @@ func (l *LeoFS) Gather(acc telegraf.Accumulator) error {
func (l *LeoFS) gatherServer(
endpoint string,
serverType ServerType,
acc telegraf.Accumulator,
acc plugins.Accumulator,
) error {
cmd := exec.Command("snmpwalk", "-v2c", "-cpublic", endpoint, oid)
stdout, err := cmd.StdoutPipe()
@@ -231,7 +231,7 @@ func retrieveTokenAfterColon(line string) (string, error) {
}
func init() {
inputs.Add("leofs", func() telegraf.Input {
inputs.Add("leofs", func() plugins.Input {
return &LeoFS{}
})
}

View File

@@ -12,7 +12,7 @@ import (
"github.com/vjeantet/grok"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/metric"
)
@@ -151,7 +151,7 @@ func (p *Parser) Compile() error {
return p.compileCustomPatterns()
}
func (p *Parser) ParseLine(line string) (telegraf.Metric, error) {
func (p *Parser) ParseLine(line string) (plugins.Metric, error) {
var err error
// values are the parsed fields from the log line
var values map[string]string

View File

@@ -4,13 +4,13 @@ import (
"testing"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
var benchM telegraf.Metric
var benchM plugins.Metric
func Benchmark_ParseLine_CommonLogFormat(b *testing.B) {
p := &Parser{
@@ -18,7 +18,7 @@ func Benchmark_ParseLine_CommonLogFormat(b *testing.B) {
}
p.Compile()
var m telegraf.Metric
var m plugins.Metric
for n := 0; n < b.N; n++ {
m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326`)
}
@@ -31,7 +31,7 @@ func Benchmark_ParseLine_CombinedLogFormat(b *testing.B) {
}
p.Compile()
var m telegraf.Metric
var m plugins.Metric
for n := 0; n < b.N; n++ {
m, _ = p.ParseLine(`127.0.0.1 user-identifier frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla"`)
}
@@ -50,7 +50,7 @@ func Benchmark_ParseLine_CustomPattern(b *testing.B) {
}
p.Compile()
var m telegraf.Metric
var m plugins.Metric
for n := 0; n < b.N; n++ {
m, _ = p.ParseLine(`[04/Jun/2016:12:41:45 +0100] 1.25 200 192.168.1.1 5.432µs 101`)
}

View File

@@ -8,7 +8,7 @@ import (
"github.com/hpcloud/tail"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/internal/globpath"
"github.com/influxdata/telegraf/plugins/inputs"
@@ -18,7 +18,7 @@ import (
)
type LogParser interface {
ParseLine(line string) (telegraf.Metric, error)
ParseLine(line string) (plugins.Metric, error)
Compile() error
}
@@ -26,11 +26,11 @@ type LogParserPlugin struct {
Files []string
FromBeginning bool
tailers map[string]*tail.Tail
tailers []*tail.Tail
lines chan string
done chan struct{}
wg sync.WaitGroup
acc telegraf.Accumulator
acc plugins.Accumulator
parsers []LogParser
sync.Mutex
@@ -46,9 +46,7 @@ const sampleConfig = `
## /var/log/*/*.log -> find all .log files with a parent dir in /var/log
## /var/log/apache.log -> only tail the apache log file
files = ["/var/log/apache/access.log"]
## Read files that currently exist from the beginning. Files that are created
## while telegraf is running (and that match the "files" globs) will always
## be read from the beginning.
## Read file from beginning.
from_beginning = false
## Parse logstash-style "grok" patterns:
@@ -78,22 +76,17 @@ func (l *LogParserPlugin) Description() string {
return "Stream and parse log file(s)."
}
func (l *LogParserPlugin) Gather(acc telegraf.Accumulator) error {
l.Lock()
defer l.Unlock()
// always start from the beginning of files that appear while we're running
return l.tailNewfiles(true)
func (l *LogParserPlugin) Gather(acc plugins.Accumulator) error {
return nil
}
func (l *LogParserPlugin) Start(acc telegraf.Accumulator) error {
func (l *LogParserPlugin) Start(acc plugins.Accumulator) error {
l.Lock()
defer l.Unlock()
l.acc = acc
l.lines = make(chan string, 1000)
l.done = make(chan struct{})
l.tailers = make(map[string]*tail.Tail)
// Looks for fields which implement LogParser interface
l.parsers = []LogParser{}
@@ -128,22 +121,14 @@ func (l *LogParserPlugin) Start(acc telegraf.Accumulator) error {
return err
}
l.wg.Add(1)
go l.parser()
return l.tailNewfiles(l.FromBeginning)
}
// check the globs against files on disk, and start tailing any new files.
// Assumes l's lock is held!
func (l *LogParserPlugin) tailNewfiles(fromBeginning bool) error {
var seek tail.SeekInfo
if !fromBeginning {
if !l.FromBeginning {
seek.Whence = 2
seek.Offset = 0
}
errChan := errchan.New(len(l.Files))
l.wg.Add(1)
go l.parser()
// Create a "tailer" for each file
for _, filepath := range l.Files {
@@ -154,13 +139,7 @@ func (l *LogParserPlugin) tailNewfiles(fromBeginning bool) error {
}
files := g.Match()
errChan = errchan.New(len(files))
for file, _ := range files {
if _, ok := l.tailers[file]; ok {
// we're already tailing this file
continue
}
tailer, err := tail.TailFile(file,
tail.Config{
ReOpen: true,
@@ -173,7 +152,7 @@ func (l *LogParserPlugin) tailNewfiles(fromBeginning bool) error {
// create a goroutine for each "tailer"
l.wg.Add(1)
go l.receiver(tailer)
l.tailers[file] = tailer
l.tailers = append(l.tailers, tailer)
}
}
@@ -187,7 +166,6 @@ func (l *LogParserPlugin) receiver(tailer *tail.Tail) {
var line *tail.Line
for line = range tailer.Lines {
if line.Err != nil {
log.Printf("E! Error tailing file %s, Error: %s\n",
tailer.Filename, line.Err)
@@ -207,7 +185,7 @@ func (l *LogParserPlugin) receiver(tailer *tail.Tail) {
func (l *LogParserPlugin) parser() {
defer l.wg.Done()
var m telegraf.Metric
var m plugins.Metric
var err error
var line string
for {
@@ -247,7 +225,7 @@ func (l *LogParserPlugin) Stop() {
}
func init() {
inputs.Add("logparser", func() telegraf.Input {
inputs.Add("logparser", func() plugins.Input {
return &LogParserPlugin{}
})
}

View File

@@ -1,8 +1,6 @@
package logparser
import (
"io/ioutil"
"os"
"runtime"
"strings"
"testing"
@@ -82,47 +80,6 @@ func TestGrokParseLogFiles(t *testing.T) {
map[string]string{})
}
func TestGrokParseLogFilesAppearLater(t *testing.T) {
emptydir, err := ioutil.TempDir("", "TestGrokParseLogFilesAppearLater")
defer os.RemoveAll(emptydir)
assert.NoError(t, err)
thisdir := getCurrentDir()
p := &grok.Parser{
Patterns: []string{"%{TEST_LOG_A}", "%{TEST_LOG_B}"},
CustomPatternFiles: []string{thisdir + "grok/testdata/test-patterns"},
}
logparser := &LogParserPlugin{
FromBeginning: true,
Files: []string{emptydir + "/*.log"},
GrokParser: p,
}
acc := testutil.Accumulator{}
assert.NoError(t, logparser.Start(&acc))
time.Sleep(time.Millisecond * 500)
assert.Equal(t, acc.NFields(), 0)
os.Symlink(
thisdir+"grok/testdata/test_a.log",
emptydir+"/test_a.log")
assert.NoError(t, logparser.Gather(&acc))
time.Sleep(time.Millisecond * 500)
logparser.Stop()
acc.AssertContainsTaggedFields(t, "logparser_grok",
map[string]interface{}{
"clientip": "192.168.1.1",
"myfloat": float64(1.25),
"response_time": int64(5432),
"myint": int64(101),
},
map[string]string{"response_code": "200"})
}
// Test that test_a.log line gets parsed even though we don't have the correct
// pattern available for test_b.log
func TestGrokParseLogFilesOneBad(t *testing.T) {

View File

@@ -13,7 +13,7 @@ import (
"strconv"
"strings"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -353,7 +353,7 @@ var wanted_mdt_jobstats_fields = []*mapping{
},
}
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc telegraf.Accumulator) error {
func (l *Lustre2) GetLustreProcStats(fileglob string, wanted_fields []*mapping, acc plugins.Accumulator) error {
files, err := filepath.Glob(fileglob)
if err != nil {
return err
@@ -422,7 +422,7 @@ func (l *Lustre2) Description() string {
}
// Gather reads stats from all lustre targets
func (l *Lustre2) Gather(acc telegraf.Accumulator) error {
func (l *Lustre2) Gather(acc plugins.Accumulator) error {
l.allFields = make(map[string]map[string]interface{})
if len(l.Ost_procfiles) == 0 {
@@ -500,7 +500,7 @@ func (l *Lustre2) Gather(acc telegraf.Accumulator) error {
}
func init() {
inputs.Add("lustre2", func() telegraf.Input {
inputs.Add("lustre2", func() plugins.Input {
return &Lustre2{}
})
}

View File

@@ -4,7 +4,7 @@ import (
"fmt"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -35,7 +35,7 @@ func (m *MailChimp) Description() string {
return "Gathers metrics from the /3.0/reports MailChimp API"
}
func (m *MailChimp) Gather(acc telegraf.Accumulator) error {
func (m *MailChimp) Gather(acc plugins.Accumulator) error {
if m.api == nil {
m.api = NewChimpAPI(m.ApiKey)
}
@@ -72,7 +72,7 @@ func (m *MailChimp) Gather(acc telegraf.Accumulator) error {
return nil
}
func gatherReport(acc telegraf.Accumulator, report Report, now time.Time) {
func gatherReport(acc plugins.Accumulator, report Report, now time.Time) {
tags := make(map[string]string)
tags["id"] = report.ID
tags["campaign_title"] = report.CampaignTitle
@@ -111,7 +111,7 @@ func gatherReport(acc telegraf.Accumulator, report Report, now time.Time) {
}
func init() {
inputs.Add("mailchimp", func() telegraf.Input {
inputs.Add("mailchimp", func() plugins.Input {
return &MailChimp{}
})
}

View File

@@ -8,7 +8,7 @@ import (
"strconv"
"time"
"github.com/influxdata/telegraf"
"github.com/influxdata/telegraf/plugins"
"github.com/influxdata/telegraf/internal/errchan"
"github.com/influxdata/telegraf/plugins/inputs"
)
@@ -69,7 +69,7 @@ func (m *Memcached) Description() string {
}
// Gather reads stats from all configured servers accumulates stats
func (m *Memcached) Gather(acc telegraf.Accumulator) error {
func (m *Memcached) Gather(acc plugins.Accumulator) error {
if len(m.Servers) == 0 && len(m.UnixSockets) == 0 {
return m.gatherServer(":11211", false, acc)
}
@@ -89,7 +89,7 @@ func (m *Memcached) Gather(acc telegraf.Accumulator) error {
func (m *Memcached) gatherServer(
address string,
unix bool,
acc telegraf.Accumulator,
acc plugins.Accumulator,
) error {
var conn net.Conn
var err error
@@ -180,7 +180,7 @@ func parseResponse(r *bufio.Reader) (map[string]string, error) {
}
func init() {
inputs.Add("memcached", func() telegraf.Input {
inputs.Add("memcached", func() plugins.Input {
return &Memcached{}
})
}

Some files were not shown because too many files have changed in this diff.