Compare commits: 0.12.0...0.10.3-win (1 commit)

CHANGELOG.md (100 lines changed):

@@ -1,111 +1,13 @@
## v0.12.0 [2016-04-05]

### Features

- [#951](https://github.com/influxdata/telegraf/pull/951): Parse environment variables in the config file.
- [#948](https://github.com/influxdata/telegraf/pull/948): Clean up the config file and make the default package version include all plugins (but commented).
- [#927](https://github.com/influxdata/telegraf/pull/927): Parse tags in the statsd input when using DataDog's dogstatsd extension.
- [#863](https://github.com/influxdata/telegraf/pull/863): AMQP output: allow external auth. Thanks @ekini!
- [#707](https://github.com/influxdata/telegraf/pull/707): Improved prometheus plugin. Thanks @titilambert!
- [#878](https://github.com/influxdata/telegraf/pull/878): Added JSON serializer. Thanks @ch3lo!
- [#880](https://github.com/influxdata/telegraf/pull/880): Add the ability to specify the bearer token to the prometheus plugin. Thanks @jchauncey!
- [#882](https://github.com/influxdata/telegraf/pull/882): Fixed SQL Server plugin issues.
- [#849](https://github.com/influxdata/telegraf/issues/849): Add the ability to parse single values as an input data type.
- [#844](https://github.com/influxdata/telegraf/pull/844): postgres_extensible plugin added. Thanks @menardorama!
- [#866](https://github.com/influxdata/telegraf/pull/866): couchbase input plugin. Thanks @ljosa!
- [#789](https://github.com/influxdata/telegraf/pull/789): Support multiple field specification and `field*` in graphite templates. Thanks @chrusty!
- [#762](https://github.com/influxdata/telegraf/pull/762): Nagios parser for the exec plugin. Thanks @titilambert!
- [#848](https://github.com/influxdata/telegraf/issues/848): Provide an option to omit the host tag from the telegraf agent.
- [#928](https://github.com/influxdata/telegraf/pull/928): Deprecate the statsd "convert_names" option; expose separator config.
- [#919](https://github.com/influxdata/telegraf/pull/919): ipmi_sensor input plugin. Thanks @ebookbug!
- [#945](https://github.com/influxdata/telegraf/pull/945): KAFKA output: codec, acks, and retry configuration. Thanks @framiere!
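With [#951](https://github.com/influxdata/telegraf/pull/951) merged, the config loader substitutes environment variables into the file before parsing it. A minimal sketch, assuming the `$VAR` reference style; the variable names and section shown here are illustrative, not taken from this diff:

```toml
# telegraf.conf fragment: $INFLUX_URL and $INFLUX_PASSWORD are read
# from the process environment when the config file is loaded.
[[outputs.influxdb]]
  urls = ["$INFLUX_URL"]
  database = "telegraf"
  password = "$INFLUX_PASSWORD"
```

This keeps credentials out of the config file itself; the same file can then be shipped unchanged across environments.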
### Bugfixes

- [#890](https://github.com/influxdata/telegraf/issues/890): Create TLS config even if only ssl_ca is provided.
- [#884](https://github.com/influxdata/telegraf/issues/884): Do not call the write method if there are 0 metrics to write.
- [#898](https://github.com/influxdata/telegraf/issues/898): Put the database name in quotes; fixes special characters in the database name.
- [#656](https://github.com/influxdata/telegraf/issues/656): No longer run `lsof` on Linux to get netstat data; fixes permissions issue.
- [#907](https://github.com/influxdata/telegraf/issues/907): Fix prometheus invalid label/measurement name key.
- [#841](https://github.com/influxdata/telegraf/issues/841): Fix memcached unix socket panic.
- [#873](https://github.com/influxdata/telegraf/issues/873): Fix SNMP plugin sometimes not returning metrics. Thanks @titilambert!
- [#934](https://github.com/influxdata/telegraf/pull/934): phpfpm: fix fcgi URI path. Thanks @rudenkovk!
- [#805](https://github.com/influxdata/telegraf/issues/805): Kafka consumer stops gathering after i/o timeout.
- [#959](https://github.com/influxdata/telegraf/pull/959): Reduce mongodb & prometheus collection timeouts. Thanks @PierreF!
## v0.11.1 [2016-03-17]

### Release Notes

- Primarily this release was cut to fix [#859](https://github.com/influxdata/telegraf/issues/859).

### Features

- [#747](https://github.com/influxdata/telegraf/pull/747): Start telegraf on install & remove on uninstall. Thanks @PierreF!
- [#794](https://github.com/influxdata/telegraf/pull/794): Add service reload ability. Thanks @entertainyou!

### Bugfixes

- [#852](https://github.com/influxdata/telegraf/issues/852): Windows zip package fix.
- [#859](https://github.com/influxdata/telegraf/issues/859): httpjson plugin panic.
## v0.11.0 [2016-03-15]

### Release Notes

### Features

- [#692](https://github.com/influxdata/telegraf/pull/770): Support InfluxDB retention policies.
- [#771](https://github.com/influxdata/telegraf/pull/771): Default timeouts for input plugins. Thanks @PierreF!
- [#758](https://github.com/influxdata/telegraf/pull/758): UDP Listener input plugin. Thanks @whatyouhide!
- [#769](https://github.com/influxdata/telegraf/issues/769): httpjson plugin: allow specifying SSL configuration.
- [#735](https://github.com/influxdata/telegraf/pull/735): SNMP Table feature. Thanks @titilambert!
- [#754](https://github.com/influxdata/telegraf/pull/754): docker plugin: add `docker info` metrics to output. Thanks @titilambert!
- [#788](https://github.com/influxdata/telegraf/pull/788): -input-list and -output-list command-line options. Thanks @ebookbug!
- [#778](https://github.com/influxdata/telegraf/pull/778): Add a TCP input listener.
- [#797](https://github.com/influxdata/telegraf/issues/797): Provide option for persistent MQTT consumer client sessions.
- [#799](https://github.com/influxdata/telegraf/pull/799): Add number of threads for procstat input plugin. Thanks @titilambert!
- [#776](https://github.com/influxdata/telegraf/pull/776): Add Zookeeper chroot option to kafka_consumer. Thanks @prune998!
- [#811](https://github.com/influxdata/telegraf/pull/811): Add processes plugin for classifying total procs on system. Thanks @titilambert!
- [#235](https://github.com/influxdata/telegraf/issues/235): Add number of users to the `system` input plugin.
- [#826](https://github.com/influxdata/telegraf/pull/826): "kernel" linux plugin for /proc/stat metrics (context switches, interrupts, etc.).
- [#847](https://github.com/influxdata/telegraf/pull/847): `ntpq`: input plugin for running the ntp query executable and gathering metrics.
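The UDP and TCP listener inputs added above are configured as ordinary input blocks. A hedged sketch of a udp_listener configuration; the option names are assumptions based on the plugin conventions of this era, not taken from this diff:

```toml
# Listen for metrics pushed over UDP (option names are illustrative).
[[inputs.udp_listener]]
  service_address = ":8092"
```

Check the plugin's README in `plugins/inputs/udp_listener` for the authoritative option set.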
### Bugfixes

- [#748](https://github.com/influxdata/telegraf/issues/748): Fix sensor plugin split on ":".
- [#722](https://github.com/influxdata/telegraf/pull/722): Librato output plugin fixes. Thanks @chrusty!
- [#745](https://github.com/influxdata/telegraf/issues/745): Fix Telegraf toml parse panic on large config files. Thanks @titilambert!
- [#781](https://github.com/influxdata/telegraf/pull/781): Fix mqtt_consumer username not being set. Thanks @chaton78!
- [#786](https://github.com/influxdata/telegraf/pull/786): Fix mqtt output username not being set. Thanks @msangoi!
- [#773](https://github.com/influxdata/telegraf/issues/773): Fix duplicate measurements in snmp plugin. Thanks @titilambert!
- [#708](https://github.com/influxdata/telegraf/issues/708): packaging: build ARM package.
- [#713](https://github.com/influxdata/telegraf/issues/713): packaging: insecure permissions error on log directory.
- [#816](https://github.com/influxdata/telegraf/issues/816): Fix phpfpm panic if fcgi endpoint unreachable.
- [#828](https://github.com/influxdata/telegraf/issues/828): Fix net_response plugin overwriting host tag.
- [#821](https://github.com/influxdata/telegraf/issues/821): Remove postgres password from server tag. Thanks @menardorama!
## v0.10.4.1

### Release Notes

- A bug in the build script broke deb and rpm packages.

### Bugfixes

- [#750](https://github.com/influxdata/telegraf/issues/750): deb package broken
- [#752](https://github.com/influxdata/telegraf/issues/752): rpm package broken
## v0.10.4 [2016-02-24]

### Release Notes

- The pass/drop parameters have been renamed to fielddrop/fieldpass parameters, to more accurately indicate their purpose.
- There are also now namedrop/namepass parameters for passing/dropping based on the metric _name_.
- Experimental windows builds are now available.

### Features
- [#727](https://github.com/influxdata/telegraf/pull/727): riak input, thanks @jcoene!
- [#694](https://github.com/influxdata/telegraf/pull/694): DNS Query input, thanks @mjasion!
- [#724](https://github.com/influxdata/telegraf/pull/724): username matching for procstat input, thanks @zorel!
- [#736](https://github.com/influxdata/telegraf/pull/736): Ignore dummy filesystems from disk plugin. Thanks @PierreF!
- [#737](https://github.com/influxdata/telegraf/pull/737): Support multiple fields for statsd input. Thanks @mattheath!
### Bugfixes

- [#701](https://github.com/influxdata/telegraf/pull/701): output write count shouldn't print in quiet mode.
- [#746](https://github.com/influxdata/telegraf/pull/746): httpjson plugin: Fix HTTP GET parameters.
## v0.10.3 [2016-02-18]
```diff
@@ -80,7 +80,7 @@ func (s *Simple) SampleConfig() string {
 	return "ok = true # indicate if everything is fine"
 }
 
-func (s *Simple) Gather(acc telegraf.Accumulator) error {
+func (s *Simple) Gather(acc inputs.Accumulator) error {
 	if s.Ok {
 		acc.Add("state", "pretty good", nil)
 	} else {
```
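The hunk above captures the rename of the example plugin's `Gather` signature from `inputs.Accumulator` to `telegraf.Accumulator` between these versions. A self-contained sketch of the accumulator pattern that signature implies; the interface and types here are simplified stand-ins, not the real telegraf API:

```go
package main

import "fmt"

// Accumulator is a simplified stand-in for telegraf's accumulator:
// plugins push named values into it rather than returning them.
type Accumulator interface {
	Add(field string, value interface{}, tags map[string]string)
}

// memAccumulator collects fields in memory, for demonstration only.
type memAccumulator struct {
	fields map[string]interface{}
}

func (a *memAccumulator) Add(field string, value interface{}, tags map[string]string) {
	a.fields[field] = value
}

// Simple mirrors the example plugin shown in the hunk above.
type Simple struct {
	Ok bool
}

func (s *Simple) SampleConfig() string {
	return "ok = true # indicate if everything is fine"
}

func (s *Simple) Gather(acc Accumulator) error {
	if s.Ok {
		acc.Add("state", "pretty good", nil)
	} else {
		acc.Add("state", "not great", nil)
	}
	return nil
}

func main() {
	acc := &memAccumulator{fields: map[string]interface{}{}}
	if err := (&Simple{Ok: true}).Gather(acc); err != nil {
		panic(err)
	}
	fmt.Println(acc.fields["state"]) // prints "pretty good"
}
```

The push-style accumulator lets the agent own batching and tagging while plugins stay stateless; the rename in the diff changed only where the interface lives, not this shape.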
Godeps (72 lines changed):

```diff
@@ -1,53 +1,53 @@
-github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
+git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git 617c801af238c3af2d9e72c5d4a0f02edad03ce5
-github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
+github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
-github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
+github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
-github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
+github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
-github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4
+github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
+github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
 github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
-github.com/couchbase/go-couchbase cb664315a324d87d19c879d9cc67fda6be8c2ac1
+github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
-github.com/couchbase/gomemcached a5ea6356f648fec6ab89add00edd09151455b4b2
-github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
-github.com/dancannon/gorethink e7cac92ea2bc52638791a021f212145acfedb1fc
 github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
 github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
 github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
-github.com/eclipse/paho.mqtt.golang 4ab3e867810d1ec5f35157c59e965054dbf43a0d
+github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
-github.com/fsouza/go-dockerclient a49c8269a6899cae30da1f8a4b82e0ce945f9967
+github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
-github.com/go-sql-driver/mysql 1fca743146605a172a266e1654e01e5cd5669bee
+github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
-github.com/golang/protobuf 552c7b9542c194800fd493123b3798ef0a832032
+github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
-github.com/golang/snappy 427fb6fc07997f43afa32f35e850833760e489a7
+github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
 github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
-github.com/gorilla/context 1ea25387ff6f684839d82767c1733ff4d4d15d0a
+github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
-github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
+github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
 github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
-github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
+github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
-github.com/influxdata/influxdb e3fef5593c21644f2b43af55d6e17e70910b0e48
+github.com/influxdata/influxdb ef571fc104dc24b77cd3710c156cd95e5cfd7aa5
-github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
+github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
-github.com/klauspost/crc32 19b0b332c9e4516a6370a0456e6182c3b5036720
+github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
-github.com/lib/pq e182dc4027e2ded4b19396d638610f2653295f36
+github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
 github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
-github.com/miekg/dns cce6c130cdb92c752850880fd285bea1d64439dd
 github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
 github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
-github.com/nats-io/nats b13fc9d12b0b123ebc374e6b808c6228ae4234a3
+github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
-github.com/nats-io/nuid 4f84f5f3b2786224e336af2e13dba0a0a80b76fa
+github.com/nats-io/nats 6a83f1a633cfbfd90aa648ac99fb38c06a8b40df
-github.com/nsqio/go-nsq 0b80d6f05e15ca1930e0c5e1d540ed627e299980
+github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
-github.com/prometheus/client_golang 18acf9993a863f4c4b40612e19cdd243e7c86831
+github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
+github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
 github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
-github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
+github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
 github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
 github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
-github.com/shirou/gopsutil 1f32ce1bb380845be7f5d174ac641a2c592c0c42
+github.com/shirou/gopsutil e77438504d45b9985c99a75730fe65220ceea00e
 github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
 github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
-github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
+github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
-github.com/wvanbergen/kafka 46f9a1cf3f670edec492029fadded9c2d9e18866
+github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
+github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
 github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
 github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
-golang.org/x/crypto 5dc8cb4b8a8eb076cbb5a06bc3b8682c15bdbbd3
+golang.org/x/crypto 1f22c0103821b9390939b6776727195525381532
-golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
+golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
-golang.org/x/text a71fd10341b064c10f4a81ceac72bcf70f26ea34
+golang.org/x/text 6d3c22c4525a4da167968fa2479be5524d2e8bd0
-gopkg.in/dancannon/gorethink.v1 7d1af5be49cb5ecc7b177bf387d232050299d6ef
+gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
 gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
-gopkg.in/mgo.v2 d90005c5262a3463800497ea5a89aed5fe22c886
+gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
-gopkg.in/yaml.v2 a83829b6f1293c91addabc89d0571c246397bbf4
+gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4
+github.com/miekg/dns e0d84d97e59bcb6561eae269c4e94d25b66822cb
```
```diff
@@ -1,60 +1,56 @@
-github.com/Shopify/sarama 8aadb476e66ca998f2f6bb3c993e9a2daa3666b9
+git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git 617c801af238c3af2d9e72c5d4a0f02edad03ce5
-github.com/Sirupsen/logrus 219c8cb75c258c552e999735be6df753ffc7afdc
+github.com/Shopify/sarama d37c73f2b2bce85f7fa16b6a550d26c5372892ef
+github.com/Sirupsen/logrus f7f79f729e0fbe2fcc061db48a9ba0263f588252
 github.com/StackExchange/wmi f3e2bae1e0cb5aef83e319133eabfee30013a4a5
-github.com/amir/raidman 53c1b967405155bfc8758557863bf2e14f814687
+github.com/amir/raidman 6a8e089bbe32e6b907feae5ba688841974b3c339
-github.com/aws/aws-sdk-go 13a12060f716145019378a10e2806c174356b857
+github.com/aws/aws-sdk-go 87b1e60a50b09e4812dee560b33a238f67305804
-github.com/beorn7/perks 3ac7bf7a47d159a033b107610db8a1b6575507a4
+github.com/beorn7/perks b965b613227fddccbfffe13eae360ed3fa822f8d
 github.com/cenkalti/backoff 4dc77674aceaabba2c7e3da25d4c823edfb73f99
-github.com/couchbase/go-couchbase cb664315a324d87d19c879d9cc67fda6be8c2ac1
+github.com/dancannon/gorethink 6f088135ff288deb9d5546f4c71919207f891a70
-github.com/couchbase/gomemcached a5ea6356f648fec6ab89add00edd09151455b4b2
+github.com/davecgh/go-spew 5215b55f46b2b919f50a1df0eaa5886afe4e3b3d
-github.com/couchbase/goutils 5823a0cbaaa9008406021dc5daf80125ea30bba6
-github.com/dancannon/gorethink e7cac92ea2bc52638791a021f212145acfedb1fc
-github.com/davecgh/go-spew fc32781af5e85e548d3f1abaf0fa3dbe8a72495c
 github.com/eapache/go-resiliency b86b1ec0dd4209a588dc1285cdd471e73525c0b3
 github.com/eapache/queue ded5959c0d4e360646dc9e9908cff48666781367
-github.com/eclipse/paho.mqtt.golang 4ab3e867810d1ec5f35157c59e965054dbf43a0d
+github.com/fsouza/go-dockerclient 7b651349f9479f5114913eefbfd3c4eeddd79ab4
-github.com/fsouza/go-dockerclient a49c8269a6899cae30da1f8a4b82e0ce945f9967
+github.com/go-ini/ini afbd495e5aaea13597b5e14fe514ddeaa4d76fc3
-github.com/go-ini/ini 776aa739ce9373377cd16f526cdf06cb4c89b40f
 github.com/go-ole/go-ole 50055884d646dd9434f16bbb5c9801749b9bafe4
-github.com/go-sql-driver/mysql 1fca743146605a172a266e1654e01e5cd5669bee
+github.com/go-sql-driver/mysql 7c7f556282622f94213bc028b4d0a7b6151ba239
-github.com/golang/protobuf 552c7b9542c194800fd493123b3798ef0a832032
+github.com/golang/protobuf 6aaa8d47701fa6cf07e914ec01fde3d4a1fe79c3
-github.com/golang/snappy 5979233c5d6225d4a8e438cdd0b411888449ddab
+github.com/golang/snappy 723cc1e459b8eea2dea4583200fd60757d40097a
 github.com/gonuts/go-shellquote e842a11b24c6abfb3dd27af69a17f482e4b483c2
-github.com/gorilla/context 1ea25387ff6f684839d82767c1733ff4d4d15d0a
+github.com/gorilla/context 1c83b3eabd45b6d76072b66b746c20815fb2872d
-github.com/gorilla/mux c9e326e2bdec29039a3761c07bece13133863e1e
+github.com/gorilla/mux 26a6070f849969ba72b72256e9f14cf519751690
 github.com/hailocab/go-hostpool e80d13ce29ede4452c43dea11e79b9bc8a15b478
-github.com/influxdata/config b79f6829346b8d6e78ba73544b1e1038f1f1c9da
+github.com/influxdata/config bae7cb98197d842374d3b8403905924094930f24
-github.com/influxdata/influxdb c190778997f4154294e6160c41b90140641ac915
+github.com/influxdata/influxdb ef571fc104dc24b77cd3710c156cd95e5cfd7aa5
-github.com/influxdata/toml af4df43894b16e3fd2b788d01bd27ad0776ef2d0
+github.com/jmespath/go-jmespath c01cf91b011868172fdcd9f41838e80c9d716264
-github.com/jmespath/go-jmespath 0b12d6b521d83fc7f755e7cfc1b1fbdd35a01a74
+github.com/klauspost/crc32 999f3125931f6557b991b2f8472172bdfa578d38
-github.com/klauspost/crc32 19b0b332c9e4516a6370a0456e6182c3b5036720
+github.com/lib/pq 8ad2b298cadd691a77015666a5372eae5dbfac8f
-github.com/lib/pq e182dc4027e2ded4b19396d638610f2653295f36
 github.com/lxn/win 9a7734ea4db26bc593d52f6a8a957afdad39c5c1
 github.com/matttproud/golang_protobuf_extensions d0c3fe89de86839aecf2e0579c40ba3bb336a453
-github.com/miekg/dns cce6c130cdb92c752850880fd285bea1d64439dd
+github.com/miekg/dns e0d84d97e59bcb6561eae269c4e94d25b66822cb
 github.com/mreiferson/go-snappystream 028eae7ab5c4c9e2d1cb4c4ca1e53259bbe7e504
 github.com/naoina/go-stringutil 6b638e95a32d0c1131db0e7fe83775cbea4a0d0b
-github.com/nats-io/nats b13fc9d12b0b123ebc374e6b808c6228ae4234a3
+github.com/naoina/toml 751171607256bb66e64c9f0220c00662420c38e9
-github.com/nats-io/nuid 4f84f5f3b2786224e336af2e13dba0a0a80b76fa
+github.com/nats-io/nats 6a83f1a633cfbfd90aa648ac99fb38c06a8b40df
-github.com/nsqio/go-nsq 0b80d6f05e15ca1930e0c5e1d540ed627e299980
+github.com/nsqio/go-nsq 2118015c120962edc5d03325c680daf3163a8b5f
 github.com/pmezard/go-difflib 792786c7400a136282c1664665ae0a8db921c6c2
-github.com/prometheus/client_golang 18acf9993a863f4c4b40612e19cdd243e7c86831
+github.com/prometheus/client_golang 67994f177195311c3ea3d4407ed0175e34a4256f
 github.com/prometheus/client_model fa8ad6fec33561be4280a8f0514318c79d7f6cb6
-github.com/prometheus/common e8eabff8812b05acf522b45fdcd725a785188e37
+github.com/prometheus/common 14ca1097bbe21584194c15e391a9dab95ad42a59
 github.com/prometheus/procfs 406e5b7bfd8201a36e2bb5f7bdae0b03380c2ce8
 github.com/samuel/go-zookeeper 218e9c81c0dd8b3b18172b2bbfad92cc7d6db55f
-github.com/shirou/gopsutil 1f32ce1bb380845be7f5d174ac641a2c592c0c42
+github.com/shirou/gopsutil e77438504d45b9985c99a75730fe65220ceea00e
 github.com/shirou/w32 ada3ba68f000aa1b58580e45c9d308fe0b7fc5c5
 github.com/soniah/gosnmp b1b4f885b12c5dcbd021c5cee1c904110de6db7d
 github.com/streadway/amqp b4f3ceab0337f013208d31348b578d83c0064744
 github.com/stretchr/objx 1a9d0bb9f541897e62256577b352fdbc1fb4fd94
-github.com/stretchr/testify 1f4a1643a57e798696635ea4c126e9127adb7d3c
+github.com/stretchr/testify f390dcf405f7b83c997eac1b06768bb9f44dec18
 github.com/wvanbergen/kafka 1a8639a45164fcc245d5c7b4bd3ccfbd1a0ffbf3
 github.com/wvanbergen/kazoo-go 0f768712ae6f76454f987c3356177e138df258f8
 github.com/zensqlmonitor/go-mssqldb ffe5510c6fa5e15e6d983210ab501c815b56b363
-golang.org/x/net 6acef71eb69611914f7a30939ea9f6e194c78172
+golang.org/x/net 04b9de9b512f58addf28c9853d50ebef61c3953e
-golang.org/x/text a71fd10341b064c10f4a81ceac72bcf70f26ea34
+golang.org/x/text 6d3c22c4525a4da167968fa2479be5524d2e8bd0
-gopkg.in/dancannon/gorethink.v1 7d1af5be49cb5ecc7b177bf387d232050299d6ef
+gopkg.in/dancannon/gorethink.v1 6f088135ff288deb9d5546f4c71919207f891a70
 gopkg.in/fatih/pool.v2 cba550ebf9bce999a02e963296d4bc7a486cb715
-gopkg.in/mgo.v2 d90005c5262a3463800497ea5a89aed5fe22c886
+gopkg.in/mgo.v2 03c9f3ee4c14c8e51ee521a6a7d0425658dd6f64
-gopkg.in/yaml.v2 a83829b6f1293c91addabc89d0571c246397bbf4
+gopkg.in/yaml.v2 f7716cbe52baa25d2e9b0d0da546fcf909fc16b4
```
Makefile (4 lines changed):

```diff
@@ -22,8 +22,8 @@ build-windows:
 	./cmd/telegraf/telegraf.go
 
 build-for-docker:
-	CGO_ENABLED=0 GOOS=linux go build -installsuffix cgo -o telegraf -ldflags \
-	"-s -X main.Version=$(VERSION)" \
+	CGO_ENABLED=0 GOOS=linux go build -o telegraf -ldflags \
+	"-X main.Version=$(VERSION)" \
 	./cmd/telegraf/telegraf.go
 
 # Build with race detector
```
README.md (190 lines changed):

````diff
@@ -17,15 +17,26 @@ new plugins.
 
 ## Installation:
 
+NOTE: Telegraf 0.10.x is **not** backwards-compatible with previous versions
+of telegraf, both in the database layout and the configuration file. 0.2.x
+will continue to be supported, see below for download links.
+
+For more details on the differences between Telegraf 0.2.x and 0.10.x, see
+the [release blog post](https://influxdata.com/blog/announcing-telegraf-0-10-0/).
+
 ### Linux deb and rpm Packages:
 
 Latest:
-* http://get.influxdb.org/telegraf/telegraf_0.12.0-1_amd64.deb
+* http://get.influxdb.org/telegraf/telegraf_0.10.3-1_amd64.deb
-* http://get.influxdb.org/telegraf/telegraf-0.12.0-1.x86_64.rpm
+* http://get.influxdb.org/telegraf/telegraf-0.10.3-1.x86_64.rpm
 
 Latest (arm):
-* http://get.influxdb.org/telegraf/telegraf_0.12.0-1_armhf.deb
+* http://get.influxdb.org/telegraf/telegraf_0.10.3-1_arm.deb
-* http://get.influxdb.org/telegraf/telegraf-0.12.0-1.armhf.rpm
+* http://get.influxdb.org/telegraf/telegraf-0.10.3-1.arm.rpm
 
+0.2.x:
+* http://get.influxdb.org/telegraf/telegraf_0.2.4_amd64.deb
+* http://get.influxdb.org/telegraf/telegraf-0.2.4-1.x86_64.rpm
+
 ##### Package Instructions:
 
````
````diff
@@ -39,40 +50,35 @@ controlled via `systemctl [action] telegraf`
 ### yum/apt Repositories:
 
 There is a yum/apt repo available for the whole InfluxData stack, see
-[here](https://docs.influxdata.com/influxdb/v0.10/introduction/installation/#installation)
+[here](https://docs.influxdata.com/influxdb/v0.9/introduction/installation/#installation)
-for instructions on setting up the repo. Once it is configured, you will be able
-to use this repo to install & update telegraf.
+for instructions, replacing the `influxdb` package name with `telegraf`.
 
 ### Linux tarballs:
 
 Latest:
-* http://get.influxdb.org/telegraf/telegraf-0.12.0-1_linux_amd64.tar.gz
+* http://get.influxdb.org/telegraf/telegraf-0.10.3-1_linux_amd64.tar.gz
-* http://get.influxdb.org/telegraf/telegraf-0.12.0-1_linux_i386.tar.gz
+* http://get.influxdb.org/telegraf/telegraf-0.10.3-1_linux_i386.tar.gz
-* http://get.influxdb.org/telegraf/telegraf-0.12.0-1_linux_armhf.tar.gz
+* http://get.influxdb.org/telegraf/telegraf-0.10.3-1_linux_arm.tar.gz
 
+0.2.x:
+* http://get.influxdb.org/telegraf/telegraf_linux_amd64_0.2.4.tar.gz
+* http://get.influxdb.org/telegraf/telegraf_linux_386_0.2.4.tar.gz
+* http://get.influxdb.org/telegraf/telegraf_linux_arm_0.2.4.tar.gz
+
 ##### tarball Instructions:
 
 To install the full directory structure with config file, run:
 
 ```
-sudo tar -C / -zxvf ./telegraf-0.12.0-1_linux_amd64.tar.gz
+sudo tar -C / -zxvf ./telegraf-0.10.3-1_linux_amd64.tar.gz
 ```
 
 To extract only the binary, run:
 
 ```
-tar -zxvf telegraf-0.12.0-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
+tar -zxvf telegraf-0.10.3-1_linux_amd64.tar.gz --strip-components=3 ./usr/bin/telegraf
 ```
 
-### FreeBSD tarball:
-
-Latest:
-* http://get.influxdb.org/telegraf/telegraf-0.12.0-1_freebsd_amd64.tar.gz
-
-##### tarball Instructions:
-
-See linux instructions above.
-
 ### Ansible Role:
 
 Ansible role: https://github.com/rossmcdonald/telegraf
````
@@ -84,12 +90,6 @@ brew update
 brew install telegraf
 ```

-### Windows Binaries (EXPERIMENTAL)
-
-Latest:
-* http://get.influxdb.org/telegraf/telegraf-0.12.0-1_windows_amd64.zip
-* http://get.influxdb.org/telegraf/telegraf-0.12.0-1_windows_i386.zip

 ### From Source:

 Telegraf manages dependencies via [gdm](https://github.com/sparrc/gdm),
@@ -156,55 +156,51 @@ more information on each, please look at the directory of the same name in

 Currently implemented sources:

-* [aerospike](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/aerospike)
-* [apache](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/apache)
-* [bcache](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/bcache)
-* [couchbase](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchbase)
-* [couchdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/couchdb)
-* [disque](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/disque)
-* [dns query time](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/dns_query)
-* [docker](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/docker)
-* [dovecot](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/dovecot)
-* [elasticsearch](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/elasticsearch)
-* [exec](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/exec) (generic executable plugin, support JSON, influx, graphite and nagios)
-* [haproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/haproxy)
-* [httpjson](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/httpjson) (generic JSON-emitting http service plugin)
-* [influxdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/influxdb)
-* [ipmi_sensor](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ipmi_sensor)
-* [jolokia](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/jolokia)
-* [leofs](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/leofs)
-* [lustre2](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/lustre2)
-* [mailchimp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mailchimp)
-* [memcached](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/memcached)
-* [mesos](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mesos)
-* [mongodb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mongodb)
-* [mysql](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mysql)
-* [net_response](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/net_response)
-* [nginx](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nginx)
-* [nsq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nsq)
-* [ntpq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ntpq)
-* [phpfpm](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/phpfpm)
-* [phusion passenger](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/passenger)
-* [ping](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/ping)
-* [postgresql](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql)
-* [postgresql_extensible](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/postgresql_extensible)
-* [powerdns](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/powerdns)
-* [procstat](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/procstat)
-* [prometheus](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/prometheus)
-* [puppetagent](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/puppetagent)
-* [rabbitmq](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rabbitmq)
-* [raindrops](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/raindrops)
-* [redis](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/redis)
-* [rethinkdb](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/rethinkdb)
-* [riak](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/riak)
-* [sensors](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sensors) (only available if built from source)
-* [snmp](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/snmp)
-* [sql server](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/sqlserver) (microsoft)
-* [twemproxy](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/twemproxy)
-* [zfs](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/zfs)
-* [zookeeper](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/zookeeper)
-* [win_perf_counters](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters) (windows performance counters)
-* [system](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/system)
+* aerospike
+* apache
+* bcache
+* couchdb
+* disque
+* dns query time
+* docker
+* dovecot
+* elasticsearch
+* exec (generic executable plugin, support JSON, influx and graphite)
+* haproxy
+* httpjson (generic JSON-emitting http service plugin)
+* influxdb
+* jolokia
+* leofs
+* lustre2
+* mailchimp
+* memcached
+* mesos
+* mongodb
+* mysql
+* net_response
+* nginx
+* nsq
+* phpfpm
+* phusion passenger
+* ping
+* postgresql
+* powerdns
+* procstat
+* prometheus
+* puppetagent
+* rabbitmq
+* raindrops
+* redis
+* rethinkdb
+* riak
+* sensors (only available if built from source)
+* snmp
+* sql server (microsoft)
+* twemproxy
+* zfs
+* zookeeper
+* win_perf_counters (windows performance counters)
+* system
     * cpu
     * mem
     * net
@@ -212,38 +208,34 @@ Currently implemented sources:
     * disk
     * diskio
     * swap
-    * processes
-    * kernel (/proc/stat)

 Telegraf can also collect metrics via the following service plugins:

-* [statsd](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/statsd)
-* [udp_listener](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/udp_listener)
-* [tcp_listener](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/tcp_listener)
-* [mqtt_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/mqtt_consumer)
-* [kafka_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/kafka_consumer)
-* [nats_consumer](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/nats_consumer)
-* [github_webhooks](https://github.com/influxdata/telegraf/tree/master/plugins/inputs/github_webhooks)
+* statsd
+* mqtt_consumer
+* kafka_consumer
+* nats_consumer
+* github_webhooks

 We'll be adding support for many more over the coming months. Read on if you
 want to add support for another service or third-party API.

 ## Supported Output Plugins

-* [influxdb](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/influxdb)
-* [amon](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/amon)
-* [amqp](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/amqp)
-* [aws kinesis](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/kinesis)
-* [aws cloudwatch](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/cloudwatch)
-* [datadog](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/datadog)
-* [graphite](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/graphite)
-* [kafka](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/kafka)
-* [librato](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/librato)
-* [mqtt](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/mqtt)
-* [nsq](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/nsq)
-* [opentsdb](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/opentsdb)
-* [prometheus](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/prometheus_client)
-* [riemann](https://github.com/influxdata/telegraf/tree/master/plugins/outputs/riemann)
+* influxdb
+* amon
+* amqp
+* aws kinesis
+* aws cloudwatch
+* datadog
+* graphite
+* kafka
+* librato
+* mqtt
+* nsq
+* opentsdb
+* prometheus
+* riemann

 ## Contributing
@@ -105,6 +105,7 @@ func (ac *accumulator) AddFields(
 				continue
 			}
 		}
+		result[k] = v

 		// Validate uint64 and float64 fields
 		switch val := v.(type) {
@@ -115,7 +116,6 @@ func (ac *accumulator) AddFields(
 			} else {
 				result[k] = int64(9223372036854775807)
 			}
-			continue
 		case float64:
 			// NaNs are invalid values in influxdb, skip measurement
 			if math.IsNaN(val) || math.IsInf(val, 0) {
@@ -127,8 +127,6 @@ func (ac *accumulator) AddFields(
 				continue
 			}
 		}
-
-		result[k] = v
 	}
 	fields = nil
 	if len(result) == 0 {
@@ -170,8 +168,5 @@ func (ac *accumulator) setDefaultTags(tags map[string]string) {
 }

 func (ac *accumulator) addDefaultTag(key, value string) {
-	if ac.defaultTags == nil {
-		ac.defaultTags = make(map[string]string)
-	}
 	ac.defaultTags[key] = value
 }
@@ -1,302 +0,0 @@
-package agent
-
-import (
-	"fmt"
-	"math"
-	"testing"
-	"time"
-
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/internal/models"
-
-	"github.com/stretchr/testify/assert"
-)
-
-func TestAdd(t *testing.T) {
-	a := accumulator{}
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	a.Add("acctest", float64(101), map[string]string{})
-	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
-	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest value=101")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Contains(t, actual, "acctest,acc=test value=101")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test value=101 %d", now.UnixNano()),
-		actual)
-}
-
-func TestAddDefaultTags(t *testing.T) {
-	a := accumulator{}
-	a.addDefaultTag("default", "tag")
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	a.Add("acctest", float64(101), map[string]string{})
-	a.Add("acctest", float64(101), map[string]string{"acc": "test"})
-	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest,default=tag value=101")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Contains(t, actual, "acctest,acc=test,default=tag value=101")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test,default=tag value=101 %d", now.UnixNano()),
-		actual)
-}
-
-func TestAddFields(t *testing.T) {
-	a := accumulator{}
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	fields := map[string]interface{}{
-		"usage": float64(99),
-	}
-	a.AddFields("acctest", fields, map[string]string{})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest usage=99")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Contains(t, actual, "acctest,acc=test usage=99")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test usage=99 %d", now.UnixNano()),
-		actual)
-}
-
-// Test that all Inf fields get dropped, and not added to metrics channel
-func TestAddInfFields(t *testing.T) {
-	inf := math.Inf(1)
-	ninf := math.Inf(-1)
-
-	a := accumulator{}
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	fields := map[string]interface{}{
-		"usage":  inf,
-		"nusage": ninf,
-	}
-	a.AddFields("acctest", fields, map[string]string{})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
-
-	assert.Len(t, a.metrics, 0)
-
-	// test that non-inf fields are kept and not dropped
-	fields["notinf"] = float64(100)
-	a.AddFields("acctest", fields, map[string]string{})
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest notinf=100")
-}
-
-// Test that nan fields are dropped and not added
-func TestAddNaNFields(t *testing.T) {
-	nan := math.NaN()
-
-	a := accumulator{}
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	fields := map[string]interface{}{
-		"usage": nan,
-	}
-	a.AddFields("acctest", fields, map[string]string{})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
-
-	assert.Len(t, a.metrics, 0)
-
-	// test that non-nan fields are kept and not dropped
-	fields["notnan"] = float64(100)
-	a.AddFields("acctest", fields, map[string]string{})
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest notnan=100")
-}
-
-func TestAddUint64Fields(t *testing.T) {
-	a := accumulator{}
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	fields := map[string]interface{}{
-		"usage": uint64(99),
-	}
-	a.AddFields("acctest", fields, map[string]string{})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest usage=99i")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Contains(t, actual, "acctest,acc=test usage=99i")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test usage=99i %d", now.UnixNano()),
-		actual)
-}
-
-func TestAddUint64Overflow(t *testing.T) {
-	a := accumulator{}
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	fields := map[string]interface{}{
-		"usage": uint64(9223372036854775808),
-	}
-	a.AddFields("acctest", fields, map[string]string{})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"})
-	a.AddFields("acctest", fields, map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest usage=9223372036854775807i")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Contains(t, actual, "acctest,acc=test usage=9223372036854775807i")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test usage=9223372036854775807i %d", now.UnixNano()),
-		actual)
-}
-
-func TestAddInts(t *testing.T) {
-	a := accumulator{}
-	a.addDefaultTag("default", "tag")
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	a.Add("acctest", int(101), map[string]string{})
-	a.Add("acctest", int32(101), map[string]string{"acc": "test"})
-	a.Add("acctest", int64(101), map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest,default=tag value=101i")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Contains(t, actual, "acctest,acc=test,default=tag value=101i")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test,default=tag value=101i %d", now.UnixNano()),
-		actual)
-}
-
-func TestAddFloats(t *testing.T) {
-	a := accumulator{}
-	a.addDefaultTag("default", "tag")
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	a.Add("acctest", float32(101), map[string]string{"acc": "test"})
-	a.Add("acctest", float64(101), map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest,acc=test,default=tag value=101")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test,default=tag value=101 %d", now.UnixNano()),
-		actual)
-}
-
-func TestAddStrings(t *testing.T) {
-	a := accumulator{}
-	a.addDefaultTag("default", "tag")
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	a.Add("acctest", "test", map[string]string{"acc": "test"})
-	a.Add("acctest", "foo", map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest,acc=test,default=tag value=\"test\"")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test,default=tag value=\"foo\" %d", now.UnixNano()),
-		actual)
-}
-
-func TestAddBools(t *testing.T) {
-	a := accumulator{}
-	a.addDefaultTag("default", "tag")
-	now := time.Now()
-	a.metrics = make(chan telegraf.Metric, 10)
-	defer close(a.metrics)
-	a.inputConfig = &internal_models.InputConfig{}
-
-	a.Add("acctest", true, map[string]string{"acc": "test"})
-	a.Add("acctest", false, map[string]string{"acc": "test"}, now)
-
-	testm := <-a.metrics
-	actual := testm.String()
-	assert.Contains(t, actual, "acctest,acc=test,default=tag value=true")
-
-	testm = <-a.metrics
-	actual = testm.String()
-	assert.Equal(t,
-		fmt.Sprintf("acctest,acc=test,default=tag value=false %d", now.UnixNano()),
-		actual)
-}
@@ -27,19 +27,17 @@ func NewAgent(config *config.Config) (*Agent, error) {
 		Config: config,
 	}

-	if !a.Config.Agent.OmitHostname {
-		if a.Config.Agent.Hostname == "" {
-			hostname, err := os.Hostname()
-			if err != nil {
-				return nil, err
-			}
-
-			a.Config.Agent.Hostname = hostname
-		}
-
-		config.Tags["host"] = a.Config.Agent.Hostname
-	}
+	if a.Config.Agent.Hostname == "" {
+		hostname, err := os.Hostname()
+		if err != nil {
+			return nil, err
+		}
+
+		a.Config.Agent.Hostname = hostname
+	}
+
+	config.Tags["host"] = a.Config.Agent.Hostname

 	return a, nil
 }
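The `NewAgent` hunk above removes the `OmitHostname` gate, so the hostname fallback always runs. That fallback can be sketched in isolation; `AgentConfig` and `applyHostname` here are simplified stand-ins for telegraf's config types, not the real API:

```go
package main

import (
	"fmt"
	"os"
)

// AgentConfig is a minimal stand-in for the agent configuration.
type AgentConfig struct {
	Hostname string
	Tags     map[string]string
}

// applyHostname mirrors the hunk's logic: if no hostname is configured,
// fall back to os.Hostname(), then record it as the global "host" tag.
func applyHostname(c *AgentConfig) error {
	if c.Hostname == "" {
		hostname, err := os.Hostname()
		if err != nil {
			return err
		}
		c.Hostname = hostname
	}
	c.Tags["host"] = c.Hostname
	return nil
}

func main() {
	c := &AgentConfig{Hostname: "node-1", Tags: map[string]string{}}
	if err := applyHostname(c); err != nil {
		panic(err)
	}
	fmt.Println(c.Tags["host"]) // node-1
}
```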
@@ -1,6 +1,7 @@
 package agent

 import (
+	"github.com/stretchr/testify/assert"
 	"testing"
 	"time"

@@ -10,18 +11,8 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/all"
 	// needing to load the outputs
 	_ "github.com/influxdata/telegraf/plugins/outputs/all"
-
-	"github.com/stretchr/testify/assert"
 )

-func TestAgent_OmitHostname(t *testing.T) {
-	c := config.NewConfig()
-	c.Agent.OmitHostname = true
-	_, err := NewAgent(c)
-	assert.NoError(t, err)
-	assert.NotContains(t, c.Tags, "host")
-}
-
 func TestAgent_LoadPlugin(t *testing.T) {
 	c := config.NewConfig()
 	c.InputFilters = []string{"mysql"}
@@ -4,9 +4,9 @@ machine:
   post:
     - sudo service zookeeper stop
     - go version
-    - go version | grep 1.6 || sudo rm -rf /usr/local/go
-    - wget https://storage.googleapis.com/golang/go1.6.linux-amd64.tar.gz
-    - sudo tar -C /usr/local -xzf go1.6.linux-amd64.tar.gz
+    - go version | grep 1.5.3 || sudo rm -rf /usr/local/go
+    - wget https://storage.googleapis.com/golang/go1.5.3.linux-amd64.tar.gz
+    - sudo tar -C /usr/local -xzf go1.5.3.linux-amd64.tar.gz
     - go version

 dependencies:
@@ -11,9 +11,8 @@ import (

 	"github.com/influxdata/telegraf/agent"
 	"github.com/influxdata/telegraf/internal/config"
-	"github.com/influxdata/telegraf/plugins/inputs"
 	_ "github.com/influxdata/telegraf/plugins/inputs/all"
-	"github.com/influxdata/telegraf/plugins/outputs"
 	_ "github.com/influxdata/telegraf/plugins/outputs/all"
 )

@@ -31,14 +30,11 @@ var fSampleConfig = flag.Bool("sample-config", false,
 var fPidfile = flag.String("pidfile", "", "file to write our pid to")
 var fInputFilters = flag.String("input-filter", "",
 	"filter the inputs to enable, separator is :")
-var fInputList = flag.Bool("input-list", false,
-	"print available input plugins.")
 var fOutputFilters = flag.String("output-filter", "",
 	"filter the outputs to enable, separator is :")
-var fOutputList = flag.Bool("output-list", false,
-	"print available output plugins.")
 var fUsage = flag.String("usage", "",
 	"print usage for a plugin, ie, 'telegraf -usage mysql'")

 var fInputFiltersLegacy = flag.String("filter", "",
 	"filter the inputs to enable, separator is :")
 var fOutputFiltersLegacy = flag.String("outputfilter", "",
@@ -63,9 +59,7 @@ The flags are:
 	-sample-config     print out full sample configuration to stdout
 	-config-directory  directory containing additional *.conf files
 	-input-filter      filter the input plugins to enable, separator is :
-	-input-list        print all the plugins inputs
 	-output-filter     filter the output plugins to enable, separator is :
-	-output-list       print all the available outputs
 	-usage             print usage for a plugin, ie, 'telegraf -usage mysql'
 	-debug             print metrics as they're generated to stdout
 	-quiet             run in quiet mode
@@ -96,9 +90,8 @@ func main() {
 	reload <- false
 	flag.Usage = func() { usageExit(0) }
 	flag.Parse()
-	args := flag.Args()

-	if flag.NFlag() == 0 && len(args) == 0 {
+	if flag.NFlag() == 0 {
 		usageExit(0)
 	}

@@ -122,34 +115,6 @@ func main() {
 		outputFilters = strings.Split(":"+outputFilter+":", ":")
 	}

-	if len(args) > 0 {
-		switch args[0] {
-		case "version":
-			v := fmt.Sprintf("Telegraf - Version %s", Version)
-			fmt.Println(v)
-			return
-		case "config":
-			config.PrintSampleConfig(inputFilters, outputFilters)
-			return
-		}
-	}
-
-	if *fOutputList {
-		fmt.Println("Available Output Plugins:")
-		for k, _ := range outputs.Outputs {
-			fmt.Printf("  %s\n", k)
-		}
-		return
-	}
-
-	if *fInputList {
-		fmt.Println("Available Input Plugins:")
-		for k, _ := range inputs.Inputs {
-			fmt.Printf("  %s\n", k)
-		}
-		return
-	}
-
 	if *fVersion {
 		v := fmt.Sprintf("Telegraf - Version %s", Version)
 		fmt.Println(v)
@@ -9,12 +9,6 @@ To generate a file with specific inputs and outputs, you can use the
|
|||||||
-input-filter and -output-filter flags:
|
-input-filter and -output-filter flags:
|
||||||
`telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka`
|
`telegraf -sample-config -input-filter cpu:mem:net:swap -output-filter influxdb:kafka`
|
||||||
|
|
||||||
## Environment Variables
|
|
||||||
|
|
||||||
Environment variables can be used anywhere in the config file, simply prepend
|
|
||||||
them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
|
|
||||||
for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)
|
|
||||||
|
|
||||||
 ## `[global_tags]` Configuration
 
 Global tags can be specific in the `[global_tags]` section of the config file in
@@ -103,7 +97,7 @@ fields which begin with `time_`.
   percpu = true
   totalcpu = false
   # filter all fields beginning with 'time_'
-  fielddrop = ["time_*"]
+  drop = ["time_*"]
 ```
 
 #### Input Config: tagpass and tagdrop
@@ -112,7 +106,7 @@ fields which begin with `time_`.
 [[inputs.cpu]]
   percpu = true
   totalcpu = false
-  fielddrop = ["cpu_time"]
+  drop = ["cpu_time"]
   # Don't collect CPU data for cpu6 & cpu7
   [inputs.cpu.tagdrop]
     cpu = [ "cpu6", "cpu7" ]
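The tagdrop behaviour shown above can be sketched as a small predicate. This is a simplified illustration (exact-match tag values only, no glob patterns); the function name is an assumption, not Telegraf's API:

```go
package main

import "fmt"

// shouldDrop reports whether a metric's tags match any tagdrop entry.
// tagdrop maps a tag key to the values to reject, e.g.
// {"cpu": {"cpu6", "cpu7"}} for the [inputs.cpu.tagdrop] example above.
func shouldDrop(tags map[string]string, tagdrop map[string][]string) bool {
	for key, badVals := range tagdrop {
		if val, ok := tags[key]; ok {
			for _, bad := range badVals {
				if val == bad {
					return true
				}
			}
		}
	}
	return false
}

func main() {
	drop := map[string][]string{"cpu": {"cpu6", "cpu7"}}
	fmt.Println(shouldDrop(map[string]string{"cpu": "cpu6"}, drop)) // true
	fmt.Println(shouldDrop(map[string]string{"cpu": "cpu0"}, drop)) // false
}
```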
@@ -147,12 +141,12 @@ fields which begin with `time_`.
 # Drop all metrics about containers for kubelet
 [[inputs.prometheus]]
   urls = ["http://kube-node-1:4194/metrics"]
-  namedrop = ["container_*"]
+  namedrop = ["container_"]
 
 # Only store rest client related metrics for kubelet
 [[inputs.prometheus]]
   urls = ["http://kube-node-1:4194/metrics"]
-  namepass = ["rest_client_*"]
+  namepass = ["rest_client_"]
 ```
 
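The `*` patterns in the namepass/namedrop lists above behave like shell globs. A minimal sketch of such matching using Go's standard `path.Match` (an assumption for illustration; Telegraf's actual matcher may differ):

```go
package main

import (
	"fmt"
	"path"
)

// passes reports whether a measurement name survives a namedrop list:
// the metric is dropped if any glob pattern matches its name.
func passes(name string, namedrop []string) bool {
	for _, pat := range namedrop {
		if ok, _ := path.Match(pat, name); ok {
			return false
		}
	}
	return true
}

func main() {
	namedrop := []string{"container_*"}
	fmt.Println(passes("container_cpu_usage", namedrop))  // false: dropped
	fmt.Println(passes("rest_client_requests", namedrop)) // true: kept
}
```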
 #### Input config: prefix, suffix, and override
@@ -205,7 +199,7 @@ to avoid measurement collisions:
   percpu = true
   totalcpu = false
   name_override = "percpu_usage"
-  fielddrop = ["cpu_time*"]
+  drop = ["cpu_time*"]
 ```
 
 ## `[outputs.xxx]` Configuration
@@ -1,12 +1,5 @@
 # Telegraf Input Data Formats
 
-Telegraf is able to parse the following input data formats into metrics:
-
-1. InfluxDB Line Protocol
-1. JSON
-1. Graphite
-1. Value, ie 45 or "booyah"
-
 Telegraf metrics, like InfluxDB
 [points](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/),
 are a combination of four basic parts:
@@ -141,38 +134,6 @@ Your Telegraf metrics would get tagged with "my_tag_1"
 exec_mycollector,my_tag_1=foo a=5,b_c=6
 ```
 
-## Value:
-
-The "value" data format translates single values into Telegraf metrics. This
-is done by assigning a measurement name (which can be overridden using the
-`name_override` config option), and setting a single field ("value") as the
-parsed metric.
-
-#### Value Configuration:
-
-You can tell Telegraf what type of metric to collect by using the `data_type`
-configuration option.
-
-It is also recommended that you set `name_override` to a measurement name that
-makes sense for your metric, otherwise it will just be set to the name of the
-plugin.
-
-```toml
-[[inputs.exec]]
-  ## Commands array
-  commands = ["cat /proc/sys/kernel/random/entropy_avail"]
-
-  ## override the default metric name of "exec"
-  name_override = "entropy_available"
-
-  ## Data format to consume. This can be "json", "value", influx" or "graphite"
-  ## Each data format has it's own unique set of configuration options, read
-  ## more about them here:
-  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
-  data_format = "value"
-  data_type = "integer"
-```
-
 ## Graphite:
 
 The Graphite data format translates graphite _dot_ buckets directly into
@@ -220,32 +181,17 @@ So the following template:
 
 ```toml
 templates = [
-    "measurement.measurement.field.field.region"
+    "measurement.measurement.field.region"
 ]
 ```
 
 would result in the following Graphite -> Telegraf transformation.
 
 ```
-cpu.usage.idle.percent.us-west 100
+cpu.usage.idle.us-west 100
-=> cpu_usage,region=us-west idle_percent=100
+=> cpu_usage,region=us-west idle=100
 ```
 
-The field key can also be derived from the second "half" of the input metric-name by specifying ```field*```:
-```toml
-templates = [
-    "measurement.measurement.region.field*"
-]
-```
-
-would result in the following Graphite -> Telegraf transformation.
-
-```
-cpu.usage.us-west.idle.percentage 100
-=> cpu_usage,region=us-west idle_percentage=100
-```
-(This cannot be used in conjunction with "measurement*"!)
-
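The graphite template transformation above can be sketched as follows. This is a simplified, hypothetical parser for a single template, assuming only `measurement`, `field`, and tag-name tokens (none of the `*` forms):

```go
package main

import (
	"fmt"
	"strings"
)

// applyTemplate maps each dot-separated segment of a graphite bucket to the
// role named by the same position in the template ("measurement", "field",
// or a tag key). Repeated roles are joined with "_", as in the doc example.
func applyTemplate(bucket, template string) (measurement, field string, tags map[string]string) {
	segs := strings.Split(bucket, ".")
	roles := strings.Split(template, ".")
	tags = map[string]string{}
	var m, f []string
	for i, role := range roles {
		if i >= len(segs) {
			break
		}
		switch role {
		case "measurement":
			m = append(m, segs[i])
		case "field":
			f = append(f, segs[i])
		default:
			tags[role] = segs[i]
		}
	}
	return strings.Join(m, "_"), strings.Join(f, "_"), tags
}

func main() {
	m, f, tags := applyTemplate("cpu.usage.idle.us-west", "measurement.measurement.field.region")
	fmt.Println(m, f, tags["region"]) // cpu_usage idle us-west
}
```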
 #### Filter Templates:
 
 Users can also filter the template(s) to use based on the name of the bucket,
@@ -326,27 +272,3 @@ There are many more options available,
     "measurement*"
 ]
 ```
 
-## Nagios:
-
-There are no additional configuration options for Nagios line-protocol. The
-metrics are parsed directly into Telegraf metrics.
-
-Note: Nagios Input Data Formats is only supported in `exec` input plugin.
-
-#### Nagios Configuration:
-
-```toml
-[[inputs.exec]]
-  ## Commands array
-  commands = ["/usr/lib/nagios/plugins/check_load", "-w 5,6,7 -c 7,8,9"]
-
-  ## measurement name suffix (for separating different commands)
-  name_suffix = "_mycollector"
-
-  ## Data format to consume. This can be "json", "influx", "graphite" or "nagios"
-  ## Each data format has it's own unique set of configuration options, read
-  ## more about them here:
-  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
-  data_format = "nagios"
-```
@@ -53,7 +53,7 @@ metrics are serialized directly into InfluxDB line-protocol.
   ## Files to write to, "stdout" is a specially handled file.
   files = ["stdout", "/tmp/metrics.out"]
 
-  ## Data format to output. This can be "influx", "json" or "graphite"
+  ## Data format to output. This can be "influx" or "graphite"
   ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
@@ -87,7 +87,7 @@ tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
   ## Files to write to, "stdout" is a specially handled file.
   files = ["stdout", "/tmp/metrics.out"]
 
-  ## Data format to output. This can be "influx", "json" or "graphite"
+  ## Data format to output. This can be "influx" or "graphite"
   ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
@@ -95,37 +95,3 @@ tars.cpu-total.us-east-1.cpu.usage_idle 98.09 1455320690
 
   prefix = "telegraf"
 ```
 
-## Json:
-
-The Json data format serialized Telegraf metrics in json format. The format is:
-
-```json
-{
-   "fields":{
-      "field_1":30,
-      "field_2":4,
-      "field_N":59,
-      "n_images":660
-   },
-   "name":"docker",
-   "tags":{
-      "host":"raynor"
-   },
-   "timestamp":1458229140
-}
-```
-
-#### Json Configuration:
-
-```toml
-[[outputs.file]]
-  ## Files to write to, "stdout" is a specially handled file.
-  files = ["stdout", "/tmp/metrics.out"]
-
-  ## Data format to output. This can be "influx", "json" or "graphite"
-  ## Each data format has it's own unique set of configuration options, read
-  ## more about them here:
-  ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_OUTPUT.md
-  data_format = "json"
-```
etc/telegraf.conf: 1220 changed lines (file diff suppressed because it is too large)
@@ -1,164 +0,0 @@
-# Telegraf configuration
-
-# Telegraf is entirely plugin driven. All metrics are gathered from the
-# declared inputs, and sent to the declared outputs.
-
-# Plugins must be declared in here to be active.
-# To deactivate a plugin, comment out the name and any variables.
-
-# Use 'telegraf -config telegraf.conf -test' to see what metrics a config
-# file would generate.
-
-# Global tags can be specified here in key="value" format.
-[global_tags]
-  # dc = "us-east-1" # will tag all metrics with dc=us-east-1
-  # rack = "1a"
-
-# Configuration for telegraf agent
-[agent]
-  ## Default data collection interval for all inputs
-  interval = "10s"
-  ## Rounds collection interval to 'interval'
-  ## ie, if interval="10s" then always collect on :00, :10, :20, etc.
-  round_interval = true
-
-  ## Telegraf will cache metric_buffer_limit metrics for each output, and will
-  ## flush this buffer on a successful write.
-  metric_buffer_limit = 1000
-  ## Flush the buffer whenever full, regardless of flush_interval.
-  flush_buffer_when_full = true
-
-  ## Collection jitter is used to jitter the collection by a random amount.
-  ## Each plugin will sleep for a random time within jitter before collecting.
-  ## This can be used to avoid many plugins querying things like sysfs at the
-  ## same time, which can have a measurable effect on the system.
-  collection_jitter = "0s"
-
-  ## Default flushing interval for all outputs. You shouldn't set this below
-  ## interval. Maximum flush_interval will be flush_interval + flush_jitter
-  flush_interval = "10s"
-  ## Jitter the flush interval by a random amount. This is primarily to avoid
-  ## large write spikes for users running a large number of telegraf instances.
-  ## ie, a jitter of 5s and interval 10s means flushes will happen every 10-15s
-  flush_jitter = "0s"
-
-  ## Run telegraf in debug mode
-  debug = false
-  ## Run telegraf in quiet mode
-  quiet = false
-  ## Override default hostname, if empty use os.Hostname()
-  hostname = ""
-
-
-###############################################################################
-# OUTPUTS #
-###############################################################################
-
-# Configuration for influxdb server to send metrics to
-[[outputs.influxdb]]
-  # The full HTTP or UDP endpoint URL for your InfluxDB instance.
-  # Multiple urls can be specified but it is assumed that they are part of the same
-  # cluster, this means that only ONE of the urls will be written to each interval.
-  # urls = ["udp://localhost:8089"] # UDP endpoint example
-  urls = ["http://localhost:8086"] # required
-  # The target database for metrics (telegraf will create it if not exists)
-  database = "telegraf" # required
-  # Precision of writes, valid values are "ns", "us" (or "µs"), "ms", "s", "m", "h".
-  # note: using second precision greatly helps InfluxDB compression
-  precision = "s"
-
-  ## Write timeout (for the InfluxDB client), formatted as a string.
-  ## If not provided, will default to 5s. 0s means no timeout (not recommended).
-  timeout = "5s"
-  # username = "telegraf"
-  # password = "metricsmetricsmetricsmetrics"
-  # Set the user agent for HTTP POSTs (can be useful for log differentiation)
-  # user_agent = "telegraf"
-  # Set UDP payload size, defaults to InfluxDB UDP Client default (512 bytes)
-  # udp_payload = 512
-
-
-###############################################################################
-# INPUTS #
-###############################################################################
-
-# Windows Performance Counters plugin.
-# These are the recommended method of monitoring system metrics on windows,
-# as the regular system plugins (inputs.cpu, inputs.mem, etc.) rely on WMI,
-# which utilizes a lot of system resources.
-#
-# See more configuration examples at:
-# https://github.com/influxdata/telegraf/tree/master/plugins/inputs/win_perf_counters
-
-[[inputs.win_perf_counters]]
-  [[inputs.win_perf_counters.object]]
-    # Processor usage, alternative to native, reports on a per core.
-    ObjectName = "Processor"
-    Instances = ["*"]
-    Counters = ["% Idle Time", "% Interrupt Time", "% Privileged Time", "% User Time", "% Processor Time"]
-    Measurement = "win_cpu"
-    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
-
-  [[inputs.win_perf_counters.object]]
-    # Disk times and queues
-    ObjectName = "LogicalDisk"
-    Instances = ["*"]
-    Counters = ["% Idle Time", "% Disk Time","% Disk Read Time", "% Disk Write Time", "% User Time", "Current Disk Queue Length"]
-    Measurement = "win_disk"
-    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
-
-  [[inputs.win_perf_counters.object]]
-    ObjectName = "System"
-    Counters = ["Context Switches/sec","System Calls/sec"]
-    Instances = ["------"]
-    Measurement = "win_system"
-    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
-
-  [[inputs.win_perf_counters.object]]
-    # Example query where the Instance portion must be removed to get data back, such as from the Memory object.
-    ObjectName = "Memory"
-    Counters = ["Available Bytes","Cache Faults/sec","Demand Zero Faults/sec","Page Faults/sec","Pages/sec","Transition Faults/sec","Pool Nonpaged Bytes","Pool Paged Bytes"]
-    Instances = ["------"] # Use 6 x - to remove the Instance bit from the query.
-    Measurement = "win_mem"
-    #IncludeTotal=false #Set to true to include _Total instance when querying for all (*).
-
-
-# Windows system plugins using WMI (disabled by default, using
-# win_perf_counters over WMI is recommended)
-
-# Read metrics about cpu usage
-#[[inputs.cpu]]
-  ## Whether to report per-cpu stats or not
-  #percpu = true
-  ## Whether to report total system cpu stats or not
-  #totalcpu = true
-  ## Comment this line if you want the raw CPU time metrics
-  #fielddrop = ["time_*"]
-
-# Read metrics about disk usage by mount point
-#[[inputs.disk]]
-  ## By default, telegraf gather stats for all mountpoints.
-  ## Setting mountpoints will restrict the stats to the specified mountpoints.
-  ## mount_points=["/"]
-
-  ## Ignore some mountpoints by filesystem type. For example (dev)tmpfs (usually
-  ## present on /run, /var/run, /dev/shm or /dev).
-  #ignore_fs = ["tmpfs", "devtmpfs"]
-
-# Read metrics about disk IO by device
-#[[inputs.diskio]]
-  ## By default, telegraf will gather stats for all devices including
-  ## disk partitions.
-  ## Setting devices will restrict the stats to the specified devices.
-  ## devices = ["sda", "sdb"]
-  ## Uncomment the following line if you do not need disk serial numbers.
-  ## skip_serial_number = true
-
-# Read metrics about memory usage
-#[[inputs.mem]]
-  # no configuration
-
-# Read metrics about swap memory usage
-#[[inputs.swap]]
-  # no configuration
@@ -1,14 +1,11 @@
 package config
 
 import (
-	"bytes"
 	"errors"
 	"fmt"
 	"io/ioutil"
 	"log"
-	"os"
 	"path/filepath"
-	"regexp"
 	"sort"
 	"strings"
 	"time"
@@ -22,20 +19,7 @@ import (
 	"github.com/influxdata/telegraf/plugins/serializers"
 
 	"github.com/influxdata/config"
-	"github.com/influxdata/toml"
+	"github.com/naoina/toml/ast"
-	"github.com/influxdata/toml/ast"
-)
-
-var (
-	// Default input plugins
-	inputDefaults = []string{"cpu", "mem", "swap", "system", "kernel",
-		"processes", "disk", "diskio"}
-
-	// Default output plugins
-	outputDefaults = []string{"influxdb"}
-
-	// envVarRe is a regex to find environment variables in the config file
-	envVarRe = regexp.MustCompile(`\$\w+`)
 )
 
 // Config specifies the URL/user/password for the database that telegraf
@@ -113,9 +97,8 @@ type AgentConfig struct {
 	Debug bool
 
 	// Quiet is the option for running in quiet mode
 	Quiet bool
 	Hostname string
-	OmitHostname bool
 }
 
 // Inputs returns a list of strings of the configured inputs.
@@ -151,28 +134,20 @@ func (c *Config) ListTags() string {
 }
 
 var header = `# Telegraf Configuration
-#
 # Telegraf is entirely plugin driven. All metrics are gathered from the
 # declared inputs, and sent to the declared outputs.
-#
 # Plugins must be declared in here to be active.
 # To deactivate a plugin, comment out the name and any variables.
-#
 # Use 'telegraf -config telegraf.conf -test' to see what metrics a config
 # file would generate.
-#
-# Environment variables can be used anywhere in this config file, simply prepend
-# them with $. For strings the variable must be within quotes (ie, "$STR_VAR"),
-# for numbers and booleans they should be plain (ie, $INT_VAR, $BOOL_VAR)
 
 # Global tags can be specified here in key="value" format.
 [global_tags]
 # dc = "us-east-1" # will tag all metrics with dc=us-east-1
 # rack = "1a"
-## Environment variables can be used as tags, and throughout the config file
-# user = "$USER"
 
 # Configuration for telegraf agent
 [agent]
@@ -184,7 +159,7 @@ var header = `# Telegraf Configuration
 
 ## Telegraf will cache metric_buffer_limit metrics for each output, and will
 ## flush this buffer on a successful write.
-metric_buffer_limit = 1000
+metric_buffer_limit = 10000
 ## Flush the buffer whenever full, regardless of flush_interval.
 flush_buffer_when_full = true
 
@@ -208,111 +183,34 @@ var header = `# Telegraf Configuration
 quiet = false
 ## Override default hostname, if empty use os.Hostname()
 hostname = ""
-## If set to true, do no set the "host" tag in the telegraf agent.
-omit_hostname = false
 
 
-###############################################################################
+#
-# OUTPUT PLUGINS #
+# OUTPUTS:
-###############################################################################
+#
 
 `
 
-var inputHeader = `
+var pluginHeader = `
+#
-###############################################################################
+# INPUTS:
-# INPUT PLUGINS #
+#
-###############################################################################
 `
 
 var serviceInputHeader = `
+#
-###############################################################################
+# SERVICE INPUTS:
-# SERVICE INPUT PLUGINS #
+#
-###############################################################################
 `
 
 // PrintSampleConfig prints the sample config
-func PrintSampleConfig(inputFilters []string, outputFilters []string) {
+func PrintSampleConfig(pluginFilters []string, outputFilters []string) {
 	fmt.Printf(header)
 
-	if len(outputFilters) != 0 {
-		printFilteredOutputs(outputFilters, false)
-	} else {
-		printFilteredOutputs(outputDefaults, false)
-		// Print non-default outputs, commented
-		var pnames []string
-		for pname := range outputs.Outputs {
-			if !sliceContains(pname, outputDefaults) {
-				pnames = append(pnames, pname)
-			}
-		}
-		sort.Strings(pnames)
-		printFilteredOutputs(pnames, true)
-	}
-
-	fmt.Printf(inputHeader)
-	if len(inputFilters) != 0 {
-		printFilteredInputs(inputFilters, false)
-	} else {
-		printFilteredInputs(inputDefaults, false)
-		// Print non-default inputs, commented
-		var pnames []string
-		for pname := range inputs.Inputs {
-			if !sliceContains(pname, inputDefaults) {
-				pnames = append(pnames, pname)
-			}
-		}
-		sort.Strings(pnames)
-		printFilteredInputs(pnames, true)
-	}
-}
-
-func printFilteredInputs(inputFilters []string, commented bool) {
-	// Filter inputs
-	var pnames []string
-	for pname := range inputs.Inputs {
-		if sliceContains(pname, inputFilters) {
-			pnames = append(pnames, pname)
-		}
-	}
-	sort.Strings(pnames)
-
-	// cache service inputs to print them at the end
-	servInputs := make(map[string]telegraf.ServiceInput)
-	// for alphabetical looping:
-	servInputNames := []string{}
-
-	// Print Inputs
-	for _, pname := range pnames {
-		creator := inputs.Inputs[pname]
-		input := creator()
-
-		switch p := input.(type) {
-		case telegraf.ServiceInput:
-			servInputs[pname] = p
-			servInputNames = append(servInputNames, pname)
-			continue
-		}
-
-		printConfig(pname, input, "inputs", commented)
-	}
-
-	// Print Service Inputs
-	if len(servInputs) == 0 {
-		return
-	}
-	sort.Strings(servInputNames)
-	fmt.Printf(serviceInputHeader)
-	for _, name := range servInputNames {
-		printConfig(name, servInputs[name], "inputs", commented)
-	}
-}
-
-func printFilteredOutputs(outputFilters []string, commented bool) {
 	// Filter outputs
 	var onames []string
 	for oname := range outputs.Outputs {
-		if sliceContains(oname, outputFilters) {
+		if len(outputFilters) == 0 || sliceContains(oname, outputFilters) {
 			onames = append(onames, oname)
 		}
 	}
@@ -322,7 +220,38 @@ func printFilteredOutputs(outputFilters []string, commented bool) {
 	for _, oname := range onames {
 		creator := outputs.Outputs[oname]
 		output := creator()
-		printConfig(oname, output, "outputs", commented)
+		printConfig(oname, output, "outputs")
+	}
+
+	// Filter inputs
+	var pnames []string
+	for pname := range inputs.Inputs {
+		if len(pluginFilters) == 0 || sliceContains(pname, pluginFilters) {
+			pnames = append(pnames, pname)
+		}
+	}
+	sort.Strings(pnames)
+
+	// Print Inputs
+	fmt.Printf(pluginHeader)
+	servInputs := make(map[string]telegraf.ServiceInput)
+	for _, pname := range pnames {
+		creator := inputs.Inputs[pname]
+		input := creator()
+
+		switch p := input.(type) {
+		case telegraf.ServiceInput:
+			servInputs[pname] = p
+			continue
+		}
+
+		printConfig(pname, input, "inputs")
+	}
+
+	// Print Service Inputs
+	fmt.Printf(serviceInputHeader)
+	for name, input := range servInputs {
+		printConfig(name, input, "inputs")
 	}
 }
 
@@ -331,26 +260,13 @@ type printer interface {
 	SampleConfig() string
 }
 
-func printConfig(name string, p printer, op string, commented bool) {
+func printConfig(name string, p printer, op string) {
-	comment := ""
+	fmt.Printf("\n# %s\n[[%s.%s]]", p.Description(), op, name)
-	if commented {
-		comment = "# "
-	}
-	fmt.Printf("\n%s# %s\n%s[[%s.%s]]", comment, p.Description(), comment,
-		op, name)
 
 	config := p.SampleConfig()
 	if config == "" {
-		fmt.Printf("\n%s # no configuration\n\n", comment)
+		fmt.Printf("\n # no configuration\n")
 	} else {
-		lines := strings.Split(config, "\n")
+		fmt.Printf(config)
-		for i, line := range lines {
-			if i == 0 || i == len(lines)-1 {
-				fmt.Print("\n")
-				continue
-			}
-			fmt.Print(comment + line + "\n")
-		}
 	}
 }
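The `commented` handling removed above prefixes every interior line of a plugin's sample config with `# `. That line-splitting loop can be sketched in isolation (the `commentConfig` helper name is hypothetical; the 0.12.0 code prints directly rather than building a string):

```go
package main

import (
	"fmt"
	"strings"
)

// commentConfig prefixes each interior line of a sample config with "# ",
// leaving the empty first and last lines as bare newlines, mirroring the
// loop in the 0.12.0 printConfig.
func commentConfig(config string) string {
	lines := strings.Split(config, "\n")
	var b strings.Builder
	for i, line := range lines {
		if i == 0 || i == len(lines)-1 {
			b.WriteString("\n")
			continue
		}
		b.WriteString("# " + line + "\n")
	}
	return b.String()
}

func main() {
	fmt.Print(commentConfig("\n  percpu = true\n  totalcpu = false\n"))
}
```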
|
|
||||||
@@ -366,7 +282,7 @@ func sliceContains(name string, list []string) bool {
|
|||||||
// PrintInputConfig prints the config usage of a single input.
|
// PrintInputConfig prints the config usage of a single input.
|
||||||
func PrintInputConfig(name string) error {
|
func PrintInputConfig(name string) error {
|
||||||
if creator, ok := inputs.Inputs[name]; ok {
|
if creator, ok := inputs.Inputs[name]; ok {
|
||||||
printConfig(name, creator(), "inputs", false)
|
printConfig(name, creator(), "inputs")
|
||||||
} else {
|
} else {
|
||||||
return errors.New(fmt.Sprintf("Input %s not found", name))
|
return errors.New(fmt.Sprintf("Input %s not found", name))
|
||||||
}
|
}
|
||||||
@@ -376,7 +292,7 @@ func PrintInputConfig(name string) error {
|
|||||||
// PrintOutputConfig prints the config usage of a single output.
|
// PrintOutputConfig prints the config usage of a single output.
|
||||||
func PrintOutputConfig(name string) error {
|
func PrintOutputConfig(name string) error {
|
||||||
if creator, ok := outputs.Outputs[name]; ok {
|
if creator, ok := outputs.Outputs[name]; ok {
|
||||||
printConfig(name, creator(), "outputs", false)
|
printConfig(name, creator(), "outputs")
|
||||||
} else {
|
} else {
|
||||||
return errors.New(fmt.Sprintf("Output %s not found", name))
|
return errors.New(fmt.Sprintf("Output %s not found", name))
|
||||||
}
|
}
|
||||||
@@ -406,44 +322,44 @@ func (c *Config) LoadDirectory(path string) error {

 // LoadConfig loads the given config file and applies it to c
 func (c *Config) LoadConfig(path string) error {
-	tbl, err := parseFile(path)
+	tbl, err := config.ParseFile(path)
 	if err != nil {
-		return fmt.Errorf("Error parsing %s, %s", path, err)
+		return err
 	}

 	for name, val := range tbl.Fields {
 		subTable, ok := val.(*ast.Table)
 		if !ok {
-			return fmt.Errorf("%s: invalid configuration", path)
+			return errors.New("invalid configuration")
 		}

 		switch name {
 		case "agent":
 			if err = config.UnmarshalTable(subTable, c.Agent); err != nil {
 				log.Printf("Could not parse [agent] config\n")
-				return fmt.Errorf("Error parsing %s, %s", path, err)
+				return err
 			}
 		case "global_tags", "tags":
 			if err = config.UnmarshalTable(subTable, c.Tags); err != nil {
 				log.Printf("Could not parse [global_tags] config\n")
-				return fmt.Errorf("Error parsing %s, %s", path, err)
+				return err
 			}
 		case "outputs":
 			for pluginName, pluginVal := range subTable.Fields {
 				switch pluginSubTable := pluginVal.(type) {
 				case *ast.Table:
 					if err = c.addOutput(pluginName, pluginSubTable); err != nil {
-						return fmt.Errorf("Error parsing %s, %s", path, err)
+						return err
 					}
 				case []*ast.Table:
 					for _, t := range pluginSubTable {
 						if err = c.addOutput(pluginName, t); err != nil {
-							return fmt.Errorf("Error parsing %s, %s", path, err)
+							return err
 						}
 					}
 				default:
-					return fmt.Errorf("Unsupported config format: %s, file %s",
-						pluginName, path)
+					return fmt.Errorf("Unsupported config format: %s",
+						pluginName)
 				}
 			}
 		case "inputs", "plugins":
@@ -451,50 +367,30 @@ func (c *Config) LoadConfig(path string) error {
 				switch pluginSubTable := pluginVal.(type) {
 				case *ast.Table:
 					if err = c.addInput(pluginName, pluginSubTable); err != nil {
-						return fmt.Errorf("Error parsing %s, %s", path, err)
+						return err
 					}
 				case []*ast.Table:
 					for _, t := range pluginSubTable {
 						if err = c.addInput(pluginName, t); err != nil {
-							return fmt.Errorf("Error parsing %s, %s", path, err)
+							return err
 						}
 					}
 				default:
-					return fmt.Errorf("Unsupported config format: %s, file %s",
-						pluginName, path)
+					return fmt.Errorf("Unsupported config format: %s",
+						pluginName)
 				}
 			}
 		// Assume it's an input input for legacy config file support if no other
 		// identifiers are present
 		default:
 			if err = c.addInput(name, subTable); err != nil {
-				return fmt.Errorf("Error parsing %s, %s", path, err)
+				return err
 			}
 		}
 	}
 	return nil
 }

-// parseFile loads a TOML configuration from a provided path and
-// returns the AST produced from the TOML parser. When loading the file, it
-// will find environment variables and replace them.
-func parseFile(fpath string) (*ast.Table, error) {
-	contents, err := ioutil.ReadFile(fpath)
-	if err != nil {
-		return nil, err
-	}
-
-	env_vars := envVarRe.FindAll(contents, -1)
-	for _, env_var := range env_vars {
-		env_val := os.Getenv(strings.TrimPrefix(string(env_var), "$"))
-		if env_val != "" {
-			contents = bytes.Replace(contents, env_var, []byte(env_val), 1)
-		}
-	}
-
-	return toml.Parse(contents)
-}

 func (c *Config) addOutput(name string, table *ast.Table) error {
 	if len(c.OutputFilters) > 0 && !sliceContains(name, c.OutputFilters) {
 		return nil
@@ -805,21 +701,12 @@ func buildParser(name string, tbl *ast.Table) (parsers.Parser, error) {
 		}
 	}

-	if node, ok := tbl.Fields["data_type"]; ok {
-		if kv, ok := node.(*ast.KeyValue); ok {
-			if str, ok := kv.Value.(*ast.String); ok {
-				c.DataType = str.Value
-			}
-		}
-	}

 	c.MetricName = name

 	delete(tbl.Fields, "data_format")
 	delete(tbl.Fields, "separator")
 	delete(tbl.Fields, "templates")
 	delete(tbl.Fields, "tag_keys")
-	delete(tbl.Fields, "data_type")

 	return parsers.NewParser(c)
 }
@@ -1,7 +1,6 @@
 package config

 import (
-	"os"
 	"testing"
 	"time"

@@ -11,52 +10,9 @@ import (
 	"github.com/influxdata/telegraf/plugins/inputs/memcached"
 	"github.com/influxdata/telegraf/plugins/inputs/procstat"
 	"github.com/influxdata/telegraf/plugins/parsers"

 	"github.com/stretchr/testify/assert"
 )

-func TestConfig_LoadSingleInputWithEnvVars(t *testing.T) {
-	c := NewConfig()
-	err := os.Setenv("MY_TEST_SERVER", "192.168.1.1")
-	assert.NoError(t, err)
-	err = os.Setenv("TEST_INTERVAL", "10s")
-	assert.NoError(t, err)
-	c.LoadConfig("./testdata/single_plugin_env_vars.toml")
-
-	memcached := inputs.Inputs["memcached"]().(*memcached.Memcached)
-	memcached.Servers = []string{"192.168.1.1"}
-
-	mConfig := &internal_models.InputConfig{
-		Name: "memcached",
-		Filter: internal_models.Filter{
-			NameDrop:  []string{"metricname2"},
-			NamePass:  []string{"metricname1"},
-			FieldDrop: []string{"other", "stuff"},
-			FieldPass: []string{"some", "strings"},
-			TagDrop: []internal_models.TagFilter{
-				internal_models.TagFilter{
-					Name:   "badtag",
-					Filter: []string{"othertag"},
-				},
-			},
-			TagPass: []internal_models.TagFilter{
-				internal_models.TagFilter{
-					Name:   "goodtag",
-					Filter: []string{"mytag"},
-				},
-			},
-			IsActive: true,
-		},
-		Interval: 10 * time.Second,
-	}
-	mConfig.Tags = make(map[string]string)
-
-	assert.Equal(t, memcached, c.Inputs[0].Input,
-		"Testdata did not produce a correct memcached struct.")
-	assert.Equal(t, mConfig, c.Inputs[0].Config,
-		"Testdata did not produce correct memcached metadata.")
-}

 func TestConfig_LoadSingleInput(t *testing.T) {
 	c := NewConfig()
 	c.LoadConfig("./testdata/single_plugin.toml")
@@ -1,11 +0,0 @@
-[[inputs.memcached]]
-  servers = ["$MY_TEST_SERVER"]
-  namepass = ["metricname1"]
-  namedrop = ["metricname2"]
-  fieldpass = ["some", "strings"]
-  fielddrop = ["other", "stuff"]
-  interval = "$TEST_INTERVAL"
-  [inputs.memcached.tagpass]
-    goodtag = ["mytag"]
-  [inputs.memcached.tagdrop]
-    badtag = ["othertag"]
@@ -11,7 +11,6 @@ import (
 	"os"
 	"strings"
 	"time"
-	"unicode"
 )

 const alphanum string = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

@@ -87,15 +86,15 @@ func GetTLSConfig(
 	SSLCert, SSLKey, SSLCA string,
 	InsecureSkipVerify bool,
 ) (*tls.Config, error) {
-	if SSLCert == "" && SSLKey == "" && SSLCA == "" && !InsecureSkipVerify {
-		return nil, nil
-	}
-
-	t := &tls.Config{
-		InsecureSkipVerify: InsecureSkipVerify,
-	}
-
-	if SSLCA != "" {
+	t := &tls.Config{}
+	if SSLCert != "" && SSLKey != "" && SSLCA != "" {
+		cert, err := tls.LoadX509KeyPair(SSLCert, SSLKey)
+		if err != nil {
+			return nil, errors.New(fmt.Sprintf(
+				"Could not load TLS client key/certificate: %s",
+				err))
+		}
+
 		caCert, err := ioutil.ReadFile(SSLCA)
 		if err != nil {
 			return nil, errors.New(fmt.Sprintf("Could not load TLS CA: %s",

@@ -104,42 +103,23 @@ func GetTLSConfig(
 		caCertPool := x509.NewCertPool()
 		caCertPool.AppendCertsFromPEM(caCert)
-		t.RootCAs = caCertPool
-	}
-
-	if SSLCert != "" && SSLKey != "" {
-		cert, err := tls.LoadX509KeyPair(SSLCert, SSLKey)
-		if err != nil {
-			return nil, errors.New(fmt.Sprintf(
-				"Could not load TLS client key/certificate: %s",
-				err))
+
+		t = &tls.Config{
+			Certificates:       []tls.Certificate{cert},
+			RootCAs:            caCertPool,
+			InsecureSkipVerify: InsecureSkipVerify,
+		}
+	} else {
+		if InsecureSkipVerify {
+			t.InsecureSkipVerify = true
+		} else {
+			return nil, nil
 		}
-
-		t.Certificates = []tls.Certificate{cert}
-		t.BuildNameToCertificate()
 	}

 	// will be nil by default if nothing is provided
 	return t, nil
 }

-// SnakeCase converts the given string to snake case following the Golang format:
-// acronyms are converted to lower-case and preceded by an underscore.
-func SnakeCase(in string) string {
-	runes := []rune(in)
-	length := len(runes)
-
-	var out []rune
-	for i := 0; i < length; i++ {
-		if i > 0 && unicode.IsUpper(runes[i]) && ((i+1 < length && unicode.IsLower(runes[i+1])) || unicode.IsLower(runes[i-1])) {
-			out = append(out, '_')
-		}
-		out = append(out, unicode.ToLower(runes[i]))
-	}
-
-	return string(out)
-}

 // Glob will test a string pattern, potentially containing globs, against a
 // subject string. The result is a simple true/false, determining whether or
 // not the glob pattern matched the subject text.
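The branching of the incoming (right-hand) `GetTLSConfig` can be condensed into a small decision sketch. This is a simplification under assumed inputs, not the full function; `tlsConfigKind` and its string results are illustrative only (the real function returns `(*tls.Config, error)`):

```go
package main

import "fmt"

// tlsConfigKind summarizes the right-hand GetTLSConfig's control flow:
// a full client cert + CA builds a verified config; otherwise a config
// is only returned when InsecureSkipVerify is requested.
func tlsConfigKind(sslCert, sslKey, sslCA string, insecureSkipVerify bool) string {
	switch {
	case sslCert != "" && sslKey != "" && sslCA != "":
		// Full client cert + CA: build a verified TLS config.
		return "client-cert"
	case insecureSkipVerify:
		// No certs, but verification explicitly disabled.
		return "insecure"
	default:
		// Nothing configured: the caller gets a nil config.
		return "nil"
	}
}

func main() {
	fmt.Println(tlsConfigKind("cert.pem", "key.pem", "ca.pem", false)) // client-cert
	fmt.Println(tlsConfigKind("", "", "", true))                       // insecure
}
```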
@@ -42,32 +42,3 @@ func TestGlob(t *testing.T) {
 		testGlobNoMatch(t, pattern, "this_is_a_test")
 	}
 }

-type SnakeTest struct {
-	input  string
-	output string
-}
-
-var tests = []SnakeTest{
-	{"a", "a"},
-	{"snake", "snake"},
-	{"A", "a"},
-	{"ID", "id"},
-	{"MOTD", "motd"},
-	{"Snake", "snake"},
-	{"SnakeTest", "snake_test"},
-	{"APIResponse", "api_response"},
-	{"SnakeID", "snake_id"},
-	{"SnakeIDGoogle", "snake_id_google"},
-	{"LinuxMOTD", "linux_motd"},
-	{"OMGWTFBBQ", "omgwtfbbq"},
-	{"omg_wtf_bbq", "omg_wtf_bbq"},
-}
-
-func TestSnakeCase(t *testing.T) {
-	for _, test := range tests {
-		if SnakeCase(test.input) != test.output {
-			t.Errorf(`SnakeCase("%s"), wanted "%s", got \%s"`, test.input, test.output, SnakeCase(test.input))
-		}
-	}
-}
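For reference, the deleted `SnakeCase` helper can be exercised standalone. This copy mirrors the removed code (the lower-cased name is used here to avoid implying telegraf's exported API): an underscore is inserted before an upper-case rune that starts a new word, and everything is lower-cased, so acronyms collapse.

```go
package main

import (
	"fmt"
	"unicode"
)

// snakeCase reproduces the removed SnakeCase helper: acronyms are converted
// to lower-case and preceded by an underscore ("APIResponse" -> "api_response").
func snakeCase(in string) string {
	runes := []rune(in)
	length := len(runes)

	var out []rune
	for i := 0; i < length; i++ {
		// Insert '_' before an upper-case rune that begins a new word.
		if i > 0 && unicode.IsUpper(runes[i]) &&
			((i+1 < length && unicode.IsLower(runes[i+1])) || unicode.IsLower(runes[i-1])) {
			out = append(out, '_')
		}
		out = append(out, unicode.ToLower(runes[i]))
	}
	return string(out)
}

func main() {
	fmt.Println(snakeCase("SnakeIDGoogle")) // snake_id_google
}
```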
@@ -10,7 +10,7 @@ import (

 const (
 	// Default number of metrics kept between flushes.
-	DEFAULT_METRIC_BUFFER_LIMIT = 1000
+	DEFAULT_METRIC_BUFFER_LIMIT = 10000

 	// Limit how many full metric buffers are kept due to failed writes.
 	FULL_METRIC_BUFFERS_LIMIT = 100

@@ -82,11 +82,9 @@ func (ro *RunningOutput) AddMetric(metric telegraf.Metric) {
 			}
 		}
 	} else {
-		if ro.overwriteI == 0 {
-			log.Printf("WARNING: overwriting cached metrics, you may want to " +
-				"increase the metric_buffer_limit setting in your [agent] " +
-				"config if you do not wish to overwrite metrics.\n")
-		}
+		log.Printf("WARNING: overwriting cached metrics, you may want to " +
+			"increase the metric_buffer_limit setting in your [agent] " +
+			"config if you do not wish to overwrite metrics.\n")
 		if ro.overwriteI == len(ro.metrics) {
 			ro.overwriteI = 0
 		}

@@ -121,9 +119,6 @@ func (ro *RunningOutput) Write() error {
 }

 func (ro *RunningOutput) write(metrics []telegraf.Metric) error {
-	if len(metrics) == 0 {
-		return nil
-	}
 	start := time.Now()
 	err := ro.Output.Write(metrics)
 	elapsed := time.Since(start)
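The overwrite behaviour of `RunningOutput.AddMetric` shown above amounts to a ring buffer: once `metric_buffer_limit` is reached, new metrics overwrite cached ones in place and the overwrite index wraps to zero. A minimal sketch (names are illustrative, not telegraf's API):

```go
package main

import "fmt"

// ringBuffer sketches the metric_buffer_limit overwrite behaviour: below the
// limit, metrics are appended; at the limit, the oldest entries are
// overwritten in place and overwriteI wraps back to zero.
type ringBuffer struct {
	items      []int
	limit      int
	overwriteI int
}

func (b *ringBuffer) add(m int) {
	if len(b.items) < b.limit {
		b.items = append(b.items, m)
		return
	}
	// Buffer full: overwrite the oldest cached entry.
	b.items[b.overwriteI] = m
	b.overwriteI++
	if b.overwriteI == len(b.items) {
		b.overwriteI = 0
	}
}

func main() {
	b := &ringBuffer{limit: 3}
	for m := 1; m <= 5; m++ {
		b.add(m)
	}
	fmt.Println(b.items) // [4 5 3]
}
```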
@@ -30,6 +30,8 @@ The example plugin gathers metrics about example things

 ### Example Output:

+Give an example `-test` output here
+
 ```
 $ ./telegraf -config telegraf.conf -input-filter example -test
 measurement1,tag1=foo,tag2=bar field1=1i,field2=2.1 1453831884664956455
@@ -4,7 +4,6 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/aerospike"
 	_ "github.com/influxdata/telegraf/plugins/inputs/apache"
 	_ "github.com/influxdata/telegraf/plugins/inputs/bcache"
-	_ "github.com/influxdata/telegraf/plugins/inputs/couchbase"
 	_ "github.com/influxdata/telegraf/plugins/inputs/couchdb"
 	_ "github.com/influxdata/telegraf/plugins/inputs/disque"
 	_ "github.com/influxdata/telegraf/plugins/inputs/dns_query"

@@ -16,7 +15,6 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/haproxy"
 	_ "github.com/influxdata/telegraf/plugins/inputs/httpjson"
 	_ "github.com/influxdata/telegraf/plugins/inputs/influxdb"
-	_ "github.com/influxdata/telegraf/plugins/inputs/ipmi_sensor"
 	_ "github.com/influxdata/telegraf/plugins/inputs/jolokia"
 	_ "github.com/influxdata/telegraf/plugins/inputs/kafka_consumer"
 	_ "github.com/influxdata/telegraf/plugins/inputs/leofs"

@@ -31,12 +29,10 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/net_response"
 	_ "github.com/influxdata/telegraf/plugins/inputs/nginx"
 	_ "github.com/influxdata/telegraf/plugins/inputs/nsq"
-	_ "github.com/influxdata/telegraf/plugins/inputs/ntpq"
 	_ "github.com/influxdata/telegraf/plugins/inputs/passenger"
 	_ "github.com/influxdata/telegraf/plugins/inputs/phpfpm"
 	_ "github.com/influxdata/telegraf/plugins/inputs/ping"
 	_ "github.com/influxdata/telegraf/plugins/inputs/postgresql"
-	_ "github.com/influxdata/telegraf/plugins/inputs/postgresql_extensible"
 	_ "github.com/influxdata/telegraf/plugins/inputs/powerdns"
 	_ "github.com/influxdata/telegraf/plugins/inputs/procstat"
 	_ "github.com/influxdata/telegraf/plugins/inputs/prometheus"

@@ -51,10 +47,8 @@ import (
 	_ "github.com/influxdata/telegraf/plugins/inputs/sqlserver"
 	_ "github.com/influxdata/telegraf/plugins/inputs/statsd"
 	_ "github.com/influxdata/telegraf/plugins/inputs/system"
-	_ "github.com/influxdata/telegraf/plugins/inputs/tcp_listener"
 	_ "github.com/influxdata/telegraf/plugins/inputs/trig"
 	_ "github.com/influxdata/telegraf/plugins/inputs/twemproxy"
-	_ "github.com/influxdata/telegraf/plugins/inputs/udp_listener"
 	_ "github.com/influxdata/telegraf/plugins/inputs/win_perf_counters"
 	_ "github.com/influxdata/telegraf/plugins/inputs/zfs"
 	_ "github.com/influxdata/telegraf/plugins/inputs/zookeeper"
@@ -58,10 +58,7 @@ var tr = &http.Transport{
 	ResponseHeaderTimeout: time.Duration(3 * time.Second),
 }

-var client = &http.Client{
-	Transport: tr,
-	Timeout:   time.Duration(4 * time.Second),
-}
+var client = &http.Client{Transport: tr}

 func (n *Apache) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
 	resp, err := client.Get(addr.String())
@@ -1,63 +0,0 @@
-# Telegraf Plugin: Couchbase
-
-## Configuration:
-
-```
-# Read per-node and per-bucket metrics from Couchbase
-[[inputs.couchbase]]
-  ## specify servers via a url matching:
-  ##  [protocol://][:password]@address[:port]
-  ##  e.g.
-  ##    http://couchbase-0.example.com/
-  ##    http://admin:secret@couchbase-0.example.com:8091/
-  ##
-  ## If no servers are specified, then localhost is used as the host.
-  ## If no protocol is specifed, HTTP is used.
-  ## If no port is specified, 8091 is used.
-  servers = ["http://localhost:8091"]
-```
-
-## Measurements:
-
-### couchbase_node
-
-Tags:
-- cluster: whatever you called it in `servers` in the configuration, e.g.: `http://couchbase-0.example.com/`
-- hostname: Couchbase's name for the node and port, e.g., `172.16.10.187:8091`
-
-Fields:
-- memory_free (unit: bytes, example: 23181365248.0)
-- memory_total (unit: bytes, example: 64424656896.0)
-
-### couchbase_bucket
-
-Tags:
-- cluster: whatever you called it in `servers` in the configuration, e.g.: `http://couchbase-0.example.com/`)
-- bucket: the name of the couchbase bucket, e.g., `blastro-df`
-
-Fields:
-- quota_percent_used (unit: percent, example: 68.85424936294555)
-- ops_per_sec (unit: count, example: 5686.789686789687)
-- disk_fetches (unit: count, example: 0.0)
-- item_count (unit: count, example: 943239752.0)
-- disk_used (unit: bytes, example: 409178772321.0)
-- data_used (unit: bytes, example: 212179309111.0)
-- mem_used (unit: bytes, example: 202156957464.0)
-
-## Example output
-
-```
-$ telegraf -config telegraf.conf -input-filter couchbase -test
-* Plugin: couchbase, Collection 1
-> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.10.187:8091 memory_free=22927384576,memory_total=64424656896 1458381183695864929
-> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.10.65:8091 memory_free=23520161792,memory_total=64424656896 1458381183695972112
-> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.13.105:8091 memory_free=23531704320,memory_total=64424656896 1458381183695995259
-> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.13.173:8091 memory_free=23628767232,memory_total=64424656896 1458381183696010870
-> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.15.120:8091 memory_free=23616692224,memory_total=64424656896 1458381183696027406
-> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.8.127:8091 memory_free=23431770112,memory_total=64424656896 1458381183696041040
-> couchbase_node,cluster=https://couchbase-0.example.com/,hostname=172.16.8.148:8091 memory_free=23811371008,memory_total=64424656896 1458381183696059060
-> couchbase_bucket,bucket=default,cluster=https://couchbase-0.example.com/ data_used=25743360,disk_fetches=0,disk_used=31744886,item_count=0,mem_used=77729224,ops_per_sec=0,quota_percent_used=10.58976636614118 1458381183696210074
-> couchbase_bucket,bucket=demoncat,cluster=https://couchbase-0.example.com/ data_used=38157584951,disk_fetches=0,disk_used=62730302441,item_count=14662532,mem_used=24015304256,ops_per_sec=1207.753207753208,quota_percent_used=79.87855353525707 1458381183696242695
-> couchbase_bucket,bucket=blastro-df,cluster=https://couchbase-0.example.com/ data_used=212552491622,disk_fetches=0,disk_used=413323157621,item_count=944655680,mem_used=202421103760,ops_per_sec=1692.176692176692,quota_percent_used=68.9442170551845 1458381183696272206
-```
@@ -1,104 +0,0 @@
-package couchbase
-
-import (
-	couchbase "github.com/couchbase/go-couchbase"
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
-	"sync"
-)
-
-type Couchbase struct {
-	Servers []string
-}
-
-var sampleConfig = `
-  ## specify servers via a url matching:
-  ##  [protocol://][:password]@address[:port]
-  ##  e.g.
-  ##    http://couchbase-0.example.com/
-  ##    http://admin:secret@couchbase-0.example.com:8091/
-  ##
-  ## If no servers are specified, then localhost is used as the host.
-  ## If no protocol is specifed, HTTP is used.
-  ## If no port is specified, 8091 is used.
-  servers = ["http://localhost:8091"]
-`
-
-func (r *Couchbase) SampleConfig() string {
-	return sampleConfig
-}
-
-func (r *Couchbase) Description() string {
-	return "Read metrics from one or many couchbase clusters"
-}
-
-// Reads stats from all configured clusters. Accumulates stats.
-// Returns one of the errors encountered while gathering stats (if any).
-func (r *Couchbase) Gather(acc telegraf.Accumulator) error {
-	if len(r.Servers) == 0 {
-		r.gatherServer("http://localhost:8091/", acc, nil)
-		return nil
-	}
-
-	var wg sync.WaitGroup
-
-	var outerr error
-
-	for _, serv := range r.Servers {
-		wg.Add(1)
-		go func(serv string) {
-			defer wg.Done()
-			outerr = r.gatherServer(serv, acc, nil)
-		}(serv)
-	}
-
-	wg.Wait()
-
-	return outerr
-}
-
-func (r *Couchbase) gatherServer(addr string, acc telegraf.Accumulator, pool *couchbase.Pool) error {
-	if pool == nil {
-		client, err := couchbase.Connect(addr)
-		if err != nil {
-			return err
-		}
-
-		// `default` is the only possible pool name. It's a
-		// placeholder for a possible future Couchbase feature. See
-		// http://stackoverflow.com/a/16990911/17498.
-		p, err := client.GetPool("default")
-		if err != nil {
-			return err
-		}
-		pool = &p
-	}
-	for i := 0; i < len(pool.Nodes); i++ {
-		node := pool.Nodes[i]
-		tags := map[string]string{"cluster": addr, "hostname": node.Hostname}
-		fields := make(map[string]interface{})
-		fields["memory_free"] = node.MemoryFree
-		fields["memory_total"] = node.MemoryTotal
-		acc.AddFields("couchbase_node", fields, tags)
-	}
-	for bucketName, _ := range pool.BucketMap {
-		tags := map[string]string{"cluster": addr, "bucket": bucketName}
-		bs := pool.BucketMap[bucketName].BasicStats
-		fields := make(map[string]interface{})
-		fields["quota_percent_used"] = bs["quotaPercentUsed"]
-		fields["ops_per_sec"] = bs["opsPerSec"]
-		fields["disk_fetches"] = bs["diskFetches"]
-		fields["item_count"] = bs["itemCount"]
-		fields["disk_used"] = bs["diskUsed"]
-		fields["data_used"] = bs["dataUsed"]
-		fields["mem_used"] = bs["memUsed"]
-		acc.AddFields("couchbase_bucket", fields, tags)
-	}
-	return nil
-}
-
-func init() {
-	inputs.Add("couchbase", func() telegraf.Input {
-		return &Couchbase{}
-	})
-}
File diff suppressed because one or more lines are too long
@@ -10,7 +10,6 @@ import (
 	"reflect"
 	"strings"
 	"sync"
-	"time"
 )

 // Schema:

@@ -113,18 +112,9 @@ func (c *CouchDB) Gather(accumulator telegraf.Accumulator) error {
 }

-var tr = &http.Transport{
-	ResponseHeaderTimeout: time.Duration(3 * time.Second),
-}
-
-var client = &http.Client{
-	Transport: tr,
-	Timeout:   time.Duration(4 * time.Second),
-}

 func (c *CouchDB) fetchAndInsertData(accumulator telegraf.Accumulator, host string) error {
-	response, error := client.Get(host)
+	response, error := http.Get(host)
 	if error != nil {
 		return error
 	}
@@ -9,7 +9,6 @@ import (
 	"strconv"
 	"strings"
 	"sync"
-	"time"

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"

@@ -24,14 +23,13 @@ type Disque struct {

 var sampleConfig = `
 	## An array of URI to gather stats about. Specify an ip or hostname
-	## with optional port and password.
-	## ie disque://localhost, disque://10.10.3.33:18832, 10.0.0.1:10000, etc.
+	## with optional port and password. ie disque://localhost, disque://10.10.3.33:18832,
+	## 10.0.0.1:10000, etc.

 	## If no servers are specified, then localhost is used as the host.
 	servers = ["localhost"]
 `

-var defaultTimeout = 5 * time.Second

 func (r *Disque) SampleConfig() string {
 	return sampleConfig
 }

@@ -109,7 +107,7 @@ func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
 		addr.Host = addr.Host + ":" + defaultPort
 	}

-	c, err := net.DialTimeout("tcp", addr.Host, defaultTimeout)
+	c, err := net.Dial("tcp", addr.Host)
 	if err != nil {
 		return fmt.Errorf("Unable to connect to disque server '%s': %s", addr.Host, err)
 	}

@@ -134,9 +132,6 @@ func (g *Disque) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
 		g.c = c
 	}

-	// Extend connection
-	g.c.SetDeadline(time.Now().Add(defaultTimeout))
-
 	g.c.Write([]byte("info\r\n"))

 	r := bufio.NewReader(g.c)

**dns_query.go** (dns_query input plugin)

````diff
@@ -35,8 +35,7 @@ var sampleConfig = `
   ## Domains or subdomains to query. "."(root) is default
   domains = ["."] # optional

-  ## Query record type. Default is "A"
-  ## Posible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV.
+  ## Query record type. Posible values: A, AAAA, CNAME, MX, NS, PTR, TXT, SOA, SPF, SRV. Default is "NS"
   record_type = "A" # optional

   ## Dns server port. 53 is default
````

**dns_query_test.go**

````diff
@@ -1,18 +1,15 @@
 package dns_query

 import (
-    "testing"
-    "time"
-
     "github.com/influxdata/telegraf/testutil"

     "github.com/miekg/dns"
     "github.com/stretchr/testify/assert"
-    "github.com/stretchr/testify/require"
+    "testing"
+    "time"
 )

 var servers = []string{"8.8.8.8"}
-var domains = []string{"google.com"}
+var domains = []string{"mjasion.pl"}

 func TestGathering(t *testing.T) {
     var dnsConfig = DnsQuery{
@@ -21,10 +18,8 @@ func TestGathering(t *testing.T) {
     }
     var acc testutil.Accumulator

-    err := dnsConfig.Gather(&acc)
-    assert.NoError(t, err)
-    metric, ok := acc.Get("dns_query")
-    require.True(t, ok)
+    dnsConfig.Gather(&acc)
+    metric, _ := acc.Get("dns_query")
     queryTime, _ := metric.Fields["query_time_ms"].(float64)

     assert.NotEqual(t, 0, queryTime)
@@ -38,10 +33,8 @@ func TestGatheringMxRecord(t *testing.T) {
     var acc testutil.Accumulator
     dnsConfig.RecordType = "MX"

-    err := dnsConfig.Gather(&acc)
-    assert.NoError(t, err)
-    metric, ok := acc.Get("dns_query")
-    require.True(t, ok)
+    dnsConfig.Gather(&acc)
+    metric, _ := acc.Get("dns_query")
     queryTime, _ := metric.Fields["query_time_ms"].(float64)

     assert.NotEqual(t, 0, queryTime)
@@ -61,10 +54,8 @@ func TestGatheringRootDomain(t *testing.T) {
     }
     fields := map[string]interface{}{}

-    err := dnsConfig.Gather(&acc)
-    assert.NoError(t, err)
-    metric, ok := acc.Get("dns_query")
-    require.True(t, ok)
+    dnsConfig.Gather(&acc)
+    metric, _ := acc.Get("dns_query")
     queryTime, _ := metric.Fields["query_time_ms"].(float64)

     fields["query_time_ms"] = queryTime
@@ -79,15 +70,13 @@ func TestMetricContainsServerAndDomainAndRecordTypeTags(t *testing.T) {
     var acc testutil.Accumulator
     tags := map[string]string{
         "server":      "8.8.8.8",
-        "domain":      "google.com",
+        "domain":      "mjasion.pl",
         "record_type": "NS",
     }
     fields := map[string]interface{}{}

-    err := dnsConfig.Gather(&acc)
-    assert.NoError(t, err)
-    metric, ok := acc.Get("dns_query")
-    require.True(t, ok)
+    dnsConfig.Gather(&acc)
+    metric, _ := acc.Get("dns_query")
     queryTime, _ := metric.Fields["query_time_ms"].(float64)

     fields["query_time_ms"] = queryTime
````

**docker input README**

````diff
@@ -74,7 +74,6 @@ on the availability of per-cpu stats on your system.
     - usage_in_usermode
     - usage_system
     - usage_total
-    - usage_percent
 - docker_net
     - rx_dropped
     - rx_bytes
@@ -95,50 +94,18 @@ on the availability of per-cpu stats on your system.
     - io_serviced_recursive_sync
     - io_serviced_recursive_total
     - io_serviced_recursive_write
-- docker_
-    - n_used_file_descriptors
-    - n_cpus
-    - n_containers
-    - n_images
-    - n_goroutines
-    - n_listener_events
-    - memory_total
-    - pool_blocksize
-- docker_data
-    - available
-    - total
-    - used
-- docker_metadata
-    - available
-    - total
-    - used
-

 ### Tags:

-- docker (memory_total)
-    - unit=bytes
-- docker (pool_blocksize)
-    - unit=bytes
-- docker_data
-    - unit=bytes
-- docker_metadata
-    - unit=bytes
-
-- docker_cpu specific:
+- All stats have the following tags:
     - cont_id (container ID)
     - cont_image (container image)
     - cont_name (container name)
+- docker_cpu specific:
     - cpu
 - docker_net specific:
-    - cont_id (container ID)
-    - cont_image (container image)
-    - cont_name (container name)
     - network
 - docker_blkio specific:
-    - cont_id (container ID)
-    - cont_image (container image)
-    - cont_name (container name)
     - device

 ### Example Output:
@@ -146,16 +113,6 @@ on the availability of per-cpu stats on your system.
 ```
 % ./telegraf -config ~/ws/telegraf.conf -input-filter docker -test
 * Plugin: docker, Collection 1
-> docker n_cpus=8i 1456926671065383978
-> docker n_used_file_descriptors=15i 1456926671065383978
-> docker n_containers=7i 1456926671065383978
-> docker n_images=152i 1456926671065383978
-> docker n_goroutines=36i 1456926671065383978
-> docker n_listener_events=0i 1456926671065383978
-> docker,unit=bytes memory_total=18935443456i 1456926671065383978
-> docker,unit=bytes pool_blocksize=65540i 1456926671065383978
-> docker_data,unit=bytes available=24340000000i,total=107400000000i,used=14820000000i 1456926671065383978
-> docker_metadata,unit=bytes available=2126999999i,total=2146999999i,used=20420000i 145692667106538
 > docker_mem,cont_id=5705ba8ed8fb47527410653d60a8bb2f3af5e62372297c419022a3cc6d45d848,\
 cont_image=spotify/kafka,cont_name=kafka \
 active_anon=52568064i,active_file=6926336i,cache=12038144i,fail_count=0i,\
````

**docker.go** (package system)

````diff
@@ -1,11 +1,8 @@
 package system

 import (
-    "encoding/json"
     "fmt"
     "log"
-    "regexp"
-    "strconv"
     "strings"
     "sync"
     "time"
@@ -20,29 +17,9 @@ type Docker struct {
     Endpoint string
     ContainerNames []string

-    client DockerClient
+    client *docker.Client
 }

-type DockerClient interface {
-    // Docker Client wrapper
-    // Useful for test
-    Info() (*docker.Env, error)
-    ListContainers(opts docker.ListContainersOptions) ([]docker.APIContainers, error)
-    Stats(opts docker.StatsOptions) error
-}
-
-const (
-    KB = 1000
-    MB = 1000 * KB
-    GB = 1000 * MB
-    TB = 1000 * GB
-    PB = 1000 * TB
-)
-
-var (
-    sizeRegex = regexp.MustCompile(`^(\d+(\.\d+)*) ?([kKmMgGtTpP])?[bB]?$`)
-)
-
 var sampleConfig = `
 ## Docker Endpoint
 ##   To use TCP, set endpoint = "tcp://[ip]:[port]"
@@ -81,20 +58,12 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
         d.client = c
     }

-    // Get daemon info
-    err := d.gatherInfo(acc)
-    if err != nil {
-        fmt.Println(err.Error())
-    }
-
-    // List containers
     opts := docker.ListContainersOptions{}
     containers, err := d.client.ListContainers(opts)
     if err != nil {
         return err
     }

-    // Get container data
     var wg sync.WaitGroup
     wg.Add(len(containers))
     for _, container := range containers {
@@ -112,76 +81,6 @@ func (d *Docker) Gather(acc telegraf.Accumulator) error {
     return nil
 }

-func (d *Docker) gatherInfo(acc telegraf.Accumulator) error {
-    // Init vars
-    var driverStatus [][]string
-    dataFields := make(map[string]interface{})
-    metadataFields := make(map[string]interface{})
-    now := time.Now()
-    // Get info from docker daemon
-    info, err := d.client.Info()
-    if err != nil {
-        return err
-    }
-
-    fields := map[string]interface{}{
-        "n_cpus":                  info.GetInt64("NCPU"),
-        "n_used_file_descriptors": info.GetInt64("NFd"),
-        "n_containers":            info.GetInt64("Containers"),
-        "n_images":                info.GetInt64("Images"),
-        "n_goroutines":            info.GetInt64("NGoroutines"),
-        "n_listener_events":       info.GetInt64("NEventsListener"),
-    }
-    // Add metrics
-    acc.AddFields("docker",
-        fields,
-        nil,
-        now)
-    acc.AddFields("docker",
-        map[string]interface{}{"memory_total": info.GetInt64("MemTotal")},
-        map[string]string{"unit": "bytes"},
-        now)
-    // Get storage metrics
-    driverStatusRaw := []byte(info.Get("DriverStatus"))
-    json.Unmarshal(driverStatusRaw, &driverStatus)
-    for _, rawData := range driverStatus {
-        // Try to convert string to int (bytes)
-        value, err := parseSize(rawData[1])
-        if err != nil {
-            continue
-        }
-        name := strings.ToLower(strings.Replace(rawData[0], " ", "_", -1))
-        if name == "pool_blocksize" {
-            // pool blocksize
-            acc.AddFields("docker",
-                map[string]interface{}{"pool_blocksize": value},
-                map[string]string{"unit": "bytes"},
-                now)
-        } else if strings.HasPrefix(name, "data_space_") {
-            // data space
-            field_name := strings.TrimPrefix(name, "data_space_")
-            dataFields[field_name] = value
-        } else if strings.HasPrefix(name, "metadata_space_") {
-            // metadata space
-            field_name := strings.TrimPrefix(name, "metadata_space_")
-            metadataFields[field_name] = value
-        }
-    }
-    if len(dataFields) > 0 {
-        acc.AddFields("docker_data",
-            dataFields,
-            map[string]string{"unit": "bytes"},
-            now)
-    }
-    if len(metadataFields) > 0 {
-        acc.AddFields("docker_metadata",
-            metadataFields,
-            map[string]string{"unit": "bytes"},
-            now)
-    }
-    return nil
-}
-
 func (d *Docker) gatherContainer(
     container docker.APIContainers,
     acc telegraf.Accumulator,
@@ -435,27 +334,6 @@ func sliceContains(in string, sl []string) bool {
     return false
 }

-// Parses the human-readable size string into the amount it represents.
-func parseSize(sizeStr string) (int64, error) {
-    matches := sizeRegex.FindStringSubmatch(sizeStr)
-    if len(matches) != 4 {
-        return -1, fmt.Errorf("invalid size: '%s'", sizeStr)
-    }
-
-    size, err := strconv.ParseFloat(matches[1], 64)
-    if err != nil {
-        return -1, err
-    }
-
-    uMap := map[string]int64{"k": KB, "m": MB, "g": GB, "t": TB, "p": PB}
-    unitPrefix := strings.ToLower(matches[3])
-    if mul, ok := uMap[unitPrefix]; ok {
-        size *= float64(mul)
-    }
-
-    return int64(size), nil
-}
-
 func init() {
     inputs.Add("docker", func() telegraf.Input {
         return &Docker{}
````
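The removed `parseSize` helper converts the human-readable sizes Docker reports in `DriverStatus` (e.g. `"65.54 kB"`, `"107.4 GB"`) into bytes. A runnable extract of that helper, trimmed to the k/m/g prefixes for brevity:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// Decimal (SI) multipliers, as in the removed const block.
const (
	KB = 1000
	MB = 1000 * KB
	GB = 1000 * MB
)

// Matches "65.54 kB", "107.4 GB", "19", etc.
var sizeRegex = regexp.MustCompile(`^(\d+(\.\d+)*) ?([kKmMgGtTpP])?[bB]?$`)

// parseSize parses the human-readable size string into the number of bytes
// it represents.
func parseSize(sizeStr string) (int64, error) {
	matches := sizeRegex.FindStringSubmatch(sizeStr)
	if len(matches) != 4 {
		return -1, fmt.Errorf("invalid size: '%s'", sizeStr)
	}
	size, err := strconv.ParseFloat(matches[1], 64)
	if err != nil {
		return -1, err
	}
	uMap := map[string]int64{"k": KB, "m": MB, "g": GB}
	if mul, ok := uMap[strings.ToLower(matches[3])]; ok {
		size *= float64(mul)
	}
	return int64(size), nil
}

func main() {
	v, _ := parseSize("65.54 kB")
	fmt.Println(v) // 65540
}
```

This is why the removed `TestDockerGatherInfo` below could assert `docker_data` totals like `107400000000` bytes from the `"107.4 GB"` strings in the fake daemon's `DriverStatus`.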

**docker_test.go**

````diff
@@ -1,14 +1,12 @@
 package system

 import (
-    "encoding/json"
     "testing"
     "time"

     "github.com/influxdata/telegraf/testutil"

     "github.com/fsouza/go-dockerclient"
-    "github.com/stretchr/testify/require"
 )

 func TestDockerGatherContainerStats(t *testing.T) {
@@ -196,186 +194,3 @@ func testStats() *docker.Stats {

     return stats
 }
-
-type FakeDockerClient struct {
-}
-
-func (d FakeDockerClient) Info() (*docker.Env, error) {
-    env := docker.Env{"Containers=108", "OomKillDisable=false", "SystemTime=2016-02-24T00:55:09.15073105-05:00", "NEventsListener=0", "ID=5WQQ:TFWR:FDNG:OKQ3:37Y4:FJWG:QIKK:623T:R3ME:QTKB:A7F7:OLHD", "Debug=false", "LoggingDriver=json-file", "KernelVersion=4.3.0-1-amd64", "IndexServerAddress=https://index.docker.io/v1/", "MemTotal=3840757760", "Images=199", "CpuCfsQuota=true", "Name=absol", "SwapLimit=false", "IPv4Forwarding=true", "ExecutionDriver=native-0.2", "InitSha1=23a51f3c916d2b5a3bbb31caf301fd2d14edd518", "ExperimentalBuild=false", "CpuCfsPeriod=true", "RegistryConfig={\"IndexConfigs\":{\"docker.io\":{\"Mirrors\":null,\"Name\":\"docker.io\",\"Official\":true,\"Secure\":true}},\"InsecureRegistryCIDRs\":[\"127.0.0.0/8\"],\"Mirrors\":null}", "OperatingSystem=Linux Mint LMDE (containerized)", "BridgeNfIptables=true", "HttpsProxy=", "Labels=null", "MemoryLimit=false", "DriverStatus=[[\"Pool Name\",\"docker-8:1-1182287-pool\"],[\"Pool Blocksize\",\"65.54 kB\"],[\"Backing Filesystem\",\"extfs\"],[\"Data file\",\"/dev/loop0\"],[\"Metadata file\",\"/dev/loop1\"],[\"Data Space Used\",\"17.3 GB\"],[\"Data Space Total\",\"107.4 GB\"],[\"Data Space Available\",\"36.53 GB\"],[\"Metadata Space Used\",\"20.97 MB\"],[\"Metadata Space Total\",\"2.147 GB\"],[\"Metadata Space Available\",\"2.127 GB\"],[\"Udev Sync Supported\",\"true\"],[\"Deferred Removal Enabled\",\"false\"],[\"Data loop file\",\"/var/lib/docker/devicemapper/devicemapper/data\"],[\"Metadata loop file\",\"/var/lib/docker/devicemapper/devicemapper/metadata\"],[\"Library Version\",\"1.02.115 (2016-01-25)\"]]", "NFd=19", "HttpProxy=", "Driver=devicemapper", "NGoroutines=39", "InitPath=/usr/lib/docker.io/dockerinit", "NCPU=4", "DockerRootDir=/var/lib/docker", "NoProxy=", "BridgeNfIp6tables=true"}
-    return &env, nil
-}
-
-func (d FakeDockerClient) ListContainers(opts docker.ListContainersOptions) ([]docker.APIContainers, error) {
-    container1 := docker.APIContainers{
-        ID:      "e2173b9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296b7dfb",
-        Image:   "quay.io/coreos/etcd:v2.2.2",
-        Command: "/etcd -name etcd0 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
-        Created: 1455941930,
-        Status:  "Up 4 hours",
-        Ports: []docker.APIPort{
-            docker.APIPort{
-                PrivatePort: 7001,
-                PublicPort:  0,
-                Type:        "tcp",
-            },
-            docker.APIPort{
-                PrivatePort: 4001,
-                PublicPort:  0,
-                Type:        "tcp",
-            },
-            docker.APIPort{
-                PrivatePort: 2380,
-                PublicPort:  0,
-                Type:        "tcp",
-            },
-            docker.APIPort{
-                PrivatePort: 2379,
-                PublicPort:  2379,
-                Type:        "tcp",
-                IP:          "0.0.0.0",
-            },
-        },
-        SizeRw:     0,
-        SizeRootFs: 0,
-        Names:      []string{"/etcd"},
-    }
-    container2 := docker.APIContainers{
-        ID:      "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
-        Image:   "quay.io/coreos/etcd:v2.2.2",
-        Command: "/etcd -name etcd2 -advertise-client-urls http://localhost:2379 -listen-client-urls http://0.0.0.0:2379",
-        Created: 1455941933,
-        Status:  "Up 4 hours",
-        Ports: []docker.APIPort{
-            docker.APIPort{
-                PrivatePort: 7002,
-                PublicPort:  0,
-                Type:        "tcp",
-            },
-            docker.APIPort{
-                PrivatePort: 4002,
-                PublicPort:  0,
-                Type:        "tcp",
-            },
-            docker.APIPort{
-                PrivatePort: 2381,
-                PublicPort:  0,
-                Type:        "tcp",
-            },
-            docker.APIPort{
-                PrivatePort: 2382,
-                PublicPort:  2382,
-                Type:        "tcp",
-                IP:          "0.0.0.0",
-            },
-        },
-        SizeRw:     0,
-        SizeRootFs: 0,
-        Names:      []string{"/etcd2"},
-    }
-
-    containers := []docker.APIContainers{container1, container2}
-    return containers, nil
-
-    //#{e6a96c84ca91a5258b7cb752579fb68826b68b49ff957487695cd4d13c343b44 titilambert/snmpsim /bin/sh -c 'snmpsimd --agent-udpv4-endpoint=0.0.0.0:31161 --process-user=root --process-group=user' 1455724831 Up 4 hours [{31161 31161 udp 0.0.0.0}] 0 0 [/snmp] map[]}]2016/02/24 01:05:01 Gathered metrics, (3s interval), from 1 inputs in 1.233836656s
-}
-
-func (d FakeDockerClient) Stats(opts docker.StatsOptions) error {
-    jsonStat := `{"read":"2016-02-24T11:42:27.472459608-05:00","memory_stats":{"stats":{},"limit":18935443456},"blkio_stats":{"io_service_bytes_recursive":[{"major":252,"minor":1,"op":"Read","value":753664},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":753664},{"major":252,"minor":1,"op":"Total","value":753664}],"io_serviced_recursive":[{"major":252,"minor":1,"op":"Read","value":26},{"major":252,"minor":1,"op":"Write"},{"major":252,"minor":1,"op":"Sync"},{"major":252,"minor":1,"op":"Async","value":26},{"major":252,"minor":1,"op":"Total","value":26}]},"cpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052607520000000,"throttling_data":{}},"precpu_stats":{"cpu_usage":{"percpu_usage":[17871,4959158,1646137,1231652,11829401,244656,369972,0],"usage_in_usermode":10000000,"total_usage":20298847},"system_cpu_usage":24052599550000000,"throttling_data":{}}}`
-    var stat docker.Stats
-    json.Unmarshal([]byte(jsonStat), &stat)
-    opts.Stats <- &stat
-    return nil
-}
-
-func TestDockerGatherInfo(t *testing.T) {
-    var acc testutil.Accumulator
-    client := FakeDockerClient{}
-    d := Docker{client: client}
-
-    err := d.Gather(&acc)
-
-    require.NoError(t, err)
-
-    acc.AssertContainsTaggedFields(t,
-        "docker",
-        map[string]interface{}{
-            "n_listener_events":       int64(0),
-            "n_cpus":                  int64(4),
-            "n_used_file_descriptors": int64(19),
-            "n_containers":            int64(108),
-            "n_images":                int64(199),
-            "n_goroutines":            int64(39),
-        },
-        map[string]string{},
-    )
-
-    acc.AssertContainsTaggedFields(t,
-        "docker_data",
-        map[string]interface{}{
-            "used":      int64(17300000000),
-            "total":     int64(107400000000),
-            "available": int64(36530000000),
-        },
-        map[string]string{
-            "unit": "bytes",
-        },
-    )
-    acc.AssertContainsTaggedFields(t,
-        "docker_cpu",
-        map[string]interface{}{
-            "usage_total": uint64(1231652),
-        },
-        map[string]string{
-            "cont_id":    "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
-            "cont_name":  "etcd2",
-            "cont_image": "quay.io/coreos/etcd:v2.2.2",
-            "cpu":        "cpu3",
-        },
-    )
-    acc.AssertContainsTaggedFields(t,
-        "docker_mem",
-        map[string]interface{}{
-            "total_pgpgout":             uint64(0),
-            "usage_percent":             float64(0),
-            "rss":                       uint64(0),
-            "total_writeback":           uint64(0),
-            "active_anon":               uint64(0),
-            "total_pgmafault":           uint64(0),
-            "total_rss":                 uint64(0),
-            "total_unevictable":         uint64(0),
-            "active_file":               uint64(0),
-            "total_mapped_file":         uint64(0),
-            "pgpgin":                    uint64(0),
-            "total_active_file":         uint64(0),
-            "total_active_anon":         uint64(0),
-            "total_cache":               uint64(0),
-            "inactive_anon":             uint64(0),
-            "pgmajfault":                uint64(0),
-            "total_inactive_anon":       uint64(0),
-            "total_rss_huge":            uint64(0),
-            "rss_huge":                  uint64(0),
-            "hierarchical_memory_limit": uint64(0),
-            "pgpgout":                   uint64(0),
-            "unevictable":               uint64(0),
-            "total_inactive_file":       uint64(0),
-            "writeback":                 uint64(0),
-            "total_pgfault":             uint64(0),
-            "total_pgpgin":              uint64(0),
-            "cache":                     uint64(0),
-            "mapped_file":               uint64(0),
-            "inactive_file":             uint64(0),
-            "max_usage":                 uint64(0),
-            "fail_count":                uint64(0),
-            "pgfault":                   uint64(0),
-            "usage":                     uint64(0),
-            "limit":                     uint64(18935443456),
-        },
-        map[string]string{
-            "cont_id":    "b7dfbb9478a6ae55e237d4d74f8bbb753f0817192b5081334dc78476296e2173",
-            "cont_name":  "etcd2",
-            "cont_image": "quay.io/coreos/etcd:v2.2.2",
-        },
-    )
-
-    //fmt.Print(info)
-}
````

**dovecot.go** (dovecot input plugin)

````diff
@@ -34,8 +34,6 @@ var sampleConfig = `
   domains = []
`

-var defaultTimeout = time.Second * time.Duration(5)
-
 func (d *Dovecot) SampleConfig() string { return sampleConfig }

 const defaultPort = "24242"
@@ -76,15 +74,12 @@ func (d *Dovecot) gatherServer(addr string, acc telegraf.Accumulator, doms map[s
         return fmt.Errorf("Error: %s on url %s\n", err, addr)
     }

-    c, err := net.DialTimeout("tcp", addr, defaultTimeout)
+    c, err := net.Dial("tcp", addr)
     if err != nil {
         return fmt.Errorf("Unable to connect to dovecot server '%s': %s", addr, err)
     }
     defer c.Close()

-    // Extend connection
-    c.SetDeadline(time.Now().Add(defaultTimeout))
-
     c.Write([]byte("EXPORT\tdomain\n\n"))
     var buf bytes.Buffer
     io.Copy(&buf, c)
````

**elasticsearch.go**

````diff
@@ -81,12 +81,7 @@ type Elasticsearch struct {

 // NewElasticsearch return a new instance of Elasticsearch
 func NewElasticsearch() *Elasticsearch {
-    tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
-    client := &http.Client{
-        Transport: tr,
-        Timeout:   time.Duration(4 * time.Second),
-    }
-    return &Elasticsearch{client: client}
+    return &Elasticsearch{client: http.DefaultClient}
 }

 // SampleConfig returns sample configuration for this plugin.
````

**elasticsearch_test.go**

````diff
@@ -34,9 +34,6 @@ func (t *transportMock) RoundTrip(r *http.Request) (*http.Response, error) {
     return res, nil
 }

-func (t *transportMock) CancelRequest(_ *http.Request) {
-}
-
 func TestElasticsearch(t *testing.T) {
     es := NewElasticsearch()
     es.Servers = []string{"http://example.com:9200"}
````
|
|||||||
@@ -1,20 +1,28 @@
|
|||||||
# Exec Input Plugin
|
# Exec Input Plugin
|
||||||
|
|
||||||
Please also see: [Telegraf Input Data Formats](https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md)
|
The exec plugin can execute arbitrary commands which output:
|
||||||
|
|
||||||
The exec input plugin can execute arbitrary commands which output:
|
* JSON
|
||||||
|
* InfluxDB [line-protocol](https://docs.influxdata.com/influxdb/v0.9/write_protocols/line/)
|
||||||
* JSON [javascript object notation](http://www.json.org/)
|
|
||||||
* InfluxDB [line-protocol](https://docs.influxdata.com/influxdb/v0.10/write_protocols/line/)
|
|
||||||
* Graphite [graphite-protocol](http://graphite.readthedocs.org/en/latest/feeding-carbon.html)
|
* Graphite [graphite-protocol](http://graphite.readthedocs.org/en/latest/feeding-carbon.html)
|
||||||
|
|
||||||
|
> Graphite understands messages with this format:
|
||||||
|
|
||||||
### Example 1 - JSON
|
> ```
|
||||||
|
metric_path value timestamp\n
|
||||||
|
```
|
||||||
|
|
||||||
#### Configuration
|
> __metric_path__ is the metric namespace that you want to populate.
|
||||||
|
|
||||||
In this example a script called ```/tmp/test.sh``` and a script called ```/tmp/test2.sh```
|
> __value__ is the value that you want to assign to the metric at this time.
|
||||||
are configured for ```[[inputs.exec]]``` in JSON format.
|
|
||||||
|
> __timestamp__ is the unix epoch time.
|
||||||
|
|
||||||
|
|
||||||
|
If using JSON, only numeric values are parsed and turned into floats. Booleans
|
||||||
|
and strings will be ignored.
|
||||||
|
|
||||||
|
### Configuration
|
||||||
|
|
||||||
```
|
```
|
||||||
# Read flattened metrics from one or more commands that output JSON to stdout
|
# Read flattened metrics from one or more commands that output JSON to stdout
|
||||||
@@ -56,6 +64,8 @@ Other options for modifying the measurement names are:
|
|||||||
name_prefix = "prefix_"
|
name_prefix = "prefix_"
|
||||||
```
|
```
|
||||||
|
|
||||||
|
### Example 1
|
||||||
|
|
||||||
Let's say that we have the above configuration, and mycollector outputs the
|
Let's say that we have the above configuration, and mycollector outputs the
|
||||||
following JSON:
|
following JSON:
|
||||||
|
|
||||||
@@ -75,16 +85,10 @@ The collected metrics will be stored as fields under the measurement
|
|||||||
```
|
```
|
||||||
exec_mycollector a=0.5,b_c=0.1,b_d=5 1452815002357578567
|
exec_mycollector a=0.5,b_c=0.1,b_d=5 1452815002357578567
|
||||||
```
|
```
|
||||||
If using JSON, only numeric values are parsed and turned into floats. Booleans
|
|
||||||
and strings will be ignored.
|
|
||||||
|
|
||||||
### Example 2 - Influx Line-Protocol
|
### Example 2
|
||||||
|
|
||||||
In this example an application called ```/usr/bin/line_protocol_collector```
|
Now let's say we have the following configuration:
|
||||||
and a script called ```/tmp/test2.sh``` are configured for ```[[inputs.exec]]```
|
|
||||||
in influx line-protocol format.
|
|
||||||
|
|
||||||
#### Configuration
|
|
||||||
|
|
||||||
```
|
```
|
||||||
[[inputs.exec]]
|
[[inputs.exec]]
|
||||||
@@ -99,7 +103,7 @@ in influx line-protocol format.
|
|||||||
data_format = "influx"
|
data_format = "influx"
|
||||||
```
|
```
|
||||||
|
|
||||||
The line_protocol_collector application outputs the following line protocol:
|
And line_protocol_collector outputs the following line protocol:
|
||||||
|
|
||||||
```
|
```
|
||||||
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
|
cpu,cpu=cpu0,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
|
||||||
@@ -113,19 +117,16 @@ cpu,cpu=cpu6,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
|
|||||||
|
|
||||||
You will get data in InfluxDB exactly as it is defined above,
|
You will get data in InfluxDB exactly as it is defined above,
|
||||||
tags are cpu=cpuN, host=foo, and datacenter=us-east with fields usage_idle
|
tags are cpu=cpuN, host=foo, and datacenter=us-east with fields usage_idle
|
||||||
and usage_busy. They will receive a timestamp at collection time.
|
and usage_busy. They will receive a timestamp at collection time.
|
||||||
Each line must end in \n, just as the Influx line protocol does.
|
|
||||||
|
|
||||||
|
|
||||||
### Example 3 - Graphite
|
### Example 3
|
||||||
|
|
||||||
We can also change the data_format to "graphite" to use the metrics collecting scripts such as (compatible with graphite):
|
We can also change the data_format to "graphite" to use the metrics collecting scripts such as (compatible with graphite):
|
||||||
|
|
||||||
* Nagios [Mertics Plugins] (https://exchange.nagios.org/directory/Plugins)
|
* Nagios [Mertics Plugins] (https://exchange.nagios.org/directory/Plugins)
|
||||||
* Sensu [Mertics Plugins] (https://github.com/sensu-plugins)
|
* Sensu [Mertics Plugins] (https://github.com/sensu-plugins)
|
||||||
|
|
||||||
In this example a script called /tmp/test.sh and a script called /tmp/test2.sh are configured for [[inputs.exec]] in graphite format.
|
|
||||||
|
|
||||||
#### Configuration
|
#### Configuration
|
||||||
```
|
```
|
||||||
# Read flattened metrics from one or more commands that output JSON to stdout
|
# Read flattened metrics from one or more commands that output JSON to stdout
|
||||||
@@ -160,17 +161,6 @@ In this example a script called /tmp/test.sh and a script called /tmp/test2.sh a
 "measurement*"
 ]
 ```
-Graphite messages are in this format:
-
-```
-metric_path value timestamp\n
-```
-
-__metric_path__ is the metric namespace that you want to populate.
-
-__value__ is the value that you want to assign to the metric at this time.
-
-__timestamp__ is the unix epoch time.
 
 And test.sh/test2.sh will output:
 
@@ -187,4 +177,4 @@ sensu.metric.net.server0.eth0.rx_dropped 0 1444234982
 The templates configuration will be used to parse the graphite metrics to support influxdb/opentsdb tagging store engines.
 
 More detail information about templates, please refer to [The graphite Input] (https://github.com/influxdata/influxdb/blob/master/services/graphite/README.md)
 
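The graphite plain-text format documented on the removed side of this hunk (`metric_path value timestamp\n`) can be sketched in isolation; the `emitGraphite` helper is illustrative, and the metric path is taken from the sample output quoted in the next hunk header:

```go
package main

import (
	"fmt"
	"time"
)

// emitGraphite formats one metric in graphite's plain-text line format:
// "metric_path value timestamp\n", where timestamp is unix epoch seconds.
func emitGraphite(path string, value float64, ts int64) string {
	return fmt.Sprintf("%s %v %d\n", path, value, ts)
}

func main() {
	fmt.Print(emitGraphite("sensu.metric.net.server0.eth0.rx_dropped", 0, time.Now().Unix()))
}
```

A script emitting lines like this, configured under `[[inputs.exec]]` with `data_format = "graphite"`, is what the README's examples describe.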
@@ -5,14 +5,12 @@ import (
     "fmt"
     "os/exec"
     "sync"
-    "syscall"
 
     "github.com/gonuts/go-shellquote"
 
     "github.com/influxdata/telegraf"
     "github.com/influxdata/telegraf/plugins/inputs"
     "github.com/influxdata/telegraf/plugins/parsers"
-    "github.com/influxdata/telegraf/plugins/parsers/nagios"
 )
 
 const sampleConfig = `
@@ -22,7 +20,7 @@ const sampleConfig = `
     ## measurement name suffix (for separating different commands)
     name_suffix = "_mycollector"
 
-    ## Data format to consume.
+    ## Data format to consume. This can be "json", "influx" or "graphite"
     ## Each data format has it's own unique set of configuration options, read
     ## more about them here:
     ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
@@ -48,32 +46,12 @@ func NewExec() *Exec {
 }
 
 type Runner interface {
-    Run(*Exec, string, telegraf.Accumulator) ([]byte, error)
+    Run(*Exec, string) ([]byte, error)
 }
 
 type CommandRunner struct{}
 
-func AddNagiosState(exitCode error, acc telegraf.Accumulator) error {
-    nagiosState := 0
-    if exitCode != nil {
-        exiterr, ok := exitCode.(*exec.ExitError)
-        if ok {
-            status, ok := exiterr.Sys().(syscall.WaitStatus)
-            if ok {
-                nagiosState = status.ExitStatus()
-            } else {
-                return fmt.Errorf("exec: unable to get nagios plugin exit code")
-            }
-        } else {
-            return fmt.Errorf("exec: unable to get nagios plugin exit code")
-        }
-    }
-    fields := map[string]interface{}{"state": nagiosState}
-    acc.AddFields("nagios_state", fields, nil)
-    return nil
-}
-
-func (c CommandRunner) Run(e *Exec, command string, acc telegraf.Accumulator) ([]byte, error) {
+func (c CommandRunner) Run(e *Exec, command string) ([]byte, error) {
     split_cmd, err := shellquote.Split(command)
     if err != nil || len(split_cmd) == 0 {
         return nil, fmt.Errorf("exec: unable to parse command, %s", err)
@@ -85,17 +63,7 @@ func (c CommandRunner) Run(e *Exec, command string, acc telegraf.Accumulator) ([
     cmd.Stdout = &out
 
     if err := cmd.Run(); err != nil {
-        switch e.parser.(type) {
-        case *nagios.NagiosParser:
-            AddNagiosState(err, acc)
-        default:
-            return nil, fmt.Errorf("exec: %s for command '%s'", err, command)
-        }
-    } else {
-        switch e.parser.(type) {
-        case *nagios.NagiosParser:
-            AddNagiosState(nil, acc)
-        }
+        return nil, fmt.Errorf("exec: %s for command '%s'", err, command)
     }
 
     return out.Bytes(), nil
@@ -104,7 +72,7 @@ func (c CommandRunner) Run(e *Exec, command string, acc telegraf.Accumulator) ([
 func (e *Exec) ProcessCommand(command string, acc telegraf.Accumulator) {
     defer e.wg.Done()
 
-    out, err := e.runner.Run(e, command, acc)
+    out, err := e.runner.Run(e, command)
     if err != nil {
         e.errChan <- err
         return
@@ -4,7 +4,6 @@ import (
     "fmt"
     "testing"
 
-    "github.com/influxdata/telegraf"
     "github.com/influxdata/telegraf/plugins/parsers"
 
     "github.com/influxdata/telegraf/testutil"
@@ -58,7 +57,7 @@ func newRunnerMock(out []byte, err error) Runner {
     }
 }
 
-func (r runnerMock) Run(e *Exec, command string, acc telegraf.Accumulator) ([]byte, error) {
+func (r runnerMock) Run(e *Exec, command string) ([]byte, error) {
     if r.err != nil {
         return nil, r.err
     }
@@ -73,17 +73,14 @@ func (gh *GithubWebhooks) Stop() {
 
 // Handles the / route
 func (gh *GithubWebhooks) eventHandler(w http.ResponseWriter, r *http.Request) {
-    defer r.Body.Close()
     eventType := r.Header["X-Github-Event"][0]
     data, err := ioutil.ReadAll(r.Body)
     if err != nil {
         w.WriteHeader(http.StatusBadRequest)
-        return
     }
     e, err := NewEvent(data, eventType)
     if err != nil {
         w.WriteHeader(http.StatusBadRequest)
-        return
     }
     gh.Lock()
     gh.events = append(gh.events, e)
@@ -129,11 +129,8 @@ func (g *haproxy) Gather(acc telegraf.Accumulator) error {
 
 func (g *haproxy) gatherServer(addr string, acc telegraf.Accumulator) error {
     if g.client == nil {
-        tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
-        client := &http.Client{
-            Transport: tr,
-            Timeout:   time.Duration(4 * time.Second),
-        }
+        client := &http.Client{}
         g.client = client
     }
 
@@ -6,7 +6,7 @@ For example, if you have a service called _mycollector_, which has HTTP endpoint
 plugin like this:
 
 ```
-[[inputs.httpjson]]
+[[httpjson.services]]
 name = "mycollector"
 
 servers = [
@@ -24,7 +24,7 @@ plugin like this:
 You can also specify which keys from server response should be considered tags:
 
 ```
-[[inputs.httpjson]]
+[[httpjson.services]]
 ...
 
 tag_keys = [
@@ -36,10 +36,10 @@ You can also specify which keys from server response should be considered tags:
 You can also specify additional request parameters for the service:
 
 ```
-[[inputs.httpjson]]
+[[httpjson.services]]
 ...
 
-[inputs.httpjson.parameters]
+[httpjson.services.parameters]
 event_type = "cpu_spike"
 threshold = "0.75"
 
@@ -48,10 +48,10 @@ You can also specify additional request parameters for the service:
 You can also specify additional request header parameters for the service:
 
 ```
-[[inputs.httpjson]]
+[[httpjson.services]]
 ...
 
-[inputs.httpjson.headers]
+[httpjson.services.headers]
 X-Auth-Token = "my-xauth-token"
 apiVersion = "v1"
 ```
@@ -61,14 +61,18 @@ You can also specify additional request header parameters for the service:
 Let's say that we have a service named "mycollector" configured like this:
 
 ```
-[[inputs.httpjson]]
-name = "mycollector"
-servers = [
-"http://my.service.com/_stats"
-]
-# HTTP method to use (case-sensitive)
-method = "GET"
-tag_keys = ["service"]
+[httpjson]
+[[httpjson.services]]
+name = "mycollector"
+
+servers = [
+"http://my.service.com/_stats"
+]
+
+# HTTP method to use (case-sensitive)
+method = "GET"
+
+tag_keys = ["service"]
 ```
 
 which responds with the following JSON:
@@ -98,21 +102,26 @@ There is also the option to collect JSON from multiple services, here is an
 example doing that.
 
 ```
-[[inputs.httpjson]]
-name = "mycollector1"
-servers = [
-"http://my.service1.com/_stats"
-]
-# HTTP method to use (case-sensitive)
-method = "GET"
-
-[[inputs.httpjson]]
-name = "mycollector2"
-servers = [
-"http://service.net/json/stats"
-]
-# HTTP method to use (case-sensitive)
-method = "POST"
+[httpjson]
+[[httpjson.services]]
+name = "mycollector1"
+
+servers = [
+"http://my.service1.com/_stats"
+]
+
+# HTTP method to use (case-sensitive)
+method = "GET"
+
+[[httpjson.services]]
+name = "mycollector2"
+
+servers = [
+"http://service.net/json/stats"
+]
+
+# HTTP method to use (case-sensitive)
+method = "POST"
 ```
 
 The services respond with the following JSON:
@@ -1,6 +1,7 @@
 package httpjson
 
 import (
+    "bytes"
     "errors"
     "fmt"
     "io/ioutil"
@@ -11,7 +12,6 @@ import (
     "time"
 
     "github.com/influxdata/telegraf"
-    "github.com/influxdata/telegraf/internal"
     "github.com/influxdata/telegraf/plugins/inputs"
     "github.com/influxdata/telegraf/plugins/parsers"
 )
@@ -23,17 +23,7 @@ type HttpJson struct {
     TagKeys    []string
     Parameters map[string]string
     Headers    map[string]string
-
-    // Path to CA file
-    SSLCA string `toml:"ssl_ca"`
-    // Path to host cert file
-    SSLCert string `toml:"ssl_cert"`
-    // Path to cert key file
-    SSLKey string `toml:"ssl_key"`
-    // Use SSL but skip chain & host verification
-    InsecureSkipVerify bool
-
-    client HTTPClient
+    client HTTPClient
 }
 
 type HTTPClient interface {
@@ -46,27 +36,16 @@ type HTTPClient interface {
     // http.Response: HTTP respons object
     // error        : Any error that may have occurred
     MakeRequest(req *http.Request) (*http.Response, error)
-
-    SetHTTPClient(client *http.Client)
-    HTTPClient() *http.Client
 }
 
 type RealHTTPClient struct {
     client *http.Client
 }
 
-func (c *RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
+func (c RealHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
     return c.client.Do(req)
 }
 
-func (c *RealHTTPClient) SetHTTPClient(client *http.Client) {
-    c.client = client
-}
-
-func (c *RealHTTPClient) HTTPClient() *http.Client {
-    return c.client
-}
-
 var sampleConfig = `
     ## NOTE This plugin only reads numerical measurements, strings and booleans
     ## will be ignored.
@@ -98,13 +77,6 @@ var sampleConfig = `
     # [inputs.httpjson.headers]
     #   X-Auth-Token = "my-xauth-token"
     #   apiVersion = "v1"
-
-    ## Optional SSL Config
-    # ssl_ca = "/etc/telegraf/ca.pem"
-    # ssl_cert = "/etc/telegraf/cert.pem"
-    # ssl_key = "/etc/telegraf/key.pem"
-    ## Use SSL but skip chain & host verification
-    # insecure_skip_verify = false
 `
 
 func (h *HttpJson) SampleConfig() string {
@@ -119,23 +91,6 @@ func (h *HttpJson) Description() string {
 func (h *HttpJson) Gather(acc telegraf.Accumulator) error {
     var wg sync.WaitGroup
 
-    if h.client.HTTPClient() == nil {
-        tlsCfg, err := internal.GetTLSConfig(
-            h.SSLCert, h.SSLKey, h.SSLCA, h.InsecureSkipVerify)
-        if err != nil {
-            return err
-        }
-        tr := &http.Transport{
-            ResponseHeaderTimeout: time.Duration(3 * time.Second),
-            TLSClientConfig:       tlsCfg,
-        }
-        client := &http.Client{
-            Transport: tr,
-            Timeout:   time.Duration(4 * time.Second),
-        }
-        h.client.SetHTTPClient(client)
-    }
-
     errorChannel := make(chan error, len(h.Servers))
 
     for _, server := range h.Servers {
@@ -227,14 +182,15 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
         return "", -1, fmt.Errorf("Invalid server URL \"%s\"", serverURL)
     }
 
+    params := url.Values{}
     data := url.Values{}
 
     switch {
     case h.Method == "GET":
-        params := requestURL.Query()
+        requestURL.RawQuery = params.Encode()
         for k, v := range h.Parameters {
             params.Add(k, v)
         }
-        requestURL.RawQuery = params.Encode()
+
     case h.Method == "POST":
         requestURL.RawQuery = ""
@@ -244,8 +200,7 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
     }
 
     // Create + send request
-    req, err := http.NewRequest(h.Method, requestURL.String(),
-        strings.NewReader(data.Encode()))
+    req, err := http.NewRequest(h.Method, requestURL.String(), bytes.NewBufferString(data.Encode()))
     if err != nil {
         return "", -1, err
     }
@@ -289,8 +244,6 @@ func (h *HttpJson) sendRequest(serverURL string) (string, float64, error) {
 
 func init() {
     inputs.Add("httpjson", func() telegraf.Input {
-        return &HttpJson{
-            client: &RealHTTPClient{},
-        }
+        return &HttpJson{client: RealHTTPClient{client: &http.Client{}}}
     })
 }
@@ -1,10 +1,8 @@
 package httpjson
 
 import (
-    "fmt"
     "io/ioutil"
     "net/http"
-    "net/http/httptest"
     "strings"
     "testing"
 
@@ -29,75 +27,6 @@ const validJSON = `
     "another_list": [4]
 }`
 
-const validJSON2 = `{
-    "user":{
-        "hash_rate":0,
-        "expected_24h_rewards":0,
-        "total_rewards":0.000595109232,
-        "paid_rewards":0,
-        "unpaid_rewards":0.000595109232,
-        "past_24h_rewards":0,
-        "total_work":"5172625408",
-        "blocks_found":0
-    },
-    "workers":{
-        "brminer.1":{
-            "hash_rate":0,
-            "hash_rate_24h":0,
-            "valid_shares":"6176",
-            "stale_shares":"0",
-            "invalid_shares":"0",
-            "rewards":4.5506464e-5,
-            "rewards_24h":0,
-            "reset_time":1455409950
-        },
-        "brminer.2":{
-            "hash_rate":0,
-            "hash_rate_24h":0,
-            "valid_shares":"0",
-            "stale_shares":"0",
-            "invalid_shares":"0",
-            "rewards":0,
-            "rewards_24h":0,
-            "reset_time":1455936726
-        },
-        "brminer.3":{
-            "hash_rate":0,
-            "hash_rate_24h":0,
-            "valid_shares":"0",
-            "stale_shares":"0",
-            "invalid_shares":"0",
-            "rewards":0,
-            "rewards_24h":0,
-            "reset_time":1455936733
-        }
-    },
-    "pool":{
-        "hash_rate":114100000,
-        "active_users":843,
-        "total_work":"5015346808842682368",
-        "pps_ratio":1.04,
-        "pps_rate":7.655e-9
-    },
-    "network":{
-        "hash_rate":1426117703,
-        "block_number":944895,
-        "time_per_block":156,
-        "difficulty":51825.72835216,
-        "next_difficulty":51916.15249019,
-        "retarget_time":95053
-    },
-    "market":{
-        "ltc_btc":0.00798,
-        "ltc_usd":3.37801,
-        "ltc_eur":3.113,
-        "ltc_gbp":2.32807,
-        "ltc_rub":241.796,
-        "ltc_cny":21.3883,
-        "btc_usd":422.852
-    }
-}`
-
 const validJSONTags = `
 {
     "value": 15,
@@ -125,7 +54,7 @@ type mockHTTPClient struct {
 // Mock implementation of MakeRequest. Usually returns an http.Response with
 // hard-coded responseBody and statusCode. However, if the request uses a
 // nonstandard method, it uses status code 405 (method not allowed)
-func (c *mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
+func (c mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error) {
     resp := http.Response{}
     resp.StatusCode = c.statusCode
 
@@ -147,13 +76,6 @@ func (c *mockHTTPClient) MakeRequest(req *http.Request) (*http.Response, error)
     return &resp, nil
 }
 
-func (c *mockHTTPClient) SetHTTPClient(_ *http.Client) {
-}
-
-func (c *mockHTTPClient) HTTPClient() *http.Client {
-    return nil
-}
-
 // Generates a pointer to an HttpJson object that uses a mock HTTP client.
 // Parameters:
 // response  : Body of the response that the mock HTTP client should return
@@ -164,7 +86,7 @@ func (c *mockHTTPClient) HTTPClient() *http.Client {
 func genMockHttpJson(response string, statusCode int) []*HttpJson {
     return []*HttpJson{
         &HttpJson{
-            client: &mockHTTPClient{responseBody: response, statusCode: statusCode},
+            client: mockHTTPClient{responseBody: response, statusCode: statusCode},
             Servers: []string{
                 "http://server1.example.com/metrics/",
                 "http://server2.example.com/metrics/",
@@ -181,7 +103,7 @@ func genMockHttpJson(response string, statusCode int) []*HttpJson {
             },
         },
         &HttpJson{
-            client: &mockHTTPClient{responseBody: response, statusCode: statusCode},
+            client: mockHTTPClient{responseBody: response, statusCode: statusCode},
             Servers: []string{
                 "http://server3.example.com/metrics/",
                 "http://server4.example.com/metrics/",
@@ -227,222 +149,6 @@ func TestHttpJson200(t *testing.T) {
         }
     }
 }
 
-// Test that GET Parameters from the url string are applied properly
-func TestHttpJsonGET_URL(t *testing.T) {
-    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-        key := r.FormValue("api_key")
-        assert.Equal(t, "mykey", key)
-        w.WriteHeader(http.StatusOK)
-        fmt.Fprintln(w, validJSON2)
-    }))
-    defer ts.Close()
-
-    a := HttpJson{
-        Servers: []string{ts.URL + "?api_key=mykey"},
-        Name:    "",
-        Method:  "GET",
-        client:  &RealHTTPClient{client: &http.Client{}},
-    }
-
-    var acc testutil.Accumulator
-    err := a.Gather(&acc)
-    require.NoError(t, err)
-
-    // remove response_time from gathered fields because it's non-deterministic
-    delete(acc.Metrics[0].Fields, "response_time")
-
-    fields := map[string]interface{}{
-        "market_btc_usd": float64(422.852),
-        "market_ltc_btc": float64(0.00798),
-        "market_ltc_cny": float64(21.3883),
-        "market_ltc_eur": float64(3.113),
-        "market_ltc_gbp": float64(2.32807),
-        "market_ltc_rub": float64(241.796),
-        "market_ltc_usd": float64(3.37801),
-        "network_block_number": float64(944895),
-        "network_difficulty": float64(51825.72835216),
-        "network_hash_rate": float64(1.426117703e+09),
-        "network_next_difficulty": float64(51916.15249019),
-        "network_retarget_time": float64(95053),
-        "network_time_per_block": float64(156),
-        "pool_active_users": float64(843),
-        "pool_hash_rate": float64(1.141e+08),
-        "pool_pps_rate": float64(7.655e-09),
-        "pool_pps_ratio": float64(1.04),
-        "user_blocks_found": float64(0),
-        "user_expected_24h_rewards": float64(0),
-        "user_hash_rate": float64(0),
-        "user_paid_rewards": float64(0),
-        "user_past_24h_rewards": float64(0),
-        "user_total_rewards": float64(0.000595109232),
-        "user_unpaid_rewards": float64(0.000595109232),
-        "workers_brminer.1_hash_rate": float64(0),
-        "workers_brminer.1_hash_rate_24h": float64(0),
-        "workers_brminer.1_reset_time": float64(1.45540995e+09),
-        "workers_brminer.1_rewards": float64(4.5506464e-05),
-        "workers_brminer.1_rewards_24h": float64(0),
-        "workers_brminer.2_hash_rate": float64(0),
-        "workers_brminer.2_hash_rate_24h": float64(0),
-        "workers_brminer.2_reset_time": float64(1.455936726e+09),
-        "workers_brminer.2_rewards": float64(0),
-        "workers_brminer.2_rewards_24h": float64(0),
-        "workers_brminer.3_hash_rate": float64(0),
-        "workers_brminer.3_hash_rate_24h": float64(0),
-        "workers_brminer.3_reset_time": float64(1.455936733e+09),
-        "workers_brminer.3_rewards": float64(0),
-        "workers_brminer.3_rewards_24h": float64(0),
-    }
-
-    acc.AssertContainsFields(t, "httpjson", fields)
-}
-
-// Test that GET Parameters are applied properly
-func TestHttpJsonGET(t *testing.T) {
-    params := map[string]string{
-        "api_key": "mykey",
-    }
-    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-        key := r.FormValue("api_key")
-        assert.Equal(t, "mykey", key)
-        w.WriteHeader(http.StatusOK)
-        fmt.Fprintln(w, validJSON2)
-    }))
-    defer ts.Close()
-
-    a := HttpJson{
-        Servers:    []string{ts.URL},
-        Name:       "",
-        Method:     "GET",
-        Parameters: params,
-        client:     &RealHTTPClient{client: &http.Client{}},
-    }
-
-    var acc testutil.Accumulator
-    err := a.Gather(&acc)
-    require.NoError(t, err)
-
-    // remove response_time from gathered fields because it's non-deterministic
-    delete(acc.Metrics[0].Fields, "response_time")
-
-    fields := map[string]interface{}{
-        "market_btc_usd": float64(422.852),
-        "market_ltc_btc": float64(0.00798),
-        "market_ltc_cny": float64(21.3883),
-        "market_ltc_eur": float64(3.113),
-        "market_ltc_gbp": float64(2.32807),
-        "market_ltc_rub": float64(241.796),
-        "market_ltc_usd": float64(3.37801),
-        "network_block_number": float64(944895),
-        "network_difficulty": float64(51825.72835216),
-        "network_hash_rate": float64(1.426117703e+09),
-        "network_next_difficulty": float64(51916.15249019),
-        "network_retarget_time": float64(95053),
-        "network_time_per_block": float64(156),
-        "pool_active_users": float64(843),
-        "pool_hash_rate": float64(1.141e+08),
-        "pool_pps_rate": float64(7.655e-09),
-        "pool_pps_ratio": float64(1.04),
-        "user_blocks_found": float64(0),
-        "user_expected_24h_rewards": float64(0),
-        "user_hash_rate": float64(0),
-        "user_paid_rewards": float64(0),
-        "user_past_24h_rewards": float64(0),
-        "user_total_rewards": float64(0.000595109232),
-        "user_unpaid_rewards": float64(0.000595109232),
-        "workers_brminer.1_hash_rate": float64(0),
-        "workers_brminer.1_hash_rate_24h": float64(0),
-        "workers_brminer.1_reset_time": float64(1.45540995e+09),
-        "workers_brminer.1_rewards": float64(4.5506464e-05),
-        "workers_brminer.1_rewards_24h": float64(0),
-        "workers_brminer.2_hash_rate": float64(0),
-        "workers_brminer.2_hash_rate_24h": float64(0),
-        "workers_brminer.2_reset_time": float64(1.455936726e+09),
-        "workers_brminer.2_rewards": float64(0),
-        "workers_brminer.2_rewards_24h": float64(0),
-        "workers_brminer.3_hash_rate": float64(0),
-        "workers_brminer.3_hash_rate_24h": float64(0),
-        "workers_brminer.3_reset_time": float64(1.455936733e+09),
-        "workers_brminer.3_rewards": float64(0),
-        "workers_brminer.3_rewards_24h": float64(0),
-    }
-
-    acc.AssertContainsFields(t, "httpjson", fields)
-}
-
-// Test that POST Parameters are applied properly
-func TestHttpJsonPOST(t *testing.T) {
-    params := map[string]string{
-        "api_key": "mykey",
-    }
-    ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-        body, err := ioutil.ReadAll(r.Body)
-        assert.NoError(t, err)
-        assert.Equal(t, "api_key=mykey", string(body))
-        w.WriteHeader(http.StatusOK)
-        fmt.Fprintln(w, validJSON2)
-    }))
-    defer ts.Close()
-
-    a := HttpJson{
-        Servers:    []string{ts.URL},
-        Name:       "",
-        Method:     "POST",
-        Parameters: params,
-        client:     &RealHTTPClient{client: &http.Client{}},
-    }
-
-    var acc testutil.Accumulator
-    err := a.Gather(&acc)
-    require.NoError(t, err)
-
-    // remove response_time from gathered fields because it's non-deterministic
-    delete(acc.Metrics[0].Fields, "response_time")
-
-    fields := map[string]interface{}{
-        "market_btc_usd": float64(422.852),
-        "market_ltc_btc": float64(0.00798),
-        "market_ltc_cny": float64(21.3883),
-        "market_ltc_eur": float64(3.113),
-        "market_ltc_gbp": float64(2.32807),
-        "market_ltc_rub": float64(241.796),
-        "market_ltc_usd": float64(3.37801),
-        "network_block_number": float64(944895),
-        "network_difficulty": float64(51825.72835216),
-        "network_hash_rate": float64(1.426117703e+09),
-        "network_next_difficulty": float64(51916.15249019),
-        "network_retarget_time": float64(95053),
-        "network_time_per_block": float64(156),
-        "pool_active_users": float64(843),
-        "pool_hash_rate": float64(1.141e+08),
-        "pool_pps_rate": float64(7.655e-09),
-        "pool_pps_ratio": float64(1.04),
-        "user_blocks_found": float64(0),
-        "user_expected_24h_rewards": float64(0),
-        "user_hash_rate": float64(0),
-        "user_paid_rewards": float64(0),
-        "user_past_24h_rewards": float64(0),
-        "user_total_rewards": float64(0.000595109232),
-        "user_unpaid_rewards": float64(0.000595109232),
-        "workers_brminer.1_hash_rate": float64(0),
-        "workers_brminer.1_hash_rate_24h": float64(0),
-        "workers_brminer.1_reset_time": float64(1.45540995e+09),
-        "workers_brminer.1_rewards": float64(4.5506464e-05),
-        "workers_brminer.1_rewards_24h": float64(0),
-        "workers_brminer.2_hash_rate": float64(0),
-        "workers_brminer.2_hash_rate_24h": float64(0),
-        "workers_brminer.2_reset_time": float64(1.455936726e+09),
-        "workers_brminer.2_rewards": float64(0),
-        "workers_brminer.2_rewards_24h": float64(0),
-        "workers_brminer.3_hash_rate": float64(0),
-        "workers_brminer.3_hash_rate_24h": float64(0),
-        "workers_brminer.3_reset_time": float64(1.455936733e+09),
-        "workers_brminer.3_rewards": float64(0),
-        "workers_brminer.3_rewards_24h": float64(0),
-    }
-
-    acc.AssertContainsFields(t, "httpjson", fields)
-}
-
 // Test response to HTTP 500
 func TestHttpJson500(t *testing.T) {
     httpjson := genMockHttpJson(validJSON, 500)
|
|||||||
@@ -7,7 +7,6 @@ import (
 	"net/http"
 	"strings"
 	"sync"
-	"time"

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
@@ -71,15 +70,6 @@ type point struct {
 	Values map[string]interface{} `json:"values"`
 }

-var tr = &http.Transport{
-	ResponseHeaderTimeout: time.Duration(3 * time.Second),
-}
-
-var client = &http.Client{
-	Transport: tr,
-	Timeout:   time.Duration(4 * time.Second),
-}
-
 // Gathers data from a particular URL
 // Parameters:
 //     acc    : The telegraf Accumulator to use
@@ -91,7 +81,7 @@ func (i *InfluxDB) gatherURL(
 	acc telegraf.Accumulator,
 	url string,
 ) error {
-	resp, err := client.Get(url)
+	resp, err := http.Get(url)
 	if err != nil {
 		return err
 	}
@@ -1,42 +0,0 @@
-# Telegraf ipmi plugin
-
-Get bare metal metrics using the command line utility `ipmitool`
-
-see ipmitool(https://sourceforge.net/projects/ipmitool/files/ipmitool/)
-
-The plugin will use the following command to collect remote host sensor stats:
-
-ipmitool -I lan -H 192.168.1.1 -U USERID -P PASSW0RD sdr
-
-## Measurements
-
-- ipmi_sensor:
-
-    * Tags: `name`, `server`, `unit`
-    * Fields:
-      - status
-      - value
-
-## Configuration
-
-```toml
-[[inputs.ipmi]]
-  ## specify servers via a url matching:
-  ##  [username[:password]@][protocol[(address)]]
-  ##  e.g.
-  ##    root:passwd@lan(127.0.0.1)
-  ##
-  servers = ["USERID:PASSW0RD@lan(10.20.2.203)"]
-```
-
-## Output
-
-```
-> ipmi_sensor,server=10.20.2.203,unit=degrees_c,name=ambient_temp status=1i,value=20 1458488465012559455
-> ipmi_sensor,server=10.20.2.203,unit=feet,name=altitude status=1i,value=80 1458488465012688613
-> ipmi_sensor,server=10.20.2.203,unit=watts,name=avg_power status=1i,value=220 1458488465012776511
-> ipmi_sensor,server=10.20.2.203,unit=volts,name=planar_3.3v status=1i,value=3.28 1458488465012861875
-> ipmi_sensor,server=10.20.2.203,unit=volts,name=planar_vbat status=1i,value=3.04 1458488465013072508
-> ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1a_tach status=1i,value=2610 1458488465013137932
-> ipmi_sensor,server=10.20.2.203,unit=rpm,name=fan_1b_tach status=1i,value=1775 1458488465013279896
-```
@@ -1,38 +0,0 @@
-package ipmi_sensor
-
-import (
-	"bytes"
-	"fmt"
-	"os/exec"
-	"strings"
-)
-
-type CommandRunner struct{}
-
-func (t CommandRunner) cmd(conn *Connection, args ...string) *exec.Cmd {
-	path := conn.Path
-	opts := append(conn.options(), args...)
-
-	if path == "" {
-		path = "ipmitool"
-	}
-
-	return exec.Command(path, opts...)
-}
-
-func (t CommandRunner) Run(conn *Connection, args ...string) (string, error) {
-	cmd := t.cmd(conn, args...)
-	var stdout bytes.Buffer
-	var stderr bytes.Buffer
-	cmd.Stdout = &stdout
-	cmd.Stderr = &stderr
-
-	err := cmd.Run()
-	if err != nil {
-		return "", fmt.Errorf("run %s %s: %s (%s)",
-			cmd.Path, strings.Join(cmd.Args, " "), stderr.String(), err)
-	}
-
-	return stdout.String(), err
-}
@@ -1,89 +0,0 @@
-package ipmi_sensor
-
-import (
-	"fmt"
-	"net"
-	"strconv"
-	"strings"
-)
-
-// Connection properties for a Client
-type Connection struct {
-	Hostname  string
-	Username  string
-	Password  string
-	Path      string
-	Port      int
-	Interface string
-}
-
-func NewConnection(server string) *Connection {
-	conn := &Connection{}
-	inx1 := strings.Index(server, "@")
-	inx2 := strings.Index(server, "(")
-	inx3 := strings.Index(server, ")")
-
-	connstr := server
-
-	if inx1 > 0 {
-		security := server[0:inx1]
-		connstr = server[inx1+1 : len(server)]
-		up := strings.Split(security, ":")
-		conn.Username = up[0]
-		conn.Password = up[1]
-	}
-
-	if inx2 > 0 {
-		inx2 = strings.Index(connstr, "(")
-		inx3 = strings.Index(connstr, ")")
-
-		conn.Interface = connstr[0:inx2]
-		conn.Hostname = connstr[inx2+1 : inx3]
-	}
-
-	return conn
-}
-
-func (t *Connection) options() []string {
-	intf := t.Interface
-	if intf == "" {
-		intf = "lan"
-	}
-
-	options := []string{
-		"-H", t.Hostname,
-		"-U", t.Username,
-		"-P", t.Password,
-		"-I", intf,
-	}
-
-	if t.Port != 0 {
-		options = append(options, "-p", strconv.Itoa(t.Port))
-	}
-
-	return options
-}
-
-// RemoteIP returns the remote (bmc) IP address of the Connection
-func (c *Connection) RemoteIP() string {
-	if net.ParseIP(c.Hostname) == nil {
-		addrs, err := net.LookupHost(c.Hostname)
-		if err != nil && len(addrs) > 0 {
-			return addrs[0]
-		}
-	}
-	return c.Hostname
-}
-
-// LocalIP returns the local (client) IP address of the Connection
-func (c *Connection) LocalIP() string {
-	conn, err := net.Dial("udp", fmt.Sprintf("%s:%d", c.Hostname, c.Port))
-	if err != nil {
-		// don't bother returning an error, since this value will never
-		// make it to the bmc if we can't connect to it.
-		return c.Hostname
-	}
-	_ = conn.Close()
-	host, _, _ := net.SplitHostPort(conn.LocalAddr().String())
-	return host
-}
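The deleted `NewConnection` above parses server specs of the form `user:pass@iface(host)` with index arithmetic. A minimal standalone sketch of that parsing (the `Conn` struct and `parseServer` are illustrative stand-ins, not the plugin's actual API):

```go
package main

import (
	"fmt"
	"strings"
)

// Conn holds the pieces of a "user:pass@iface(host)" server spec.
type Conn struct {
	Username, Password, Interface, Hostname string
}

// parseServer mirrors NewConnection's logic: credentials before "@",
// then "iface(host)" in the remainder.
func parseServer(server string) Conn {
	var c Conn
	connstr := server
	if at := strings.Index(server, "@"); at > 0 {
		up := strings.SplitN(server[:at], ":", 2)
		c.Username = up[0]
		if len(up) > 1 {
			c.Password = up[1]
		}
		connstr = server[at+1:]
	}
	if open := strings.Index(connstr, "("); open > 0 {
		if cl := strings.Index(connstr, ")"); cl > open {
			c.Interface = connstr[:open]
			c.Hostname = connstr[open+1 : cl]
		}
	}
	return c
}

func main() {
	c := parseServer("USERID:PASSW0RD@lan(192.168.1.1)")
	fmt.Println(c.Username, c.Interface, c.Hostname)
}
```

Unlike the original, this sketch guards against a missing `:` or `)`, where the index-based version would panic on a malformed spec.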
@@ -1,129 +0,0 @@
-package ipmi_sensor
-
-import (
-	"strconv"
-	"strings"
-	"time"
-
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
-)
-
-type Ipmi struct {
-	Servers []string
-	runner  Runner
-}
-
-var sampleConfig = `
-  ## specify servers via a url matching:
-  ##  [username[:password]@][protocol[(address)]]
-  ##  e.g.
-  ##    root:passwd@lan(127.0.0.1)
-  ##
-  servers = ["USERID:PASSW0RD@lan(192.168.1.1)"]
-`
-
-func NewIpmi() *Ipmi {
-	return &Ipmi{
-		runner: CommandRunner{},
-	}
-}
-
-func (m *Ipmi) SampleConfig() string {
-	return sampleConfig
-}
-
-func (m *Ipmi) Description() string {
-	return "Read metrics from one or many bare metal servers"
-}
-
-func (m *Ipmi) Gather(acc telegraf.Accumulator) error {
-	if m.runner == nil {
-		m.runner = CommandRunner{}
-	}
-	for _, serv := range m.Servers {
-		err := m.gatherServer(serv, acc)
-		if err != nil {
-			return err
-		}
-	}
-
-	return nil
-}
-
-func (m *Ipmi) gatherServer(serv string, acc telegraf.Accumulator) error {
-	conn := NewConnection(serv)
-
-	res, err := m.runner.Run(conn, "sdr")
-	if err != nil {
-		return err
-	}
-
-	// each line will look something like
-	// Planar VBAT      | 3.05 Volts        | ok
-	lines := strings.Split(res, "\n")
-	for i := 0; i < len(lines); i++ {
-		vals := strings.Split(lines[i], "|")
-		if len(vals) != 3 {
-			continue
-		}
-
-		tags := map[string]string{
-			"server": conn.Hostname,
-			"name":   transform(vals[0]),
-		}
-
-		fields := make(map[string]interface{})
-		if strings.EqualFold("ok", trim(vals[2])) {
-			fields["status"] = 1
-		} else {
-			fields["status"] = 0
-		}
-
-		val1 := trim(vals[1])
-
-		if strings.Index(val1, " ") > 0 {
-			// split middle column into value and unit
-			valunit := strings.SplitN(val1, " ", 2)
-			fields["value"] = Atofloat(valunit[0])
-			if len(valunit) > 1 {
-				tags["unit"] = transform(valunit[1])
-			}
-		} else {
-			fields["value"] = 0.0
-		}
-
-		acc.AddFields("ipmi_sensor", fields, tags, time.Now())
-	}
-
-	return nil
-}
-
-type Runner interface {
-	Run(conn *Connection, args ...string) (string, error)
-}
-
-func Atofloat(val string) float64 {
-	f, err := strconv.ParseFloat(val, 64)
-	if err != nil {
-		return 0.0
-	} else {
-		return f
-	}
-}
-
-func trim(s string) string {
-	return strings.TrimSpace(s)
-}
-
-func transform(s string) string {
-	s = trim(s)
-	s = strings.ToLower(s)
-	return strings.Replace(s, " ", "_", -1)
-}
-
-func init() {
-	inputs.Add("ipmi_sensor", func() telegraf.Input {
-		return &Ipmi{}
-	})
-}
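The inner loop of the deleted `gatherServer` above turns each `ipmitool ... sdr` row into tags and fields. A self-contained sketch of that per-line parsing (`parseSDRLine` is a hypothetical helper name; `transform` is copied from the plugin):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// transform mirrors the plugin helper: trim, lowercase, spaces -> underscores.
func transform(s string) string {
	return strings.Replace(strings.ToLower(strings.TrimSpace(s)), " ", "_", -1)
}

// parseSDRLine splits one "Name | Value Unit | status" row into the tags and
// fields the plugin would emit; rows without three columns are skipped.
func parseSDRLine(line string) (tags map[string]string, fields map[string]interface{}, ok bool) {
	vals := strings.Split(line, "|")
	if len(vals) != 3 {
		return nil, nil, false
	}
	tags = map[string]string{"name": transform(vals[0])}
	fields = map[string]interface{}{"status": 0, "value": 0.0}
	if strings.EqualFold("ok", strings.TrimSpace(vals[2])) {
		fields["status"] = 1
	}
	// split middle column into value and unit; non-numeric rows like
	// "0x00" keep the default value of 0.0
	if valunit := strings.SplitN(strings.TrimSpace(vals[1]), " ", 2); len(valunit) == 2 {
		f, _ := strconv.ParseFloat(valunit[0], 64)
		fields["value"] = f
		tags["unit"] = transform(valunit[1])
	}
	return tags, fields, true
}

func main() {
	tags, fields, _ := parseSDRLine("Ambient Temp | 20 degrees C | ok")
	fmt.Println(tags["name"], tags["unit"], fields["value"], fields["status"])
}
```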
@@ -1,275 +0,0 @@
-package ipmi_sensor
-
-import (
-	"testing"
-
-	"github.com/influxdata/telegraf/testutil"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-)
-
-const serv = "USERID:PASSW0RD@lan(192.168.1.1)"
-
-const cmdReturn = `
-Ambient Temp | 20 degrees C | ok
-Altitude | 80 feet | ok
-Avg Power | 210 Watts | ok
-Planar 3.3V | 3.29 Volts | ok
-Planar 5V | 4.90 Volts | ok
-Planar 12V | 12.04 Volts | ok
-Planar VBAT | 3.05 Volts | ok
-Fan 1A Tach | 2610 RPM | ok
-Fan 1B Tach | 1775 RPM | ok
-Fan 2A Tach | 2001 RPM | ok
-Fan 2B Tach | 1275 RPM | ok
-Fan 3A Tach | 2929 RPM | ok
-Fan 3B Tach | 2125 RPM | ok
-Fan 1 | 0x00 | ok
-Fan 2 | 0x00 | ok
-Fan 3 | 0x00 | ok
-Front Panel | 0x00 | ok
-Video USB | 0x00 | ok
-DASD Backplane 1 | 0x00 | ok
-SAS Riser | 0x00 | ok
-PCI Riser 1 | 0x00 | ok
-PCI Riser 2 | 0x00 | ok
-CPU 1 | 0x00 | ok
-CPU 2 | 0x00 | ok
-All CPUs | 0x00 | ok
-One of The CPUs | 0x00 | ok
-IOH Temp Status | 0x00 | ok
-CPU 1 OverTemp | 0x00 | ok
-CPU 2 OverTemp | 0x00 | ok
-CPU Fault Reboot | 0x00 | ok
-Aux Log | 0x00 | ok
-NMI State | 0x00 | ok
-ABR Status | 0x00 | ok
-Firmware Error | 0x00 | ok
-PCIs | 0x00 | ok
-CPUs | 0x00 | ok
-DIMMs | 0x00 | ok
-Sys Board Fault | 0x00 | ok
-Power Supply 1 | 0x00 | ok
-Power Supply 2 | 0x00 | ok
-PS 1 Fan Fault | 0x00 | ok
-PS 2 Fan Fault | 0x00 | ok
-VT Fault | 0x00 | ok
-Pwr Rail A Fault | 0x00 | ok
-Pwr Rail B Fault | 0x00 | ok
-Pwr Rail C Fault | 0x00 | ok
-Pwr Rail D Fault | 0x00 | ok
-Pwr Rail E Fault | 0x00 | ok
-PS 1 Therm Fault | 0x00 | ok
-PS 2 Therm Fault | 0x00 | ok
-PS1 12V OV Fault | 0x00 | ok
-PS2 12V OV Fault | 0x00 | ok
-PS1 12V UV Fault | 0x00 | ok
-PS2 12V UV Fault | 0x00 | ok
-PS1 12V OC Fault | 0x00 | ok
-PS2 12V OC Fault | 0x00 | ok
-PS 1 VCO Fault | 0x00 | ok
-PS 2 VCO Fault | 0x00 | ok
-Power Unit | 0x00 | ok
-Cooling Zone 1 | 0x00 | ok
-Cooling Zone 2 | 0x00 | ok
-Cooling Zone 3 | 0x00 | ok
-Drive 0 | 0x00 | ok
-Drive 1 | 0x00 | ok
-Drive 2 | 0x00 | ok
-Drive 3 | 0x00 | ok
-Drive 4 | 0x00 | ok
-Drive 5 | 0x00 | ok
-Drive 6 | 0x00 | ok
-Drive 7 | 0x00 | ok
-Drive 8 | 0x00 | ok
-Drive 9 | 0x00 | ok
-Drive 10 | 0x00 | ok
-Drive 11 | 0x00 | ok
-Drive 12 | 0x00 | ok
-Drive 13 | 0x00 | ok
-Drive 14 | 0x00 | ok
-Drive 15 | 0x00 | ok
-All DIMMS | 0x00 | ok
-One of the DIMMs | 0x00 | ok
-DIMM 1 | 0x00 | ok
-DIMM 2 | 0x00 | ok
-DIMM 3 | 0x00 | ok
-DIMM 4 | 0x00 | ok
-DIMM 5 | 0x00 | ok
-DIMM 6 | 0x00 | ok
-DIMM 7 | 0x00 | ok
-DIMM 8 | 0x00 | ok
-DIMM 9 | 0x00 | ok
-DIMM 10 | 0x00 | ok
-DIMM 11 | 0x00 | ok
-DIMM 12 | 0x00 | ok
-DIMM 13 | 0x00 | ok
-DIMM 14 | 0x00 | ok
-DIMM 15 | 0x00 | ok
-DIMM 16 | 0x00 | ok
-DIMM 17 | 0x00 | ok
-DIMM 18 | 0x00 | ok
-DIMM 1 Temp | 0x00 | ok
-DIMM 2 Temp | 0x00 | ok
-DIMM 3 Temp | 0x00 | ok
-DIMM 4 Temp | 0x00 | ok
-DIMM 5 Temp | 0x00 | ok
-DIMM 6 Temp | 0x00 | ok
-DIMM 7 Temp | 0x00 | ok
-DIMM 8 Temp | 0x00 | ok
-DIMM 9 Temp | 0x00 | ok
-DIMM 10 Temp | 0x00 | ok
-DIMM 11 Temp | 0x00 | ok
-DIMM 12 Temp | 0x00 | ok
-DIMM 13 Temp | 0x00 | ok
-DIMM 14 Temp | 0x00 | ok
-DIMM 15 Temp | 0x00 | ok
-DIMM 16 Temp | 0x00 | ok
-DIMM 17 Temp | 0x00 | ok
-DIMM 18 Temp | 0x00 | ok
-PCI 1 | 0x00 | ok
-PCI 2 | 0x00 | ok
-PCI 3 | 0x00 | ok
-PCI 4 | 0x00 | ok
-All PCI Error | 0x00 | ok
-One of PCI Error | 0x00 | ok
-IPMI Watchdog | 0x00 | ok
-Host Power | 0x00 | ok
-DASD Backplane 2 | 0x00 | ok
-DASD Backplane 3 | Not Readable | ns
-DASD Backplane 4 | Not Readable | ns
-Backup Memory | 0x00 | ok
-Progress | 0x00 | ok
-Planar Fault | 0x00 | ok
-SEL Fullness | 0x00 | ok
-PCI 5 | 0x00 | ok
-OS RealTime Mod | 0x00 | ok
-`
-
-type runnerMock struct {
-	out string
-	err error
-}
-
-func newRunnerMock(out string, err error) Runner {
-	return &runnerMock{
-		out: out,
-		err: err,
-	}
-}
-
-func (r runnerMock) Run(conn *Connection, args ...string) (out string, err error) {
-	if r.err != nil {
-		return out, r.err
-	}
-	return r.out, nil
-}
-
-func TestIpmi(t *testing.T) {
-	i := &Ipmi{
-		Servers: []string{"USERID:PASSW0RD@lan(192.168.1.1)"},
-		runner:  newRunnerMock(cmdReturn, nil),
-	}
-
-	var acc testutil.Accumulator
-
-	err := i.Gather(&acc)
-
-	require.NoError(t, err)
-
-	assert.Equal(t, acc.NFields(), 266, "non-numeric measurements should be ignored")
-
-	var tests = []struct {
-		fields map[string]interface{}
-		tags   map[string]string
-	}{
-		{
-			map[string]interface{}{
-				"value":  float64(20),
-				"status": int(1),
-			},
-			map[string]string{
-				"name":   "ambient_temp",
-				"server": "192.168.1.1",
-				"unit":   "degrees_c",
-			},
-		},
-		{
-			map[string]interface{}{
-				"value":  float64(80),
-				"status": int(1),
-			},
-			map[string]string{
-				"name":   "altitude",
-				"server": "192.168.1.1",
-				"unit":   "feet",
-			},
-		},
-		{
-			map[string]interface{}{
-				"value":  float64(210),
-				"status": int(1),
-			},
-			map[string]string{
-				"name":   "avg_power",
-				"server": "192.168.1.1",
-				"unit":   "watts",
-			},
-		},
-		{
-			map[string]interface{}{
-				"value":  float64(4.9),
-				"status": int(1),
-			},
-			map[string]string{
-				"name":   "planar_5v",
-				"server": "192.168.1.1",
-				"unit":   "volts",
-			},
-		},
-		{
-			map[string]interface{}{
-				"value":  float64(3.05),
-				"status": int(1),
-			},
-			map[string]string{
-				"name":   "planar_vbat",
-				"server": "192.168.1.1",
-				"unit":   "volts",
-			},
-		},
-		{
-			map[string]interface{}{
-				"value":  float64(2610),
-				"status": int(1),
-			},
-			map[string]string{
-				"name":   "fan_1a_tach",
-				"server": "192.168.1.1",
-				"unit":   "rpm",
-			},
-		},
-		{
-			map[string]interface{}{
-				"value":  float64(1775),
-				"status": int(1),
-			},
-			map[string]string{
-				"name":   "fan_1b_tach",
-				"server": "192.168.1.1",
-				"unit":   "rpm",
-			},
-		},
-	}
-
-	for _, test := range tests {
-		acc.AssertContainsTaggedFields(t, "ipmi_sensor", test.fields, test.tags)
-	}
-}
-
-func TestIpmiConnection(t *testing.T) {
-	conn := NewConnection(serv)
-	assert.Equal(t, "USERID", conn.Username)
-	assert.Equal(t, "lan", conn.Interface)
-}
@@ -7,7 +7,6 @@ import (
 	"io/ioutil"
 	"net/http"
 	"net/url"
-	"time"

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
@@ -161,11 +160,6 @@ func (j *Jolokia) Gather(acc telegraf.Accumulator) error {

 func init() {
 	inputs.Add("jolokia", func() telegraf.Input {
-		tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
-		client := &http.Client{
-			Transport: tr,
-			Timeout:   time.Duration(4 * time.Second),
-		}
-		return &Jolokia{jClient: &JolokiaClientImpl{client: client}}
+		return &Jolokia{jClient: &JolokiaClientImpl{client: &http.Client{}}}
 	})
 }
@@ -14,11 +14,10 @@ import (
 )

 type Kafka struct {
 	ConsumerGroup   string
 	Topics          []string
 	ZookeeperPeers  []string
-	ZookeeperChroot string
 	Consumer        *consumergroup.ConsumerGroup

 	// Legacy metric buffer support
 	MetricBuffer int
@@ -49,14 +48,12 @@ var sampleConfig = `
   topics = ["telegraf"]
   ## an array of Zookeeper connection strings
   zookeeper_peers = ["localhost:2181"]
-  ## Zookeeper Chroot
-  zookeeper_chroot = "/"
   ## the name of the consumer group
   consumer_group = "telegraf_metrics_consumers"
   ## Offset (must be either "oldest" or "newest")
   offset = "oldest"

-  ## Data format to consume.
+  ## Data format to consume. This can be "json", "influx" or "graphite"
   ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
@@ -83,7 +80,6 @@ func (k *Kafka) Start(acc telegraf.Accumulator) error {
 	k.acc = acc

 	config := consumergroup.NewConfig()
-	config.Zookeeper.Chroot = k.ZookeeperChroot
 	switch strings.ToLower(k.Offset) {
 	case "oldest", "":
 		config.Offsets.Initial = sarama.OffsetOldest
@@ -10,7 +10,6 @@ import (
 	"net/url"
 	"regexp"
 	"sync"
-	"time"
 )

 const (
@@ -121,10 +120,7 @@ func (a *ChimpAPI) GetReport(campaignID string) (Report, error) {
 }

 func runChimp(api *ChimpAPI, params ReportsParams) ([]byte, error) {
-	client := &http.Client{
-		Transport: api.Transport,
-		Timeout:   time.Duration(4 * time.Second),
-	}
+	client := &http.Client{Transport: api.Transport}

 	var b bytes.Buffer
 	req, err := http.NewRequest("GET", api.url.String(), &b)
@@ -94,15 +94,14 @@ func (m *Memcached) gatherServer(
 	acc telegraf.Accumulator,
 ) error {
 	var conn net.Conn
-	var err error
 	if unix {
-		conn, err = net.DialTimeout("unix", address, defaultTimeout)
+		conn, err := net.DialTimeout("unix", address, defaultTimeout)
 		if err != nil {
 			return err
 		}
 		defer conn.Close()
 	} else {
-		_, _, err = net.SplitHostPort(address)
+		_, _, err := net.SplitHostPort(address)
 		if err != nil {
 			address = address + ":11211"
 		}
@@ -114,10 +113,6 @@ func (m *Memcached) gatherServer(
 		defer conn.Close()
 	}

-	if conn == nil {
-		return fmt.Errorf("Failed to create net connection")
-	}
-
 	// Extend connection
 	conn.SetDeadline(time.Now().Add(defaultTimeout))

@@ -10,7 +10,6 @@ import (
 	"strconv"
 	"strings"
 	"sync"
-	"time"

 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
@@ -34,16 +33,7 @@ var sampleConfig = `
   # A list of Mesos masters, default value is localhost:5050.
   masters = ["localhost:5050"]
   # Metrics groups to be collected, by default, all enabled.
-  master_collections = [
-    "resources",
-    "master",
-    "system",
-    "slaves",
-    "frameworks",
-    "messages",
-    "evqueue",
-    "registrar",
-  ]
+  master_collections = ["resources","master","system","slaves","frameworks","messages","evqueue","registrar"]
 `

 // SampleConfig returns a sample configuration block
@@ -271,15 +261,6 @@ func (m *Mesos) removeGroup(j *map[string]interface{}) {
 	}
 }

-var tr = &http.Transport{
-	ResponseHeaderTimeout: time.Duration(3 * time.Second),
-}
-
-var client = &http.Client{
-	Transport: tr,
-	Timeout:   time.Duration(4 * time.Second),
-}
-
 // This should not belong to the object
 func (m *Mesos) gatherMetrics(a string, acc telegraf.Accumulator) error {
 	var jsonOut map[string]interface{}
@@ -301,7 +282,7 @@ func (m *Mesos) gatherMetrics(a string, acc telegraf.Accumulator) error {

 	ts := strconv.Itoa(m.Timeout) + "ms"

-	resp, err := client.Get("http://" + a + "/metrics/snapshot?timeout=" + ts)
+	resp, err := http.Get("http://" + a + "/metrics/snapshot?timeout=" + ts)

 	if err != nil {
 		return err
@@ -103,7 +103,7 @@ func (m *MongoDB) gatherServer(server *Server, acc telegraf.Accumulator) error {
 			dialAddrs[0], err.Error())
 	}
 	dialInfo.Direct = true
-	dialInfo.Timeout = 5 * time.Second
+	dialInfo.Timeout = time.Duration(10) * time.Second

 	if m.Ssl.Enabled {
 		tlsConfig := &tls.Config{}
@@ -43,7 +43,7 @@ func testSetup(m *testing.M) {
 		log.Fatalf("Unable to parse URL (%s), %s\n", dialAddrs[0], err.Error())
 	}
 	dialInfo.Direct = true
-	dialInfo.Timeout = 5 * time.Second
+	dialInfo.Timeout = time.Duration(10) * time.Second
 	sess, err := mgo.DialWithInfo(dialInfo)
 	if err != nil {
 		log.Fatalf("Unable to connect to MongoDB, %s\n", err.Error())
@@ -11,7 +11,7 @@ import (
|
|||||||
"github.com/influxdata/telegraf/plugins/inputs"
|
"github.com/influxdata/telegraf/plugins/inputs"
|
||||||
"github.com/influxdata/telegraf/plugins/parsers"
|
"github.com/influxdata/telegraf/plugins/parsers"
|
||||||
|
|
||||||
"github.com/eclipse/paho.mqtt.golang"
|
"git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
|
||||||
)
|
)
|
||||||
|
|
||||||
type MQTTConsumer struct {
|
type MQTTConsumer struct {
|
||||||
@@ -26,9 +26,6 @@ type MQTTConsumer struct {
|
|||||||
// Legacy metric buffer support
|
// Legacy metric buffer support
|
||||||
MetricBuffer int
|
MetricBuffer int
|
||||||
|
|
||||||
PersistentSession bool
|
|
||||||
ClientID string `toml:"client_id"`
|
|
||||||
|
|
||||||
// Path to CA file
|
// Path to CA file
|
||||||
SSLCA string `toml:"ssl_ca"`
|
SSLCA string `toml:"ssl_ca"`
|
||||||
// Path to host cert file
|
// Path to host cert file
|
||||||
@@ -39,7 +36,7 @@ type MQTTConsumer struct {
|
|||||||
InsecureSkipVerify bool
|
InsecureSkipVerify bool
|
||||||
|
|
||||||
sync.Mutex
|
sync.Mutex
|
||||||
client mqtt.Client
|
client *mqtt.Client
|
||||||
// channel of all incoming raw mqtt messages
|
// channel of all incoming raw mqtt messages
|
||||||
in chan mqtt.Message
|
in chan mqtt.Message
|
||||||
done chan struct{}
|
done chan struct{}
|
||||||
@@ -60,13 +57,6 @@ var sampleConfig = `
|
|||||||
"sensors/#",
|
"sensors/#",
|
||||||
]
|
]
|
||||||
|
|
||||||
# if true, messages that can't be delivered while the subscriber is offline
|
|
||||||
# will be delivered when it comes back (such as on service restart).
|
|
||||||
# NOTE: if true, client_id MUST be set
|
|
||||||
persistent_session = false
|
|
||||||
# If empty, a random client ID will be generated.
|
|
||||||
client_id = ""
|
|
||||||
|
|
||||||
## username and password to connect MQTT server.
|
## username and password to connect MQTT server.
|
||||||
# username = "telegraf"
|
# username = "telegraf"
|
||||||
# password = "metricsmetricsmetricsmetrics"
|
# password = "metricsmetricsmetricsmetrics"
|
||||||
@@ -78,7 +68,7 @@ var sampleConfig = `
|
|||||||
## Use SSL but skip chain & host verification
|
## Use SSL but skip chain & host verification
|
||||||
# insecure_skip_verify = false
|
# insecure_skip_verify = false
|
||||||
|
|
||||||
## Data format to consume.
|
## Data format to consume. This can be "json", "influx" or "graphite"
|
||||||
## Each data format has it's own unique set of configuration options, read
|
## Each data format has it's own unique set of configuration options, read
|
||||||
## more about them here:
|
## more about them here:
|
||||||
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
|
||||||
@@ -101,11 +91,6 @@ func (m *MQTTConsumer) Start(acc telegraf.Accumulator) error {
 	m.Lock()
 	defer m.Unlock()

-	if m.PersistentSession && m.ClientID == "" {
-		return fmt.Errorf("ERROR MQTT Consumer: When using persistent_session" +
-			" = true, you MUST also set client_id")
-	}
-
 	m.acc = acc
 	if m.QoS > 2 || m.QoS < 0 {
 		return fmt.Errorf("MQTT Consumer, invalid QoS value: %d", m.QoS)
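The check removed in the hunk above ties two settings together: a persistent MQTT session is only meaningful with a stable client ID. A minimal sketch of the same validation rule, using a hypothetical `Config` struct in place of `MQTTConsumer`:

```go
package main

import (
	"errors"
	"fmt"
)

// Config is a hypothetical stand-in for the two MQTTConsumer settings
// involved in the validation the diff removes.
type Config struct {
	PersistentSession bool
	ClientID          string
}

// validate reproduces the rule: a persistent session requires an
// explicit client_id, since the broker keys stored state by that ID.
func validate(c Config) error {
	if c.PersistentSession && c.ClientID == "" {
		return errors.New("when using persistent_session = true, you MUST also set client_id")
	}
	return nil
}

func main() {
	fmt.Println(validate(Config{PersistentSession: true}) != nil) // missing client_id -> true
	fmt.Println(validate(Config{PersistentSession: true, ClientID: "telegraf-test"}))
}
```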
@@ -163,7 +148,7 @@ func (m *MQTTConsumer) receiver() {
 	}
 }

-func (m *MQTTConsumer) recvMessage(_ mqtt.Client, msg mqtt.Message) {
+func (m *MQTTConsumer) recvMessage(_ *mqtt.Client, msg mqtt.Message) {
 	m.in <- msg
 }

@@ -181,11 +166,7 @@ func (m *MQTTConsumer) Gather(acc telegraf.Accumulator) error {
 func (m *MQTTConsumer) createOpts() (*mqtt.ClientOptions, error) {
 	opts := mqtt.NewClientOptions()

-	if m.ClientID == "" {
-		opts.SetClientID("Telegraf-Consumer-" + internal.RandomString(5))
-	} else {
-		opts.SetClientID(m.ClientID)
-	}
+	opts.SetClientID("Telegraf-Consumer-" + internal.RandomString(5))

 	tlsCfg, err := internal.GetTLSConfig(
 		m.SSLCert, m.SSLKey, m.SSLCA, m.InsecureSkipVerify)
@@ -200,7 +181,7 @@ func (m *MQTTConsumer) createOpts() (*mqtt.ClientOptions, error) {
 	}

 	user := m.Username
-	if user != "" {
+	if user == "" {
 		opts.SetUsername(user)
 	}
 	password := m.Password
@@ -218,7 +199,6 @@ func (m *MQTTConsumer) createOpts() (*mqtt.ClientOptions, error) {
 	}
 	opts.SetAutoReconnect(true)
 	opts.SetKeepAlive(time.Second * 60)
-	opts.SetCleanSession(!m.PersistentSession)
 	return opts, nil
 }

@@ -7,9 +7,7 @@ import (
 	"github.com/influxdata/telegraf/plugins/parsers"
 	"github.com/influxdata/telegraf/testutil"

-	"github.com/stretchr/testify/assert"
-
-	"github.com/eclipse/paho.mqtt.golang"
+	"git.eclipse.org/gitroot/paho/org.eclipse.paho.mqtt.golang.git"
 )

 const (
@@ -30,52 +28,6 @@ func newTestMQTTConsumer() (*MQTTConsumer, chan mqtt.Message) {
 	return n, in
 }

-// Test that default client has random ID
-func TestRandomClientID(t *testing.T) {
-	m1 := &MQTTConsumer{
-		Servers: []string{"localhost:1883"}}
-	opts, err := m1.createOpts()
-	assert.NoError(t, err)
-
-	m2 := &MQTTConsumer{
-		Servers: []string{"localhost:1883"}}
-	opts2, err2 := m2.createOpts()
-	assert.NoError(t, err2)
-
-	assert.NotEqual(t, opts.ClientID, opts2.ClientID)
-}
-
-// Test that default client has random ID
-func TestClientID(t *testing.T) {
-	m1 := &MQTTConsumer{
-		Servers:  []string{"localhost:1883"},
-		ClientID: "telegraf-test",
-	}
-	opts, err := m1.createOpts()
-	assert.NoError(t, err)
-
-	m2 := &MQTTConsumer{
-		Servers:  []string{"localhost:1883"},
-		ClientID: "telegraf-test",
-	}
-	opts2, err2 := m2.createOpts()
-	assert.NoError(t, err2)
-
-	assert.Equal(t, "telegraf-test", opts2.ClientID)
-	assert.Equal(t, "telegraf-test", opts.ClientID)
-}
-
-// Test that Start() fails if client ID is not set but persistent is
-func TestPersistentClientIDFail(t *testing.T) {
-	m1 := &MQTTConsumer{
-		Servers:           []string{"localhost:1883"},
-		PersistentSession: true,
-	}
-	acc := testutil.Accumulator{}
-	err := m1.Start(&acc)
-	assert.Error(t, err)
-}
-
 // Test that the parser parses NATS messages into metrics
 func TestRunParser(t *testing.T) {
 	n, in := newTestMQTTConsumer()
@@ -2,10 +2,8 @@ package mysql

 import (
 	"database/sql"
-	"net/url"
 	"strconv"
 	"strings"
-	"time"

 	_ "github.com/go-sql-driver/mysql"
 	"github.com/influxdata/telegraf"
@@ -28,8 +26,6 @@ var sampleConfig = `
   servers = ["tcp(127.0.0.1:3306)/"]
 `

-var defaultTimeout = time.Second * time.Duration(5)
-
 func (m *Mysql) SampleConfig() string {
 	return sampleConfig
 }
@@ -126,10 +122,6 @@ func (m *Mysql) gatherServer(serv string, acc telegraf.Accumulator) error {
 		serv = ""
 	}

-	serv, err := dsnAddTimeout(serv)
-	if err != nil {
-		return err
-	}
 	db, err := sql.Open("mysql", serv)
 	if err != nil {
 		return err
@@ -215,27 +207,6 @@ func (m *Mysql) gatherServer(serv string, acc telegraf.Accumulator) error {
 	return nil
 }

-func dsnAddTimeout(dsn string) (string, error) {
-
-	// DSN "?timeout=5s" is not valid, but "/?timeout=5s" is valid ("" and "/"
-	// are the same DSN)
-	if dsn == "" {
-		dsn = "/"
-	}
-	u, err := url.Parse(dsn)
-	if err != nil {
-		return "", err
-	}
-	v := u.Query()
-
-	// Only override timeout if not already defined
-	if _, ok := v["timeout"]; ok == false {
-		v.Add("timeout", defaultTimeout.String())
-		u.RawQuery = v.Encode()
-	}
-	return u.String(), nil
-}
-
 func init() {
 	inputs.Add("mysql", func() telegraf.Input {
 		return &Mysql{}
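The `dsnAddTimeout` helper removed above depends only on `net/url` and the `defaultTimeout` variable; a self-contained, runnable sketch of the same logic:

```go
package main

import (
	"fmt"
	"net/url"
	"time"
)

var defaultTimeout = time.Second * time.Duration(5)

// dsnAddTimeout appends a default "timeout" parameter to a mysql DSN,
// leaving any explicitly configured timeout untouched.
// "" and "/" are the same DSN, but only "/?timeout=5s" is valid.
func dsnAddTimeout(dsn string) (string, error) {
	if dsn == "" {
		dsn = "/"
	}
	u, err := url.Parse(dsn)
	if err != nil {
		return "", err
	}
	v := u.Query()
	// Only add the timeout if it is not already defined.
	if _, ok := v["timeout"]; !ok {
		v.Add("timeout", defaultTimeout.String())
		u.RawQuery = v.Encode()
	}
	return u.String(), nil
}

func main() {
	s, _ := dsnAddTimeout("")
	fmt.Println(s) // /?timeout=5s
}
```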
@@ -84,34 +84,3 @@ func TestMysqlParseDSN(t *testing.T) {
 		}
 	}
 }
-
-func TestMysqlDNSAddTimeout(t *testing.T) {
-	tests := []struct {
-		input  string
-		output string
-	}{
-		{
-			"",
-			"/?timeout=5s",
-		},
-		{
-			"tcp(192.168.1.1:3306)/",
-			"tcp(192.168.1.1:3306)/?timeout=5s",
-		},
-		{
-			"root:passwd@tcp(192.168.1.1:3306)/?tls=false",
-			"root:passwd@tcp(192.168.1.1:3306)/?timeout=5s&tls=false",
-		},
-		{
-			"root:passwd@tcp(192.168.1.1:3306)/?tls=false&timeout=10s",
-			"root:passwd@tcp(192.168.1.1:3306)/?tls=false&timeout=10s",
-		},
-	}
-
-	for _, test := range tests {
-		output, _ := dsnAddTimeout(test.input)
-		if output != test.output {
-			t.Errorf("Expected %s, got %s\n", test.output, output)
-		}
-	}
-}
@@ -55,7 +55,7 @@ var sampleConfig = `
   ## name a queue group
   queue_group = "telegraf_consumers"

-  ## Data format to consume.
+  ## Data format to consume. This can be "json", "influx" or "graphite"
   ## Each data format has it's own unique set of configuration options, read
   ## more about them here:
   ## https://github.com/influxdata/telegraf/blob/master/docs/DATA_FORMATS_INPUT.md
@@ -52,7 +52,7 @@ It can also check response text.
 ### Tags:

 - All measurements have the following tags:
-    - server
+    - host
     - port
     - protocol

@@ -60,7 +60,7 @@ It can also check response text.

 ```
 $ ./telegraf -config telegraf.conf -input-filter net_response -test
-net_response,server=192.168.2.2,port=22,protocol=tcp response_time=0.18070360500000002,string_found=true 1454785464182527094
-net_response,server=192.168.2.2,port=2222,protocol=tcp response_time=1.090124776,string_found=false 1454784433658942325
+net_response,host=127.0.0.1,port=22,protocol=tcp response_time=0.18070360500000002,string_found=true 1454785464182527094
+net_response,host=127.0.0.1,port=2222,protocol=tcp response_time=1.090124776,string_found=false 1454784433658942325

 ```
@@ -169,7 +169,7 @@ func (c *NetResponse) Gather(acc telegraf.Accumulator) error {
 		return errors.New("Bad port")
 	}
 	// Prepare data
-	tags := map[string]string{"server": host, "port": port}
+	tags := map[string]string{"host": host, "port": port}
 	var fields map[string]interface{}
 	// Gather data
 	if c.Protocol == "tcp" {
@@ -69,7 +69,7 @@ func TestTCPOK1(t *testing.T) {
 			"string_found":  true,
 			"response_time": 1.0,
 		},
-		map[string]string{"server": "127.0.0.1",
+		map[string]string{"host": "127.0.0.1",
 			"port":     "2004",
 			"protocol": "tcp",
 		},
@@ -109,7 +109,7 @@ func TestTCPOK2(t *testing.T) {
 			"string_found":  false,
 			"response_time": 1.0,
 		},
-		map[string]string{"server": "127.0.0.1",
+		map[string]string{"host": "127.0.0.1",
 			"port":     "2004",
 			"protocol": "tcp",
 		},
@@ -164,7 +164,7 @@ func TestUDPOK1(t *testing.T) {
 			"string_found":  true,
 			"response_time": 1.0,
 		},
-		map[string]string{"server": "127.0.0.1",
+		map[string]string{"host": "127.0.0.1",
 			"port":     "2004",
 			"protocol": "udp",
 		},
@@ -1,47 +0,0 @@
-# Telegraf Plugin: Nginx
-
-### Configuration:
-
-```
-# Read Nginx's basic status information (ngx_http_stub_status_module)
-[[inputs.nginx]]
-  ## An array of Nginx stub_status URI to gather stats.
-  urls = ["http://localhost/server_status"]
-```
-
-### Measurements & Fields:
-
-- Measurement
-    - accepts
-    - active
-    - handled
-    - reading
-    - requests
-    - waiting
-    - writing
-
-### Tags:
-
-- All measurements have the following tags:
-    - port
-    - server
-
-### Example Output:
-
-Using this configuration:
-```
-[[inputs.nginx]]
-  ## An array of Nginx stub_status URI to gather stats.
-  urls = ["http://localhost/status"]
-```
-
-When run with:
-```
-./telegraf -config telegraf.conf -input-filter nginx -test
-```
-
-It produces:
-```
-* Plugin: nginx, Collection 1
-> nginx,port=80,server=localhost accepts=605i,active=2i,handled=605i,reading=0i,requests=12132i,waiting=1i,writing=1i 1456690994701784331
-```
@@ -58,10 +58,7 @@ var tr = &http.Transport{
 	ResponseHeaderTimeout: time.Duration(3 * time.Second),
 }

-var client = &http.Client{
-	Transport: tr,
-	Timeout:   time.Duration(4 * time.Second),
-}
+var client = &http.Client{Transport: tr}

 func (n *Nginx) gatherUrl(addr *url.URL, acc telegraf.Accumulator) error {
 	resp, err := client.Get(addr.String())
@@ -84,10 +84,7 @@ var tr = &http.Transport{
 	ResponseHeaderTimeout: time.Duration(3 * time.Second),
 }

-var client = &http.Client{
-	Transport: tr,
-	Timeout:   time.Duration(4 * time.Second),
-}
+var client = &http.Client{Transport: tr}

 func (n *NSQ) gatherEndpoint(e string, acc telegraf.Accumulator) error {
 	u, err := buildURL(e)
@@ -1,60 +0,0 @@
-# ntpq Input Plugin
-
-Get standard NTP query metrics, requires ntpq executable.
-
-Below is the documentation of the various headers returned from the NTP query
-command when running `ntpq -p`.
-
-- remote – The remote peer or server being synced to. “LOCAL” is this local host
-(included in case there are no remote peers or servers available);
-- refid – Where or what the remote peer or server is itself synchronised to;
-- st (stratum) – The remote peer or server Stratum
-- t (type) – Type (u: unicast or manycast client, b: broadcast or multicast client,
-l: local reference clock, s: symmetric peer, A: manycast server,
-B: broadcast server, M: multicast server, see “Automatic Server Discovery“);
-- when – When last polled (seconds ago, “h” hours ago, or “d” days ago);
-- poll – Polling frequency: rfc5905 suggests this ranges in NTPv4 from 4 (16s)
-to 17 (36h) (log2 seconds), however observation suggests the actual displayed
-value is seconds for a much smaller range of 64 (26) to 1024 (210) seconds;
-- reach – An 8-bit left-shift shift register value recording polls (bit set =
-successful, bit reset = fail) displayed in octal;
-- delay – Round trip communication delay to the remote peer or server (milliseconds);
-- offset – Mean offset (phase) in the times reported between this local host and
-the remote peer or server (RMS, milliseconds);
-- jitter – Mean deviation (jitter) in the time reported for that remote peer or
-server (RMS of difference of multiple time samples, milliseconds);
-
-### Configuration:
-
-```toml
-# Get standard NTP query metrics, requires ntpq executable
-[[inputs.ntpq]]
-  ## If false, set the -n ntpq flag. Can reduce metric gather times.
-  dns_lookup = true
-```
-
-### Measurements & Fields:
-
-- ntpq
-    - delay (float, milliseconds)
-    - jitter (float, milliseconds)
-    - offset (float, milliseconds)
-    - poll (int, seconds)
-    - reach (int)
-    - when (int, seconds)
-
-### Tags:
-
-- All measurements have the following tags:
-    - refid
-    - remote
-    - type
-    - stratum
-
-### Example Output:
-
-```
-$ telegraf -config ~/ws/telegraf.conf -input-filter ntpq -test
-* Plugin: ntpq, Collection 1
-> ntpq,refid=.GPSs.,remote=*time.apple.com,stratum=1,type=u delay=91.797,jitter=3.735,offset=12.841,poll=64i,reach=377i,when=35i 1457960478909556134
-```
@@ -1,202 +0,0 @@
-// +build !windows
-
-package ntpq
-
-import (
-	"bufio"
-	"bytes"
-	"log"
-	"os/exec"
-	"strconv"
-	"strings"
-
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
-)
-
-// Mapping of ntpq header names to tag keys
-var tagHeaders map[string]string = map[string]string{
-	"remote": "remote",
-	"refid":  "refid",
-	"st":     "stratum",
-	"t":      "type",
-}
-
-// Mapping of the ntpq tag key to the index in the command output
-var tagI map[string]int = map[string]int{
-	"remote":  -1,
-	"refid":   -1,
-	"stratum": -1,
-	"type":    -1,
-}
-
-// Mapping of float metrics to their index in the command output
-var floatI map[string]int = map[string]int{
-	"delay":  -1,
-	"offset": -1,
-	"jitter": -1,
-}
-
-// Mapping of int metrics to their index in the command output
-var intI map[string]int = map[string]int{
-	"when":  -1,
-	"poll":  -1,
-	"reach": -1,
-}
-
-type NTPQ struct {
-	runQ func() ([]byte, error)
-
-	DNSLookup bool `toml:"dns_lookup"`
-}
-
-func (n *NTPQ) Description() string {
-	return "Get standard NTP query metrics, requires ntpq executable."
-}
-
-func (n *NTPQ) SampleConfig() string {
-	return `
-	## If false, set the -n ntpq flag. Can reduce metric gather time.
-	dns_lookup = true
-`
-}
-
-func (n *NTPQ) Gather(acc telegraf.Accumulator) error {
-	out, err := n.runQ()
-	if err != nil {
-		return err
-	}
-
-	lineCounter := 0
-	scanner := bufio.NewScanner(bytes.NewReader(out))
-	for scanner.Scan() {
-		fields := strings.Fields(scanner.Text())
-		if len(fields) < 2 {
-			continue
-		}
-
-		// If lineCounter == 0, then this is the header line
-		if lineCounter == 0 {
-			for i, field := range fields {
-				// Check if field is a tag:
-				if tagKey, ok := tagHeaders[field]; ok {
-					tagI[tagKey] = i
-					continue
-				}
-
-				// check if field is a float metric:
-				if _, ok := floatI[field]; ok {
-					floatI[field] = i
-					continue
-				}
-
-				// check if field is an int metric:
-				if _, ok := intI[field]; ok {
-					intI[field] = i
-					continue
-				}
-			}
-		} else {
-			tags := make(map[string]string)
-			mFields := make(map[string]interface{})
-
-			// Get tags from output
-			for key, index := range tagI {
-				if index == -1 {
-					continue
-				}
-				tags[key] = fields[index]
-			}
-
-			// Get integer metrics from output
-			for key, index := range intI {
-				if index == -1 {
-					continue
-				}
-
-				if key == "when" {
-					when := fields[index]
-					switch {
-					case strings.HasSuffix(when, "h"):
-						m, err := strconv.Atoi(strings.TrimSuffix(fields[index], "h"))
-						if err != nil {
-							log.Printf("ERROR ntpq: parsing int: %s", fields[index])
-							continue
-						}
-						// seconds in an hour
-						mFields[key] = int64(m) * 360
-						continue
-					case strings.HasSuffix(when, "d"):
-						m, err := strconv.Atoi(strings.TrimSuffix(fields[index], "d"))
-						if err != nil {
-							log.Printf("ERROR ntpq: parsing int: %s", fields[index])
-							continue
-						}
-						// seconds in a day
-						mFields[key] = int64(m) * 86400
-						continue
-					case strings.HasSuffix(when, "m"):
-						m, err := strconv.Atoi(strings.TrimSuffix(fields[index], "m"))
-						if err != nil {
-							log.Printf("ERROR ntpq: parsing int: %s", fields[index])
-							continue
-						}
-						// seconds in a day
-						mFields[key] = int64(m) * 60
-						continue
-					}
-				}
-
-				m, err := strconv.Atoi(fields[index])
-				if err != nil {
-					log.Printf("ERROR ntpq: parsing int: %s", fields[index])
-					continue
-				}
-				mFields[key] = int64(m)
-			}
-
-			// get float metrics from output
-			for key, index := range floatI {
-				if index == -1 {
-					continue
-				}
-
-				m, err := strconv.ParseFloat(fields[index], 64)
-				if err != nil {
-					log.Printf("ERROR ntpq: parsing float: %s", fields[index])
-					continue
-				}
-				mFields[key] = m
-			}
-
-			acc.AddFields("ntpq", mFields, tags)
-		}
-
-		lineCounter++
-	}
-	return nil
-}
-
-func (n *NTPQ) runq() ([]byte, error) {
-	bin, err := exec.LookPath("ntpq")
-	if err != nil {
-		return nil, err
-	}
-
-	var cmd *exec.Cmd
-	if n.DNSLookup {
-		cmd = exec.Command(bin, "-p")
-	} else {
-		cmd = exec.Command(bin, "-p", "-n")
-	}
-
-	return cmd.Output()
-}
-
-func init() {
-	inputs.Add("ntpq", func() telegraf.Input {
-		n := &NTPQ{}
-		n.runQ = n.runq
-		return n
-	})
-}
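The `when` handling in the removed plugin above converts suffixed cells such as `2d` or `25m` into seconds. A standalone sketch of that conversion; note it uses 3600 seconds per hour, whereas the removed code multiplies hours by 360 despite its "seconds in an hour" comment:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseWhen converts an ntpq "when" cell ("101", "25m", "3h", "2d")
// into seconds. Plain numbers are already seconds.
func parseWhen(when string) (int64, error) {
	mult := int64(1)
	switch {
	case strings.HasSuffix(when, "d"):
		mult, when = 86400, strings.TrimSuffix(when, "d")
	case strings.HasSuffix(when, "h"):
		mult, when = 3600, strings.TrimSuffix(when, "h")
	case strings.HasSuffix(when, "m"):
		mult, when = 60, strings.TrimSuffix(when, "m")
	}
	n, err := strconv.Atoi(when)
	if err != nil {
		return 0, err
	}
	return int64(n) * mult, nil
}

func main() {
	for _, w := range []string{"101", "2m", "2h", "2d"} {
		s, _ := parseWhen(w)
		fmt.Println(w, s)
	}
}
```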
@@ -1,422 +0,0 @@
-// +build !windows
-
-package ntpq
-
-import (
-	"fmt"
-	"testing"
-
-	"github.com/influxdata/telegraf/testutil"
-
-	"github.com/stretchr/testify/assert"
-)
-
-func TestSingleNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(singleNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"when":   int64(101),
-		"poll":   int64(256),
-		"reach":  int64(37),
-		"delay":  float64(51.016),
-		"offset": float64(233.010),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote":  "*uschi5-ntp-002.",
-		"refid":   "10.177.80.46",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestBadIntNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(badIntParseNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"when":   int64(101),
-		"reach":  int64(37),
-		"delay":  float64(51.016),
-		"offset": float64(233.010),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote":  "*uschi5-ntp-002.",
-		"refid":   "10.177.80.46",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestBadFloatNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(badFloatParseNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"when":   int64(2),
-		"poll":   int64(256),
-		"reach":  int64(37),
-		"delay":  float64(51.016),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote":  "*uschi5-ntp-002.",
-		"refid":   "10.177.80.46",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestDaysNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(whenDaysNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"when":   int64(172800),
-		"poll":   int64(256),
-		"reach":  int64(37),
-		"delay":  float64(51.016),
-		"offset": float64(233.010),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote":  "*uschi5-ntp-002.",
-		"refid":   "10.177.80.46",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestHoursNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(whenHoursNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"when":   int64(720),
-		"poll":   int64(256),
-		"reach":  int64(37),
-		"delay":  float64(51.016),
-		"offset": float64(233.010),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote":  "*uschi5-ntp-002.",
-		"refid":   "10.177.80.46",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestMinutesNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(whenMinutesNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"when":   int64(120),
-		"poll":   int64(256),
-		"reach":  int64(37),
-		"delay":  float64(51.016),
-		"offset": float64(233.010),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote":  "*uschi5-ntp-002.",
-		"refid":   "10.177.80.46",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestBadWhenNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(whenBadNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"poll":   int64(256),
-		"reach":  int64(37),
-		"delay":  float64(51.016),
-		"offset": float64(233.010),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote":  "*uschi5-ntp-002.",
-		"refid":   "10.177.80.46",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestMultiNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(multiNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"delay":  float64(54.033),
-		"jitter": float64(449514),
-		"offset": float64(243.426),
-		"poll":   int64(1024),
-		"reach":  int64(377),
-		"when":   int64(740),
-	}
-	tags := map[string]string{
-		"refid":   "10.177.80.37",
-		"remote":  "83.137.98.96",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-
-	fields = map[string]interface{}{
-		"delay":  float64(60.785),
-		"jitter": float64(449539),
-		"offset": float64(232.597),
-		"poll":   int64(1024),
-		"reach":  int64(377),
-		"when":   int64(739),
-	}
-	tags = map[string]string{
-		"refid":   "10.177.80.37",
-		"remote":  "81.7.16.52",
-		"stratum": "2",
-		"type":    "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestBadHeaderNTPQ(t *testing.T) {
-	resetVars()
-	tt := tester{
-		ret: []byte(badHeaderNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"when":   int64(101),
-		"poll":   int64(256),
-		"reach":  int64(37),
-		"delay":  float64(51.016),
-		"offset": float64(233.010),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote": "*uschi5-ntp-002.",
-		"refid":  "10.177.80.46",
-		"type":   "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestMissingDelayColumnNTPQ(t *testing.T) {
-	resetVars()
-	tt := tester{
-		ret: []byte(missingDelayNTPQ),
-		err: nil,
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.NoError(t, n.Gather(&acc))
-
-	fields := map[string]interface{}{
-		"when":   int64(101),
-		"poll":   int64(256),
-		"reach":  int64(37),
-		"offset": float64(233.010),
-		"jitter": float64(17.462),
-	}
-	tags := map[string]string{
-		"remote": "*uschi5-ntp-002.",
-		"refid":  "10.177.80.46",
-		"type":   "u",
-	}
-	acc.AssertContainsTaggedFields(t, "ntpq", fields, tags)
-}
-
-func TestFailedNTPQ(t *testing.T) {
-	tt := tester{
-		ret: []byte(singleNTPQ),
-		err: fmt.Errorf("Test failure"),
-	}
-	n := &NTPQ{
-		runQ: tt.runqTest,
-	}
-
-	acc := testutil.Accumulator{}
-	assert.Error(t, n.Gather(&acc))
-}
-
-type tester struct {
-	ret []byte
-	err error
-}
-
-func (t *tester) runqTest() ([]byte, error) {
-	return t.ret, t.err
-}
-
-func resetVars() {
-	// Mapping of ntpq header names to tag keys
-	tagHeaders = map[string]string{
-		"remote": "remote",
-		"refid":  "refid",
-		"st":     "stratum",
-		"t":      "type",
-	}
-
-	// Mapping of the ntpq tag key to the index in the command output
-	tagI = map[string]int{
-		"remote":  -1,
-		"refid":   -1,
-		"stratum": -1,
-		"type":    -1,
-	}
-
-	// Mapping of float metrics to their index in the command output
-	floatI = map[string]int{
-		"delay":  -1,
-		"offset": -1,
-		"jitter": -1,
-	}
-
-	// Mapping of int metrics to their index in the command output
-	intI = map[string]int{
-		"when":  -1,
-		"poll":  -1,
-		"reach": -1,
-	}
-}
-
-var singleNTPQ = ` remote refid st t when poll reach delay offset jitter
-==============================================================================
-*uschi5-ntp-002. 10.177.80.46 2 u 101 256 37 51.016 233.010 17.462
-`
-
-var badHeaderNTPQ = `remote refid foobar t when poll reach delay offset jitter
-==============================================================================
|
|
||||||
*uschi5-ntp-002. 10.177.80.46 2 u 101 256 37 51.016 233.010 17.462
|
|
||||||
`
|
|
||||||
|
|
||||||
var missingDelayNTPQ = `remote refid foobar t when poll reach offset jitter
|
|
||||||
==============================================================================
|
|
||||||
*uschi5-ntp-002. 10.177.80.46 2 u 101 256 37 233.010 17.462
|
|
||||||
`
|
|
||||||
|
|
||||||
var whenDaysNTPQ = ` remote refid st t when poll reach delay offset jitter
|
|
||||||
==============================================================================
|
|
||||||
*uschi5-ntp-002. 10.177.80.46 2 u 2d 256 37 51.016 233.010 17.462
|
|
||||||
`
|
|
||||||
|
|
||||||
var whenHoursNTPQ = ` remote refid st t when poll reach delay offset jitter
|
|
||||||
==============================================================================
|
|
||||||
*uschi5-ntp-002. 10.177.80.46 2 u 2h 256 37 51.016 233.010 17.462
|
|
||||||
`
|
|
||||||
|
|
||||||
var whenMinutesNTPQ = ` remote refid st t when poll reach delay offset jitter
|
|
||||||
==============================================================================
|
|
||||||
*uschi5-ntp-002. 10.177.80.46 2 u 2m 256 37 51.016 233.010 17.462
|
|
||||||
`
|
|
||||||
|
|
||||||
var whenBadNTPQ = ` remote refid st t when poll reach delay offset jitter
|
|
||||||
==============================================================================
|
|
||||||
*uschi5-ntp-002. 10.177.80.46 2 u 2q 256 37 51.016 233.010 17.462
|
|
||||||
`
|
|
||||||
|
|
||||||
var badFloatParseNTPQ = ` remote refid st t when poll reach delay offset jitter
|
|
||||||
==============================================================================
|
|
||||||
*uschi5-ntp-002. 10.177.80.46 2 u 2 256 37 51.016 foobar 17.462
|
|
||||||
`
|
|
||||||
|
|
||||||
var badIntParseNTPQ = ` remote refid st t when poll reach delay offset jitter
|
|
||||||
==============================================================================
|
|
||||||
*uschi5-ntp-002. 10.177.80.46 2 u 101 foobar 37 51.016 233.010 17.462
|
|
||||||
`
|
|
||||||
|
|
||||||
var multiNTPQ = ` remote refid st t when poll reach delay offset jitter
|
|
||||||
==============================================================================
|
|
||||||
83.137.98.96 10.177.80.37 2 u 740 1024 377 54.033 243.426 449514.
|
|
||||||
81.7.16.52 10.177.80.37 2 u 739 1024 377 60.785 232.597 449539.
|
|
||||||
131.188.3.221 10.177.80.37 2 u 783 1024 377 111.820 261.921 449528.
|
|
||||||
5.9.29.107 10.177.80.37 2 u 703 1024 377 205.704 160.406 449602.
|
|
||||||
91.189.94.4 10.177.80.37 2 u 673 1024 377 143.047 274.726 449445.
|
|
||||||
`
|
|
||||||
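The `resetVars` index maps above locate each tag and metric by the position of its column in the `ntpq -p` header row. A minimal, self-contained sketch of that header-index idea (the helper name `parsePeer` is hypothetical, not the plugin's actual code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parsePeer finds each column by header name, then pulls tags and
// typed metrics out of a single peer line by index — the same
// column-mapping idea the resetVars maps encode.
func parsePeer(header, line string) (map[string]string, map[string]interface{}) {
	cols := strings.Fields(header)
	vals := strings.Fields(line)
	tagKeys := map[string]string{"remote": "remote", "refid": "refid", "st": "stratum", "t": "type"}
	tags := map[string]string{}
	fields := map[string]interface{}{}
	for i, col := range cols {
		if i >= len(vals) {
			break
		}
		if key, ok := tagKeys[col]; ok {
			tags[key] = vals[i]
			continue
		}
		switch col {
		case "when", "poll", "reach":
			if v, err := strconv.ParseInt(vals[i], 10, 64); err == nil {
				fields[col] = v
			}
		case "delay", "offset", "jitter":
			if v, err := strconv.ParseFloat(vals[i], 64); err == nil {
				fields[col] = v
			}
		}
	}
	return tags, fields
}

func main() {
	header := "remote refid st t when poll reach delay offset jitter"
	line := "*uschi5-ntp-002. 10.177.80.46 2 u 101 256 37 51.016 233.010 17.462"
	tags, fields := parsePeer(header, line)
	fmt.Println(tags["stratum"], fields["poll"], fields["offset"])
}
```

Because lookup is driven by the header row, a reordered or missing column (as in `missingDelayNTPQ` above) degrades gracefully instead of misattributing values.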
@@ -1,3 +0,0 @@
// +build windows

package ntpq
@@ -1,331 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package phpfpm

// This file implements FastCGI from the perspective of a child process.

import (
	"errors"
	"fmt"
	"io"
	"io/ioutil"
	"net"
	"net/http"
	"net/http/cgi"
	"os"
	"strings"
	"sync"
	"time"
)

// request holds the state for an in-progress request. As soon as it's complete,
// it's converted to an http.Request.
type request struct {
	pw        *io.PipeWriter
	reqId     uint16
	params    map[string]string
	buf       [1024]byte
	rawParams []byte
	keepConn  bool
}

func newRequest(reqId uint16, flags uint8) *request {
	r := &request{
		reqId:    reqId,
		params:   map[string]string{},
		keepConn: flags&flagKeepConn != 0,
	}
	r.rawParams = r.buf[:0]
	return r
}

// parseParams reads an encoded []byte into Params.
func (r *request) parseParams() {
	text := r.rawParams
	r.rawParams = nil
	for len(text) > 0 {
		keyLen, n := readSize(text)
		if n == 0 {
			return
		}
		text = text[n:]
		valLen, n := readSize(text)
		if n == 0 {
			return
		}
		text = text[n:]
		if int(keyLen)+int(valLen) > len(text) {
			return
		}
		key := readString(text, keyLen)
		text = text[keyLen:]
		val := readString(text, valLen)
		text = text[valLen:]
		r.params[key] = val
	}
}

// response implements http.ResponseWriter.
type response struct {
	req         *request
	header      http.Header
	w           *bufWriter
	wroteHeader bool
}

func newResponse(c *child, req *request) *response {
	return &response{
		req:    req,
		header: http.Header{},
		w:      newWriter(c.conn, typeStdout, req.reqId),
	}
}

func (r *response) Header() http.Header {
	return r.header
}

func (r *response) Write(data []byte) (int, error) {
	if !r.wroteHeader {
		r.WriteHeader(http.StatusOK)
	}
	return r.w.Write(data)
}

func (r *response) WriteHeader(code int) {
	if r.wroteHeader {
		return
	}
	r.wroteHeader = true
	if code == http.StatusNotModified {
		// Must not have body.
		r.header.Del("Content-Type")
		r.header.Del("Content-Length")
		r.header.Del("Transfer-Encoding")
	} else if r.header.Get("Content-Type") == "" {
		r.header.Set("Content-Type", "text/html; charset=utf-8")
	}

	if r.header.Get("Date") == "" {
		r.header.Set("Date", time.Now().UTC().Format(http.TimeFormat))
	}

	fmt.Fprintf(r.w, "Status: %d %s\r\n", code, http.StatusText(code))
	r.header.Write(r.w)
	r.w.WriteString("\r\n")
}

func (r *response) Flush() {
	if !r.wroteHeader {
		r.WriteHeader(http.StatusOK)
	}
	r.w.Flush()
}

func (r *response) Close() error {
	r.Flush()
	return r.w.Close()
}

type child struct {
	conn    *conn
	handler http.Handler

	mu       sync.Mutex          // protects requests:
	requests map[uint16]*request // keyed by request ID
}

func newChild(rwc io.ReadWriteCloser, handler http.Handler) *child {
	return &child{
		conn:     newConn(rwc),
		handler:  handler,
		requests: make(map[uint16]*request),
	}
}

func (c *child) serve() {
	defer c.conn.Close()
	defer c.cleanUp()
	var rec record
	for {
		if err := rec.read(c.conn.rwc); err != nil {
			return
		}
		if err := c.handleRecord(&rec); err != nil {
			return
		}
	}
}

var errCloseConn = errors.New("fcgi: connection should be closed")

var emptyBody = ioutil.NopCloser(strings.NewReader(""))

// ErrRequestAborted is returned by Read when a handler attempts to read the
// body of a request that has been aborted by the web server.
var ErrRequestAborted = errors.New("fcgi: request aborted by web server")

// ErrConnClosed is returned by Read when a handler attempts to read the body of
// a request after the connection to the web server has been closed.
var ErrConnClosed = errors.New("fcgi: connection to web server closed")

func (c *child) handleRecord(rec *record) error {
	c.mu.Lock()
	req, ok := c.requests[rec.h.Id]
	c.mu.Unlock()
	if !ok && rec.h.Type != typeBeginRequest && rec.h.Type != typeGetValues {
		// The spec says to ignore unknown request IDs.
		return nil
	}

	switch rec.h.Type {
	case typeBeginRequest:
		if req != nil {
			// The server is trying to begin a request with the same ID
			// as an in-progress request. This is an error.
			return errors.New("fcgi: received ID that is already in-flight")
		}

		var br beginRequest
		if err := br.read(rec.content()); err != nil {
			return err
		}
		if br.role != roleResponder {
			c.conn.writeEndRequest(rec.h.Id, 0, statusUnknownRole)
			return nil
		}
		req = newRequest(rec.h.Id, br.flags)
		c.mu.Lock()
		c.requests[rec.h.Id] = req
		c.mu.Unlock()
		return nil
	case typeParams:
		// NOTE(eds): Technically a key-value pair can straddle the boundary
		// between two packets. We buffer until we've received all parameters.
		if len(rec.content()) > 0 {
			req.rawParams = append(req.rawParams, rec.content()...)
			return nil
		}
		req.parseParams()
		return nil
	case typeStdin:
		content := rec.content()
		if req.pw == nil {
			var body io.ReadCloser
			if len(content) > 0 {
				// body could be an io.LimitReader, but it shouldn't matter
				// as long as both sides are behaving.
				body, req.pw = io.Pipe()
			} else {
				body = emptyBody
			}
			go c.serveRequest(req, body)
		}
		if len(content) > 0 {
			// TODO(eds): This blocks until the handler reads from the pipe.
			// If the handler takes a long time, it might be a problem.
			req.pw.Write(content)
		} else if req.pw != nil {
			req.pw.Close()
		}
		return nil
	case typeGetValues:
		values := map[string]string{"FCGI_MPXS_CONNS": "1"}
		c.conn.writePairs(typeGetValuesResult, 0, values)
		return nil
	case typeData:
		// If the filter role is implemented, read the data stream here.
		return nil
	case typeAbortRequest:
		c.mu.Lock()
		delete(c.requests, rec.h.Id)
		c.mu.Unlock()
		c.conn.writeEndRequest(rec.h.Id, 0, statusRequestComplete)
		if req.pw != nil {
			req.pw.CloseWithError(ErrRequestAborted)
		}
		if !req.keepConn {
			// connection will close upon return
			return errCloseConn
		}
		return nil
	default:
		b := make([]byte, 8)
		b[0] = byte(rec.h.Type)
		c.conn.writeRecord(typeUnknownType, 0, b)
		return nil
	}
}

func (c *child) serveRequest(req *request, body io.ReadCloser) {
	r := newResponse(c, req)
	httpReq, err := cgi.RequestFromMap(req.params)
	if err != nil {
		// there was an error reading the request
		r.WriteHeader(http.StatusInternalServerError)
		c.conn.writeRecord(typeStderr, req.reqId, []byte(err.Error()))
	} else {
		httpReq.Body = body
		c.handler.ServeHTTP(r, httpReq)
	}
	r.Close()
	c.mu.Lock()
	delete(c.requests, req.reqId)
	c.mu.Unlock()
	c.conn.writeEndRequest(req.reqId, 0, statusRequestComplete)

	// Consume the entire body, so the host isn't still writing to
	// us when we close the socket below in the !keepConn case,
	// otherwise we'd send a RST. (golang.org/issue/4183)
	// TODO(bradfitz): also bound this copy in time. Or send
	// some sort of abort request to the host, so the host
	// can properly cut off the client sending all the data.
	// For now just bound it a little and
	io.CopyN(ioutil.Discard, body, 100<<20)
	body.Close()

	if !req.keepConn {
		c.conn.Close()
	}
}

func (c *child) cleanUp() {
	c.mu.Lock()
	defer c.mu.Unlock()
	for _, req := range c.requests {
		if req.pw != nil {
			// race with call to Close in c.serveRequest doesn't matter because
			// Pipe(Reader|Writer).Close are idempotent
			req.pw.CloseWithError(ErrConnClosed)
		}
	}
}

// Serve accepts incoming FastCGI connections on the listener l, creating a new
// goroutine for each. The goroutine reads requests and then calls handler
// to reply to them.
// If l is nil, Serve accepts connections from os.Stdin.
// If handler is nil, http.DefaultServeMux is used.
func Serve(l net.Listener, handler http.Handler) error {
	if l == nil {
		var err error
		l, err = net.FileListener(os.Stdin)
		if err != nil {
			return err
		}
		defer l.Close()
	}
	if handler == nil {
		handler = http.DefaultServeMux
	}
	for {
		rw, err := l.Accept()
		if err != nil {
			return err
		}
		c := newChild(rw, handler)
		go c.serve()
	}
}
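Every record the child reads above starts with a fixed 8-byte FastCGI header: protocol version, record type, a big-endian 16-bit request ID, a big-endian 16-bit content length, a padding length, and a reserved byte. A minimal sketch of that framing (the `frame` helper is illustrative, not this package's `record` type, and it writes no padding):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// frame builds one FastCGI record: the 8-byte header followed by the
// content. Header layout: version, type, reqId (BE16), length (BE16),
// padding length, reserved.
func frame(recType uint8, reqId uint16, content []byte) []byte {
	b := make([]byte, 8+len(content))
	b[0] = 1 // FastCGI protocol version 1
	b[1] = recType
	binary.BigEndian.PutUint16(b[2:4], reqId)
	binary.BigEndian.PutUint16(b[4:6], uint16(len(content)))
	// b[6] (padding) and b[7] (reserved) stay zero
	copy(b[8:], content)
	return b
}

func main() {
	rec := frame(5, 1, []byte("hi")) // 5 = FCGI_STDIN
	fmt.Printf("% X\n", rec)         // prints: 01 05 00 01 00 02 00 00 68 69
}
```

Because the content length field is 16 bits, any stream longer than 65535 bytes must be split across multiple records, which is why `parseParams` above buffers until the zero-length terminating record arrives.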
@@ -1,86 +0,0 @@
package phpfpm

import (
	"errors"
	"io"
	"net"
	"strconv"
	"strings"
)

// Create an fcgi client
func newFcgiClient(h string, args ...interface{}) (*conn, error) {
	var con net.Conn
	if len(args) != 1 {
		return nil, errors.New("fcgi: not enough params")
	}

	var err error
	switch args[0].(type) {
	case int:
		addr := h + ":" + strconv.FormatInt(int64(args[0].(int)), 10)
		con, err = net.Dial("tcp", addr)
	case string:
		laddr := net.UnixAddr{Name: args[0].(string), Net: h}
		con, err = net.DialUnix(h, nil, &laddr)
	default:
		err = errors.New("fcgi: we only accept int (port) or string (socket) params.")
	}
	fcgi := &conn{
		rwc: con,
	}

	return fcgi, err
}

func (client *conn) Request(
	env map[string]string,
	requestData string,
) (retout []byte, reterr []byte, err error) {
	defer client.rwc.Close()
	var reqId uint16 = 1

	err = client.writeBeginRequest(reqId, uint16(roleResponder), 0)
	if err != nil {
		return
	}

	err = client.writePairs(typeParams, reqId, env)
	if err != nil {
		return
	}

	if len(requestData) > 0 {
		if err = client.writeRecord(typeStdin, reqId, []byte(requestData)); err != nil {
			return
		}
	}

	rec := &record{}
	var err1 error

	// receive until EOF or FCGI_END_REQUEST
READ_LOOP:
	for {
		err1 = rec.read(client.rwc)
		if err1 != nil && strings.Contains(err1.Error(), "use of closed network connection") {
			if err1 != io.EOF {
				err = err1
			}
			break
		}

		switch {
		case rec.h.Type == typeStdout:
			retout = append(retout, rec.content()...)
		case rec.h.Type == typeStderr:
			reterr = append(reterr, rec.content()...)
		case rec.h.Type == typeEndRequest:
			fallthrough
		default:
			break READ_LOOP
		}
	}

	return
}
@@ -1,280 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package phpfpm

import (
	"bytes"
	"errors"
	"io"
	"io/ioutil"
	"net/http"
	"testing"
)

var sizeTests = []struct {
	size  uint32
	bytes []byte
}{
	{0, []byte{0x00}},
	{127, []byte{0x7F}},
	{128, []byte{0x80, 0x00, 0x00, 0x80}},
	{1000, []byte{0x80, 0x00, 0x03, 0xE8}},
	{33554431, []byte{0x81, 0xFF, 0xFF, 0xFF}},
}

func TestSize(t *testing.T) {
	b := make([]byte, 4)
	for i, test := range sizeTests {
		n := encodeSize(b, test.size)
		if !bytes.Equal(b[:n], test.bytes) {
			t.Errorf("%d expected %x, encoded %x", i, test.bytes, b)
		}
		size, n := readSize(test.bytes)
		if size != test.size {
			t.Errorf("%d expected %d, read %d", i, test.size, size)
		}
		if len(test.bytes) != n {
			t.Errorf("%d did not consume all the bytes", i)
		}
	}
}

var streamTests = []struct {
	desc    string
	recType recType
	reqId   uint16
	content []byte
	raw     []byte
}{
	{"single record", typeStdout, 1, nil,
		[]byte{1, byte(typeStdout), 0, 1, 0, 0, 0, 0},
	},
	// this data will have to be split into two records
	{"two records", typeStdin, 300, make([]byte, 66000),
		bytes.Join([][]byte{
			// header for the first record
			{1, byte(typeStdin), 0x01, 0x2C, 0xFF, 0xFF, 1, 0},
			make([]byte, 65536),
			// header for the second
			{1, byte(typeStdin), 0x01, 0x2C, 0x01, 0xD1, 7, 0},
			make([]byte, 472),
			// header for the empty record
			{1, byte(typeStdin), 0x01, 0x2C, 0, 0, 0, 0},
		},
			nil),
	},
}

type nilCloser struct {
	io.ReadWriter
}

func (c *nilCloser) Close() error { return nil }

func TestStreams(t *testing.T) {
	var rec record
outer:
	for _, test := range streamTests {
		buf := bytes.NewBuffer(test.raw)
		var content []byte
		for buf.Len() > 0 {
			if err := rec.read(buf); err != nil {
				t.Errorf("%s: error reading record: %v", test.desc, err)
				continue outer
			}
			content = append(content, rec.content()...)
		}
		if rec.h.Type != test.recType {
			t.Errorf("%s: got type %d expected %d", test.desc, rec.h.Type, test.recType)
			continue
		}
		if rec.h.Id != test.reqId {
			t.Errorf("%s: got request ID %d expected %d", test.desc, rec.h.Id, test.reqId)
			continue
		}
		if !bytes.Equal(content, test.content) {
			t.Errorf("%s: read wrong content", test.desc)
			continue
		}
		buf.Reset()
		c := newConn(&nilCloser{buf})
		w := newWriter(c, test.recType, test.reqId)
		if _, err := w.Write(test.content); err != nil {
			t.Errorf("%s: error writing record: %v", test.desc, err)
			continue
		}
		if err := w.Close(); err != nil {
			t.Errorf("%s: error closing stream: %v", test.desc, err)
			continue
		}
		if !bytes.Equal(buf.Bytes(), test.raw) {
			t.Errorf("%s: wrote wrong content", test.desc)
		}
	}
}

type writeOnlyConn struct {
	buf []byte
}

func (c *writeOnlyConn) Write(p []byte) (int, error) {
	c.buf = append(c.buf, p...)
	return len(p), nil
}

func (c *writeOnlyConn) Read(p []byte) (int, error) {
	return 0, errors.New("conn is write-only")
}

func (c *writeOnlyConn) Close() error {
	return nil
}

func TestGetValues(t *testing.T) {
	var rec record
	rec.h.Type = typeGetValues

	wc := new(writeOnlyConn)
	c := newChild(wc, nil)
	err := c.handleRecord(&rec)
	if err != nil {
		t.Fatalf("handleRecord: %v", err)
	}

	const want = "\x01\n\x00\x00\x00\x12\x06\x00" +
		"\x0f\x01FCGI_MPXS_CONNS1" +
		"\x00\x00\x00\x00\x00\x00\x01\n\x00\x00\x00\x00\x00\x00"
	if got := string(wc.buf); got != want {
		t.Errorf(" got: %q\nwant: %q\n", got, want)
	}
}

func nameValuePair11(nameData, valueData string) []byte {
	return bytes.Join(
		[][]byte{
			{byte(len(nameData)), byte(len(valueData))},
			[]byte(nameData),
			[]byte(valueData),
		},
		nil,
	)
}

func makeRecord(
	recordType recType,
	requestId uint16,
	contentData []byte,
) []byte {
	requestIdB1 := byte(requestId >> 8)
	requestIdB0 := byte(requestId)

	contentLength := len(contentData)
	contentLengthB1 := byte(contentLength >> 8)
	contentLengthB0 := byte(contentLength)
	return bytes.Join([][]byte{
		{1, byte(recordType), requestIdB1, requestIdB0, contentLengthB1,
			contentLengthB0, 0, 0},
		contentData,
	},
		nil)
}

// a series of FastCGI records that start a request and begin sending the
// request body
var streamBeginTypeStdin = bytes.Join([][]byte{
	// set up request 1
	makeRecord(typeBeginRequest, 1,
		[]byte{0, byte(roleResponder), 0, 0, 0, 0, 0, 0}),
	// add required parameters to request 1
	makeRecord(typeParams, 1, nameValuePair11("REQUEST_METHOD", "GET")),
	makeRecord(typeParams, 1, nameValuePair11("SERVER_PROTOCOL", "HTTP/1.1")),
	makeRecord(typeParams, 1, nil),
	// begin sending body of request 1
	makeRecord(typeStdin, 1, []byte("0123456789abcdef")),
},
	nil)

var cleanUpTests = []struct {
	input []byte
	err   error
}{
	// confirm that child.handleRecord closes req.pw after aborting req
	{
		bytes.Join([][]byte{
			streamBeginTypeStdin,
			makeRecord(typeAbortRequest, 1, nil),
		},
			nil),
		ErrRequestAborted,
	},
	// confirm that child.serve closes all pipes after error reading record
	{
		bytes.Join([][]byte{
			streamBeginTypeStdin,
			nil,
		},
			nil),
		ErrConnClosed,
	},
}

type nopWriteCloser struct {
	io.ReadWriter
}

func (nopWriteCloser) Close() error {
	return nil
}

// Test that child.serve closes the bodies of aborted requests and closes the
// bodies of all requests before returning. Causes deadlock if either condition
// isn't met. See issue 6934.
func TestChildServeCleansUp(t *testing.T) {
	for _, tt := range cleanUpTests {
		input := make([]byte, len(tt.input))
		copy(input, tt.input)
		rc := nopWriteCloser{bytes.NewBuffer(input)}
		done := make(chan bool)
		c := newChild(rc, http.HandlerFunc(func(
			w http.ResponseWriter,
			r *http.Request,
		) {
			// block on reading body of request
			_, err := io.Copy(ioutil.Discard, r.Body)
			if err != tt.err {
				t.Errorf("Expected %#v, got %#v", tt.err, err)
			}
			// not reached if body of request isn't closed
			done <- true
		}))
		go c.serve()
		// wait for body of request to be closed or all goroutines to block
		<-done
	}
}

type rwNopCloser struct {
	io.Reader
	io.Writer
}

func (rwNopCloser) Close() error {
	return nil
}

// Verifies it doesn't crash. Issue 11824.
func TestMalformedParams(t *testing.T) {
	input := []byte{
		// beginRequest, requestId=1, contentLength=8, role=1, keepConn=1
		1, 1, 0, 1, 0, 8, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0,
		// params, requestId=1, contentLength=10, k1Len=50, v1Len=50 (malformed, wrong length)
		1, 4, 0, 1, 0, 10, 0, 0, 50, 50, 3, 4, 5, 6, 7, 8, 9, 10,
		// end of params
		1, 4, 0, 1, 0, 0, 0, 0,
	}
	rw := rwNopCloser{bytes.NewReader(input), ioutil.Discard}
	c := newChild(rw, http.DefaultServeMux)
	c.serve()
}
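The `sizeTests` table above exercises FastCGI's variable-length size encoding: lengths below 128 occupy a single byte, while larger lengths take four bytes with the high bit of the first byte set. A standalone sketch of both directions (illustrative copies; the package's real `encodeSize`/`readSize` live in its fcgi implementation):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// encodeSize writes a FastCGI length into b: one byte for sizes < 128,
// otherwise four big-endian bytes with the top bit of the first set.
func encodeSize(b []byte, size uint32) int {
	if size > 127 {
		size |= 1 << 31
		binary.BigEndian.PutUint32(b, size)
		return 4
	}
	b[0] = byte(size)
	return 1
}

// readSize is the inverse: it returns the decoded size and how many
// bytes were consumed (0 when the input is too short).
func readSize(s []byte) (uint32, int) {
	if len(s) == 0 {
		return 0, 0
	}
	if s[0]&0x80 == 0 {
		return uint32(s[0]), 1
	}
	if len(s) < 4 {
		return 0, 0
	}
	return binary.BigEndian.Uint32(s) &^ (1 << 31), 4
}

func main() {
	b := make([]byte, 4)
	n := encodeSize(b, 1000)
	fmt.Printf("% X\n", b[:n]) // prints: 80 00 03 E8
	size, _ := readSize(b[:n])
	fmt.Println(size) // prints: 1000
}
```

The `{1000, []byte{0x80, 0x00, 0x03, 0xE8}}` row of `sizeTests` is exactly this round trip; the cap of 2^31-1 comes from the flag bit stealing the top bit of the 32-bit length.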
@@ -112,7 +112,6 @@ func (g *phpfpm) gatherServer(addr string, acc telegraf.Accumulator) error {
 		statusPath string
 	)
 
-	var err error
 	if strings.HasPrefix(addr, "fcgi://") || strings.HasPrefix(addr, "cgi://") {
 		u, err := url.Parse(addr)
 		if err != nil {
@@ -121,12 +120,7 @@ func (g *phpfpm) gatherServer(addr string, acc telegraf.Accumulator) error {
 		socketAddr := strings.Split(u.Host, ":")
 		fcgiIp := socketAddr[0]
 		fcgiPort, _ := strconv.Atoi(socketAddr[1])
-		fcgi, err = newFcgiClient(fcgiIp, fcgiPort)
-		if len(u.Path) > 1 {
-			statusPath = strings.Trim(u.Path, "/")
-		} else {
-			statusPath = "status"
-		}
+		fcgi, _ = NewClient(fcgiIp, fcgiPort)
 	} else {
 		socketAddr := strings.Split(addr, ":")
 		if len(socketAddr) >= 2 {
@@ -140,13 +134,8 @@ func (g *phpfpm) gatherServer(addr string, acc telegraf.Accumulator) error {
 		if _, err := os.Stat(socketPath); os.IsNotExist(err) {
 			return fmt.Errorf("Socket doesn't exist '%s': %s", socketPath, err)
 		}
-		fcgi, err = newFcgiClient("unix", socketPath)
+		fcgi, _ = NewClient("unix", socketPath)
 	}
 
-	if err != nil {
-		return err
-	}
-
 	return g.gatherFcgi(fcgi, statusPath, acc)
 }
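The `gatherServer` hunk above dispatches on the address form: an `fcgi://host:port/path` URL becomes a TCP endpoint plus a status path (defaulting to `status`), while anything else is treated as a Unix socket path. A self-contained sketch of the URL branch (the helper name `splitFcgiAddr` is hypothetical, simplified from the plugin):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
	"strings"
)

// splitFcgiAddr turns "fcgi://host:port/path" into host, port and a
// status path, defaulting the path to "status" when none is given.
func splitFcgiAddr(addr string) (host string, port int, statusPath string, err error) {
	u, err := url.Parse(addr)
	if err != nil {
		return "", 0, "", err
	}
	parts := strings.Split(u.Host, ":")
	host = parts[0]
	if len(parts) == 2 {
		port, _ = strconv.Atoi(parts[1])
	}
	if len(u.Path) > 1 {
		statusPath = strings.Trim(u.Path, "/")
	} else {
		statusPath = "status"
	}
	return host, port, statusPath, nil
}

func main() {
	host, port, path, _ := splitFcgiAddr("fcgi://127.0.0.1:9000/fpm-status")
	fmt.Println(host, port, path) // prints: 127.0.0.1 9000 fpm-status
}
```

Note the diff also swaps `fcgi, err = newFcgiClient(...)` for `fcgi, _ = NewClient(...)`, discarding the dial error; a nil connection then only surfaces later when the status request is attempted.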
@@ -17,6 +17,11 @@ import (
|
|||||||
"errors"
|
"errors"
|
||||||
"io"
|
"io"
|
||||||
"sync"
|
"sync"
|
||||||
|
|
||||||
|
"net"
|
||||||
|
"strconv"
|
||||||
|
|
||||||
|
"strings"
|
||||||
)
|
)
|
||||||
|
|
||||||
// recType is a record type, as defined by
|
// recType is a record type, as defined by
|
||||||
@@ -272,3 +277,74 @@ func (w *streamWriter) Close() error {
 	// send empty record to close the stream
 	return w.c.writeRecord(w.recType, w.reqId, nil)
 }
+
+func NewClient(h string, args ...interface{}) (fcgi *conn, err error) {
+	var con net.Conn
+	if len(args) != 1 {
+		err = errors.New("fcgi: not enough params")
+		return
+	}
+	switch args[0].(type) {
+	case int:
+		addr := h + ":" + strconv.FormatInt(int64(args[0].(int)), 10)
+		con, err = net.Dial("tcp", addr)
+	case string:
+		laddr := net.UnixAddr{Name: args[0].(string), Net: h}
+		con, err = net.DialUnix(h, nil, &laddr)
+	default:
+		err = errors.New("fcgi: we only accept int (port) or string (socket) params.")
+	}
+	fcgi = &conn{
+		rwc: con,
+	}
+	return
+}
+
+func (client *conn) Request(env map[string]string, requestData string) (retout []byte, reterr []byte, err error) {
+	defer client.rwc.Close()
+	var reqId uint16 = 1
+
+	err = client.writeBeginRequest(reqId, uint16(roleResponder), 0)
+	if err != nil {
+		return
+	}
+
+	err = client.writePairs(typeParams, reqId, env)
+	if err != nil {
+		return
+	}
+
+	if len(requestData) > 0 {
+		if err = client.writeRecord(typeStdin, reqId, []byte(requestData)); err != nil {
+			return
+		}
+	}
+
+	rec := &record{}
+	var err1 error
+
+	// receive until EOF or FCGI_END_REQUEST
+READ_LOOP:
+	for {
+		err1 = rec.read(client.rwc)
+		if err1 != nil && strings.Contains(err1.Error(), "use of closed network connection") {
+			if err1 != io.EOF {
+				err = err1
+			}
+			break
+		}
+
+		switch {
+		case rec.h.Type == typeStdout:
+			retout = append(retout, rec.content()...)
+		case rec.h.Type == typeStderr:
+			reterr = append(reterr, rec.content()...)
+		case rec.h.Type == typeEndRequest:
+			fallthrough
+		default:
+			break READ_LOOP
+		}
+	}
+
+	return
+}
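The `NewClient` function above keys its dialing mode off the dynamic type of the single variadic argument: an `int` is treated as a TCP port appended to the host, a `string` as the path of a unix socket. A self-contained sketch of just that address-selection step (the `fcgiAddr` helper name is ours, not part of the diff, and no connection is actually dialed):

```go
package main

import (
	"fmt"
	"strconv"
)

// fcgiAddr mirrors the type switch in NewClient: an int argument produces a
// "host:port" TCP address, a string argument is taken as a unix socket path.
func fcgiAddr(h string, arg interface{}) (network, addr string, err error) {
	switch v := arg.(type) {
	case int:
		return "tcp", h + ":" + strconv.FormatInt(int64(v), 10), nil
	case string:
		return "unix", v, nil
	default:
		return "", "", fmt.Errorf("fcgi: we only accept int (port) or string (socket) params")
	}
}

func main() {
	n, a, _ := fcgiAddr("127.0.0.1", 9000)
	fmt.Println(n, a)
	n, a, _ = fcgiAddr("unix", "/var/run/php-fpm.sock")
	fmt.Println(n, a)
}
```

The real `NewClient` then hands the chosen address to `net.Dial` or `net.DialUnix`.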
@@ -4,30 +4,27 @@ import (
 	"bytes"
 	"database/sql"
 	"fmt"
-	"regexp"
 	"sort"
 	"strings"
 
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
 
-	"github.com/lib/pq"
+	_ "github.com/lib/pq"
 )
 
 type Postgresql struct {
 	Address          string
 	Databases        []string
 	OrderedColumns   []string
 	AllColumns       []string
-	sanitizedAddress string
 }
 
 var ignoredColumns = map[string]bool{"datid": true, "datname": true, "stats_reset": true}
 
 var sampleConfig = `
   ## specify address via a url matching:
-  ##   postgres://[pqgotest[:password]]@localhost[/dbname]\
-  ##       ?sslmode=[disable|verify-ca|verify-full]
+  ##   postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=[disable|verify-ca|verify-full]
   ## or a simple string:
   ##   host=localhost user=pqotest password=... sslmode=... dbname=app_production
   ##
@@ -136,23 +133,6 @@ type scanner interface {
 	Scan(dest ...interface{}) error
 }
 
-var passwordKVMatcher, _ = regexp.Compile("password=\\S+ ?")
-
-func (p *Postgresql) SanitizedAddress() (_ string, err error) {
-	var canonicalizedAddress string
-	if strings.HasPrefix(p.Address, "postgres://") || strings.HasPrefix(p.Address, "postgresql://") {
-		canonicalizedAddress, err = pq.ParseURL(p.Address)
-		if err != nil {
-			return p.sanitizedAddress, err
-		}
-	} else {
-		canonicalizedAddress = p.Address
-	}
-	p.sanitizedAddress = passwordKVMatcher.ReplaceAllString(canonicalizedAddress, "")
-
-	return p.sanitizedAddress, err
-}
-
 func (p *Postgresql) accRow(row scanner, acc telegraf.Accumulator) error {
 	var columnVars []interface{}
 	var dbname bytes.Buffer
@@ -185,13 +165,7 @@ func (p *Postgresql) accRow(row scanner, acc telegraf.Accumulator) error {
 		dbname.WriteString("postgres")
 	}
 
-	var tagAddress string
-	tagAddress, err = p.SanitizedAddress()
-	if err != nil {
-		return err
-	}
-
-	tags := map[string]string{"server": tagAddress, "db": dbname.String()}
+	tags := map[string]string{"server": p.Address, "db": dbname.String()}
 
 	fields := make(map[string]interface{})
 	for col, val := range columnMap {
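The `SanitizedAddress` method touched by this hunk scrubs credentials out of the key/value DSN with the regexp `password=\S+ ?` before the address is used as the `server` tag. A runnable sketch of just that scrubbing step (the `sanitize` helper name is ours; the URL-to-DSN canonicalization via `pq.ParseURL` is left out):

```go
package main

import (
	"fmt"
	"regexp"
)

// Same pattern the plugin compiles: matches "password=<value>" plus an
// optional trailing space, so the surrounding key/value pairs stay intact.
var passwordKVMatcher = regexp.MustCompile(`password=\S+ ?`)

// sanitize removes the password key/value pair from a libpq-style DSN.
func sanitize(addr string) string {
	return passwordKVMatcher.ReplaceAllString(addr, "")
}

func main() {
	fmt.Println(sanitize("host=localhost user=pqotest password=s3cret dbname=app_production"))
	// → host=localhost user=pqotest dbname=app_production
}
```

This is why the tag shows the connection parameters but never the password.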
@@ -1,231 +0,0 @@
-# PostgreSQL plugin
-
-This postgresql plugin provides metrics for your postgres database. It has been
-designed to parse the sql queries in the plugin section of your telegraf.conf.
-
-For now only two queries are specified and it's up to you to add more; some per
-query parameters have been added:
-
-* The SQL query itself
-* The minimum version supported (here in numeric display visible in pg_settings)
-* A boolean to define if the query has to be run against some specific
-  variables (defined in the databases variable of the plugin section)
-* The list of the columns that have to be defined as tags
-
-```
-[[inputs.postgresql_extensible]]
-  # specify address via a url matching:
-  #   postgres://[pqgotest[:password]]@localhost[/dbname]?sslmode=...
-  # or a simple string:
-  #   host=localhost user=pqotest password=... sslmode=... dbname=app_production
-  #
-  # All connection parameters are optional.
-  # Without the dbname parameter, the driver will default to a database
-  # with the same name as the user. This dbname is just for instantiating a
-  # connection with the server and doesn't restrict the databases we are trying
-  # to grab metrics for.
-  #
-  address = "host=localhost user=postgres sslmode=disable"
-  # A list of databases to pull metrics about. If not specified, metrics for all
-  # databases are gathered.
-  # databases = ["app_production", "testing"]
-  #
-  # Define the toml config where the sql queries are stored
-  # New queries can be added; if withdbname is set to true and there are no
-  # databases defined in the 'databases' field, the sql query is ended by an
-  # 'is not null' in order to make the query succeed.
-  # Be careful that the sqlquery must contain the where clause with a part of
-  # the filtering; the plugin will add an 'IN (dbname list)' clause if
-  # withdbname is set to true.
-  # Example:
-  # The sqlquery "SELECT * FROM pg_stat_database where datname" becomes
-  # "SELECT * FROM pg_stat_database where datname IN ('postgres', 'pgbench')"
-  # because the databases variable was set to ['postgres', 'pgbench'] and
-  # withdbname was true.
-  # Be careful that if withdbname is set to false you don't have to define
-  # the where clause (aka with the dbname).
-  # The tagvalue field is used to define custom tags (separated by commas).
-  #
-  # Structure:
-  # [[inputs.postgresql_extensible.query]]
-  #   sqlquery string
-  #   version string
-  #   withdbname boolean
-  #   tagvalue string (comma separated)
-  [[inputs.postgresql_extensible.query]]
-    sqlquery="SELECT * FROM pg_stat_database where datname"
-    version=901
-    withdbname=false
-    tagvalue=""
-  [[inputs.postgresql_extensible.query]]
-    sqlquery="SELECT * FROM pg_stat_bgwriter"
-    version=901
-    withdbname=false
-    tagvalue=""
-```
-
-The system can be easily extended using homemade metrics collection tools or
-using postgresql extensions ([pg_stat_statements](http://www.postgresql.org/docs/current/static/pgstatstatements.html), [pg_proctab](https://github.com/markwkm/pg_proctab), [powa](http://dalibo.github.io/powa/)...)
-
-# Sample Queries
-
-- telegraf.conf postgresql_extensible queries (assuming that you have configured
-  your connection correctly)
-
-```
-[[inputs.postgresql_extensible.query]]
-  sqlquery="SELECT * FROM pg_stat_database"
-  version=901
-  withdbname=false
-  tagvalue=""
-[[inputs.postgresql_extensible.query]]
-  sqlquery="SELECT * FROM pg_stat_bgwriter"
-  version=901
-  withdbname=false
-  tagvalue=""
-[[inputs.postgresql_extensible.query]]
-  sqlquery="select * from sessions"
-  version=901
-  withdbname=false
-  tagvalue="db,username,state"
-[[inputs.postgresql_extensible.query]]
-  sqlquery="select setting as max_connections from pg_settings where \
-    name='max_connections'"
-  version=801
-  withdbname=false
-  tagvalue=""
-[[inputs.postgresql_extensible.query]]
-  sqlquery="select * from pg_stat_kcache"
-  version=901
-  withdbname=false
-  tagvalue=""
-[[inputs.postgresql_extensible.query]]
-  sqlquery="select setting as shared_buffers from pg_settings where \
-    name='shared_buffers'"
-  version=801
-  withdbname=false
-  tagvalue=""
-[[inputs.postgresql_extensible.query]]
-  sqlquery="SELECT db, count( distinct blocking_pid ) AS num_blocking_sessions,\
-    count( distinct blocked_pid) AS num_blocked_sessions FROM \
-    public.blocking_procs group by db"
-  version=901
-  withdbname=false
-  tagvalue="db"
-```
-
-# Postgresql Side
-
-postgresql.conf:
-```
-shared_preload_libraries = 'pg_stat_statements,pg_stat_kcache'
-```
-
-Please follow the requirements to set up those extensions.
-
-In the database (can be a specific monitoring db)
-```
-create extension pg_stat_statements;
-create extension pg_stat_kcache;
-create extension pg_proctab;
-```
-(assuming that the extension is installed on the OS layer)
-
-- pg_stat_kcache is available on the postgresql.org yum repo
-- pg_proctab is available at: https://github.com/markwkm/pg_proctab
-
-## Views
-
-- Blocking sessions
-```
-CREATE OR REPLACE VIEW public.blocking_procs AS
- SELECT a.datname AS db,
-    kl.pid AS blocking_pid,
-    ka.usename AS blocking_user,
-    ka.query AS blocking_query,
-    bl.pid AS blocked_pid,
-    a.usename AS blocked_user,
-    a.query AS blocked_query,
-    to_char(age(now(), a.query_start), 'HH24h:MIm:SSs'::text) AS age
-   FROM pg_locks bl
-     JOIN pg_stat_activity a ON bl.pid = a.pid
-     JOIN pg_locks kl ON bl.locktype = kl.locktype AND NOT bl.database IS
-       DISTINCT FROM kl.database AND NOT bl.relation IS DISTINCT FROM kl.relation
-       AND NOT bl.page IS DISTINCT FROM kl.page AND NOT bl.tuple IS DISTINCT FROM
-       kl.tuple AND NOT bl.virtualxid IS DISTINCT FROM kl.virtualxid AND NOT
-       bl.transactionid IS DISTINCT FROM kl.transactionid AND NOT bl.classid IS
-       DISTINCT FROM kl.classid AND NOT bl.objid IS DISTINCT FROM kl.objid AND
-       NOT bl.objsubid IS DISTINCT FROM kl.objsubid AND bl.pid <> kl.pid
-     JOIN pg_stat_activity ka ON kl.pid = ka.pid
-  WHERE kl.granted AND NOT bl.granted
-  ORDER BY a.query_start;
-```
-- Sessions Statistics
-```
-CREATE OR REPLACE VIEW public.sessions AS
- WITH proctab AS (
-         SELECT pg_proctab.pid,
-            CASE
-                WHEN pg_proctab.state::text = 'R'::bpchar::text
-                  THEN 'running'::text
-                WHEN pg_proctab.state::text = 'D'::bpchar::text
-                  THEN 'sleep-io'::text
-                WHEN pg_proctab.state::text = 'S'::bpchar::text
-                  THEN 'sleep-waiting'::text
-                WHEN pg_proctab.state::text = 'Z'::bpchar::text
-                  THEN 'zombie'::text
-                WHEN pg_proctab.state::text = 'T'::bpchar::text
-                  THEN 'stopped'::text
-                ELSE NULL::text
-            END AS proc_state,
-            pg_proctab.ppid,
-            pg_proctab.utime,
-            pg_proctab.stime,
-            pg_proctab.vsize,
-            pg_proctab.rss,
-            pg_proctab.processor,
-            pg_proctab.rchar,
-            pg_proctab.wchar,
-            pg_proctab.syscr,
-            pg_proctab.syscw,
-            pg_proctab.reads,
-            pg_proctab.writes,
-            pg_proctab.cwrites
-           FROM pg_proctab() pg_proctab(pid, comm, fullcomm, state, ppid, pgrp,
-             session, tty_nr, tpgid, flags, minflt, cminflt, majflt, cmajflt,
-             utime, stime, cutime, cstime, priority, nice, num_threads,
-             itrealvalue, starttime, vsize, rss, exit_signal, processor,
-             rt_priority, policy, delayacct_blkio_ticks, uid, username, rchar,
-             wchar, syscr, syscw, reads, writes, cwrites)
-        ), stat_activity AS (
-         SELECT pg_stat_activity.datname,
-            pg_stat_activity.pid,
-            pg_stat_activity.usename,
-            CASE
-                WHEN pg_stat_activity.query IS NULL THEN 'no query'::text
-                WHEN pg_stat_activity.query IS NOT NULL AND
-                  pg_stat_activity.state = 'idle'::text THEN 'no query'::text
-                ELSE regexp_replace(pg_stat_activity.query, '[\n\r]+'::text,
-                  ' '::text, 'g'::text)
-            END AS query
-           FROM pg_stat_activity
-        )
- SELECT stat.datname::name AS db,
-    stat.usename::name AS username,
-    stat.pid,
-    proc.proc_state::text AS state,
-    ('"'::text || stat.query) || '"'::text AS query,
-    (proc.utime/1000)::bigint AS session_usertime,
-    (proc.stime/1000)::bigint AS session_systemtime,
-    proc.vsize AS session_virtual_memory_size,
-    proc.rss AS session_resident_memory_size,
-    proc.processor AS session_processor_number,
-    proc.rchar AS session_bytes_read,
-    proc.rchar-proc.reads AS session_logical_bytes_read,
-    proc.wchar AS session_bytes_written,
-    proc.wchar-proc.writes AS session_logical_bytes_writes,
-    proc.syscr AS session_read_io,
-    proc.syscw AS session_write_io,
-    proc.reads AS session_physical_reads,
-    proc.writes AS session_physical_writes,
-    proc.cwrites AS session_cancel_writes
-   FROM proctab proc,
-    stat_activity stat
-  WHERE proc.pid = stat.pid;
-```
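The `withdbname` behaviour the README describes (appending an `IN ('...')` list built from the configured databases, or `is not null` when none are configured) can be sketched in isolation. The `addDBNameClause` helper name is ours; in the plugin this logic lives inline in the `Gather` method:

```go
package main

import (
	"fmt"
	"strings"
)

// addDBNameClause completes a query that ends in a partial where clause
// such as "where datname", mirroring the plugin's withdbname handling.
func addDBNameClause(sqlquery string, withdbname bool, databases []string) string {
	if !withdbname {
		// withdbname=false: the query is used exactly as written.
		return sqlquery
	}
	if len(databases) != 0 {
		// Build IN ('db1','db2',...) from the configured database list.
		return sqlquery + fmt.Sprintf(` IN ('%s')`, strings.Join(databases, "','"))
	}
	// No databases configured: keep the query valid with "is not null".
	return sqlquery + " is not null"
}

func main() {
	fmt.Println(addDBNameClause(
		"SELECT * FROM pg_stat_database where datname",
		true, []string{"postgres", "pgbench"}))
}
```

This matches the README's example: with `databases = ['postgres', 'pgbench']` the `where datname` suffix gains an `IN` list.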
@@ -1,278 +0,0 @@
-package postgresql_extensible
-
-import (
-	"bytes"
-	"database/sql"
-	"fmt"
-	"regexp"
-	"strings"
-
-	"github.com/influxdata/telegraf"
-	"github.com/influxdata/telegraf/plugins/inputs"
-
-	"github.com/lib/pq"
-)
-
-type Postgresql struct {
-	Address          string
-	Databases        []string
-	OrderedColumns   []string
-	AllColumns       []string
-	AdditionalTags   []string
-	sanitizedAddress string
-	Query            []struct {
-		Sqlquery   string
-		Version    int
-		Withdbname bool
-		Tagvalue   string
-	}
-}
-
-type query []struct {
-	Sqlquery   string
-	Version    int
-	Withdbname bool
-	Tagvalue   string
-}
-
-var ignoredColumns = map[string]bool{"datid": true, "datname": true, "stats_reset": true}
-
-var sampleConfig = `
-  ## specify address via a url matching:
-  ##   postgres://[pqgotest[:password]]@localhost[/dbname]\
-  ##       ?sslmode=[disable|verify-ca|verify-full]
-  ## or a simple string:
-  ##   host=localhost user=pqotest password=... sslmode=... dbname=app_production
-  #
-  ## All connection parameters are optional.
-  ## Without the dbname parameter, the driver will default to a database
-  ## with the same name as the user. This dbname is just for instantiating a
-  ## connection with the server and doesn't restrict the databases we are trying
-  ## to grab metrics for.
-  #
-  address = "host=localhost user=postgres sslmode=disable"
-  ## A list of databases to pull metrics about. If not specified, metrics for all
-  ## databases are gathered.
-  ## databases = ["app_production", "testing"]
-  #
-  ## Define the toml config where the sql queries are stored
-  ## New queries can be added; if withdbname is set to true and there are no
-  ## databases defined in the 'databases' field, the sql query is ended by an
-  ## 'is not null' in order to make the query succeed.
-  ## Example:
-  ## The sqlquery "SELECT * FROM pg_stat_database where datname" becomes
-  ## "SELECT * FROM pg_stat_database where datname IN ('postgres', 'pgbench')"
-  ## because the databases variable was set to ['postgres', 'pgbench'] and
-  ## withdbname was true. Be careful that if withdbname is set to false you
-  ## don't have to define the where clause (aka with the dbname); the tagvalue
-  ## field is used to define custom tags (separated by commas)
-  #
-  ## Structure:
-  ## [[inputs.postgresql_extensible.query]]
-  ##   sqlquery string
-  ##   version string
-  ##   withdbname boolean
-  ##   tagvalue string (comma separated)
-  [[inputs.postgresql_extensible.query]]
-    sqlquery="SELECT * FROM pg_stat_database"
-    version=901
-    withdbname=false
-    tagvalue=""
-  [[inputs.postgresql_extensible.query]]
-    sqlquery="SELECT * FROM pg_stat_bgwriter"
-    version=901
-    withdbname=false
-    tagvalue=""
-`
-
-func (p *Postgresql) SampleConfig() string {
-	return sampleConfig
-}
-
-func (p *Postgresql) Description() string {
-	return "Read metrics from one or many postgresql servers"
-}
-
-func (p *Postgresql) IgnoredColumns() map[string]bool {
-	return ignoredColumns
-}
-
-var localhost = "host=localhost sslmode=disable"
-
-func (p *Postgresql) Gather(acc telegraf.Accumulator) error {
-
-	var sql_query string
-	var query_addon string
-	var db_version int
-	var query string
-	var tag_value string
-
-	if p.Address == "" || p.Address == "localhost" {
-		p.Address = localhost
-	}
-
-	db, err := sql.Open("postgres", p.Address)
-	if err != nil {
-		return err
-	}
-
-	defer db.Close()
-
-	// Retrieving the database version
-	query = `select substring(setting from 1 for 3) as version from pg_settings where name='server_version_num'`
-	err = db.QueryRow(query).Scan(&db_version)
-	if err != nil {
-		return err
-	}
-	// We loop in order to process each query
-	// Query is not run if Database version does not match the query version.
-	for i := range p.Query {
-		sql_query = p.Query[i].Sqlquery
-		tag_value = p.Query[i].Tagvalue
-
-		if p.Query[i].Withdbname {
-			if len(p.Databases) != 0 {
-				query_addon = fmt.Sprintf(` IN ('%s')`,
-					strings.Join(p.Databases, "','"))
-			} else {
-				query_addon = " is not null"
-			}
-		} else {
-			query_addon = ""
-		}
-		sql_query += query_addon
-
-		if p.Query[i].Version <= db_version {
-			rows, err := db.Query(sql_query)
-			if err != nil {
-				return err
-			}
-
-			defer rows.Close()
-
-			// grab the column information from the result
-			p.OrderedColumns, err = rows.Columns()
-			if err != nil {
-				return err
-			} else {
-				for _, v := range p.OrderedColumns {
-					p.AllColumns = append(p.AllColumns, v)
-				}
-			}
-			p.AdditionalTags = nil
-			if tag_value != "" {
-				tag_list := strings.Split(tag_value, ",")
-				for t := range tag_list {
-					p.AdditionalTags = append(p.AdditionalTags, tag_list[t])
-				}
-			}
-
-			for rows.Next() {
-				err = p.accRow(rows, acc)
-				if err != nil {
-					return err
-				}
-			}
-		}
-	}
-	return nil
-}
-
-type scanner interface {
-	Scan(dest ...interface{}) error
-}
-
-var passwordKVMatcher, _ = regexp.Compile("password=\\S+ ?")
-
-func (p *Postgresql) SanitizedAddress() (_ string, err error) {
-	var canonicalizedAddress string
-	if strings.HasPrefix(p.Address, "postgres://") || strings.HasPrefix(p.Address, "postgresql://") {
-		canonicalizedAddress, err = pq.ParseURL(p.Address)
-		if err != nil {
-			return p.sanitizedAddress, err
-		}
-	} else {
-		canonicalizedAddress = p.Address
-	}
-	p.sanitizedAddress = passwordKVMatcher.ReplaceAllString(canonicalizedAddress, "")
-
-	return p.sanitizedAddress, err
-}
-
-func (p *Postgresql) accRow(row scanner, acc telegraf.Accumulator) error {
-	var columnVars []interface{}
-	var dbname bytes.Buffer
-
-	// this is where we'll store the column name with its *interface{}
-	columnMap := make(map[string]*interface{})
-
-	for _, column := range p.OrderedColumns {
-		columnMap[column] = new(interface{})
-	}
-
-	// populate the array of interface{} with the pointers in the right order
-	for i := 0; i < len(columnMap); i++ {
-		columnVars = append(columnVars, columnMap[p.OrderedColumns[i]])
-	}
-
-	// deconstruct array of variables and send to Scan
-	err := row.Scan(columnVars...)
-
-	if err != nil {
-		return err
-	}
-	if columnMap["datname"] != nil {
-		// extract the database name from the column map
-		dbnameChars := (*columnMap["datname"]).([]uint8)
-		for i := 0; i < len(dbnameChars); i++ {
-			dbname.WriteString(string(dbnameChars[i]))
-		}
-	} else {
-		dbname.WriteString("postgres")
-	}
-
-	var tagAddress string
-	tagAddress, err = p.SanitizedAddress()
-	if err != nil {
-		return err
-	}
-
-	// Process the additional tags
-	tags := map[string]string{}
-	tags["server"] = tagAddress
-	tags["db"] = dbname.String()
-	var isATag int
-	fields := make(map[string]interface{})
-	for col, val := range columnMap {
-		_, ignore := ignoredColumns[col]
-		//if !ignore && *val != "" {
-		if !ignore {
-			isATag = 0
-			for tag := range p.AdditionalTags {
-				if col == p.AdditionalTags[tag] {
-					isATag = 1
-					value_type_p := fmt.Sprintf(`%T`, *val)
-					if value_type_p == "[]uint8" {
-						tags[col] = fmt.Sprintf(`%s`, *val)
-					} else if value_type_p == "int64" {
-						tags[col] = fmt.Sprintf(`%v`, *val)
-					}
-				}
-			}
-			if isATag == 0 {
-				fields[col] = *val
-			}
-		}
-	}
-	acc.AddFields("postgresql", fields, tags)
-	return nil
-}
-
-func init() {
-	inputs.Add("postgresql_extensible", func() telegraf.Input {
-		return &Postgresql{}
-	})
-}
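The tail of `accRow` above splits a row's columns between tags and fields: any column named in the comma-separated `tagvalue` list becomes a tag, everything else becomes a field. A self-contained sketch of that split (the `splitRow` helper name is ours, and string conversion is simplified to `fmt.Sprintf("%v", ...)` rather than the plugin's per-type handling):

```go
package main

import (
	"fmt"
	"strings"
)

// splitRow distributes a scanned row's columns into tags and fields based
// on the comma-separated tagvalue list, mirroring accRow's loop.
func splitRow(row map[string]interface{}, tagvalue string) (map[string]string, map[string]interface{}) {
	additional := strings.Split(tagvalue, ",")
	tags := map[string]string{}
	fields := map[string]interface{}{}
	for col, val := range row {
		isTag := false
		for _, t := range additional {
			if col == t {
				// Column is listed in tagvalue: emit as a tag.
				isTag = true
				tags[col] = fmt.Sprintf("%v", val)
			}
		}
		if !isTag {
			// Everything else is a numeric/string field.
			fields[col] = val
		}
	}
	return tags, fields
}

func main() {
	tags, fields := splitRow(
		map[string]interface{}{"db": "app", "username": "bob", "value": int64(5)},
		"db,username")
	fmt.Println(tags, fields)
}
```

With `tagvalue="db,username,state"` (as in the sessions query above) those three columns become tags and the remaining columns become fields.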
@@ -1,98 +0,0 @@
-package postgresql_extensible
-
-import (
-	"fmt"
-	"testing"
-
-	"github.com/influxdata/telegraf/testutil"
-	"github.com/stretchr/testify/assert"
-	"github.com/stretchr/testify/require"
-)
-
-func TestPostgresqlGeneratesMetrics(t *testing.T) {
-	if testing.Short() {
-		t.Skip("Skipping integration test in short mode")
-	}
-
-	p := &Postgresql{
-		Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
-			testutil.GetLocalHost()),
-		Databases: []string{"postgres"},
-		Query: query{
-			{Sqlquery: "select * from pg_stat_database",
-				Version:    901,
-				Withdbname: false,
-				Tagvalue:   ""},
-		},
-	}
-	var acc testutil.Accumulator
-	err := p.Gather(&acc)
-	require.NoError(t, err)
-
-	availableColumns := make(map[string]bool)
-	for _, col := range p.AllColumns {
-		availableColumns[col] = true
-	}
-	intMetrics := []string{
-		"xact_commit",
-		"xact_rollback",
-		"blks_read",
-		"blks_hit",
-		"tup_returned",
-		"tup_fetched",
-		"tup_inserted",
-		"tup_updated",
-		"tup_deleted",
-		"conflicts",
-		"temp_files",
-		"temp_bytes",
-		"deadlocks",
-		"numbackends",
-	}
-
-	floatMetrics := []string{
-		"blk_read_time",
-		"blk_write_time",
-	}
-
-	metricsCounted := 0
-
-	for _, metric := range intMetrics {
-		_, ok := availableColumns[metric]
-		if ok {
-			assert.True(t, acc.HasIntField("postgresql", metric))
-			metricsCounted++
-		}
-	}
-
-	for _, metric := range floatMetrics {
-		_, ok := availableColumns[metric]
-		if ok {
-			assert.True(t, acc.HasFloatField("postgresql", metric))
-			metricsCounted++
-		}
-	}
-
-	assert.True(t, metricsCounted > 0)
-	assert.Equal(t, len(availableColumns)-len(p.IgnoredColumns()), metricsCounted)
-}
-
-func TestPostgresqlIgnoresUnwantedColumns(t *testing.T) {
-	if testing.Short() {
-		t.Skip("Skipping integration test in short mode")
-	}
-
-	p := &Postgresql{
-		Address: fmt.Sprintf("host=%s user=postgres sslmode=disable",
-			testutil.GetLocalHost()),
-	}
-
-	var acc testutil.Accumulator
-
-	err := p.Gather(&acc)
-	require.NoError(t, err)
-
-	for col := range p.IgnoredColumns() {
-		assert.False(t, acc.HasMeasurement(col))
-	}
-}
@@ -35,10 +35,6 @@ The above configuration would result in output like:
 # Measurements
 Note: prefix can be set by the user, per process.
 
-
-Threads related measurement names:
-- procstat_[prefix_]num_threads value=5
-
 File descriptor related measurement names:
 - procstat_[prefix_]num_fds value=4
 
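The README notes that the prefix is optional and per-process, giving names of the form `procstat_[prefix_]num_fds`. A small sketch of that naming rule (the `measurementName` helper name is ours, not from the plugin):

```go
package main

import "fmt"

// measurementName builds a procstat field name, inserting the optional
// user-supplied prefix between "procstat_" and the field.
func measurementName(prefix, field string) string {
	if prefix == "" {
		return "procstat_" + field
	}
	return "procstat_" + prefix + "_" + field
}

func main() {
	fmt.Println(measurementName("", "num_fds"))        // no prefix configured
	fmt.Println(measurementName("influxd", "num_fds")) // prefix = "influxd"
}
```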
@@ -43,8 +43,6 @@ var sampleConfig = `
 
   ## Field name prefix
   prefix = ""
-  ## comment this out if you want raw cpu_time stats
-  fielddrop = ["cpu_time_*"]
 `
 
 func (_ *Procstat) SampleConfig() string {
@@ -52,7 +52,6 @@ func NewSpecProcessor(
 }
 
 func (p *SpecProcessor) pushMetrics() {
-	p.pushNThreadsStats()
 	p.pushFDStats()
 	p.pushCtxStats()
 	p.pushIOStats()
@@ -61,15 +60,6 @@ func (p *SpecProcessor) pushMetrics() {
 	p.flush()
 }
-
-func (p *SpecProcessor) pushNThreadsStats() error {
-	numThreads, err := p.proc.NumThreads()
-	if err != nil {
-		return fmt.Errorf("NumThreads error: %s\n", err)
-	}
-	p.add("num_threads", numThreads)
-	return nil
-}
 
 func (p *SpecProcessor) pushFDStats() error {
 	fds, err := p.proc.NumFDs()
 	if err != nil {
@@ -1,75 +0,0 @@
-# Prometheus Input Plugin
-
-The prometheus input plugin gathers metrics from any webpage
-exposing metrics in the Prometheus format.
-
-### Configuration:
-
-Example for the Kubernetes apiserver:
-```toml
-# Get all metrics from Kube-apiserver
-[[inputs.prometheus]]
-  # An array of urls to scrape metrics from.
-  urls = ["http://my-kube-apiserver:8080/metrics"]
-```
-
-You can use a more complex configuration
-to filter metrics and add some tags:
-
-```toml
-# Get all metrics from Kube-apiserver
-[[inputs.prometheus]]
-  # An array of urls to scrape metrics from.
-  urls = ["http://my-kube-apiserver:8080/metrics"]
-  # Get only metrics whose name contains the "apiserver_" string
-  namepass = ["apiserver_"]
-  # Add a metric name prefix
-  name_prefix = "k8s_"
-  # Add tags to be able to make beautiful dashboards
-  [inputs.prometheus.tags]
-    kubeservice = "kube-apiserver"
-```
-
-### Measurements & Fields & Tags:
-
-Measurements and fields can be anything;
-it just depends on what you're querying.
-
-Example:
-
-```
-# HELP go_gc_duration_seconds A summary of the GC invocation durations.
-# TYPE go_gc_duration_seconds summary
-go_gc_duration_seconds{quantile="0"} 0.00010425500000000001
-go_gc_duration_seconds{quantile="0.25"} 0.000139108
-go_gc_duration_seconds{quantile="0.5"} 0.00015749400000000002
-go_gc_duration_seconds{quantile="0.75"} 0.000331463
-go_gc_duration_seconds{quantile="1"} 0.000667154
-go_gc_duration_seconds_sum 0.0018183950000000002
-go_gc_duration_seconds_count 7
-# HELP go_goroutines Number of goroutines that currently exist.
-# TYPE go_goroutines gauge
-go_goroutines 15
-```
-
-- go_goroutines,
-  - gauge (integer, unit)
-- go_gc_duration_seconds
-  - field3 (integer, bytes)
-
-- All measurements have the following tags:
-  - url=http://my-kube-apiserver:8080/metrics
-- go_goroutines has the following tags:
-  - kubeservice=kube-apiserver
-- go_gc_duration_seconds has the following tags:
-  - kubeservice=kube-apiserver
-
-### Example Output:
-
-Example of output with the configuration given above:
-
-```
-$ ./telegraf -config telegraf.conf -test
-k8s_go_goroutines,kubeservice=kube-apiserver,url=http://my-kube-apiserver:8080/metrics gauge=536 1456857329391929813
-k8s_go_gc_duration_seconds,kubeservice=kube-apiserver,url=http://my-kube-apiserver:8080/metrics 0=0.038002142,0.25=0.041732467,0.5=0.04336492,0.75=0.047271799,1=0.058295811,count=0,sum=208.334617406 1456857329391929813
-```
@@ -1,171 +0,0 @@

```go
package prometheus

// Parser inspired from
// https://github.com/prometheus/prom2json/blob/master/main.go

import (
	"bufio"
	"bytes"
	"fmt"
	"io"
	"math"
	"mime"

	"github.com/influxdata/telegraf"

	"github.com/matttproud/golang_protobuf_extensions/pbutil"
	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/expfmt"
)

// PrometheusParser is an object for parsing incoming metrics.
type PrometheusParser struct {
	// PromFormat
	PromFormat map[string]string
	// DefaultTags will be added to every parsed metric
	// DefaultTags map[string]string
}

// Parse returns a slice of Metrics from a text representation of metrics
func (p *PrometheusParser) Parse(buf []byte) ([]telegraf.Metric, error) {
	var metrics []telegraf.Metric
	var parser expfmt.TextParser
	// parse even if the buffer begins with a newline
	buf = bytes.TrimPrefix(buf, []byte("\n"))
	// Read raw data
	buffer := bytes.NewBuffer(buf)
	reader := bufio.NewReader(buffer)

	// Get format
	mediatype, params, err := mime.ParseMediaType(p.PromFormat["Content-Type"])
	// Prepare output
	metricFamilies := make(map[string]*dto.MetricFamily)
	if err == nil && mediatype == "application/vnd.google.protobuf" &&
		params["encoding"] == "delimited" &&
		params["proto"] == "io.prometheus.client.MetricFamily" {
		for {
			metricFamily := &dto.MetricFamily{}
			if _, err = pbutil.ReadDelimited(reader, metricFamily); err != nil {
				if err == io.EOF {
					break
				}
				return nil, fmt.Errorf("reading metric family protocol buffer failed: %s", err)
			}
			metricFamilies[metricFamily.GetName()] = metricFamily
		}
	} else {
		metricFamilies, err = parser.TextToMetricFamilies(reader)
		if err != nil {
			return nil, fmt.Errorf("reading text format failed: %s", err)
		}
		// read metrics
		for metricName, mf := range metricFamilies {
			for _, m := range mf.Metric {
				// reading tags
				tags := makeLabels(m)
				/*
					for key, value := range p.DefaultTags {
						tags[key] = value
					}
				*/
				// reading fields
				fields := make(map[string]interface{})
				if mf.GetType() == dto.MetricType_SUMMARY {
					// summary metric
					fields = makeQuantiles(m)
					fields["count"] = float64(m.GetHistogram().GetSampleCount())
					fields["sum"] = float64(m.GetSummary().GetSampleSum())
				} else if mf.GetType() == dto.MetricType_HISTOGRAM {
					// histogram metric
					fields = makeBuckets(m)
					fields["count"] = float64(m.GetHistogram().GetSampleCount())
					fields["sum"] = float64(m.GetSummary().GetSampleSum())
				} else {
					// standard metric
					fields = getNameAndValue(m)
				}
				// converting to telegraf metric
				if len(fields) > 0 {
					metric, err := telegraf.NewMetric(metricName, tags, fields)
					if err == nil {
						metrics = append(metrics, metric)
					}
				}
			}
		}
	}
	return metrics, err
}

// Parse one line
func (p *PrometheusParser) ParseLine(line string) (telegraf.Metric, error) {
	metrics, err := p.Parse([]byte(line + "\n"))

	if err != nil {
		return nil, err
	}

	if len(metrics) < 1 {
		return nil, fmt.Errorf(
			"Can not parse the line: %s, for data format: prometheus", line)
	}

	return metrics[0], nil
}

/*
// Set default tags
func (p *PrometheusParser) SetDefaultTags(tags map[string]string) {
	p.DefaultTags = tags
}
*/

// Get Quantiles from summary metric
func makeQuantiles(m *dto.Metric) map[string]interface{} {
	fields := make(map[string]interface{})
	for _, q := range m.GetSummary().Quantile {
		if !math.IsNaN(q.GetValue()) {
			fields[fmt.Sprint(q.GetQuantile())] = float64(q.GetValue())
		}
	}
	return fields
}

// Get Buckets from histogram metric
func makeBuckets(m *dto.Metric) map[string]interface{} {
	fields := make(map[string]interface{})
	for _, b := range m.GetHistogram().Bucket {
		fields[fmt.Sprint(b.GetUpperBound())] = float64(b.GetCumulativeCount())
	}
	return fields
}

// Get labels from metric
func makeLabels(m *dto.Metric) map[string]string {
	result := map[string]string{}
	for _, lp := range m.Label {
		result[lp.GetName()] = lp.GetValue()
	}
	return result
}

// Get name and value from metric
func getNameAndValue(m *dto.Metric) map[string]interface{} {
	fields := make(map[string]interface{})
	if m.Gauge != nil {
		if !math.IsNaN(m.GetGauge().GetValue()) {
			fields["gauge"] = float64(m.GetGauge().GetValue())
		}
	} else if m.Counter != nil {
		if !math.IsNaN(m.GetCounter().GetValue()) {
			fields["counter"] = float64(m.GetCounter().GetValue())
		}
	} else if m.Untyped != nil {
		if !math.IsNaN(m.GetUntyped().GetValue()) {
			fields["value"] = float64(m.GetUntyped().GetValue())
		}
	}
	return fields
}
```
@@ -1,175 +0,0 @@

```go
package prometheus

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

var exptime = time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC)

const validUniqueGauge = `# HELP cadvisor_version_info A metric with a constant '1' value labeled by kernel version, OS version, docker version, cadvisor version & cadvisor revision.
# TYPE cadvisor_version_info gauge
cadvisor_version_info{cadvisorRevision="",cadvisorVersion="",dockerVersion="1.8.2",kernelVersion="3.10.0-229.20.1.el7.x86_64",osVersion="CentOS Linux 7 (Core)"} 1
`

const validUniqueCounter = `# HELP get_token_fail_count Counter of failed Token() requests to the alternate token source
# TYPE get_token_fail_count counter
get_token_fail_count 0
`

const validUniqueLine = `# HELP get_token_fail_count Counter of failed Token() requests to the alternate token source
`

const validUniqueSummary = `# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 552048.506
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 5.876804288e+06
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 5.876804288e+06
http_request_duration_microseconds_sum{handler="prometheus"} 1.8909097205e+07
http_request_duration_microseconds_count{handler="prometheus"} 9
`

const validUniqueHistogram = `# HELP apiserver_request_latencies Response latency distribution in microseconds for each verb, resource and client.
# TYPE apiserver_request_latencies histogram
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="125000"} 1994
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="250000"} 1997
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="500000"} 2000
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="1e+06"} 2005
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="2e+06"} 2012
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="4e+06"} 2017
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="8e+06"} 2024
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="+Inf"} 2025
apiserver_request_latencies_sum{resource="bindings",verb="POST"} 1.02726334e+08
apiserver_request_latencies_count{resource="bindings",verb="POST"} 2025
`

const validData = `# HELP cadvisor_version_info A metric with a constant '1' value labeled by kernel version, OS version, docker version, cadvisor version & cadvisor revision.
# TYPE cadvisor_version_info gauge
cadvisor_version_info{cadvisorRevision="",cadvisorVersion="",dockerVersion="1.8.2",kernelVersion="3.10.0-229.20.1.el7.x86_64",osVersion="CentOS Linux 7 (Core)"} 1
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0.013534896000000001
go_gc_duration_seconds{quantile="0.25"} 0.02469263
go_gc_duration_seconds{quantile="0.5"} 0.033727822000000005
go_gc_duration_seconds{quantile="0.75"} 0.03840335
go_gc_duration_seconds{quantile="1"} 0.049956604
go_gc_duration_seconds_sum 1970.341293002
go_gc_duration_seconds_count 65952
# HELP http_request_duration_microseconds The HTTP request latencies in microseconds.
# TYPE http_request_duration_microseconds summary
http_request_duration_microseconds{handler="prometheus",quantile="0.5"} 552048.506
http_request_duration_microseconds{handler="prometheus",quantile="0.9"} 5.876804288e+06
http_request_duration_microseconds{handler="prometheus",quantile="0.99"} 5.876804288e+06
http_request_duration_microseconds_sum{handler="prometheus"} 1.8909097205e+07
http_request_duration_microseconds_count{handler="prometheus"} 9
# HELP get_token_fail_count Counter of failed Token() requests to the alternate token source
# TYPE get_token_fail_count counter
get_token_fail_count 0
# HELP apiserver_request_latencies Response latency distribution in microseconds for each verb, resource and client.
# TYPE apiserver_request_latencies histogram
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="125000"} 1994
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="250000"} 1997
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="500000"} 2000
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="1e+06"} 2005
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="2e+06"} 2012
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="4e+06"} 2017
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="8e+06"} 2024
apiserver_request_latencies_bucket{resource="bindings",verb="POST",le="+Inf"} 2025
apiserver_request_latencies_sum{resource="bindings",verb="POST"} 1.02726334e+08
apiserver_request_latencies_count{resource="bindings",verb="POST"} 2025
`

const prometheusMulti = `
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`

const prometheusMultiSomeInvalid = `
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu3, host=foo,datacenter=us-east usage_idle=99,usage_busy=1
cpu,cpu=cpu4 , usage_idle=99,usage_busy=1
cpu,host=foo,datacenter=us-east usage_idle=99,usage_busy=1
`

func TestParseValidPrometheus(t *testing.T) {
	parser := PrometheusParser{}

	// Gauge value
	metrics, err := parser.Parse([]byte(validUniqueGauge))
	assert.NoError(t, err)
	assert.Len(t, metrics, 1)
	assert.Equal(t, "cadvisor_version_info", metrics[0].Name())
	assert.Equal(t, map[string]interface{}{
		"gauge": float64(1),
	}, metrics[0].Fields())
	assert.Equal(t, map[string]string{
		"osVersion":     "CentOS Linux 7 (Core)",
		"dockerVersion": "1.8.2",
		"kernelVersion": "3.10.0-229.20.1.el7.x86_64",
	}, metrics[0].Tags())

	// Counter value
	//parser.SetDefaultTags(map[string]string{"mytag": "mytagvalue"})
	metrics, err = parser.Parse([]byte(validUniqueCounter))
	assert.NoError(t, err)
	assert.Len(t, metrics, 1)
	assert.Equal(t, "get_token_fail_count", metrics[0].Name())
	assert.Equal(t, map[string]interface{}{
		"counter": float64(0),
	}, metrics[0].Fields())
	assert.Equal(t, map[string]string{}, metrics[0].Tags())

	// Summary data
	//parser.SetDefaultTags(map[string]string{})
	metrics, err = parser.Parse([]byte(validUniqueSummary))
	assert.NoError(t, err)
	assert.Len(t, metrics, 1)
	assert.Equal(t, "http_request_duration_microseconds", metrics[0].Name())
	assert.Equal(t, map[string]interface{}{
		"0.5":   552048.506,
		"0.9":   5.876804288e+06,
		"0.99":  5.876804288e+06,
		"count": 0.0,
		"sum":   1.8909097205e+07,
	}, metrics[0].Fields())
	assert.Equal(t, map[string]string{"handler": "prometheus"}, metrics[0].Tags())

	// histogram data
	metrics, err = parser.Parse([]byte(validUniqueHistogram))
	assert.NoError(t, err)
	assert.Len(t, metrics, 1)
	assert.Equal(t, "apiserver_request_latencies", metrics[0].Name())
	assert.Equal(t, map[string]interface{}{
		"500000": 2000.0,
		"count":  2025.0,
		"sum":    0.0,
		"250000": 1997.0,
		"2e+06":  2012.0,
		"4e+06":  2017.0,
		"8e+06":  2024.0,
		"+Inf":   2025.0,
		"125000": 1994.0,
		"1e+06":  2005.0,
	}, metrics[0].Fields())
	assert.Equal(t,
		map[string]string{"verb": "POST", "resource": "bindings"},
		metrics[0].Tags())
}

func TestParseLineInvalidPrometheus(t *testing.T) {
	parser := PrometheusParser{}
	metric, err := parser.ParseLine(validUniqueLine)
	assert.NotNil(t, err)
	assert.Nil(t, metric)
}
```
```diff
@@ -1,42 +1,31 @@
 package prometheus
 
 import (
-	"crypto/tls"
 	"errors"
 	"fmt"
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
-	"io/ioutil"
-	"net"
+	"github.com/prometheus/common/expfmt"
+	"github.com/prometheus/common/model"
+	"io"
 	"net/http"
 	"sync"
-	"time"
 )
 
 type Prometheus struct {
 	Urls []string
-
-	// Use SSL but skip chain & host verification
-	InsecureSkipVerify bool
-	// Bearer Token authorization file path
-	BearerToken string `toml:"bearer_token"`
 }
 
 var sampleConfig = `
   ## An array of urls to scrape metrics from.
   urls = ["http://localhost:9100/metrics"]
-
-  ## Use SSL but skip chain & host verification
-  # insecure_skip_verify = false
-  ## Use bearer token for authorization
-  # bearer_token = /path/to/bearer/token
 `
 
-func (p *Prometheus) SampleConfig() string {
+func (r *Prometheus) SampleConfig() string {
 	return sampleConfig
 }
 
-func (p *Prometheus) Description() string {
+func (r *Prometheus) Description() string {
 	return "Read metrics from one or many prometheus clients"
 }
 
@@ -44,16 +33,16 @@ var ErrProtocolError = errors.New("prometheus protocol error")
 
 // Reads stats from all configured servers accumulates stats.
 // Returns one of the errors encountered while gather stats (if any).
-func (p *Prometheus) Gather(acc telegraf.Accumulator) error {
+func (g *Prometheus) Gather(acc telegraf.Accumulator) error {
 	var wg sync.WaitGroup
 
 	var outerr error
 
-	for _, serv := range p.Urls {
+	for _, serv := range g.Urls {
 		wg.Add(1)
 		go func(serv string) {
 			defer wg.Done()
-			outerr = p.gatherURL(serv, acc)
+			outerr = g.gatherURL(serv, acc)
 		}(serv)
 	}
 
@@ -62,43 +51,8 @@ func (p *Prometheus) Gather(acc telegraf.Accumulator) error {
 	return outerr
 }
 
-var tr = &http.Transport{
-	ResponseHeaderTimeout: time.Duration(3 * time.Second),
-}
-
-var client = &http.Client{
-	Transport: tr,
-	Timeout:   time.Duration(4 * time.Second),
-}
-
-func (p *Prometheus) gatherURL(url string, acc telegraf.Accumulator) error {
-	collectDate := time.Now()
-	var req, err = http.NewRequest("GET", url, nil)
-	req.Header = make(http.Header)
-	var token []byte
-	var resp *http.Response
-
-	var rt http.RoundTripper = &http.Transport{
-		Dial: (&net.Dialer{
-			Timeout:   5 * time.Second,
-			KeepAlive: 30 * time.Second,
-		}).Dial,
-		TLSHandshakeTimeout: 5 * time.Second,
-		TLSClientConfig: &tls.Config{
-			InsecureSkipVerify: p.InsecureSkipVerify,
-		},
-		ResponseHeaderTimeout: time.Duration(3 * time.Second),
-	}
-
-	if p.BearerToken != "" {
-		token, err = ioutil.ReadFile(p.BearerToken)
-		if err != nil {
-			return err
-		}
-		req.Header.Set("Authorization", "Bearer "+string(token))
-	}
-
-	resp, err = rt.RoundTrip(req)
+func (g *Prometheus) gatherURL(url string, acc telegraf.Accumulator) error {
+	resp, err := http.Get(url)
 	if err != nil {
 		return fmt.Errorf("error making HTTP request to %s: %s", url, err)
 	}
@@ -106,33 +60,38 @@ func (p *Prometheus) gatherURL(url string, acc telegraf.Accumulator) error {
 	if resp.StatusCode != http.StatusOK {
 		return fmt.Errorf("%s returned HTTP status %s", url, resp.Status)
 	}
+	format := expfmt.ResponseFormat(resp.Header)
 
-	body, err := ioutil.ReadAll(resp.Body)
-	if err != nil {
-		return fmt.Errorf("error reading body: %s", err)
-	}
+	decoder := expfmt.NewDecoder(resp.Body, format)
+	options := &expfmt.DecodeOptions{
+		Timestamp: model.Now(),
+	}
+	sampleDecoder := &expfmt.SampleDecoder{
+		Dec:  decoder,
+		Opts: options,
+	}
 
-	// Headers
-	headers := make(map[string]string)
-	for key, value := range headers {
-		headers[key] = value
-	}
-
-	// Prepare Prometheus parser config
-	promparser := PrometheusParser{
-		PromFormat: headers,
-	}
-
-	metrics, err := promparser.Parse(body)
-	if err != nil {
-		return fmt.Errorf("error getting processing samples for %s: %s",
-			url, err)
-	}
-	// Add (or not) collected metrics
-	for _, metric := range metrics {
-		tags := metric.Tags()
-		tags["url"] = url
-		acc.AddFields(metric.Name(), metric.Fields(), tags, collectDate)
-	}
+	for {
+		var samples model.Vector
+		err := sampleDecoder.Decode(&samples)
+		if err == io.EOF {
+			break
+		} else if err != nil {
+			return fmt.Errorf("error getting processing samples for %s: %s",
+				url, err)
+		}
+		for _, sample := range samples {
+			tags := make(map[string]string)
+			for key, value := range sample.Metric {
+				if key == model.MetricNameLabel {
+					continue
+				}
+				tags[string(key)] = string(value)
+			}
+			acc.Add("prometheus_"+string(sample.Metric[model.MetricNameLabel]),
				float64(sample.Value), tags)
+		}
+	}
 
 	return nil
```
```diff
@@ -40,6 +40,16 @@ func TestPrometheusGeneratesMetrics(t *testing.T) {
 	err := p.Gather(&acc)
 	require.NoError(t, err)
 
-	assert.True(t, acc.HasFloatField("go_gc_duration_seconds", "count"))
-	assert.True(t, acc.HasFloatField("go_goroutines", "gauge"))
+	expected := []struct {
+		name  string
+		value float64
+		tags  map[string]string
+	}{
+		{"prometheus_go_gc_duration_seconds_count", 7, map[string]string{}},
+		{"prometheus_go_goroutines", 15, map[string]string{}},
+	}
 
+	for _, e := range expected {
+		assert.True(t, acc.HasFloatField(e.name, "value"))
+	}
 }
```
```diff
@@ -122,11 +122,7 @@ func (r *RabbitMQ) Description() string {
 func (r *RabbitMQ) Gather(acc telegraf.Accumulator) error {
 	if r.Client == nil {
-		tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
-		r.Client = &http.Client{
-			Transport: tr,
-			Timeout:   time.Duration(4 * time.Second),
-		}
+		r.Client = &http.Client{}
 	}
 
 	var errChan = make(chan error, len(gatherFunctions))
```
```diff
@@ -177,11 +177,8 @@ func (r *Raindrops) getTags(addr *url.URL) map[string]string {
 func init() {
 	inputs.Add("raindrops", func() telegraf.Input {
-		return &Raindrops{http_client: &http.Client{
-			Transport: &http.Transport{
-				ResponseHeaderTimeout: time.Duration(3 * time.Second),
-			},
-			Timeout: time.Duration(4 * time.Second),
-		}}
+		return &Raindrops{http_client: &http.Client{Transport: &http.Transport{
+			ResponseHeaderTimeout: time.Duration(3 * time.Second),
+		}}}
 	})
 }
```
@@ -1,86 +0,0 @@

# Telegraf Plugin: Redis

### Configuration:

```
# Read Redis's basic status information
[[inputs.redis]]
  ## specify servers via a url matching:
  ##  [protocol://][:password]@address[:port]
  ##  e.g.
  ##    tcp://localhost:6379
  ##    tcp://:password@192.168.99.100
  ##
  ## If no servers are specified, then localhost is used as the host.
  ## If no port is specified, 6379 is used
  servers = ["tcp://localhost:6379"]
```

### Measurements & Fields:

- Measurement
  - uptime_in_seconds
  - connected_clients
  - used_memory
  - used_memory_rss
  - used_memory_peak
  - used_memory_lua
  - rdb_changes_since_last_save
  - total_connections_received
  - total_commands_processed
  - instantaneous_ops_per_sec
  - instantaneous_input_kbps
  - instantaneous_output_kbps
  - sync_full
  - sync_partial_ok
  - sync_partial_err
  - expired_keys
  - evicted_keys
  - keyspace_hits
  - keyspace_misses
  - pubsub_channels
  - pubsub_patterns
  - latest_fork_usec
  - connected_slaves
  - master_repl_offset
  - repl_backlog_active
  - repl_backlog_size
  - repl_backlog_histlen
  - mem_fragmentation_ratio
  - used_cpu_sys
  - used_cpu_user
  - used_cpu_sys_children
  - used_cpu_user_children

### Tags:

- All measurements have the following tags:
  - port
  - server

### Example Output:

Using this configuration:
```
[[inputs.redis]]
  ## specify servers via a url matching:
  ##  [protocol://][:password]@address[:port]
  ##  e.g.
  ##    tcp://localhost:6379
  ##    tcp://:password@192.168.99.100
  ##
  ## If no servers are specified, then localhost is used as the host.
  ## If no port is specified, 6379 is used
  servers = ["tcp://localhost:6379"]
```

When run with:
```
./telegraf -config telegraf.conf -input-filter redis -test
```

It produces:
```
* Plugin: redis, Collection 1
> redis,port=6379,server=localhost clients=1i,connected_slaves=0i,evicted_keys=0i,expired_keys=0i,instantaneous_ops_per_sec=0i,keyspace_hitrate=0,keyspace_hits=0i,keyspace_misses=2i,latest_fork_usec=0i,master_repl_offset=0i,mem_fragmentation_ratio=3.58,pubsub_channels=0i,pubsub_patterns=0i,rdb_changes_since_last_save=0i,repl_backlog_active=0i,repl_backlog_histlen=0i,repl_backlog_size=1048576i,sync_full=0i,sync_partial_err=0i,sync_partial_ok=0i,total_commands_processed=4i,total_connections_received=2i,uptime=869i,used_cpu_sys=0.07,used_cpu_sys_children=0,used_cpu_user=0.1,used_cpu_user_children=0,used_memory=502048i,used_memory_lua=33792i,used_memory_peak=501128i,used_memory_rss=1798144i 1457052084987848383
```
```diff
@@ -9,7 +9,6 @@ import (
 	"strconv"
 	"strings"
 	"sync"
-	"time"
 
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
```
```diff
@@ -31,8 +30,6 @@ var sampleConfig = `
   servers = ["tcp://localhost:6379"]
 `
 
-var defaultTimeout = 5 * time.Second
-
 func (r *Redis) SampleConfig() string {
 	return sampleConfig
 }
```
```diff
@@ -123,15 +120,12 @@ func (r *Redis) gatherServer(addr *url.URL, acc telegraf.Accumulator) error {
 		addr.Host = addr.Host + ":" + defaultPort
 	}
 
-	c, err := net.DialTimeout("tcp", addr.Host, defaultTimeout)
+	c, err := net.Dial("tcp", addr.Host)
 	if err != nil {
 		return fmt.Errorf("Unable to connect to redis server '%s': %s", addr.Host, err)
 	}
 	defer c.Close()
 
-	// Extend connection
-	c.SetDeadline(time.Now().Add(defaultTimeout))
-
 	if addr.User != nil {
 		pwd, set := addr.User.Password()
 		if set && pwd != "" {
```
```diff
@@ -5,7 +5,6 @@ import (
 	"fmt"
 	"net/http"
 	"net/url"
-	"time"
 
 	"github.com/influxdata/telegraf"
 	"github.com/influxdata/telegraf/plugins/inputs"
```
```diff
@@ -21,12 +20,7 @@ type Riak struct {
 // NewRiak return a new instance of Riak with a default http client
 func NewRiak() *Riak {
-	tr := &http.Transport{ResponseHeaderTimeout: time.Duration(3 * time.Second)}
-	client := &http.Client{
-		Transport: tr,
-		Timeout:   time.Duration(4 * time.Second),
-	}
-	return &Riak{client: client}
+	return &Riak{client: http.DefaultClient}
 }
 
 // Type riakStats represents the data that is received from Riak
```
```diff
@@ -49,7 +49,7 @@ func (s *Sensors) Gather(acc telegraf.Accumulator) error {
 	var found bool
 
 	for _, sensor := range s.Sensors {
-		parts := strings.SplitN(sensor, ":", 2)
+		parts := strings.SplitN(":", sensor, 2)
 
 		if parts[0] == chipName {
 			if parts[1] == "*" || parts[1] == featureLabel {
```
@@ -1,549 +0,0 @@
# SNMP Input Plugin

The SNMP input plugin gathers metrics from SNMP agents.

### Configuration:

#### Very simple example

In this example, the plugin gathers the value of the OID:

- `.1.3.6.1.2.1.2.2.1.4.1`

```toml
# Very Simple Example
[[inputs.snmp]]

  [[inputs.snmp.host]]
    address = "127.0.0.1:161"
    # SNMP community
    community = "public" # default public
    # SNMP version (1, 2 or 3)
    # Version 3 not supported yet
    version = 2 # default 2
    # Simple list of OIDs to get, in addition to "collect"
    get_oids = [".1.3.6.1.2.1.2.2.1.4.1"]
```

#### Simple example

In this example, Telegraf gathers the values of the OIDs:

- named **ifnumber**
- named **interface_speed**

With the **inputs.snmp.get** sections, the plugin resolves each name to an OID number:

- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*

As you can see, *ifSpeed* is not a valid OID. To get the valid OID, the plugin looks it up in `snmptranslate_file`:

- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`

The plugin then appends `instance` to the corresponding OID:

- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`

So in this example, the plugin gathers the values of the OIDs:

- `.1.3.6.1.2.1.2.1.0`
- `.1.3.6.1.2.1.2.2.1.5.1`

```toml
# Simple example
[[inputs.snmp]]
  ## Use 'oids.txt' file to translate oids to names
  ## To generate 'oids.txt' you need to run:
  ## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
  ## Or if you have an other MIB folder with custom MIBs
  ## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
  snmptranslate_file = "/tmp/oids.txt"

  [[inputs.snmp.host]]
    address = "127.0.0.1:161"
    # SNMP community
    community = "public" # default public
    # SNMP version (1, 2 or 3)
    # Version 3 not supported yet
    version = 2 # default 2
    # Which get/bulk do you want to collect for this host
    collect = ["ifnumber", "interface_speed"]

  [[inputs.snmp.get]]
    name = "ifnumber"
    oid = ".1.3.6.1.2.1.2.1.0"

  [[inputs.snmp.get]]
    name = "interface_speed"
    oid = "ifSpeed"
    instance = "1"
```
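The translate-then-append behavior above can be sketched as follows (a minimal illustration, not the plugin's actual Go code; the hand-written lookup table stands in for `snmptranslate_file`):

```python
# Minimal sketch of the name -> OID resolution described above.
# The dict stands in for snmptranslate_file; values come from the example.
name_to_oid = {"ifSpeed": ".1.3.6.1.2.1.2.2.1.5"}

def resolve(oid, instance=""):
    """Translate a MIB name to its numeric OID, then append the instance."""
    numeric = name_to_oid.get(oid, oid)  # already-numeric OIDs pass through
    return numeric + "." + instance if instance else numeric

print(resolve(".1.3.6.1.2.1.2.1.0"))    # ifnumber: already numeric
print(resolve("ifSpeed", instance="1")) # interface_speed with instance 1
```

The same two results appear in the OID list above: the numeric OID is left alone, and the translated *ifSpeed* gains the `.1` instance suffix.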

#### Simple bulk example

In this example, Telegraf gathers the values of the OIDs:

- named **ifnumber**
- named **interface_speed**
- named **if_out_octets**

With the **inputs.snmp.get** sections, the plugin resolves the OID numbers:

- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed*

With the **inputs.snmp.bulk** section, the plugin resolves the OID number:

- **if_out_octets** => *ifOutOctets*

As you can see, *ifSpeed* and *ifOutOctets* are not valid OIDs. To get the valid OIDs, the plugin looks them up in `snmptranslate_file`:

- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5`
- **if_out_octets** => *ifOutOctets* => `.1.3.6.1.2.1.2.2.1.16`

The plugin then appends `instance` to the corresponding OID:

- **ifnumber** => `.1.3.6.1.2.1.2.1.0`
- **interface_speed** => *ifSpeed* => `.1.3.6.1.2.1.2.2.1.5.1`

And since **if_out_octets** is a bulk request, the plugin gathers all OIDs in the table:

- `.1.3.6.1.2.1.2.2.1.16.1`
- `.1.3.6.1.2.1.2.2.1.16.2`
- `.1.3.6.1.2.1.2.2.1.16.3`
- `.1.3.6.1.2.1.2.2.1.16.4`
- `.1.3.6.1.2.1.2.2.1.16.5`
- `...`

So in this example, the plugin gathers the values of the OIDs:

- `.1.3.6.1.2.1.2.1.0`
- `.1.3.6.1.2.1.2.2.1.5.1`
- `.1.3.6.1.2.1.2.2.1.16.1`
- `.1.3.6.1.2.1.2.2.1.16.2`
- `.1.3.6.1.2.1.2.2.1.16.3`
- `.1.3.6.1.2.1.2.2.1.16.4`
- `.1.3.6.1.2.1.2.2.1.16.5`
- `...`

```toml
# Simple bulk example
[[inputs.snmp]]
  ## Use 'oids.txt' file to translate oids to names
  ## To generate 'oids.txt' you need to run:
  ## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
  ## Or if you have an other MIB folder with custom MIBs
  ## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
  snmptranslate_file = "/tmp/oids.txt"

  [[inputs.snmp.host]]
    address = "127.0.0.1:161"
    # SNMP community
    community = "public" # default public
    # SNMP version (1, 2 or 3)
    # Version 3 not supported yet
    version = 2 # default 2
    # Which get/bulk do you want to collect for this host
    collect = ["interface_speed", "if_number", "if_out_octets"]

  [[inputs.snmp.get]]
    name = "interface_speed"
    oid = "ifSpeed"
    instance = "1"

  [[inputs.snmp.get]]
    name = "if_number"
    oid = "ifNumber"

  [[inputs.snmp.bulk]]
    name = "if_out_octets"
    oid = "ifOutOctets"
```
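How a bulk request expands into every instance under the table OID can be sketched like this (illustrative only; the agent data here is invented):

```python
# Sketch: a GETBULK on a table OID returns every instance under that prefix.
# The dict simulates an agent's ifOutOctets column (invented values).
agent = {
    ".1.3.6.1.2.1.2.2.1.16.1": 1200,
    ".1.3.6.1.2.1.2.2.1.16.2": 3400,
    ".1.3.6.1.2.1.2.2.1.16.3": 0,
}

def walk(prefix):
    """Return all (oid, value) pairs whose OID sits under the table prefix."""
    return sorted((oid, v) for oid, v in agent.items()
                  if oid.startswith(prefix + "."))

for oid, value in walk(".1.3.6.1.2.1.2.2.1.16"):
    print(oid, value)
```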

#### Table example

In this example, we remove the `collect` attribute from the host section,
but you can still use it in combination with the following part.

Note: this example behaves like a bulk request, just expressed with a
different configuration.

Telegraf gathers the values of the OIDs of the table:

- named **iftable1**

With the **inputs.snmp.table** section, the plugin resolves the OID number:

- **iftable1** => `.1.3.6.1.2.1.31.1.1.1`

Because **iftable1** is a table, the plugin gathers all OIDs in the table and in the subtables:

- `.1.3.6.1.2.1.31.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.1....`
- `.1.3.6.1.2.1.31.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.2....`
- `.1.3.6.1.2.1.31.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.3....`
- `.1.3.6.1.2.1.31.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.4....`
- `.1.3.6.1.2.1.31.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.5....`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `...`

```toml
# Table example
[[inputs.snmp]]
  ## Use 'oids.txt' file to translate oids to names
  ## To generate 'oids.txt' you need to run:
  ## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
  ## Or if you have an other MIB folder with custom MIBs
  ## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
  snmptranslate_file = "/tmp/oids.txt"

  [[inputs.snmp.host]]
    address = "127.0.0.1:161"
    # SNMP community
    community = "public" # default public
    # SNMP version (1, 2 or 3)
    # Version 3 not supported yet
    version = 2 # default 2
    # Which table do you want to collect
    [[inputs.snmp.host.table]]
      name = "iftable1"

  # table without mapping nor subtables
  # This is like a bulk request
  [[inputs.snmp.table]]
    name = "iftable1"
    oid = ".1.3.6.1.2.1.31.1.1.1"
```

#### Table with subtable example

In this example, we remove the `collect` attribute from the host section,
but you can still use it in combination with the following part.

Note: this example behaves like a bulk request, just expressed with a
different configuration.

Telegraf gathers the values of the OIDs of the table:

- named **iftable2**

With the **inputs.snmp.table** section *AND* its **sub_tables** attribute,
the plugin gets OIDs from the subtables:

- **iftable2** => `.1.3.6.1.2.1.2.2.1.13`

Because **iftable2** is a table, the plugin gathers all OIDs in the subtables:

- `.1.3.6.1.2.1.2.2.1.13.1`
- `.1.3.6.1.2.1.2.2.1.13.2`
- `.1.3.6.1.2.1.2.2.1.13.3`
- `.1.3.6.1.2.1.2.2.1.13.4`
- `.1.3.6.1.2.1.2.2.1.13....`

```toml
# Table with subtable example
[[inputs.snmp]]
  ## Use 'oids.txt' file to translate oids to names
  ## To generate 'oids.txt' you need to run:
  ## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
  ## Or if you have an other MIB folder with custom MIBs
  ## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
  snmptranslate_file = "/tmp/oids.txt"

  [[inputs.snmp.host]]
    address = "127.0.0.1:161"
    # SNMP community
    community = "public" # default public
    # SNMP version (1, 2 or 3)
    # Version 3 not supported yet
    version = 2 # default 2
    # Which table do you want to collect
    [[inputs.snmp.host.table]]
      name = "iftable2"

  # table without mapping but with subtables
  [[inputs.snmp.table]]
    name = "iftable2"
    sub_tables = [".1.3.6.1.2.1.2.2.1.13"]
    # note: when sub_tables is defined, the oid attribute is not used
```

#### Table with mapping example

In this example, we remove the `collect` attribute from the host section,
but you can still use it in combination with the following part.

Telegraf gathers the values of the OIDs of the table:

- named **iftable3**

With the **inputs.snmp.table** section, the plugin resolves the OID number:

- **iftable3** => `.1.3.6.1.2.1.31.1.1.1`

Because **iftable3** is a table, the plugin would gather all OIDs in the table and in the subtables:

- `.1.3.6.1.2.1.31.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.1....`
- `.1.3.6.1.2.1.31.1.1.1.2`
- `.1.3.6.1.2.1.31.1.1.1.2....`
- `.1.3.6.1.2.1.31.1.1.1.3`
- `.1.3.6.1.2.1.31.1.1.1.3....`
- `.1.3.6.1.2.1.31.1.1.1.4`
- `.1.3.6.1.2.1.31.1.1.1.4....`
- `.1.3.6.1.2.1.31.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.5....`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `...`

But the **include_instances** attribute filters which OIDs are gathered. As you can see, there is another attribute, `mapping_table`. Together, `include_instances` and `mapping_table` build a hash table that keeps only the OIDs you want.

Let's say we have the following data on the SNMP server:

- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has the value `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has the value `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has the value `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has the value `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has the value `eth1`

The plugin builds the following hash table:

| instance name | instance id |
|---------------|-------------|
| `enp5s0`      | `1`         |
| `enp5s1`      | `2`         |
| `enp5s2`      | `3`         |
| `eth0`        | `4`         |
| `eth1`        | `5`         |

With the **include_instances** attribute set to `["enp5s0", "eth1"]`, the plugin gathers the following OIDs:

- `.1.3.6.1.2.1.31.1.1.1.1.1`
- `.1.3.6.1.2.1.31.1.1.1.1.5`
- `.1.3.6.1.2.1.31.1.1.1.2.1`
- `.1.3.6.1.2.1.31.1.1.1.2.5`
- `.1.3.6.1.2.1.31.1.1.1.3.1`
- `.1.3.6.1.2.1.31.1.1.1.3.5`
- `.1.3.6.1.2.1.31.1.1.1.4.1`
- `.1.3.6.1.2.1.31.1.1.1.4.5`
- `.1.3.6.1.2.1.31.1.1.1.5.1`
- `.1.3.6.1.2.1.31.1.1.1.5.5`
- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.5`
- `...`

Note: the plugin adds the instance name as the tag *instance*.
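The mapping-and-filter step above can be sketched as follows (an illustration, not the plugin's actual Go code; the simulated mapping-table rows reuse the example values):

```python
# Sketch of the mapping step: query the mapping table, build
# {instance name: instance id}, then keep only include_instances.
mapping_rows = {  # simulated values of .1.3.6.1.2.1.31.1.1.1.1.<id>
    "1": "enp5s0", "2": "enp5s1", "3": "enp5s2", "4": "eth0", "5": "eth1",
}
include_instances = ["enp5s0", "eth1"]

# name -> id hash table, as in the table above
name_to_id = {name: inst_id for inst_id, name in mapping_rows.items()}

# Only instances 1 and 5 survive; every table column OID gets those suffixes.
kept_ids = sorted(name_to_id[n] for n in include_instances)
table_oid = ".1.3.6.1.2.1.31.1.1.1"
gathered = [f"{table_oid}.{col}.{i}" for col in (1, 2) for i in kept_ids]
print(gathered)
```

The printed list matches the first four gathered OIDs above; the remaining columns follow the same pattern.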

```toml
# Simple table with mapping example
[[inputs.snmp]]
  ## Use 'oids.txt' file to translate oids to names
  ## To generate 'oids.txt' you need to run:
  ## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
  ## Or if you have an other MIB folder with custom MIBs
  ## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
  snmptranslate_file = "/tmp/oids.txt"

  [[inputs.snmp.host]]
    address = "127.0.0.1:161"
    # SNMP community
    community = "public" # default public
    # SNMP version (1, 2 or 3)
    # Version 3 not supported yet
    version = 2 # default 2
    # Which table do you want to collect
    [[inputs.snmp.host.table]]
      name = "iftable3"
      include_instances = ["enp5s0", "eth1"]

  # table with mapping but without subtables
  [[inputs.snmp.table]]
    name = "iftable3"
    oid = ".1.3.6.1.2.1.31.1.1.1"
    # if empty, get all instances
    mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
    # if empty, get all subtables
```

#### Table with both mapping and subtable example

In this example, we remove the `collect` attribute from the host section,
but you can still use it in combination with the following part.

Telegraf gathers the values of the OIDs of the table:

- named **iftable4**

With the **inputs.snmp.table** section *AND* its **sub_tables** attribute,
the plugin gets OIDs from the subtables:

- **iftable4** => `.1.3.6.1.2.1.31.1.1.1`

Because **iftable4** is a table, the plugin would gather all OIDs in the subtables:

- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.2`
- `.1.3.6.1.2.1.31.1.1.1.6.3`
- `.1.3.6.1.2.1.31.1.1.1.6.4`
- `.1.3.6.1.2.1.31.1.1.1.6....`
- `.1.3.6.1.2.1.31.1.1.1.10.1`
- `.1.3.6.1.2.1.31.1.1.1.10.2`
- `.1.3.6.1.2.1.31.1.1.1.10.3`
- `.1.3.6.1.2.1.31.1.1.1.10.4`
- `.1.3.6.1.2.1.31.1.1.1.10....`

But the **include_instances** attribute filters which OIDs are gathered. As you can see, there is another attribute, `mapping_table`. Together, `include_instances` and `mapping_table` build a hash table that keeps only the OIDs you want.

Let's say we have the following data on the SNMP server:

- OID: `.1.3.6.1.2.1.31.1.1.1.1.1` has the value `enp5s0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.2` has the value `enp5s1`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.3` has the value `enp5s2`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.4` has the value `eth0`
- OID: `.1.3.6.1.2.1.31.1.1.1.1.5` has the value `eth1`

The plugin builds the following hash table:

| instance name | instance id |
|---------------|-------------|
| `enp5s0`      | `1`         |
| `enp5s1`      | `2`         |
| `enp5s2`      | `3`         |
| `eth0`        | `4`         |
| `eth1`        | `5`         |

With the **include_instances** attribute, the plugin gathers the following OIDs:

- `.1.3.6.1.2.1.31.1.1.1.6.1`
- `.1.3.6.1.2.1.31.1.1.1.6.5`
- `.1.3.6.1.2.1.31.1.1.1.10.1`
- `.1.3.6.1.2.1.31.1.1.1.10.5`

Note: the plugin adds the instance name as the tag *instance*.

```toml
# Table with both mapping and subtable example
[[inputs.snmp]]
  ## Use 'oids.txt' file to translate oids to names
  ## To generate 'oids.txt' you need to run:
  ## snmptranslate -m all -Tz -On | sed -e 's/"//g' > /tmp/oids.txt
  ## Or if you have an other MIB folder with custom MIBs
  ## snmptranslate -M /mycustommibfolder -Tz -On -m all | sed -e 's/"//g' > oids.txt
  snmptranslate_file = "/tmp/oids.txt"

  [[inputs.snmp.host]]
    address = "127.0.0.1:161"
    # SNMP community
    community = "public" # default public
    # SNMP version (1, 2 or 3)
    # Version 3 not supported yet
    version = 2 # default 2
    # Which table do you want to collect
    [[inputs.snmp.host.table]]
      name = "iftable4"
      include_instances = ["enp5s0", "eth1"]

  # table with both mapping and subtables
  [[inputs.snmp.table]]
    name = "iftable4"
    # if empty, get all instances
    mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
    # if empty, get all subtables
    # sub_tables need not be "real" subtables
    sub_tables = [".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
    # note: when sub_tables is defined, the oid attribute is not used

  # SNMP SUBTABLES
  [[inputs.snmp.subtable]]
    name = "bytes_recv"
    oid = ".1.3.6.1.2.1.31.1.1.1.6"
    unit = "octets"

  [[inputs.snmp.subtable]]
    name = "bytes_send"
    oid = ".1.3.6.1.2.1.31.1.1.1.10"
    unit = "octets"
```

#### Configuration notes

- In the **inputs.snmp.table** section, the `oid` attribute is ignored if
  the `sub_tables` attribute is defined.

- In an **inputs.snmp.subtable** section, you can put a name from `snmptranslate_file`
  as the `oid` attribute instead of a valid OID.
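The second note can be sketched like this (a hypothetical fragment: the subtable name `bytes_in` is invented, and it assumes *ifHCInOctets* is present in your `snmptranslate_file`):

```toml
# Hypothetical sketch: a subtable referenced by MIB name instead of a
# numeric OID; the name is resolved through snmptranslate_file.
[[inputs.snmp.subtable]]
  name = "bytes_in"
  oid = "ifHCInOctets"
  unit = "octets"
```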

### Measurements & Fields:

With the last example (Table with both mapping and subtable example):

- ifHCOutOctets
  - ifHCOutOctets
- ifInDiscards
  - ifInDiscards
- ifHCInOctets
  - ifHCInOctets

### Tags:

With the last example (Table with both mapping and subtable example):

- ifHCOutOctets
  - host
  - instance
  - unit
- ifInDiscards
  - host
  - instance
- ifHCInOctets
  - host
  - instance
  - unit

### Example Output:

With the last example (Table with both mapping and subtable example):

```
ifHCOutOctets,host=127.0.0.1,instance=enp5s0,unit=octets ifHCOutOctets=10565628i 1456878706044462901
ifInDiscards,host=127.0.0.1,instance=enp5s0 ifInDiscards=0i 1456878706044510264
ifHCInOctets,host=127.0.0.1,instance=enp5s0,unit=octets ifHCInOctets=76351777i 1456878706044531312
```
@@ -4,6 +4,7 @@ import (
 	"io/ioutil"
 	"log"
 	"net"
+	"regexp"
 	"strconv"
 	"strings"
 	"time"
@@ -19,16 +20,7 @@ type Snmp struct {
 	Host              []Host
 	Get               []Data
 	Bulk              []Data
-	Table             []Table
-	Subtable          []Subtable
 	SnmptranslateFile string
-
-	nameToOid   map[string]string
-	initNode    Node
-	subTableMap map[string]Subtable
-
-	// TODO change as unexportable
-	//OidInstanceMapping map[string]map[string]string
 }

 type Host struct {
@@ -44,54 +36,9 @@ type Host struct {
 	Collect []string
 	// easy get oids
 	GetOids []string
-	// Table
-	Table []HostTable
 	// Oids
 	getOids  []Data
 	bulkOids []Data
-	tables   []HostTable
-	// array of processed oids
-	// to skip oid duplication
-	processedOids []string
-}
-
-type Table struct {
-	// name = "iftable"
-	Name string
-	// oid = ".1.3.6.1.2.1.31.1.1.1"
-	Oid string
-	//if empty get all instances
-	//mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
-	MappingTable string
-	// if empty get all subtables
-	// sub_tables could be not "real subtables"
-	//sub_tables=[".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
-	SubTables []string
-}
-
-type HostTable struct {
-	// name = "iftable"
-	Name string
-	// Includes only these instances
-	// include_instances = ["eth0", "eth1"]
-	IncludeInstances []string
-	// Excludes only these instances
-	// exclude_instances = ["eth20", "eth21"]
-	ExcludeInstances []string
-	// From Table struct
-	oid          string
-	mappingTable string
-	subTables    []string
-}
-
-// TODO find better names
-type Subtable struct {
-	//name = "bytes_send"
-	Name string
-	//oid = ".1.3.6.1.2.1.31.1.1.1.10"
-	Oid string
-	//unit = "octets"
-	Unit string
 }

 type Data struct {
@@ -116,8 +63,13 @@ type Node struct {
 	subnodes map[string]Node
 }

-// TODO move this var to snmp struct
-var OidInstanceMapping = make(map[string]map[string]string)
+var initNode = Node{
+	id:       "1",
+	name:     "",
+	subnodes: make(map[string]Node),
+}
+
+var NameToOid = make(map[string]string)

 var sampleConfig = `
 ## Use 'oids.txt' file to translate oids to names
@@ -161,7 +113,7 @@ var sampleConfig = `
   [[inputs.snmp.get]]
     name = "interface_speed"
     oid = "ifSpeed"
-    instance = "0"
+    instance = 0

   [[inputs.snmp.get]]
     name = "sysuptime"
@@ -177,49 +129,6 @@ var sampleConfig = `
     name = "ifoutoctets"
     max_repetition = 127
     oid = "ifOutOctets"
-
-  [[inputs.snmp.host]]
-    address = "192.168.2.13:161"
-    #address = "127.0.0.1:161"
-    community = "public"
-    version = 2
-    timeout = 2.0
-    retries = 2
-    #collect = ["mybulk", "sysservices", "sysdescr", "systype"]
-    collect = ["sysuptime" ]
-    [[inputs.snmp.host.table]]
-      name = "iftable3"
-      include_instances = ["enp5s0", "eth1"]
-
-  # SNMP TABLEs
-  # table without mapping neither subtables
-  [[inputs.snmp.table]]
-    name = "iftable1"
-    oid = ".1.3.6.1.2.1.31.1.1.1"
-
-  # table without mapping but with subtables
-  [[inputs.snmp.table]]
-    name = "iftable2"
-    oid = ".1.3.6.1.2.1.31.1.1.1"
-    sub_tables = [".1.3.6.1.2.1.2.2.1.13"]
-
-  # table with mapping but without subtables
-  [[inputs.snmp.table]]
-    name = "iftable3"
-    oid = ".1.3.6.1.2.1.31.1.1.1"
-    # if empty. get all instances
-    mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
-    # if empty, get all subtables
-
-  # table with both mapping and subtables
-  [[inputs.snmp.table]]
-    name = "iftable4"
-    oid = ".1.3.6.1.2.1.31.1.1.1"
-    # if empty get all instances
-    mapping_table = ".1.3.6.1.2.1.31.1.1.1.1"
-    # if empty get all subtables
-    # sub_tables could be not "real subtables"
-    sub_tables=[".1.3.6.1.2.1.2.2.1.13", "bytes_recv", "bytes_send"]
 `

 // SampleConfig returns sample configuration message
@@ -280,36 +189,21 @@ func findnodename(node Node, ids []string) (string, string) {
 }

 func (s *Snmp) Gather(acc telegraf.Accumulator) error {
-	// TODO put this in cache on first run
-	// Create subtables mapping
-	if len(s.subTableMap) == 0 {
-		s.subTableMap = make(map[string]Subtable)
-		for _, sb := range s.Subtable {
-			s.subTableMap[sb.Name] = sb
-		}
-	}
-	// TODO put this in cache on first run
 	// Create oid tree
-	if s.SnmptranslateFile != "" && len(s.initNode.subnodes) == 0 {
-		s.nameToOid = make(map[string]string)
-		s.initNode = Node{
-			id:       "1",
-			name:     "",
-			subnodes: make(map[string]Node),
-		}
-
+	if s.SnmptranslateFile != "" && len(initNode.subnodes) == 0 {
 		data, err := ioutil.ReadFile(s.SnmptranslateFile)
 		if err != nil {
 			log.Printf("Reading SNMPtranslate file error: %s", err)
 			return err
 		} else {
 			for _, line := range strings.Split(string(data), "\n") {
-				oids := strings.Fields(string(line))
-				if len(oids) == 2 && oids[1] != "" {
-					oid_name := oids[0]
-					oid := oids[1]
-					fillnode(s.initNode, oid_name, strings.Split(string(oid), "."))
-					s.nameToOid[oid_name] = oid
+				oidsRegEx := regexp.MustCompile(`([^\t]*)\t*([^\t]*)`)
+				oids := oidsRegEx.FindStringSubmatch(string(line))
+				if oids[2] != "" {
+					oid_name := oids[1]
+					oid := oids[2]
+					fillnode(initNode, oid_name, strings.Split(string(oid), "."))
+					NameToOid[oid_name] = oid
 				}
 			}
 		}
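The regex-based side of the hunk above splits each line of the translate file on a run of tabs instead of on arbitrary whitespace. A rough Python equivalent of that parse (illustrative; the sample lines are invented):

```python
import re

# Capture the name before the first tab run and the OID after it,
# mirroring the Go regex `([^\t]*)\t*([^\t]*)` shown above.
line_re = re.compile(r"([^\t]*)\t*([^\t]*)")

def parse_line(line):
    """Return (name, oid) from one translate-file line, or None if no OID."""
    m = line_re.match(line)
    name, oid = m.group(1), m.group(2)
    return (name, oid) if oid else None

print(parse_line("ifSpeed\t.1.3.6.1.2.1.2.2.1.5"))  # -> name/OID pair
print(parse_line("no-oid-here"))                     # -> None
```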
@@ -333,7 +227,7 @@ func (s *Snmp) Gather(acc telegraf.Accumulator) error {
 	// Get Easy GET oids
 	for _, oidstring := range host.GetOids {
 		oid := Data{}
-		if val, ok := s.nameToOid[oidstring]; ok {
+		if val, ok := NameToOid[oidstring]; ok {
 			// TODO should we add the 0 instance ?
 			oid.Name = oidstring
 			oid.Oid = val
@@ -354,7 +248,7 @@ func (s *Snmp) Gather(acc telegraf.Accumulator) error {
 	// Get GET oids
 	for _, oid := range s.Get {
 		if oid.Name == oid_name {
-			if val, ok := s.nameToOid[oid.Oid]; ok {
+			if val, ok := NameToOid[oid.Oid]; ok {
 				// TODO should we add the 0 instance ?
 				if oid.Instance != "" {
 					oid.rawOid = "." + val + "." + oid.Instance
@@ -370,7 +264,7 @@ func (s *Snmp) Gather(acc telegraf.Accumulator) error {
 	// Get GETBULK oids
 	for _, oid := range s.Bulk {
 		if oid.Name == oid_name {
-			if val, ok := s.nameToOid[oid.Oid]; ok {
+			if val, ok := NameToOid[oid.Oid]; ok {
 				oid.rawOid = "." + val
 			} else {
 				oid.rawOid = oid.Oid
@@ -379,219 +273,18 @@ func (s *Snmp) Gather(acc telegraf.Accumulator) error {
 			}
 		}
 	}
-	// Table
-	for _, hostTable := range host.Table {
-		for _, snmpTable := range s.Table {
-			if hostTable.Name == snmpTable.Name {
-				table := hostTable
-				table.oid = snmpTable.Oid
-				table.mappingTable = snmpTable.MappingTable
-				table.subTables = snmpTable.SubTables
-				host.tables = append(host.tables, table)
-			}
-		}
-	}
-	// Launch Mapping
-	// TODO put this in cache on first run
-	// TODO save mapping and computed oids
-	// to do it only the first time
-	// only if len(s.OidInstanceMapping) == 0
-	if len(OidInstanceMapping) >= 0 {
-		if err := host.SNMPMap(acc, s.nameToOid, s.subTableMap); err != nil {
-			return err
-		}
-	}
 	// Launch Get requests
-	if err := host.SNMPGet(acc, s.initNode); err != nil {
+	if err := host.SNMPGet(acc); err != nil {
 		return err
 	}
-	if err := host.SNMPBulk(acc, s.initNode); err != nil {
+	if err := host.SNMPBulk(acc); err != nil {
 		return err
 	}
 	}
 	return nil
 }

-func (h *Host) SNMPMap(acc telegraf.Accumulator, nameToOid map[string]string, subTableMap map[string]Subtable) error {
+func (h *Host) SNMPGet(acc telegraf.Accumulator) error {
-	// Get snmp client
-	snmpClient, err := h.GetSNMPClient()
-	if err != nil {
-		return err
-	}
-	// Deconnection
-	defer snmpClient.Conn.Close()
-	// Prepare OIDs
-	for _, table := range h.tables {
-		// We don't have mapping
-		if table.mappingTable == "" {
-			if len(table.subTables) == 0 {
-				// If We don't have mapping table
-				// neither subtables list
-				// This is just a bulk request
-				oid := Data{}
-				oid.Oid = table.oid
-				if val, ok := nameToOid[oid.Oid]; ok {
-					oid.rawOid = "." + val
-				} else {
-					oid.rawOid = oid.Oid
-				}
-				h.bulkOids = append(h.bulkOids, oid)
-			} else {
-				// If We don't have mapping table
-				// but we have subtables
-				// This is a bunch of bulk requests
-				// For each subtable ...
-				for _, sb := range table.subTables {
-					// ... we create a new Data (oid) object
-					oid := Data{}
-					// Looking for more information about this subtable
-					ssb, exists := subTableMap[sb]
-					if exists {
-						// We found a subtable section in config files
-						oid.Oid = ssb.Oid
-						oid.rawOid = ssb.Oid
-						oid.Unit = ssb.Unit
-					} else {
-						// We did NOT find a subtable section in config files
-						oid.Oid = sb
-						oid.rawOid = sb
-					}
-					// TODO check oid validity
-
-					// Add the new oid to getOids list
-					h.bulkOids = append(h.bulkOids, oid)
-				}
-			}
-		} else {
-			// We have a mapping table
-			// We need to query this table
-			// To get mapping between instance id
-			// and instance name
-			oid_asked := table.mappingTable
-			oid_next := oid_asked
-			need_more_requests := true
-			// Set max repetition
-			maxRepetition := uint8(32)
-			// Launch requests
-			for need_more_requests {
-				// Launch request
-				result, err3 := snmpClient.GetBulk([]string{oid_next}, 0, maxRepetition)
-				if err3 != nil {
-					return err3
-				}
-
-				lastOid := ""
-				for _, variable := range result.Variables {
-					lastOid = variable.Name
-					if strings.HasPrefix(variable.Name, oid_asked) {
-						switch variable.Type {
-						// handle instance names
|
|
||||||
case gosnmp.OctetString:
|
|
||||||
// Check if instance is in includes instances
|
|
||||||
getInstances := true
|
|
||||||
if len(table.IncludeInstances) > 0 {
|
|
||||||
getInstances = false
|
|
||||||
for _, instance := range table.IncludeInstances {
|
|
||||||
if instance == string(variable.Value.([]byte)) {
|
|
||||||
getInstances = true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// Check if instance is in excludes instances
|
|
||||||
if len(table.ExcludeInstances) > 0 {
|
|
||||||
getInstances = true
|
|
||||||
for _, instance := range table.ExcludeInstances {
|
|
||||||
if instance == string(variable.Value.([]byte)) {
|
|
||||||
getInstances = false
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// We don't want this instance
|
|
||||||
if !getInstances {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// remove oid table from the complete oid
|
|
||||||
// in order to get the current instance id
|
|
||||||
key := strings.Replace(variable.Name, oid_asked, "", 1)
|
|
||||||
|
|
||||||
if len(table.subTables) == 0 {
|
|
||||||
// We have a mapping table
|
|
||||||
// but no subtables
|
|
||||||
// This is just a bulk request
|
|
||||||
|
|
||||||
// Building mapping table
|
|
||||||
mapping := map[string]string{strings.Trim(key, "."): string(variable.Value.([]byte))}
|
|
||||||
_, exists := OidInstanceMapping[table.oid]
|
|
||||||
if exists {
|
|
||||||
OidInstanceMapping[table.oid][strings.Trim(key, ".")] = string(variable.Value.([]byte))
|
|
||||||
} else {
|
|
||||||
OidInstanceMapping[table.oid] = mapping
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add table oid in bulk oid list
|
|
||||||
oid := Data{}
|
|
||||||
oid.Oid = table.oid
|
|
||||||
if val, ok := nameToOid[oid.Oid]; ok {
|
|
||||||
oid.rawOid = "." + val
|
|
||||||
} else {
|
|
||||||
oid.rawOid = oid.Oid
|
|
||||||
}
|
|
||||||
h.bulkOids = append(h.bulkOids, oid)
|
|
||||||
} else {
|
|
||||||
// We have a mapping table
|
|
||||||
// and some subtables
|
|
||||||
// This is a bunch of get requests
|
|
||||||
// This is the best case :)
|
|
||||||
|
|
||||||
// For each subtable ...
|
|
||||||
for _, sb := range table.subTables {
|
|
||||||
// ... we create a new Data (oid) object
|
|
||||||
oid := Data{}
|
|
||||||
// Looking for more information about this subtable
|
|
||||||
ssb, exists := subTableMap[sb]
|
|
||||||
if exists {
|
|
||||||
// We found a subtable section in config files
|
|
||||||
oid.Oid = ssb.Oid + key
|
|
||||||
oid.rawOid = ssb.Oid + key
|
|
||||||
oid.Unit = ssb.Unit
|
|
||||||
oid.Instance = string(variable.Value.([]byte))
|
|
||||||
} else {
|
|
||||||
// We did NOT find a subtable section in config files
|
|
||||||
oid.Oid = sb + key
|
|
||||||
oid.rawOid = sb + key
|
|
||||||
oid.Instance = string(variable.Value.([]byte))
|
|
||||||
}
|
|
||||||
// TODO check oid validity
|
|
||||||
|
|
||||||
// Add the new oid to getOids list
|
|
||||||
h.getOids = append(h.getOids, oid)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
default:
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
break
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// Determine if we need more requests
|
|
||||||
if strings.HasPrefix(lastOid, oid_asked) {
|
|
||||||
need_more_requests = true
|
|
||||||
oid_next = lastOid
|
|
||||||
} else {
|
|
||||||
need_more_requests = false
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// Mapping finished
|
|
||||||
|
|
||||||
// Create newoids based on mapping
|
|
||||||
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
func (h *Host) SNMPGet(acc telegraf.Accumulator, initNode Node) error {
|
|
||||||
// Get snmp client
|
// Get snmp client
|
||||||
snmpClient, err := h.GetSNMPClient()
|
snmpClient, err := h.GetSNMPClient()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -624,7 +317,7 @@ func (h *Host) SNMPGet(acc telegraf.Accumulator, initNode Node) error {
|
|||||||
return err3
|
return err3
|
||||||
}
|
}
|
||||||
// Handle response
|
// Handle response
|
||||||
_, err = h.HandleResponse(oidsList, result, acc, initNode)
|
_, err = h.HandleResponse(oidsList, result, acc)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -632,7 +325,7 @@ func (h *Host) SNMPGet(acc telegraf.Accumulator, initNode Node) error {
|
|||||||
return nil
|
return nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (h *Host) SNMPBulk(acc telegraf.Accumulator, initNode Node) error {
|
func (h *Host) SNMPBulk(acc telegraf.Accumulator) error {
|
||||||
// Get snmp client
|
// Get snmp client
|
||||||
snmpClient, err := h.GetSNMPClient()
|
snmpClient, err := h.GetSNMPClient()
|
||||||
if err != nil {
|
if err != nil {
|
||||||
@@ -667,7 +360,7 @@ func (h *Host) SNMPBulk(acc telegraf.Accumulator, initNode Node) error {
|
|||||||
return err3
|
return err3
|
||||||
}
|
}
|
||||||
// Handle response
|
// Handle response
|
||||||
last_oid, err := h.HandleResponse(oidsList, result, acc, initNode)
|
last_oid, err := h.HandleResponse(oidsList, result, acc)
|
||||||
if err != nil {
|
if err != nil {
|
||||||
return err
|
return err
|
||||||
}
|
}
|
||||||
@@ -719,19 +412,12 @@ func (h *Host) GetSNMPClient() (*gosnmp.GoSNMP, error) {
|
|||||||
return snmpClient, nil
|
return snmpClient, nil
|
||||||
}
|
}
|
||||||
|
|
||||||
func (h *Host) HandleResponse(oids map[string]Data, result *gosnmp.SnmpPacket, acc telegraf.Accumulator, initNode Node) (string, error) {
|
func (h *Host) HandleResponse(oids map[string]Data, result *gosnmp.SnmpPacket, acc telegraf.Accumulator) (string, error) {
|
||||||
var lastOid string
|
var lastOid string
|
||||||
for _, variable := range result.Variables {
|
for _, variable := range result.Variables {
|
||||||
lastOid = variable.Name
|
lastOid = variable.Name
|
||||||
nextresult:
|
// Remove unwanted oid
|
||||||
// Get only oid wanted
|
|
||||||
for oid_key, oid := range oids {
|
for oid_key, oid := range oids {
|
||||||
// Skip oids already processed
|
|
||||||
for _, processedOid := range h.processedOids {
|
|
||||||
if variable.Name == processedOid {
|
|
||||||
break nextresult
|
|
||||||
}
|
|
||||||
}
|
|
||||||
if strings.HasPrefix(variable.Name, oid_key) {
|
if strings.HasPrefix(variable.Name, oid_key) {
|
||||||
switch variable.Type {
|
switch variable.Type {
|
||||||
// handle Metrics
|
// handle Metrics
|
||||||
@@ -745,27 +431,11 @@ func (h *Host) HandleResponse(oids map[string]Data, result *gosnmp.SnmpPacket, a
|
|||||||
// Get name and instance
|
// Get name and instance
|
||||||
var oid_name string
|
var oid_name string
|
||||||
var instance string
|
var instance string
|
||||||
// Get oidname and instance from translate file
|
// Get oidname and instannce from translate file
|
||||||
oid_name, instance = findnodename(initNode,
|
oid_name, instance = findnodename(initNode,
|
||||||
strings.Split(string(variable.Name[1:]), "."))
|
strings.Split(string(variable.Name[1:]), "."))
|
||||||
// Set instance tag
|
|
||||||
// From mapping table
|
if instance != "" {
|
||||||
mapping, inMappingNoSubTable := OidInstanceMapping[oid_key]
|
|
||||||
if inMappingNoSubTable {
|
|
||||||
// filter if the instance in not in
|
|
||||||
// OidInstanceMapping mapping map
|
|
||||||
if instance_name, exists := mapping[instance]; exists {
|
|
||||||
tags["instance"] = instance_name
|
|
||||||
} else {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
} else if oid.Instance != "" {
|
|
||||||
// From config files
|
|
||||||
tags["instance"] = oid.Instance
|
|
||||||
} else if instance != "" {
|
|
||||||
// Using last id of the current oid, ie:
|
|
||||||
// with .1.3.6.1.2.1.31.1.1.1.10.3
|
|
||||||
// instance is 3
|
|
||||||
tags["instance"] = instance
|
tags["instance"] = instance
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -783,7 +453,6 @@ func (h *Host) HandleResponse(oids map[string]Data, result *gosnmp.SnmpPacket, a
|
|||||||
fields := make(map[string]interface{})
|
fields := make(map[string]interface{})
|
||||||
fields[string(field_name)] = variable.Value
|
fields[string(field_name)] = variable.Value
|
||||||
|
|
||||||
h.processedOids = append(h.processedOids, variable.Name)
|
|
||||||
acc.AddFields(field_name, fields, tags)
|
acc.AddFields(field_name, fields, tags)
|
||||||
case gosnmp.NoSuchObject, gosnmp.NoSuchInstance:
|
case gosnmp.NoSuchObject, gosnmp.NoSuchInstance:
|
||||||
// Oid not found
|
// Oid not found
|
||||||
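The deleted SNMPMap code above keeps or drops each discovered table instance according to the table's `IncludeInstances` and `ExcludeInstances` lists. A minimal standalone sketch of that filtering rule, under the assumption that it is faithful to the removed loop (the helper name `shouldKeepInstance` is illustrative, not part of the plugin):

```go
package main

import "fmt"

// shouldKeepInstance mirrors the include/exclude logic from the removed
// SNMPMap loop: an instance is kept by default; a non-empty include list
// keeps only listed instances; a non-empty exclude list then resets the
// decision to "keep" before dropping any listed instance (as in the
// original, the exclude check overrides an include-list rejection).
func shouldKeepInstance(instance string, include, exclude []string) bool {
	keep := true
	if len(include) > 0 {
		keep = false
		for _, in := range include {
			if in == instance {
				keep = true
			}
		}
	}
	if len(exclude) > 0 {
		keep = true
		for _, ex := range exclude {
			if ex == instance {
				keep = false
			}
		}
	}
	return keep
}

func main() {
	fmt.Println(shouldKeepInstance("eth0", []string{"eth0"}, nil)) // true
	fmt.Println(shouldKeepInstance("lo", []string{"eth0"}, nil))   // false
	fmt.Println(shouldKeepInstance("eth0", nil, []string{"eth0"})) // false
}
```

Note the quirk carried over from the source: when both lists are set, the exclude branch re-enables an instance that the include list had rejected.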
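The removed mapping code pages through an SNMP table with repeated `GetBulk` requests, restarting from the last OID returned and stopping once a result falls outside the asked subtree (the `need_more_requests` loop). A self-contained sketch of that control flow, with a stubbed fetch function standing in for gosnmp's `GetBulk` (`walkSubtree` and the stub are illustrative names, not the plugin's API):

```go
package main

import (
	"fmt"
	"strings"
)

// walkSubtree repeats fetch(next) until the last OID returned no longer
// lives under root, mirroring the need_more_requests loop in SNMPMap.
func walkSubtree(root string, fetch func(next string) []string) []string {
	var collected []string
	next := root
	for more := true; more; {
		batch := fetch(next)
		if len(batch) == 0 {
			break
		}
		last := ""
		for _, oid := range batch {
			last = oid
			if strings.HasPrefix(oid, root) {
				collected = append(collected, oid)
			}
		}
		if strings.HasPrefix(last, root) {
			next = last // still inside the subtree: ask for more
		} else {
			more = false // walked past the table: stop
		}
	}
	return collected
}

func main() {
	// Fake agent: three OIDs under .1.3.1, then one outside the subtree,
	// returned two at a time to exercise the pagination.
	oids := []string{".1.3.1.1", ".1.3.1.2", ".1.3.1.3", ".1.3.2.1"}
	fetch := func(next string) []string {
		for i, o := range oids {
			if o > next {
				end := i + 2
				if end > len(oids) {
					end = len(oids)
				}
				return oids[i:end]
			}
		}
		return nil
	}
	fmt.Println(walkSubtree(".1.3.1", fetch)) // [.1.3.1.1 .1.3.1.2 .1.3.1.3]
}
```

Like the original, this relies on a plain string-prefix test, so it treats lexicographic order as OID order; the real plugin has the same limitation.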